# On the Runtime of Chemical Reaction Networks Beyond Idealized Conditions

Anne Condon, Yuval Emek, Noga Harlev

arXiv: http://arxiv.org/abs/2307.00647v1 (2307.00647, published 2023-07-02T19:51:02Z)
###### Abstract
This paper studies the (discrete) _chemical reaction network (CRN)_ computational model that emerged in the last two decades as an abstraction for molecular programming. The correctness of CRN protocols is typically established under one of two possible schedulers that determine how the execution advances: (1) a _stochastic scheduler_ that obeys the (continuous time) Markov process dictated by the standard model of stochastic chemical kinetics; or (2) an _adversarial scheduler_ whose only commitment is to maintain a certain fairness condition. The latter scheduler is justified by the fact that the former one crucially assumes "idealized conditions" that, more often than not, do not hold in real wet-lab experiments. However, when it comes to analyzing the _runtime_ of CRN protocols, the existing literature focuses strictly on the stochastic scheduler, thus raising the research question that drives this work: Is there a meaningful way to quantify the runtime of CRNs without the idealized conditions assumption?
The main conceptual contribution of the current paper is to answer this question in the affirmative, formulating a new runtime measure for CRN protocols that does not rely on idealized conditions. This runtime measure is based on an adapted (weaker) fairness condition as well as a novel scheme that enables partitioning the execution into short _rounds_ and charging the runtime for each round individually (inspired by definitions for the runtime of asynchronous distributed algorithms). Following that, we turn to investigate various fundamental computational tasks and establish (often tight) bounds on the runtime of the corresponding CRN protocols operating under the adversarial scheduler. This includes an almost complete chart of the runtime complexity landscape of predicate decidability tasks.
chemical reaction networks, adversarial runtime, weak fairness, predicate decidability

## 1 Introduction
The CRN model is closely related to other well-studied computational models, including Petri nets [28] and vector addition systems [26].1
Footnote 1: To simplify the discussions, we subsequently stick to the CRN terminology even when citing literature that was originally written in terms of these related models.
The standard model of stochastic chemical kinetics [24], referred to hereafter as the _standard stochastic model_, dictates that the execution of a CRN protocol (operating under fixed environmental conditions) advances as a continuous time Markov process, where the rate of each reaction is determined by the molecular counts of its reactants as well as a reaction specific rate coefficient. This model crucially assumes that the system is "well-mixed", and so any pair of distinct molecules is equally likely to interact, and that the rate coefficients remain fixed.2 Under the standard stochastic model, CRNs can simulate Turing machines if a small error probability is tolerated [16]. The correctness of some protocols, including Turing machine simulations, depends sensitively on the "idealized conditions" of fixed rate coefficients and a well-mixed system.
Footnote 2: We follow the common assumption that each reaction involves at most two reactants.
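To make the standard stochastic model concrete, here is a minimal Gillespie-style simulation step. This is our own sketch, not code from the paper: it assumes unit rate coefficients and configurations encoded as dicts of molecular counts, and, under the well-mixed assumption, takes a bimolecular reaction's propensity to be the number of reactant pairs divided by the volume.

```python
import random

def propensity(reaction, config, volume):
    """Rate of a reaction under the standard stochastic model (unit rate
    coefficients).  A reaction is a (reactants, products) pair of
    species->count dicts with at most two reactants."""
    reactants, _ = reaction
    species = [s for s, k in reactants.items() for _ in range(k)]
    if len(species) == 1:                       # unimolecular: rate = count
        return config.get(species[0], 0)
    a, b = species                              # bimolecular: pairs / volume
    if a == b:
        c = config.get(a, 0)
        return c * (c - 1) / (2 * volume)
    return config.get(a, 0) * config.get(b, 0) / volume

def apply_reaction(reaction, config):
    """Consume the reactants and produce the products."""
    reactants, products = reaction
    out = dict(config)
    for s, k in reactants.items():
        out[s] = out.get(s, 0) - k
    for s, k in products.items():
        out[s] = out.get(s, 0) + k
    return out

def gillespie_step(reactions, config, volume, rng=random):
    """Sample the next scheduled reaction and its exponential waiting time
    from the continuous time Markov process."""
    props = [propensity(r, config, volume) for r in reactions]
    total = sum(props)
    if total == 0:
        return None, float('inf')               # nothing is applicable
    dt = rng.expovariate(total)
    x = rng.uniform(0, total)
    for r, p in zip(reactions, props):
        x -= p
        if x <= 0:
            return r, dt
    return reactions[-1], dt                    # guard against round-off
```

Iterating `gillespie_step` and `apply_reaction` from an initial configuration generates a stochastically scheduled execution; the fixed volume and uniform pair selection are precisely the idealized conditions the paper sets out to relax.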
However, correctness of many other CRN protocols, such as those which stably compute predicates and functions [2, 5, 13, 22, 10, 12], is premised on quite different assumptions: correct output should be produced on _all_ "fair executions" of the protocol, which means that the correctness of these protocols does not depend on idealized conditions. These protocols operate under a notion of fairness, adopted originally in [2], requiring that reachable configurations are not starved; in the current paper, we refer to this fairness notion as _strong fairness_. A celebrated result of Angluin et al. [2, 5] states that with respect to strong fairness, a predicate can be decided by a CRN if and only if it is semilinear.
As the "what can be computed by CRNs?" question reaches a conclusion, the focus naturally shifts to its "how fast?" counterpart. The latter question is important as the analysis of CRN runtime complexity enables the comparison between different CRN protocols and ultimately guides the search for better ones. Even for CRNs designed to operate on all (strongly) fair executions, the existing runtime analyses assume that reactions are _scheduled stochastically_, namely, according to the Markov process of the standard stochastic model, consistent with having the aforementioned idealized conditions. However, such conditions may well not hold in real wet-lab experiments, where additional factors can significantly affect the order in which reactions proceed [34]. For example, temperature can fluctuate, or molecules may be temporarily unavailable, perhaps sticking to the side of a test tube or reversibly binding to another reactant. Consequently, our work is driven by the following research question: _Is there a meaningful way to quantify the runtime of CRNs when idealized conditions do not necessarily hold?_
**The Quest for an Adversarial Runtime Measure.** We search for a runtime measure suitable for _adversarially scheduled_ executions, namely, executions that are not subject to the constraints of the aforementioned idealized conditions. This is tricky since the adversarial scheduler may generate (arbitrarily) long execution intervals during which no progress can be made, even if those are unlikely to be scheduled stochastically. Therefore, the "adversarial runtime measure" should neutralize the devious behavior of the scheduler by ensuring that the protocol is not unduly penalized for such bad execution intervals. To guide our search, we look for inspiration from another domain of decentralized computation that faced a similar challenge: distributed network algorithms.
While it is straightforward to measure the runtime of (idealized) synchronous distributed protocols, early on, researchers identified the need to define runtime measures also for (adversarially scheduled) asynchronous distributed protocols [7, 20]. The adversarial runtime measures that were formulated in this regard share the following two principles: (P1) partition the execution into _rounds_, so that in each round, the protocol has an opportunity to make progress; and (P2) calibrate the runtime charged to the individual rounds so that if the adversarial scheduler opts to generate the execution according to the idealized benchmark, then the adversarial runtime measure coincides with the idealized one.
Specifically, in the context of asynchronous message passing protocols, Awerbuch [7] translates principle (P1) to the requirement that no message is delayed for more than a single round, whereas in the context of self-stabilizing protocols controlled by the distributed daemon, Dolev et al. [20] translate this principle to the requirement that each node is activated at least once in every round. For principle (P2), both Awerbuch and Dolev et al. take the "idealized benchmark" to be a synchronous execution in which every round costs one time unit.
When it comes to formulating an adversarial runtime measure for CRN protocols, principle (P2) is rather straightforward: we should make sure that on stochastically generated executions (playing the "idealized benchmark" role), the adversarial runtime measure agrees (in expectation) with that of the corresponding continuous time Markov process. Interpreting principle (P1), however, seems more difficult as it is not clear how to partition the execution into rounds so that in each round, the protocol "has an opportunity to make progress".
The first step towards resolving this difficulty is to introduce an alternative notion of fairness, referred to hereafter as _weak fairness_: An execution is weakly fair if a continuously applicable reaction (i.e., one for which the needed reactants are available) is not starved; such a reaction is either eventually scheduled or the system reaches a configuration where the reaction is inapplicable. Using a graph theoretic characterization, we show that any CRN protocol whose correctness is guaranteed on weakly fair executions is correct also on strongly fair executions (see Cor. 4), thus justifying the weak vs. strong terminology choice. It turns out that for predicate decidability, strong fairness is actually not strictly stronger: protocols operating under the weak fairness assumption can decide all (and only) semilinear predicates (see Thm. 12).
It remains to come up with a scheme that partitions an execution of a CRN protocol into rounds in which the weakly fair adversarial scheduler can steer the execution in a nefarious direction, but the protocol still has an opportunity to make progress. A naive attempt at ensuring progress would be to end the current round once every applicable reaction is either scheduled or becomes inapplicable; the resulting partition is too coarse, though, since in general, a CRN protocol does not have to "wait" for _all_ its applicable reactions in order to make progress. Another naive attempt is to end the current round once _any_ reaction is scheduled; this yields a partition which is too fine, allowing the adversarial scheduler to charge the protocol's runtime for (arbitrarily many) "progress-less rounds".
So, which reaction is necessary for the CRN protocol to make progress? We do not have a good answer for this question, but we know who does...
**Runtime and Skipping Policies.** Our adversarial runtime measure is formulated so that it is the protocol designer who decides which reaction is necessary for the CRN protocol to make progress. This is done by means of a _runtime policy_\(\varrho\), used solely for the runtime analysis, that maps each configuration \(\mathbf{c}\) to a _target_ reaction \(\varrho(\mathbf{c})\). (Our actual definition of runtime policies is more general, mapping each configuration to a set of target reactions; see Sec. 4.) Symmetrically to the protocol designer's runtime policy, we also introduce a
_skipping policy_\(\sigma\), chosen by the adversarial scheduler, that maps each step \(t\geq 0\) to a step \(\sigma(t)\geq t\).
These two policies partition a given execution \(\eta\) into successive rounds based on the following inductive scheme: Round 0 starts at step \(t(0)=0\). Assuming that round \(i\geq 0\) starts at step \(t(i)\), the prefix of round \(i\) is determined by the adversarial skipping policy \(\sigma\) so that it lasts until step \(\sigma(t(i))\); let \(\mathbf{e}^{i}\) denote the configuration in step \(\sigma(t(i))\), referred to as the round's _effective configuration_. Following that, the suffix of round \(i\) is determined by the protocol designer's runtime policy \(\varrho\) so that it lasts until the earliest step in which the target reaction \(\varrho(\mathbf{e}^{i})\) of the round's effective configuration \(\mathbf{e}^{i}\) is either scheduled or becomes inapplicable. That is, in each round, the adversarial scheduler determines (by means of the skipping policy) the round's effective configuration, striving to ensure that progress from this configuration is slow, whereas the runtime policy determines when progress has been made from the effective configuration. This scheme is well defined by the choice of weak fairness; we emphasize that this would not be the case with strong fairness.
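The inductive scheme above can be sketched in code. The following is our own simplified illustration, not the paper's formal definition: it assumes a finite execution given as a configuration sequence plus the reaction scheduled at each step, a hypothetical runtime policy `rho` mapping a configuration to a single target reaction, and a skipping policy `sigma`; it returns, for each round, its start step, effective step, and end step.

```python
def applicable(reaction, config):
    """A reaction is applicable iff all of its reactants are available."""
    reactants, _ = reaction
    return all(config.get(s, 0) >= k for s, k in reactants.items())

def partition_into_rounds(configs, scheduled, rho, sigma):
    """configs[t] is the configuration at step t and scheduled[t] is the
    reaction applied at step t (len(configs) == len(scheduled) + 1).
    Returns (start, effective, end) step triples, one per round."""
    rounds, t = [], 0
    n_steps = len(scheduled)
    while t < n_steps:
        # prefix: the adversary's skipping policy picks the effective step
        eff = min(sigma(t), n_steps - 1)
        target = rho(configs[eff])      # designer's target reaction
        # suffix: the round ends at the earliest step in which the target
        # is scheduled or is no longer applicable
        end = eff
        while end < n_steps and scheduled[end] != target \
                and applicable(target, configs[end]):
            end += 1
        end = min(end, n_steps - 1)
        rounds.append((t, eff, end))
        t = end + 1
    return rounds
```

Note how a skipping policy with \(\sigma(t)>t\) gives the round's prefix away for free, while `rho` alone controls when the round is deemed to have made progress.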
The partition of execution \(\eta\) into rounds allows us to ascribe a runtime to \(\eta\) by charging each round with a _temporal cost_ and then accumulating the temporal costs of all rounds until \(\eta\) terminates.3 The temporal cost of round \(i\) is defined to be the expected (continuous) time until the target reaction \(\varrho(\mathbf{e}^{i})\) of its effective configuration \(\mathbf{e}^{i}\) is either scheduled or becomes inapplicable in an imaginary execution that starts at \(\mathbf{e}^{i}\) and proceeds according to the stochastic scheduler.4 In other words, the protocol's runtime is _not_ charged for the prefix of round \(i\) that lasts until the (adversarially chosen) effective configuration is reached; the temporal cost charged for the round's suffix, emerging from the effective configuration, is the expected time that this suffix would have lasted in a stochastically scheduled execution (i.e., the idealized benchmark).
Footnote 3: The exact meaning of termination in this regard is made clear in Sec. 2.
Footnote 4: Here, it is assumed that the stochastic scheduler operates with no rate coefficients and with a linear volume (a.k.a. “parallel time”), see Sec. 2.
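Under the conventions of footnote 4 (no rate coefficients, volume linear in the molecular count), the temporal cost of a round can be estimated numerically. The sketch below is our own Monte Carlo illustration, not a definition from the paper: it repeatedly runs the stochastic scheduler from the effective configuration until the target reaction fires or becomes inapplicable, and averages the elapsed continuous time.

```python
import random

def _propensity(reaction, config, volume):
    """Unit-rate-coefficient propensity; zero iff inapplicable."""
    reactants, _ = reaction
    species = [s for s, k in reactants.items() for _ in range(k)]
    if len(species) == 1:
        return config.get(species[0], 0)
    a, b = species
    if a == b:
        c = config.get(a, 0)
        return c * (c - 1) / (2 * volume)
    return config.get(a, 0) * config.get(b, 0) / volume

def _apply(reaction, config):
    reactants, products = reaction
    out = dict(config)
    for s, k in reactants.items():
        out[s] = out.get(s, 0) - k
    for s, k in products.items():
        out[s] = out.get(s, 0) + k
    return out

def estimate_temporal_cost(reactions, target, eff_config, volume,
                           trials=2000, rng=None):
    """Average, over stochastic runs started at the effective configuration,
    the continuous time until `target` is scheduled or becomes
    inapplicable (propensity zero)."""
    rng = rng or random.Random()
    total_time = 0.0
    for _ in range(trials):
        config, t = dict(eff_config), 0.0
        while _propensity(target, config, volume) > 0:
            props = [_propensity(r, config, volume) for r in reactions]
            rate = sum(props)
            t += rng.expovariate(rate)          # exponential waiting time
            x = rng.uniform(0, rate)            # pick a reaction by propensity
            for r, p in zip(reactions, props):
                x -= p
                if x <= 0:
                    break
            if r == target:
                break                           # target scheduled: round over
            config = _apply(r, config)
        total_time += t
    return total_time / trials
```

For instance, with the single void reaction \(A\to A\) as target in a configuration of four \(A\) molecules, the estimate converges to \(1/4\), the mean of an exponential with rate 4.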
The asymptotic runtime of the CRN protocol is defined by minimizing over all runtime policies \(\varrho\) and then maximizing over all weakly fair executions \(\eta\) and skipping policies \(\sigma\). Put differently, the protocol designer first commits to \(\varrho\), and only then does the (weakly fair) adversarial scheduler determine \(\eta\) and \(\sigma\).
Intuitively, the challenge in constructing a good runtime policy \(\varrho\) (the challenge one faces when attempting to upper-bound a protocol's runtime) is composed of two, often competing, objectives (see, e.g., Fig. 1): On the one hand, \(\varrho(\mathbf{c})\) should be selected so that every execution \(\eta\) is guaranteed to gain "significant progress" by the time a round whose effective configuration is \(\mathbf{c}\) ends, thus minimizing the number of rounds until \(\eta\) terminates. On the other hand, \(\varrho(\mathbf{c})\) should be selected so that the temporal cost of such a round is small, thus minimizing the contribution of the individual rounds to \(\eta\)'s runtime. In typical scenarios, a good runtime policy \(\varrho\) results in partitioning \(\eta\) into \(n^{\Theta(1)}\) rounds, each contributing a temporal cost between \(\Theta(1/n)\) and \(\Theta(n)\), where \(n\) is the molecular count of \(\eta\)'s initial configuration (these scenarios include "textbook examples" such as the classic leader election and rumor spreading protocols, as well as all protocols presented in Sec. 4.2-5.2).
To verify that our adversarial runtime measure is indeed compatible with the aforementioned principle (P2), we show that if the (adversarial) scheduler opts to generate the execution \(\eta\) stochastically, then our runtime measure coincides (in expectation) with that of the corresponding continuous time Markov process (see Lem. 7). The adversarial scheduler however can be more malicious than that: simple examples show that in general, the runtime
of a CRN protocol on adversarially scheduled executions may be significantly larger than on stochastically scheduled executions (see Fig. 2 and 3).
While runtime analyses of CRNs in the presence of common defect modes can be insightful, a strength of our adversarial model is that it is not tied to specific defects in actual CRNs or their biomolecular implementations. In particular, if the adversarial runtime of a CRN matches its stochastic runtime, then we would expect the CRN to perform according to its stochastic runtime even in the presence of defect modes that we may not anticipate. Moreover, in cases where stochastic runtime analysis is complex (involving reasoning about many different executions of a protocol and their likelihoods), it may in fact be easier to determine the adversarial runtime since it only requires stochastic analysis from rounds' effective configurations. For similar reasons, notions of adversarial runtime have proven to be valuable in the design of algorithms in both centralized and decentralized domains more broadly, even when they do not capture realistic physical scenarios. Finally, while the analysis task of finding a good runtime policy for a given CRN may seem formidable at first, our experience in analyzing the protocols in this paper is that such a runtime policy is quite easy to deduce, mirroring intuition about the protocol's strengths and weaknesses.
**The Runtime of Predicate Decidability.** After formulating the new adversarial runtime measure, we turn our attention to CRN protocols whose goal is to decide whether the initial configuration satisfies a given predicate, indicated by the presence of designated Boolean ('yes' and 'no') _voter_ species in the output configuration. As mentioned earlier, the predicates that can be decided in that way are exactly the semilinear predicates, which raises the following two questions: What is the optimal adversarial runtime of protocols that decide semilinear predicates in general? Are there semilinear predicates that can be decided faster?
A notion that plays an important role in answering these questions is that of CRN _speed faults_, introduced in the impressive work of Chen et al. [12]. This notion captures a (reachable) configuration from which any path to an output configuration includes a (bimolecular) reaction both of whose reactants appear in \(O(1)\) molecular counts. The significance of speed faults stems from the fact that any execution that reaches such a "pitfall configuration" requires \(\Omega(n)\) time (in expectation) to terminate under the standard stochastic model.5 The main result of [12] states that a predicate can be decided by a speed fault free CRN protocol (operating under the strongly fair adversarial scheduler) if and only if it belongs to the class of detection predicates (a subclass of semilinear predicates).
Footnote 5: The definition of runtime in [12] is based on a slightly different convention which results in scaling the runtime expressions by a \(1/n\) factor.
The runtime measure introduced in the current paper can be viewed as a quantitative generalization of the fundamentally qualitative notion of speed faults (the quest for such a generalization was, in fact, the main motivation for this work). As discussed in Sec. 4.1, in our adversarial setting, a speed fault translates to an \(\Omega(n)\) runtime lower bound, leading to an \(\Omega(n)\) runtime lower bound for the task of deciding any non-detection semilinear predicate. On the positive side, we prove that this bound is tight: any semilinear predicate (in particular, the non-detection ones) can be decided by a CRN protocol operating under the weakly fair adversarial scheduler whose runtime is \(O(n)\) (see Thm. 12). For detection predicates, we establish a better upper bound (which is also tight): any detection predicate can be decided by a CRN protocol operating under the weakly fair adversarial scheduler whose runtime is \(O(\log n)\) (see Thm. 33). Refer to Sec. 9 for an additional discussion and to Table 1 for a summary of the adversarial runtime complexity bounds established for predicate decidability tasks; for comparison, Table 2 presents a similar summary of the known stochastic runtime complexity bounds.
**Amplifying the Voter Signal.** By definition, a predicate deciding CRN protocol accepts (resp., rejects) a given initial configuration by including 1-voter (resp., 0-voter) species in the output configuration. This definition merely requires that the "right" voter species are present in the output configuration in a positive molecular count, even if this molecular count is small. In practice, the signal obtained from species with a small molecular count may be too weak, hence we aim towards _vote amplified_ protocols, namely, protocols with the additional guarantee that the fraction of non-voter molecules in the output configuration is arbitrarily small.
To this end, we introduce a generic compiler that takes any predicate decidability protocol and turns it into a vote amplified protocol. The core of this compiler is a (standalone) computational task, referred to as _vote amplification_, which is defined over four species classes: _permanent_ 0- and 1-voters and _fluid_ 0- and 1-voters. A vote amplification protocol is correct if for \(v\in\{0,1\}\), starting from any initial configuration \(\mathbf{c}^{0}\) with a positive molecular count of permanent \(v\)-voters and no permanent \((1-v)\)-voters, the execution is guaranteed to terminate in a configuration that includes only (permanent and fluid) \(v\)-voters; this guarantee holds regardless of the molecular counts of the fluid voters in \(\mathbf{c}^{0}\). As it turns out, the runtime of the vote amplification protocol is the dominant component in the runtime overhead of the aforementioned compiler.
A vote amplification protocol whose runtime is \(O(n)\) is presented in [3] (using the term "random-walk broadcast"); however, this protocol is designed to operate under the stochastic scheduler and, as shown in Appendix C, its correctness breaks once we switch to the weakly fair adversarial scheduler. One of the main technical contributions of the current paper is a vote amplification protocol whose (adversarial) runtime is also \(O(n)\), albeit under the weakly fair adversarial scheduler (see Thm. 40).
**Paper's Outline.** The rest of the paper is organized as follows. The CRN model used in this paper is presented in Sec. 2. In Sec. 3, we link the correctness of a CRN protocol to certain topological properties of its configuration digraph. Our new runtime notion for adversarially scheduled executions is introduced in Sec. 4, where we also establish the soundness of this notion, formalize its connection to speed faults, and provide a toolbox of useful techniques for protocol runtime analysis. Sec. 5 presents our results for predicate deciding CRNs, including the protocols that decide semilinear and detection predicates. The generic vote amplification compiler is introduced in Sec. 6. In Sec. 7, we consider four "natural restrictions" for the definition of the runtime policy and show that they actually lead to (asymptotic) inefficiency in terms of the resulting runtime bounds. Sec. 8 demonstrates that the adversarial runtime of a CRN protocol may be significantly larger than its expected runtime under the standard stochastic model. We conclude in Sec. 9 with additional related work and some open questions.
## 2 Chemical Reaction Networks
In this section, we present the _chemical reaction network (CRN)_ computational model. For the most part, we adhere to the conventions of the existing CRN literature (e.g., [16, 15, 11]), but we occasionally deviate from them for the sake of simplifying the subsequent discussions. (Refer to Fig. 1-4 for illustrations of the notions presented in this section.)
A _CRN_ is a protocol \(\Pi\) specified by the pair \(\Pi=(\mathcal{S},\mathcal{R})\), where \(\mathcal{S}\) is a fixed set of _species_ and \(\mathcal{R}\subset\mathbb{N}^{\mathcal{S}}\times\mathbb{N}^{\mathcal{S}}\) is a fixed set of _reactions_.6 For a reaction \(\alpha=(\mathbf{r},\mathbf{p})\in\mathcal{R}\), the vectors \(\mathbf{r}\in\mathbb{N}^{\mathcal{S}}\) and \(\mathbf{p}\in\mathbb{N}^{\mathcal{S}}\) specify the stoichiometry of \(\alpha\)'s _reactants_ and _products_, respectively.7 Specifically, the entry \(\mathbf{r}(A)\) (resp., \(\mathbf{p}(A)\)) indexed by a species \(A\in\mathcal{S}\) in the vector \(\mathbf{r}\) (resp., \(\mathbf{p}\)) encodes the number of molecules of \(A\) that are consumed (resp., produced) when \(\alpha\) is applied. Species \(A\) is a _catalyst_ for the reaction \(\alpha=(\mathbf{r},\mathbf{p})\) if \(\mathbf{r}(A)=\mathbf{p}(A)>0\).
Footnote 6: Throughout this paper, we denote \(\mathbb{N}=\{z\in\mathbb{Z}\mid z\geq 0\}\).
Footnote 7: We stick to the convention of identifying vectors in \(\mathbb{N}^{\mathcal{S}}\) with multisets over \(\mathcal{S}\) expressed as a “molecule summation”.
We adhere to the convention (see, e.g., [13, 21, 17, 12]) that each reaction \((\mathbf{r},\mathbf{p})\in\mathcal{R}\) is either _unimolecular_ with \(\|\mathbf{r}\|=1\) or _bimolecular_ with \(\|\mathbf{r}\|=2\);8 forbidding higher-order reactions is justified since more than two molecules are not likely to directly interact. Note that if all reactions \((\mathbf{r},\mathbf{p})\in\mathcal{R}\) are bimolecular and _density preserving_, namely, \(\|\mathbf{r}\|=\|\mathbf{p}\|\), then the CRN model is equivalent to the extensively studied _population protocols_ model [2, 6, 27], assuming that the population protocol agents have a constant state space.
Footnote 8: The notation \(\|\cdot\|\) denotes the 1-norm \(\ell_{1}\).
For a vector (or multiset) \(\mathbf{r}\in\mathbb{N}^{\mathcal{S}}\) with \(1\leq\|\mathbf{r}\|\leq 2\), let \(\mathcal{R}(\mathbf{r})=(\{\mathbf{r}\}\times\mathbb{N}^{\mathcal{S}})\cap \mathcal{R}\) denote the subset of reactions whose reactants correspond to \(\mathbf{r}\). In the current paper, it is required that none of these reaction subsets is empty, i.e., \(|\mathcal{R}(\mathbf{r})|\geq 1\) for every \(\mathbf{r}\in\mathbb{N}^{\mathcal{S}}\) with \(1\leq\|\mathbf{r}\|\leq 2\). Some of the reactions in \(\mathcal{R}\) may be _void_, namely, reactions \((\mathbf{r},\mathbf{p})\) satisfying \(\mathbf{r}=\mathbf{p}\); let \(\mathrm{NV}(\mathcal{R})=\{(\mathbf{r},\mathbf{p})\in\mathcal{R}\mid\mathbf{r }\neq\mathbf{p}\}\) denote the set of non-void reactions in \(\mathcal{R}\). To simplify the exposition, we assume that if \(\alpha=(\mathbf{r},\mathbf{r})\in\mathcal{R}\) is a void reaction, then \(\mathcal{R}(\mathbf{r})=\{\alpha\}\); this allows us to describe protocol \(\Pi\) by listing only its non-void reactions. We further assume that \(\|\mathbf{r}\|\leq\|\mathbf{p}\|\) for all reactions \((\mathbf{r},\mathbf{p})\in\mathcal{R}\).9
Footnote 9: The last two assumptions are not fundamental to our CRN setup and are made only for the sake of simplicity.
Configurations.A _configuration_ of a CRN \(\Pi=(\mathcal{S},\mathcal{R})\) is a vector \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) that encodes the _molecular count_\(\mathbf{c}(A)\) of species \(A\) in the solution for each \(A\in\mathcal{S}\).10 The molecular count notation is extended to species (sub)sets \(\Lambda\subseteq\mathcal{S}\), denoting \(\mathbf{c}(\Lambda)=\sum_{A\in\Lambda}\mathbf{c}(A)\). We refer to \(\mathbf{c}(\mathcal{S})=\|\mathbf{c}\|\) as the _molecular count_ of the configuration \(\mathbf{c}\). Let \(\mathbf{c}|_{\Lambda}\in\mathbb{N}^{\Lambda}\) denote the restriction of a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) to a species subset \(\Lambda\subseteq\mathcal{S}\).
Footnote 10: Note that we consider the discrete version of the CRN model, where the configuration encodes integral molecular counts. This is in contrast to the continuous CRN model, where a configuration is given by real species densities.
A reaction \(\alpha=(\mathbf{r},\mathbf{p})\in\mathcal{R}\) is said to be _applicable_ to a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) if \(\mathbf{r}(A)\leq\mathbf{c}(A)\) for every \(A\in\mathcal{S}\). Let \(\mathrm{app}(\mathbf{c})\subseteq\mathcal{R}\) denote the set of reactions which are applicable to \(\mathbf{c}\) and let \(\overline{\mathrm{app}}(\mathbf{c})=\mathcal{R}-\mathrm{app}(\mathbf{c})\), referring to the reactions in \(\overline{\mathrm{app}}(\mathbf{c})\) as being _inapplicable_ to \(\mathbf{c}\). We restrict our attention to configurations \(\mathbf{c}\) with molecular count \(\|\mathbf{c}\|\geq 1\), which ensures that \(\mathrm{app}(\mathbf{c})\) is never empty. For a reaction \(\alpha\in\mathrm{app}(\mathbf{c})\), let \(\alpha(\mathbf{c})=\mathbf{c}-\mathbf{r}+\mathbf{p}\) be the configuration obtained by applying \(\alpha\) to \(\mathbf{c}\).11
Footnote 11: Unless stated otherwise, all vector arithmetic is done component-wise.
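To make the configuration arithmetic concrete, the applicability test and the update \(\alpha(\mathbf{c})=\mathbf{c}-\mathbf{r}+\mathbf{p}\) can be sketched in a few lines of Python, with `Counter` objects playing the role of vectors in \(\mathbb{N}^{\mathcal{S}}\); the species names and the toy reaction \(A+B\to 2B\) are illustrative, not taken from this paper.

```python
from collections import Counter

# A configuration is a multiset of molecular counts; a reaction (r, p) is a
# pair of such multisets.  The toy reaction below is A + B -> 2B.
def applicable(c, reaction):
    r, _ = reaction
    return all(c[s] >= k for s, k in r.items())

def apply_reaction(c, reaction):
    # alpha(c) = c - r + p, computed component-wise
    r, p = reaction
    assert applicable(c, reaction)
    out = Counter(c)
    out.subtract(r)   # consume the reactants ...
    out.update(p)     # ... and produce the products
    return +out       # unary '+' drops zero-count species

alpha = (Counter({"A": 1, "B": 1}), Counter({"B": 2}))  # A + B -> 2B
c = apply_reaction(Counter({"A": 2, "B": 1}), alpha)    # one step of an execution
```

Applying `alpha` repeatedly converts all \(A\) molecules into \(B\) molecules, after which the reaction becomes inapplicable.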
Given two configurations \(\mathbf{c},\mathbf{c}^{\prime}\in\mathbb{N}^{\mathcal{S}}\), the binary relation \(\mathbf{c}\rightharpoonup\mathbf{c}^{\prime}\) holds if there exists a reaction \(\alpha\in\mathrm{app}(\mathbf{c})\) such that \(\alpha(\mathbf{c})=\mathbf{c}^{\prime}\). We denote the reflexive transitive closure of \(\rightharpoonup\) by \(\stackrel{*}{\rightharpoonup}\) and say that \(\mathbf{c}^{\prime}\) is _reachable_ from \(\mathbf{c}\) if \(\mathbf{c}\stackrel{*}{\rightharpoonup}\mathbf{c}^{\prime}\). Given a configuration set \(Z\subseteq\mathbb{N}^{\mathcal{S}}\), let
\[\mathrm{stab}(Z)\,\triangleq\,\left\{\mathbf{c}\in Z\mid\mathbf{c} \overset{*}{\rightharpoonup}\mathbf{c}^{\prime}\Longrightarrow\mathbf{c}^{\prime} \in Z\right\}\quad\text{and}\quad\mathrm{halt}(Z)\,\triangleq\,\left\{ \mathbf{c}\in Z\mid\mathbf{c}\overset{*}{\rightharpoonup}\mathbf{c}^{ \prime}\Longrightarrow\mathbf{c}^{\prime}=\mathbf{c}\right\}\,;\]
that is, \(\mathrm{stab}(Z)\) consists of every configuration \(\mathbf{c}\in Z\) all of whose reachable configurations are also in \(Z\) whereas \(\mathrm{halt}(Z)\) consists of every configuration \(\mathbf{c}\in Z\) which is halting in the
sense that the only configuration reachable from \(\mathbf{c}\) is \(\mathbf{c}\) itself, observing that the latter set is a (not necessarily strict) subset of the former.
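Since the protocols considered below have finite reachable configuration sets, reachability and the halting predicate can be checked by exhaustive graph search. A minimal sketch over a toy single-reaction protocol (illustrative names; void reactions are omitted since they never change the configuration):

```python
from collections import Counter

REACTIONS = [(Counter({"A": 1, "B": 1}), Counter({"B": 2}))]  # A + B -> 2B

def key(c):
    return tuple(sorted((+c).items()))  # hashable fingerprint of a configuration

def successors(c):
    for r, p in REACTIONS:
        if all(c[s] >= k for s, k in r.items()):
            nxt = Counter(c)
            nxt.subtract(r)
            nxt.update(p)
            yield +nxt

def reachable(c0):
    # depth-first search; terminates because the reachable set is finite
    seen = {key(c0): Counter(c0)}
    stack = [Counter(c0)]
    while stack:
        for nxt in successors(stack.pop()):
            if key(nxt) not in seen:
                seen[key(nxt)] = nxt
                stack.append(nxt)
    return list(seen.values())

def is_halting(c):
    # c is halting iff every (non-void) applicable reaction maps c to itself
    return all(nxt == c for nxt in successors(c))
```

From \(2A+B\) the reachable set is \(\{2A+B,\,A+2B,\,3B\}\), and only \(3B\) is halting.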
For the sake of simplicity, we restrict this paper's focus to protocols that _respect finite density_[21], namely, \(\mathbf{c}\stackrel{{*}}{{\rightharpoonup}}\mathbf{c}^{\prime}\) implies that \(\|\mathbf{c}^{\prime}\|\leq O(\|\mathbf{c}\|)\).12 We note that density preserving CRNs inherently respect finite density; however, we also allow for reactions that have more products than reactants, as long as the CRN protocol is designed so that the molecular count cannot increase arbitrarily. This means, in particular, that although the configuration space \(\mathbb{N}^{\mathcal{S}}\) is inherently infinite, the set \(\{\mathbf{c}^{\prime}\in\mathbb{N}^{\mathcal{S}}\mid\mathbf{c}\stackrel{{ *}}{{\rightharpoonup}}\mathbf{c}^{\prime}\}\) is finite (and bounded as a function of \(\|\mathbf{c}\|\)) for any configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\).
Footnote 12: This restriction is not fundamental to our CRN setup and can be swapped for a weaker one.
Executions.An _execution_\(\eta\) of the CRN \(\Pi\) is an infinite sequence \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) of \(\langle\text{configuration},\text{reaction}\rangle\) pairs such that \(\alpha^{t}\in\text{app}(\mathbf{c}^{t})\) and \(\mathbf{c}^{t+1}=\alpha^{t}(\mathbf{c}^{t})\) for every \(t\geq 0\). It is convenient to think of \(\eta\) as progressing in discrete _steps_ so that configuration \(\mathbf{c}^{t}\) and reaction \(\alpha^{t}\) are associated with step \(t\geq 0\). We refer to \(\mathbf{c}^{0}\) as the _initial configuration_ of \(\eta\) and, unless stated otherwise, denote the molecular count of \(\mathbf{c}^{0}\) by \(n=\|\mathbf{c}^{0}\|\). Given a configuration set \(Z\subseteq\mathbb{N}^{\mathcal{S}}\), we say that \(\eta\)_stabilizes_ (resp., _halts_) into \(Z\) if there exists a step \(t\geq 0\) such that \(\mathbf{c}^{t}\in\text{stab}(Z)\) (resp., \(\mathbf{c}^{t}\in\text{halt}(Z)\)) and refer to the earliest such step \(t\) as the execution's _stabilization step_ (resp., _halting step_) with respect to \(Z\).
In this paper, we consider an _adversarial scheduler_ that knows the CRN protocol \(\Pi\) and the initial configuration \(\mathbf{c}^{0}\) and determines the execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) in an arbitrary (malicious) way. The execution \(\eta\) is nonetheless subject to the following _fairness_ condition: for every \(t\geq 0\) and for every \(\alpha\in\text{app}(\mathbf{c}^{t})\), there exists \(t^{\prime}\geq t\) such that either (I) \(\alpha^{t^{\prime}}=\alpha\); or (II) \(\alpha\notin\text{app}(\mathbf{c}^{t^{\prime}})\). In other words, the scheduler is not allowed to (indefinitely) "starve" a continuously applicable reaction. We emphasize that the mere condition that a reaction \(\alpha\in\mathcal{R}\) is applicable infinitely often does not imply that \(\alpha\) is scheduled infinitely often.
Note that the fairness condition adopted in the current paper differs from the one used in the existing CRN (and population protocols) literature [2, 5, 13, 11]. The latter, referred to hereafter as _strong fairness_, requires that if a configuration \(\mathbf{c}\) appears infinitely often in the execution \(\eta\) and a configuration \(\mathbf{c}^{\prime}\) is reachable from \(\mathbf{c}\), then \(\mathbf{c}^{\prime}\) also appears infinitely often in \(\eta\). Strictly speaking, a strongly fair execution \(\eta\) is not necessarily fair according to the current paper's notion of fairness (in particular, \(\eta\) may starve void reactions). However, as we show in Sec. 3, protocol correctness under the current paper's notion of fairness implies protocol correctness under strong fairness (see Cor. 4), where the exact meaning of correctness is defined soon. Consequently, we refer hereafter to the notion of fairness adopted in the current paper as _weak fairness_.
Interface and Correctness.The CRN notions introduced so far are independent of any particular computational task. To correlate between a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) and concrete computational tasks, we associate \(\Pi\) with a (task specific) _interface_\(\mathcal{I}=(\mathcal{U},\mu,\mathcal{C})\) whose semantics is as follows: \(\mathcal{U}\) is a fixed set of _interface values_ that typically encode the input and/or output associated with the species; \(\mu:\mathcal{S}\rightarrow\mathcal{U}\) is an _interface mapping_ that maps each species \(A\in\mathcal{S}\) to an interface value \(\mu(A)\in\mathcal{U}\); and \(\mathcal{C}\subseteq\mathbb{N}^{\mathcal{U}}\times\mathbb{N}^{\mathcal{U}}\) is a _correctness relation_ that determines the correctness of an execution as explained soon.
Hereafter, we refer to the vectors in \(\mathbb{N}^{\mathcal{U}}\) as _interface vectors_. The interface of a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) in terms of the input/output that \(\mathbf{c}\) encodes (if any) is captured by the interface vector
\[\mu(\mathbf{c})\,\triangleq\,\langle\mathbf{c}\,(\{A\in\mathcal{S}\mid\mu(A)=u \})\rangle_{u\in\mathcal{U}}\.\]
The abstract interface \(\mathcal{I}=(\mathcal{U},\mu,\mathcal{C})\) allows us to define what it means for a protocol to be correct. To this end, for each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), let \(Z_{\mathcal{I}}(\mathbf{c})=\{\mathbf{c}^{\prime}\in\mathbb{N}^{\mathcal{S}} \mid(\mu(\mathbf{c}),\mu(\mathbf{c}^{\prime}))\in\mathcal{C}\}\) be the set of configurations which are mapped by \(\mu\) to interface vectors that satisfy the correctness relation with \(\mu(\mathbf{c})\). A configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) is a _valid initial configuration_ with respect to \(\mathcal{I}\) if \(Z_{\mathcal{I}}(\mathbf{c}^{0})\neq\emptyset\); an execution is _valid_ (with respect to \(\mathcal{I}\)) if it emerges from a valid initial configuration. A valid execution \(\eta\) is said to be _stably correct_ (resp., _haltingly correct_) with respect to \(\mathcal{I}\) if \(\eta\) stabilizes (resp., halts) into \(Z_{\mathcal{I}}(\mathbf{c}^{0})\).
The protocol \(\Pi\) is said to be _stably correct_ (resp., _haltingly correct_) with respect to \(\mathcal{I}\) if every weakly fair valid execution is guaranteed to be stably (resp., haltingly) correct.14 When the interface \(\mathcal{I}\) is not important or clear from the context, we may address the stable/halting correctness of executions and protocols without explicitly mentioning \(\mathcal{I}\).
Footnote 14: Both notions of correctness have been studied in the CRN literature, see, e.g., [11].
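As a concrete (made-up) example of an interface, consider a majority-style task: species `Y` and `N` carry the interface values "yes" and "no", an undecided species `U` defaults to "no", and the correctness relation asks the output to be unanimous on the input's weak majority value. Everything in the sketch below, including \(\mu\), the species names, and the relation, is illustrative rather than taken from the paper.

```python
from collections import Counter

MU = {"Y": "yes", "N": "no", "U": "no"}   # interface mapping mu: S -> U

def interface_vector(c, mu=MU):
    # mu(c) aggregates molecular counts per interface value
    out = Counter()
    for species, count in c.items():
        out[mu[species]] += count
    return out

def correct(u0, u):
    # example correctness relation C: the output interface vector u must be
    # unanimous on the (weak) majority value of the input interface vector u0
    want = "yes" if u0["yes"] >= u0["no"] else "no"
    return u[want] == sum(u.values())
```

Here \(Z_{\mathcal{I}}(\mathbf{c}^{0})\) would consist of exactly those configurations whose interface vector satisfies `correct` against \(\mu(\mathbf{c}^{0})\).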
The Stochastic Scheduler.While the current paper focuses on the (weakly fair) adversarial scheduler, another type of scheduler that receives a lot of attention in the literature is the _stochastic scheduler_. Here, we present the stochastic scheduler so that it can serve as a "benchmark" for the runtime definition introduced in Sec. 4. To this end, we define the _propensity_ of a reaction \(\alpha=(\mathbf{r},\mathbf{p})\in\mathcal{R}\) in a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), denoted by \(\pi_{\mathbf{c}}(\alpha)\), as
\[\pi_{\mathbf{c}}(\alpha)\,=\,\begin{cases}\mathbf{c}(A)\cdot\frac{1}{|\mathcal{R}(\mathbf{r})|}\,,&\mathbf{r}=A\\ \frac{1}{\varphi}\cdot\binom{\mathbf{c}(A)}{2}\cdot\frac{1}{|\mathcal{R}(\mathbf{r})|}\,,&\mathbf{r}=2A\\ \frac{1}{\varphi}\cdot\mathbf{c}(A)\cdot\mathbf{c}(B)\cdot\frac{1}{|\mathcal{R}(\mathbf{r})|}\,,&\mathbf{r}=A+B,\,A\neq B\end{cases}\,\]
where \(\varphi>0\) is a (global) _volume_ parameter.15 Notice that reaction \(\alpha\) is applicable to \(\mathbf{c}\) if and only if \(\pi_{\mathbf{c}}(\alpha)>0\). The propensity notation is extended to reaction (sub)sets \(Q\subseteq\mathcal{R}\) by defining \(\pi_{\mathbf{c}}(Q)=\sum_{\alpha\in Q}\pi_{\mathbf{c}}(\alpha)\). Recalling that \(\mathcal{R}(\mathbf{r})\neq\emptyset\) for each \(\mathbf{r}\in\mathbb{N}^{\mathcal{S}}\) with \(1\leq\|\mathbf{r}\|\leq 2\), we observe that
\[\pi_{\mathbf{c}}\,\triangleq\,\pi_{\mathbf{c}}(\mathcal{R})\,=\,\|\mathbf{c}\|+\tfrac{1}{\varphi}\cdot\binom{\|\mathbf{c}\|}{2}\,.\]
Footnote 15: In the standard stochastic model [24], the propensity expression is multiplied by a reaction-specific rate coefficient. In the current paper, which merely uses the stochastic scheduler as a benchmark, we make the simplifying assumption that all rate coefficients are set to 1 (cf. [13, 12]).
The stochastic scheduler determines the execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) by scheduling a reaction \(\alpha\in\text{app}(\mathbf{c}^{t})\) in step \(t\), setting \(\alpha^{t}=\alpha\), with probability proportional to \(\alpha\)'s propensity \(\pi_{\mathbf{c}^{t}}(\alpha)\) in \(\mathbf{c}^{t}\). The assumption that the CRN protocol respects finite density implies that \(\eta\) is (weakly and strongly) fair with probability 1. We define the _time span_ of step \(t\geq 0\) to be \(1/\pi_{\mathbf{c}^{t}}\), i.e., the normalizing factor of the reaction selection probability.16 Given a step
\(t^{*}\geq 0\), the _stochastic runtime_ of the execution prefix \(\eta^{*}=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{0\leq t<t^{*}}\) is defined to be the accumulated time span \(\sum_{t=0}^{t^{*}-1}1\big{/}\pi_{\mathbf{c}^{t}}\).
We adopt the convention that the volume is proportional to the initial molecular count \(n=\|\mathbf{c}^{0}\|\)[21]. The assumption that the CRN protocol respects finite density ensures that \(\varphi=\Theta(\|\mathbf{c}^{t}\|)\) for every \(t\geq 0\) which means that the volume is sufficiently large to contain all molecules throughout the (stochastic) execution \(\eta\). This also means that the time span of each step \(t\geq 0\) is
\[1/\pi_{\mathbf{c}^{t}}\,=\,\tfrac{\varphi}{\varphi\cdot\|\mathbf{c}^{t}\|+\binom{\|\mathbf{c}^{t}\|}{2}}\,=\,\Theta(1/\|\mathbf{c}^{t}\|)\,=\,\Theta(1/n)\,, \tag{1}\]
hence the stochastic runtime of an execution prefix that lasts for \(t^{*}\) steps is \(\Theta(t^{*}/n)\).
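A single step of the stochastic scheduler can be sketched as follows. The toy protocol is padded with void reactions so that every reactant vector has a nonempty reaction set, as the model requires, and all rate coefficients are 1 as assumed above; the species names and counts are illustrative. Note that the total propensity comes out as \(\|\mathbf{c}\|+\binom{\|\mathbf{c}\|}{2}/\varphi\), matching (1).

```python
import random
from collections import Counter
from math import comb

# Reactants/products are tuples of species names; toy protocol over {A, B}.
REACTIONS = [
    (("A", "B"), ("B", "B")),  # A + B -> 2B (the only non-void reaction)
    (("A", "A"), ("A", "A")),  # void, so that R(2A) is nonempty
    (("B", "B"), ("B", "B")),  # void, so that R(2B) is nonempty
    (("A",), ("A",)),          # void, so that R(A) is nonempty
    (("B",), ("B",)),          # void, so that R(B) is nonempty
]

def propensity(c, reaction, phi):
    r, _ = reaction
    same = sum(1 for r2, _ in REACTIONS if sorted(r2) == sorted(r))  # |R(r)|
    if len(r) == 1:                                   # unimolecular r = A
        return c[r[0]] / same
    a, b = r
    if a == b:                                        # bimolecular r = 2A
        return comb(c[a], 2) / (phi * same)
    return c[a] * c[b] / (phi * same)                 # bimolecular r = A + B

def stochastic_step(c, phi, rng):
    # pick a reaction with probability proportional to its propensity;
    # the step's time span is 1 / pi_c (the normalizing factor)
    props = [propensity(c, a, phi) for a in REACTIONS]
    total = sum(props)
    pick = rng.uniform(0.0, total)
    for a, p in zip(REACTIONS, props):
        pick -= p
        if pick <= 0:
            return a, 1.0 / total
    return REACTIONS[-1], 1.0 / total
```

Accumulating the returned time spans over a prefix of steps yields the stochastic runtime of that prefix.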
## 3 Correctness Characterization via the Configuration Digraph
It is often convenient to look at CRN protocols through the lens of the following abstract directed graph (a.k.a. digraph): The _configuration digraph_ of a protocol \(\Pi=(\mathcal{S},\mathcal{R})\) is a digraph, denoted by \(D^{\Pi}\), whose edges are labeled by reactions in \(\mathcal{R}\). The nodes of \(D^{\Pi}\) are identified with the configurations in \(\mathbb{N}^{\mathcal{S}}\); the edge set of \(D^{\Pi}\) includes an \(\alpha\)-labeled edge from \(\mathbf{c}\) to \(\alpha(\mathbf{c})\) for each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) and reaction \(\alpha\in\operatorname{app}(\mathbf{c})\) (thus the outdegree of \(\mathbf{c}\) in \(D^{\Pi}\) is \(|\operatorname{app}(\mathbf{c})|\)). Observe that the self-loops of \(D^{\Pi}\) are exactly the edges labeled by (applicable) void reactions. Moreover, a configuration \(\mathbf{c}^{\prime}\) is reachable, in the graph theoretic sense, from a configuration \(\mathbf{c}\) if and only if \(\mathbf{c}\stackrel{*}{\rightharpoonup}\mathbf{c}^{\prime}\). For a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), let \(D^{\Pi}_{\mathbf{c}}\) be the digraph induced by \(D^{\Pi}\) on the set of configurations reachable from \(\mathbf{c}\) and observe that \(D^{\Pi}_{\mathbf{c}}\) is finite as \(\Pi\) respects finite density. (Refer to Fig. 0(b)-0(b) for illustrations of the notions presented in this section.)
By definition, there is a one-to-one correspondence between the executions \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) of \(\Pi\) and the infinite paths \(P=(\mathbf{c}^{0},\mathbf{c}^{1},\dots)\) in \(D^{\Pi}\), where the edges of \(P\) are labeled by the reaction sequence \((\alpha^{0},\alpha^{1},\dots)\). We say that an infinite path in \(D^{\Pi}\) is _weakly fair_ (resp., _strongly fair_) if its corresponding execution is weakly (resp., strongly) fair.
The _(strongly connected) components_ of the configuration digraph \(D^{\Pi}\) are the equivalence classes of the "reachable from each other" relation over the configurations in \(\mathbb{N}^{\mathcal{S}}\). We say that a reaction \(\alpha\in\mathcal{R}\)_escapes_ from a component \(S\) of \(D^{\Pi}\) if every configuration in \(S\) admits an outgoing \(\alpha\)-labeled edge to a configuration not in \(S\); i.e., \(\alpha\in\operatorname{app}(\mathbf{c})\) and \(\alpha(\mathbf{c})\notin S\) for every \(\mathbf{c}\in S\) (see, e.g., Fig. 0(b)). The notion of escaping reactions allows us to state the following key lemma.
Consider a component \(S\) of \(D^{\Pi}\). The digraph \(D^{\Pi}\) admits a weakly fair infinite path all of whose nodes are in \(S\) if and only if none of the reactions in \(\mathcal{R}\) escapes from \(S\).
By definition, if \(S\) admits an escaping reaction \(\alpha\in\mathcal{R}\), then every weakly fair infinite path \(P\) in \(D^{\Pi}\) that visits \(S\) cannot stay in \(S\) indefinitely without starving \(\alpha\), hence \(P\) must eventually leave \(S\). In the converse direction, assume that none of the reactions in \(\mathcal{R}\) escapes from \(S\) and let \(D^{\Pi}(S)\) be the digraph induced by \(D^{\Pi}\) on \(S\). For each reaction \(\alpha\in\mathcal{R}\), let \(e_{\alpha}=(\mathbf{c},\mathbf{c}^{\prime})\) be an edge in \(D^{\Pi}(S)\) that satisfies either (1) \(e_{\alpha}\) is labeled by \(\alpha\); or (2) \(\alpha\) is inapplicable to \(\mathbf{c}\). (Such an edge \(e_{\alpha}\) is guaranteed to exist as \(\alpha\) does not escape from \(S\).) Since \(D^{\Pi}(S)\) is a strongly connected digraph, it follows that there exists a (not necessarily simple) cycle \(C\) in \(D^{\Pi}(S)\) that includes the edges \(e_{\alpha}\) for all \(\alpha\in\mathcal{R}\). By repeatedly traversing \(C\), we obtain a weakly fair infinite path in \(D^{\Pi}\).
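For intuition, the escaping-reaction test of the lemma can be sketched directly on top of the configuration arithmetic. In the toy protocol below every reachable component happens to be a singleton; the species names are illustrative.

```python
from collections import Counter

ALPHA = (Counter({"A": 1, "B": 1}), Counter({"B": 2}))  # A + B -> 2B

def applicable(c, reaction):
    return all(c[s] >= k for s, k in reaction[0].items())

def apply_reaction(c, reaction):
    out = Counter(c)
    out.subtract(reaction[0])
    out.update(reaction[1])
    return +out

def key(c):
    return tuple(sorted((+c).items()))

def escapes(reaction, component):
    # reaction escapes from component S iff *every* configuration of S has an
    # outgoing reaction-labeled edge leading outside S
    keys = {key(c) for c in component}
    return all(
        applicable(c, reaction) and key(apply_reaction(c, reaction)) not in keys
        for c in component
    )
```

The component \(\{2A+B\}\) admits the escaping reaction `ALPHA`, so no weakly fair path may linger there, whereas the halting component \(\{3B\}\) admits none.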
We can now express the stable/halting correctness of CRNs in terms of their configuration digraphs: Lem. 2 follows from Lem. 1 by the definitions of stable correctness and halting correctness.
A CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) is stably (resp., haltingly) correct with respect to an interface \(\mathcal{I}=(\mathcal{U},\mu,\mathcal{C})\) under a weakly fair scheduler if and only if for every valid initial configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\), every component \(S\) of \(D^{\Pi}_{\mathbf{c}^{0}}\) satisfies (at least) one of the following two conditions: (1) \(S\) admits some (at least one) escaping reaction; or (2) \(S\subseteq\mathrm{stab}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\) (resp., \(S\subseteq\mathrm{halt}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\)), where \(Z_{\mathcal{I}}(\mathbf{c}^{0})=\{\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\mid( \mu(\mathbf{c}^{0}),\mu(\mathbf{c}))\in\mathcal{C}\}\).
To complement Lem. 2, we also express the stable/halting correctness of CRNs in terms of their configuration digraphs under a strongly fair scheduler: Lem. 3 follows from the same line of arguments as Lemma 1 in [2] by the definitions of stable correctness and halting correctness.
A CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) is stably (resp., haltingly) correct with respect to an interface \(\mathcal{I}=(\mathcal{U},\mu,\mathcal{C})\) under a strongly fair scheduler if and only if for every valid initial configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\), every component \(S\) of \(D^{\Pi}_{\mathbf{c}^{0}}\) satisfies (at least) one of the following two conditions: (1) \(S\) admits some (at least one) edge outgoing to another component; or (2) \(S\subseteq\mathrm{stab}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\) (resp., \(S\subseteq\mathrm{halt}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\)), where \(Z_{\mathcal{I}}(\mathbf{c}^{0})=\{\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\mid( \mu(\mathbf{c}^{0}),\mu(\mathbf{c}))\in\mathcal{C}\}\).
Combining Lem. 2 and 3, we obtain the following corollary.
If a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) is stably (resp., haltingly) correct with respect to an interface \(\mathcal{I}\) under a weakly fair scheduler, then \(\Pi\) is also stably (resp., haltingly) correct with respect to \(\mathcal{I}\) under a strongly fair scheduler.
Two Protocols in One Test Tube.A common technique in the design of CRN (and population) protocols is to simulate two protocols \(\Pi_{1}=(\mathcal{S}_{1},\mathcal{R}_{1})\) and \(\Pi_{2}=(\mathcal{S}_{2},\mathcal{R}_{2})\) running "in the same test tube". This is often done by constructing a "combined" protocol \(\Pi_{\times}=(\mathcal{S}_{\times},\mathcal{R}_{\times})\) whose species set \(\mathcal{S}_{\times}\) is the Cartesian product \(\mathcal{S}_{1}\times\mathcal{S}_{2}\) so that each reaction \(\alpha\in\mathcal{R}_{\times}\) operates independently on the two "sub-species". While this design pattern is very effective with strong fairness (it is used, e.g., in [2, 3]), it turns out that the weakly fair adversarial scheduler may exploit the Cartesian product construction to introduce "livelocks", preventing \(\Pi_{\times}\) from stabilizing/halting; an example that demonstrates this phenomenon is presented in Appendix B.
Consequently, the current paper uses a different type of construction when we wish to simulate \(\Pi_{1}\) and \(\Pi_{2}\) in the same test tube: We simply produce two separated sets of molecules, one for the \(\Pi_{1}\) species and the other for the \(\Pi_{2}\) species, and allow the two protocols to run side-by-side. Care must be taken though with regard to reactions that involve species from both \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) as the weakly fair adversarial scheduler may exploit those to interfere with the executions of the individual protocols; see our treatment of this issue in Sec. 4.2.2, 6.1, and 5.1.3.
## 4 The Runtime of Adversarially Scheduled Executions
So far, the literature on CRN (and population) protocols operating under an adversarial scheduler focused mainly on computability, leaving aside, for the most part, complexity considerations.17 This is arguably unavoidable when working with the strong fairness condition, which is inherently oblivious to the chain of reactions that realizes the reachability of one configuration from another. In the current paper, however, we adopt the weak fairness condition, which facilitates the definition of a quantitative measure for the runtime of adversarially scheduled executions; this section is dedicated to that definition. (Refer to Fig. 1c-4c for illustrations of the notions presented in this section.)
Consider a stably (resp., haltingly) correct CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) and recall that every weakly fair valid execution of \(\Pi\) is guaranteed to stabilize (resp., halt). We make extensive use of the following operator: Given a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\), a step \(t\geq 0\), and a reaction (sub)set \(Q\subseteq\mathcal{R}\), let \(\tau(\eta,t,Q)\) be the earliest step \(s>t\) such that at least one of the following two conditions is satisfied:
(I) \(\alpha^{s-1}\in Q\); or
(II) \(Q\subseteq\bigcup_{t\leq t^{\prime}\leq s}\overline{\mathrm{app}}(\mathbf{c} ^{t^{\prime}})\).
(This operator is well defined by the weak fairness of \(\eta\).)
Intuitively, we think of the operator \(\tau(\eta,t,Q)\) as a process that tracks \(\eta\) from step \(t\) onward and stops once any \(Q\) reaction is scheduled (condition (I)). This by itself is not well defined as the scheduler may avoid scheduling the \(Q\) reactions from step \(t\) onward. However, the scheduler must prevent the starvation of any continuously applicable reaction in \(Q\), so we also stop the \(\tau\)-process once the adversary "fulfills this commitment" (condition (II)).
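Over a finite execution prefix, the \(\tau\)-process can be sketched as follows, assuming the stopping step falls within the prefix (which weak fairness guarantees for long enough prefixes). Reactions are encoded as (reactants, products) tuples; the toy protocol and schedule are illustrative.

```python
from collections import Counter

ALPHA = (("A", "B"), ("B", "B"))  # A + B -> 2B
VOID = (("B", "B"), ("B", "B"))   # a void reaction

def app(c, reaction):
    need = Counter(reaction[0])
    return all(c[s] >= k for s, k in need.items())

def tau(eta, t, Q):
    # eta is a list of (configuration, scheduled reaction) pairs;
    # pending holds the Q reactions not yet seen inapplicable within [t, s]
    pending = {a for a in Q if app(eta[t][0], a)}
    s = t + 1
    while True:
        if eta[s - 1][1] in Q:   # condition (I): some Q reaction was scheduled
            return s
        pending = {a for a in pending if app(eta[s][0], a)}
        if not pending:          # condition (II): all of Q seen inapplicable
            return s
        s += 1

c0 = Counter({"A": 1, "B": 2})
c1 = Counter({"B": 3})
eta = [(c0, VOID), (c0, VOID), (c0, ALPHA), (c1, VOID), (c1, VOID), (c1, VOID)]
```

In the sample schedule, `tau(eta, 0, {ALPHA})` stops via condition (I) when `ALPHA` is scheduled in step 2, whereas `tau(eta, 3, {ALPHA})` stops via condition (II) since `ALPHA` is inapplicable from step 3 onward.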
The Policies.Our runtime measure is based on partitioning a given weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) into _rounds_. This is done by means of two policies: a _runtime policy_\(\varrho\), determined by the protocol designer, that maps each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) to a non-void reaction (sub)set \(\varrho(\mathbf{c})\subseteq\mathrm{NV}(\mathcal{R})\), referred to as the _target reaction set_ of \(\mathbf{c}\) under \(\varrho\); and a _skipping policy_\(\sigma\), determined by the adversarial scheduler (in conjunction with the execution \(\eta\)), that maps each step \(t\geq 0\) to a step \(\sigma(t)\geq t\).
Round \(i=0,1,\dots\) spans the step interval \([t(i),t(i+1))\) and includes a designated _effective step_\(t(i)\leq t_{\mathrm{e}}(i)<t(i+1)\). The partition of execution \(\eta\) into rounds is defined inductively by setting
\[t(i)\,=\,\begin{cases}0\,,&i=0\\ \tau\left(\eta,t_{\mathrm{e}}(i-1),\varrho\left(\mathbf{c}^{t_{\mathrm{e}}(i- 1)}\right)\right)\,,&i>0\end{cases}\qquad\text{and}\qquad t_{\mathrm{e}}(i)\,= \,\sigma(t(i))\,.\]
Put differently, for every round \(i\geq 0\) with initial step \(t(i)\), the adversarial scheduler first determines the round's effective step \(t_{\mathrm{e}}(i)=\sigma(t(i))\geq t(i)\) by means of the skipping policy \(\sigma\). Following that, we apply the runtime policy \(\varrho\) (chosen by the protocol designer) to the configuration \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\), referred to as the round's _effective configuration_, and obtain the target reaction set \(Q=\varrho(\mathbf{e}^{i})\). The latter is then plugged into the operator \(\tau\) to determine \(t(i+1)=\tau(\eta,t_{\mathrm{e}}(i),Q)\). Round \(i\) is said to be _target-accomplished_ if \(\alpha^{t(i+1)-1}\in Q\); otherwise, it is said to be _target-deprived_.
Our definition of the runtime policy \(\varrho\) does not require that the reactions included in the target reaction set \(\varrho(\mathbf{c})\) are applicable to the configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\). Notice though that if \(\varrho(\mathbf{c})\subseteq\overline{\mathrm{app}}(\mathbf{c})\), i.e., all target reactions are inapplicable to \(\mathbf{c}\) (which is bound to be the case if \(\mathbf{c}\) is halting), then a round whose effective configuration is \(\mathbf{c}\) is destined to be target-deprived and to end immediately after the effective step, regardless of the reaction scheduled in that step. In Sec. 7, we investigate several other "natural restrictions" of the runtime policy definition, including fixed policies and singleton target reaction sets, showing that they all lead to significant efficiency loss.
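Putting the two policies together, the inductive partition into rounds can be sketched on top of the \(\tau\)-process; the policies \(\varrho\) and \(\sigma\) below are simple illustrative choices (always target the single non-void reaction; skip zero or one step), not the paper's.

```python
from collections import Counter

ALPHA = (("A", "B"), ("B", "B"))  # A + B -> 2B
VOID = (("B", "B"), ("B", "B"))   # a void reaction

def app(c, reaction):
    need = Counter(reaction[0])
    return all(c[s] >= k for s, k in need.items())

def tau(eta, t, Q):
    pending = {a for a in Q if app(eta[t][0], a)}
    s = t + 1
    while True:
        if eta[s - 1][1] in Q:
            return s
        pending = {a for a in pending if app(eta[s][0], a)}
        if not pending:
            return s
        s += 1

def rounds(eta, rho, sigma, num_rounds):
    # returns (t(i), t_e(i), t(i+1)) triples for the first num_rounds rounds
    t, out = 0, []
    for _ in range(num_rounds):
        t_e = sigma(t)               # the adversary picks the effective step
        Q = rho(eta[t_e][0])         # target set of the effective configuration
        t_next = tau(eta, t_e, Q)
        out.append((t, t_e, t_next))
        t = t_next
    return out

c0 = Counter({"A": 1, "B": 2})
c1 = Counter({"B": 3})
eta = [(c0, VOID), (c0, VOID), (c0, ALPHA), (c1, VOID), (c1, VOID), (c1, VOID)]
rho = lambda c: {ALPHA}              # runtime policy: always target ALPHA
```

With the identity skipping policy, round 0 is target-accomplished (it ends when `ALPHA` fires) and round 1 is target-deprived (`ALPHA` has become inapplicable).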
Temporal Cost.We define the _temporal cost_ of a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) under a runtime policy \(\varrho\), denoted by \(\operatorname{TC}^{\varrho}(\mathbf{c})\), as follows: Let \(\eta_{r}=\langle\mathbf{c}_{r}^{t},\alpha_{r}^{t}\rangle_{t\geq 0}\) be a stochastic execution emerging from the initial configuration \(\mathbf{c}_{r}^{0}=\mathbf{c}\) and define
\[\operatorname{TC}^{\varrho}(\mathbf{c})\,\triangleq\,\mathbb{E}\left(\sum_{t=0}^{\tau(\eta_{r},0,\varrho(\mathbf{c}))-1}1/\pi_{\mathbf{c}_{r}^{t}}\right)\,=\,\Theta(1/\|\mathbf{c}\|)\cdot\mathbb{E}\left(\tau(\eta_{r},0,\varrho(\mathbf{c}))\right)\,,\]
where the expectation is over the random choice of \(\eta_{r}\) and the second transition is due to (1). That is, the temporal cost of \(\mathbf{c}\) under \(\varrho\) is defined to be the expected stochastic runtime of round \(0\) of \(\eta_{r}\) with respect to the runtime policy \(\varrho\) and the identity skipping policy \(\sigma_{\mathrm{id}}\) that maps each step \(t\geq 0\) to \(\sigma_{\mathrm{id}}(t)=t\) (which means that the effective step of each round is its initial step). The following observation stems from the Markovian nature of the stochastic scheduler.
Fix an (arbitrary) runtime policy \(\varrho\). Let \(\eta_{r}=\langle\mathbf{c}_{r}^{t},\alpha_{r}^{t}\rangle_{t\geq 0}\) be a stochastic execution and let \(t(i)\) be the initial step of round \(i\geq 0\) under \(\varrho\) and the identity skipping policy \(\sigma_{\mathrm{id}}\). For each \(i\geq 0\), conditioned on \(\mathbf{c}_{r}^{t(i)}\), the expected stochastic runtime of \(\langle\mathbf{c}_{r}^{t},\alpha_{r}^{t}\rangle_{t(i)\leq t<t(i+1)}\) is equal to \(\operatorname{TC}^{\varrho}(\mathbf{c}_{r}^{t(i)})\).
Execution Runtime.Consider a runtime policy \(\varrho\) and a skipping policy \(\sigma\). Let \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) be a weakly fair valid execution and let \(t(i)\), \(t_{\mathrm{e}}(i)\), and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) be the initial step, effective step, and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). Fix some step \(t^{*}\geq 0\) and consider the execution prefix \(\eta^{*}=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{0\leq t<t^{*}}\). We define the _(adversarial) runtime_ of \(\eta^{*}\) under \(\varrho\) and \(\sigma\), denoted by \(\operatorname{RT}^{\varrho,\sigma}(\eta^{*})\), by taking \(i^{*}=\min\{i\geq 0\mid t(i)\geq t^{*}\}\) and setting
\[\operatorname{RT}^{\varrho,\sigma}(\eta^{*})\,\triangleq\,\sum_{i=0}^{i^{*}-1 }\operatorname{TC}^{\varrho}\left(\mathbf{e}^{i}\right)\,.\]
The _stabilization runtime_ (resp., _halting runtime_) of the (entire) execution \(\eta\) under \(\varrho\) and \(\sigma\), denoted by \(\operatorname{RT}^{\varrho,\sigma}_{\mathrm{stab}}(\eta)\) (resp., \(\operatorname{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\)), is defined to be \(\operatorname{RT}^{\varrho,\sigma}\left(\langle\mathbf{c}^{t},\alpha^{t} \rangle_{0\leq t<t^{*}}\right)\), where \(t^{*}\geq 0\) is the stabilization (resp., halting) step of \(\eta\). In other words, we use \(\varrho\) and \(\sigma\) to partition \(\eta\) into rounds and mark the effective steps. Following that, we charge each round \(i\) that starts before step \(t^{*}\) according to the temporal cost (under \(\varrho\)) of its effective configuration \(\mathbf{e}^{i}\).
Looking at it from another angle, by employing its skipping policy \(\sigma\), the adversarial scheduler determines the sequence \(\mathbf{e}^{0},\mathbf{e}^{1},\dots\) of effective configurations according to which the temporal cost \(\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\) of each round \(i\geq 0\) is calculated. By choosing an appropriate runtime policy \(\varrho\), the protocol designer may (1) ensure that progress is made from one effective configuration to the next, thus advancing \(\eta\) towards round \(i^{*}=\min\{i\geq 0\mid t(i)\geq t^{*}\}\); and (2) bound the contribution \(\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\) of each round \(0\leq i<i^{*}\) to the stabilization runtime \(\operatorname{RT}^{\varrho,\sigma}_{\mathrm{stab}}(\eta)\) (resp., halting runtime \(\operatorname{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\)). The crux of our runtime definition is that this contribution depends only on the effective configuration \(\mathbf{e}^{i}\), irrespective of how round \(i\) actually develops (see, e.g., Fig. 0(c)).
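End to end, the charging scheme reads as follows in code; `tc` is a placeholder oracle for the temporal cost (a faithful version would estimate the expectation \(\operatorname{TC}^{\varrho}\) by Monte Carlo simulation of stochastic executions), and the toy protocol, policies, and stabilization step are all illustrative.

```python
from collections import Counter

ALPHA = (("A", "B"), ("B", "B"))  # A + B -> 2B
VOID = (("B", "B"), ("B", "B"))   # a void reaction

def app(c, reaction):
    need = Counter(reaction[0])
    return all(c[s] >= k for s, k in need.items())

def tau(eta, t, Q):
    pending = {a for a in Q if app(eta[t][0], a)}
    s = t + 1
    while True:
        if eta[s - 1][1] in Q:
            return s
        pending = {a for a in pending if app(eta[s][0], a)}
        if not pending:
            return s
        s += 1

def runtime(eta, rho, sigma, tc, t_star):
    # RT^{rho,sigma}: sum the temporal costs of the effective configurations
    # of all rounds whose initial step precedes t_star
    t, total = 0, 0.0
    while t < t_star:
        t_e = sigma(t)
        total += tc(eta[t_e][0])     # charge round i via its effective config
        t = tau(eta, t_e, rho(eta[t_e][0]))
    return total

c0 = Counter({"A": 1, "B": 2})
c1 = Counter({"B": 3})
eta = [(c0, VOID), (c0, VOID), (c0, ALPHA), (c1, VOID), (c1, VOID), (c1, VOID)]
rho = lambda c: {ALPHA}
tc = lambda c: 1.0 / sum(c.values())  # placeholder: Theta(1/||c||) per round
```

Taking \(t^{*}=3\) charges only round 0, while \(t^{*}=4\) also charges round 1; in both cases the charge depends solely on the rounds' effective configurations.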
Using this viewpoint, it is interesting to revisit the definitions of Awerbuch [7] and Dolev et al. [20] for the runtime of an asynchronous distributed protocol \(\mathcal{P}\). Following the discussion in Sec. 1, this runtime is defined as the length of the longest sequence \(\mathbf{e}^{0},\mathbf{e}^{1},\dots,\mathbf{e}^{i^{*}-1}\) of "non-terminal" configurations (of \(\mathcal{P}\)) such that \(\mathbf{e}^{i}\) is reachable from \(\mathbf{e}^{i-1}\) by an execution interval that lasts for at least one round (according to the respective definitions of [7] and [20]). Our adversarial runtime notion is defined in the same manner, taking \(\mathbf{e}^{0},\mathbf{e}^{1},\dots,\mathbf{e}^{i^{*}-1}\) to be the first \(i^{*}\) effective (CRN) configurations, only that we charge each configuration \(\mathbf{e}^{i}\) according to its temporal cost (rather than one "runtime unit" as in [7] and [20]). This difference is consistent with the different "idealized benchmarks": a synchronous
schedule in [7] and [20] vs. a stochastic execution in the current paper. The skipping policy \(\sigma\) plays a key role in adversarially generating the sequence \(\mathbf{e}^{0},\mathbf{e}^{1},\ldots,\mathbf{e}^{i^{*}-1}\) as it "decouples" between the last step of round \(i\), determined by the runtime policy \(\varrho\), and the effective configuration \(\mathbf{e}^{i+1}\) of round \(i+1\) (see, e.g., Fig. 3(c)).
The Runtime Function.For \(n\geq 1\), let \(\mathcal{F}(n)\) denote the set of weakly fair valid executions \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) of initial molecular count \(\|\mathbf{c}^{0}\|=n\). The _stabilization runtime_ (resp., _halting runtime_) of the CRN protocol \(\Pi\) for executions in \(\mathcal{F}(n)\), denoted by \(\operatorname{RT}^{\Pi}_{\mathrm{stab}}(n)\) (resp., \(\operatorname{RT}^{\Pi}_{\mathrm{halt}}(n)\)), is defined to be
\[\operatorname{RT}^{\Pi}_{\mathrm{x}}(n)\,\triangleq\,\min_{\varrho}\max_{\eta \in\mathcal{F}(n),\,\sigma}\,\operatorname{RT}^{\varrho,\sigma}_{\mathrm{x}}( \eta)\,,\]
where \(\mathrm{x}\) serves as a placeholder for \(\mathrm{stab}\) (resp., halt). This formalizes the responsibility of the protocol designer to specify a runtime policy \(\varrho\), in conjunction with the protocol \(\Pi\), used for up-bounding \(\Pi\)'s stabilization (resp., halting) runtime (see, e.g., Fig. 0(c)).
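To make the per-round charging behind this min–max definition concrete, the following Python sketch (a toy illustration only; the configuration names and propensities are hypothetical, not taken from any protocol in the paper) charges each round \(i\) the temporal cost \(1/p_i\) determined by its effective configuration alone and sums the charges over the rounds preceding stabilization.

```python
from fractions import Fraction

def adversarial_runtime(effective_configs, temporal_cost):
    """Sum the per-round charges TC(e^i) over the effective configurations
    e^0, ..., e^{i*-1} preceding stabilization; the charge of round i
    depends only on e^i, not on how the round actually develops."""
    return sum(temporal_cost(e) for e in effective_configs)

# Hypothetical propensities of the target reaction sets at each effective
# configuration; the temporal cost of a round is their reciprocal.
propensity = {"e0": Fraction(1, 2), "e1": Fraction(1, 4), "e2": Fraction(1, 8)}
tc = lambda e: 1 / propensity[e]

print(adversarial_runtime(["e0", "e1", "e2"], tc))  # 2 + 4 + 8 = 14
```

The adversary's freedom is confined to picking the sequence of effective configurations; once that sequence is fixed, the runtime is fully determined by the per-round charges.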
The following two lemmas establish the soundness of our adversarial runtime definition: Lem. 3 ensures that the stabilization (resp., halting) runtime function is well defined;18 its proof relies on some tools introduced in Sec. 4.2.1 and is therefore deferred to that section. In the subsequent lemma, we show that if the scheduler generates the execution stochastically, then our (adversarial) runtime measure agrees in expectation with the stochastic runtime measure.
Footnote 18: Note that in Lem. 3 we use a universal runtime policy that applies to all choices of the initial molecular count \(n\). This is stronger in principle than what the runtime definition actually requires.
Consider a stably (resp., haltingly) correct protocol \(\Pi=(\mathcal{S},\mathcal{R})\). There exists a runtime policy \(\varrho\) such that for every integer \(n\geq 1\), execution \(\eta\in\mathcal{F}(n)\), and skipping policy \(\sigma\), the stabilization runtime \(\operatorname{RT}^{\varrho,\sigma}_{\mathrm{stab}}(\eta)\) (resp., halting runtime \(\operatorname{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\)) is up-bounded as a function of \(n\).
Consider a stably (resp., haltingly) correct protocol \(\Pi=(\mathcal{S},\mathcal{R})\). Let \(\eta_{r}=\langle\mathbf{c}^{t}_{r},\alpha^{t}_{r}\rangle_{t\geq 0}\) be a stochastic execution emerging from a valid initial configuration \(\mathbf{c}^{0}_{r}\) and let \(t^{*}\geq 0\) be the stabilization (resp., halting) step of \(\eta_{r}\). Then,
\[\min_{\varrho}\mathbb{E}_{\eta_{r}}\left(\max_{\sigma}\operatorname{RT}^{ \varrho,\sigma}_{\mathrm{x}}(\eta_{r})\right)\,=\,\mathbb{E}_{\eta_{r}}\left( \sum_{t=0}^{t^{*}-1}1/\pi_{\mathbf{c}^{t}_{r}}\right)\,,\]
where \(\mathrm{x}\) serves as a placeholder for \(\mathrm{stab}\) (resp., \(\mathrm{halt}\)).
Proof.: Let \(\varrho_{\mathrm{f}}\) be the "full" runtime policy that maps each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) to \(\varrho_{\mathrm{f}}(\mathbf{c})=\operatorname{NV}(\mathcal{R})\) and let \(\sigma_{\mathrm{id}}\) be the identity skipping policy. We establish the assertion by proving the following three claims:
(C1) \(\mathbb{E}_{\eta_{r}}\left(\operatorname{RT}^{\varrho_{\mathrm{f}},\sigma_{ \mathrm{id}}}_{\mathrm{x}}(\eta_{r})\right)=\mathbb{E}_{\eta_{r}}\left(\sum_{ t=0}^{t^{*}-1}1/\pi_{\mathbf{c}^{t}_{r}}\right)\);
(C2) \(\operatorname{RT}^{\varrho_{\mathrm{f}},\sigma_{\mathrm{id}}}_{\mathrm{x}}( \eta)\geq\operatorname{RT}^{\varrho_{\mathrm{f}},\sigma}_{\mathrm{x}}(\eta)\) for every execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\in\mathcal{F}(n)\) and skipping policy \(\sigma\); and
(C3) \(\mathbb{E}_{\eta_{r}}\left(\operatorname{RT}^{\varrho,\sigma_{ \mathrm{id}}}_{\mathrm{x}}(\eta_{r})\right)\geq\mathbb{E}_{\eta_{r}}\left(\sum_ {t=0}^{t^{*}-1}1/\pi_{\mathbf{c}^{t}_{r}}\right)\) for every runtime policy \(\varrho\).
Indeed, claims (C1) and (C2) imply that
\[\min_{\varrho}\mathbb{E}_{\eta_{r}}\left(\max_{\sigma}\operatorname{RT}^{\varrho,\sigma}_{\mathrm{x}}(\eta_{r})\right)\,\leq\,\mathbb{E}_{\eta_{r}}\left(\max_{\sigma}\operatorname{RT}^{\varrho_{\mathrm{f}},\sigma}_{\mathrm{x}}(\eta_{r})\right)\,=\,\mathbb{E}_{\eta_{r}}\left(\operatorname{RT}^{\varrho_{\mathrm{f}},\sigma_{\mathrm{id}}}_{\mathrm{x}}(\eta_{r})\right)\,=\,\mathbb{E}_{\eta_{r}}\left(\sum_{t=0}^{t^{*}-1}1/\pi_{\mathbf{c}^{t}_{r}}\right)\,,\]
whereas claim (C3) yields
\[\min_{\varrho}\mathbb{E}_{\eta_{r}}\left(\max_{\sigma}\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma}(\eta_{r})\right)\,\geq\,\min_{\varrho}\mathbb{E}_{\eta_{r}}\left(\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma_{\mathrm{id}}}(\eta_{r})\right)\,\geq\,\mathbb{E}_{\eta_{r}}\left(\sum_{t=0}^{t^{*}-1}1/\pi_{\mathbf{c}^{t}_{r}}\right)\,.\]
To prove the three claims, we start by deducing that claim (C3) follows from Obs. 5, observing that the inequality may become strict (only) due to the excessive contribution to \(\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma_{\mathrm{id}}}(\eta_{r})\) of the temporal cost charged to the (unique) round \(i\geq 0\) that satisfies \(t(i)<t^{*}<t(i+1)\) (if such a round exists). As the target reaction sets exclude void reactions, we conclude by the definition of operator \(\tau\) that under \(\varrho_{\mathrm{f}}\) and \(\sigma_{\mathrm{id}}\), there must exist a round \(i\geq 0\) such that \(t^{*}=t(i)\), thus obtaining claim (C1). For claim (C2), it suffices to observe that under \(\varrho_{\mathrm{f}}\), it holds that
\[t(i+1)\,=\,\min\{t^{\prime}>\mathpzc{t}_{\mathrm{e}}(i)\mid\alpha^{t^{\prime}- 1}\in\mathrm{NV}(\mathcal{R})\}\,=\,\min\{t^{\prime}>\mathpzc{t}_{\mathrm{e}}( i)\mid\mathbf{c}^{t^{\prime}}\neq\mathbf{c}^{\mathpzc{t}_{\mathrm{e}}(i)}\}\]
for every round \(i\geq 0\).
### Speed Faults
Consider a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) which is stably (resp., haltingly) correct with respect to an interface \(\mathcal{I}=(\mathcal{U},\mu,\mathcal{C})\). For a valid initial configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\), let \(Z_{\mathcal{I}}(\mathbf{c}^{0})=\{\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\mid( \mu(\mathbf{c}^{0}),\mu(\mathbf{c}))\in\mathcal{C}\}\) and recall that if a weakly fair execution \(\eta\) of \(\Pi\) emerges from \(\mathbf{c}^{0}\), then \(\eta\) is guaranteed to reach \(\mathrm{stab}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\) (resp., \(\mathrm{halt}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\)).
Given a parameter \(s>0\), a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) is said to be a _stabilization \(s\)-pitfall_ (resp., a _halting \(s\)-pitfall_) of the valid initial configuration \(\mathbf{c}^{0}\) if \(\mathbf{c}^{0}\stackrel{{*}}{{\rightharpoonup}}\mathbf{c}\) and every path from \(\mathbf{c}\) to \(\mathrm{stab}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\) (resp., \(\mathrm{halt}(Z_{\mathcal{I}}(\mathbf{c}^{0}))\)) in the digraph \(D^{\Pi}\) includes (an edge labeled by) a reaction whose propensity is at most \(s/\varphi\) (see, e.g., Fig. 2c and 4c). When \(s=O(1)\), we often omit the parameter and refer to \(\mathbf{c}\) simply as a _stabilization pitfall_ (resp., _halting pitfall_). Following the definition of Chen et al. [12], we say that an infinite family \(\mathbf{C}^{0}\) of valid initial configurations has a _stabilization speed fault_ (resp., _halting speed fault_) if for every integer \(n_{0}>0\), there exists a configuration \(\mathbf{c}^{0}\in\mathbf{C}^{0}\) of molecular count \(\|\mathbf{c}^{0}\|=n\geq n_{0}\) that admits a stabilization (resp., halting) pitfall.
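In graph terms, \(\mathbf{c}\) is an \(s\)-pitfall exactly when no stabilizing (resp., halting) configuration is reachable from \(\mathbf{c}\) using only "fast" edges, i.e., reactions of propensity greater than \(s/\varphi\). The following Python sketch checks this by a restricted graph search; the digraph, propensities, and configuration names below are toy stand-ins, not taken from any concrete protocol.

```python
def is_pitfall(edges, propensity, c, targets, s, phi):
    """Decide whether c is an s-pitfall: search the configuration digraph
    from c, traversing an edge only if its reaction's propensity exceeds
    s/phi; c is a pitfall iff no target configuration is reached."""
    seen, stack = {c}, [c]
    while stack:
        u = stack.pop()
        if u in targets:
            return False        # a fast path to stabilization exists
        for reaction, v in edges(u):
            if propensity(u, reaction) > s / phi and v not in seen:
                seen.add(v)
                stack.append(v)
    return True                 # every path crosses a slow reaction

# Toy digraph: c0 -fast-> c -slow-> stab; the only route to stabilization
# crosses a reaction of propensity 0.001.
digraph = {"c0": [("fast", "c")], "c": [("slow", "stab")], "stab": []}
prop = lambda u, r: 5.0 if r == "fast" else 0.001
```

With \(s/\varphi=0.01\) the slow reaction blocks the search and `c0` is a pitfall; shrinking \(s/\varphi\) below \(0.001\) unblocks it.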
If an infinite family \(\mathbf{C}^{0}\) of valid initial configurations has a stabilization (resp., halting) speed fault, then for every integer \(n_{0}>0\), there exist a configuration \(\mathbf{c}^{0}\in\mathbf{C}^{0}\) of molecular count \(\|\mathbf{c}^{0}\|=n\geq n_{0}\), a weakly fair execution \(\eta\) emerging from \(\mathbf{c}^{0}\), and a skipping policy \(\sigma\), such that \(\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma}(\eta)\geq\Omega(n)\) for every runtime policy \(\varrho\), where \(\mathrm{x}\) serves as a placeholder for \(\mathrm{stab}\) (resp., \(\mathrm{halt}\)).19
Footnote 19: As discussed in [12], a speed fault does not imply an \(\Omega(n)\) lower bound on the (stochastic) runtime of stochastically scheduled executions since the probability of reaching a pitfall configuration may be small.
Proof.: Let \(\mathbf{c}^{0}\) be a configuration in \(\mathbf{C}^{0}\) of molecular count \(\|\mathbf{c}^{0}\|=n\geq n_{0}\) that admits a stabilization (resp., halting) pitfall \(\mathbf{c}\). As observed by Chen et al. [12], a stochastically scheduled execution emerging from \(\mathbf{c}\) needs, in expectation, at least \(\Omega(n)\) time to stabilize (resp., halt). Therefore, Lem. 7 implies that there exist a weakly fair execution \(\eta_{\mathbf{c}}\) emerging from \(\mathbf{c}\) and a skipping policy \(\sigma_{\mathbf{c}}\) such that \(\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma_{\mathbf{c}}}(\eta_{\mathbf{c}})\geq\Omega(n)\) for every runtime policy \(\varrho\). As \(\mathbf{c}^{0}\stackrel{{*}}{{\rightharpoonup}}\mathbf{c}\), we can devise \(\eta\) and \(\sigma\) so that \(\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma}(\eta)=\mathrm{RT}_{\mathrm{x}}^{\varrho,\sigma_{\mathbf{c}}}(\eta_{\mathbf{c}})\), thus establishing the assertion.
### Useful Toolbox for Runtime Analyses
#### Bounding the Temporal Cost by means of \(\varrho\)-Avoiding Paths
The following definition plays a key role in the runtime analysis of the CRN protocols presented in the sequel. Given a runtime policy \(\varrho\) and two configurations \(\mathbf{c},\mathbf{c}^{\prime}\in\mathbb{N}^{\mathcal{S}}\), we say that \(\mathbf{c}^{\prime}\) is reachable from \(\mathbf{c}\) via a _\(\varrho\)-avoiding path_, denoted by \(\mathbf{c}\stackrel{{*}}{{\rightharpoonup}}_{\langle\varrho\rangle} \mathbf{c}^{\prime}\), if there exists a path \(P\) from \(\mathbf{c}\) to \(\mathbf{c}^{\prime}\) in the configuration digraph \(D^{\Pi}\) of \(\Pi\) that satisfies (1) all edges in \(P\) are labeled by the reactions in \(\mathcal{R}-\varrho(\mathbf{c})\); and (2) there exists some (at least one) reaction \(\alpha\in\varrho(\mathbf{c})\) such that \(\alpha\in\operatorname{app}(\hat{\mathbf{c}})\) for every configuration \(\hat{\mathbf{c}}\) in \(P\). Equivalently, the relation \(\mathbf{c}\stackrel{{*}}{{\rightharpoonup}}_{\langle\varrho\rangle} \mathbf{c}^{\prime}\) holds if (and only if) there exists a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\), a skipping policy \(\sigma\), and a round \(i\geq 0\) (defined with respect to \(\varrho\) and \(\sigma\)) such that \(\mathbf{c}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) and \(\mathbf{c}^{\prime}=\mathbf{c}^{t^{\prime}}\) for some \(t_{\mathrm{e}}(i)\leq t^{\prime}<t(i+1)\).
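Reachability via a \(\varrho\)-avoiding path can be computed by a search that fixes one target reaction \(\alpha\in\varrho(\mathbf{c})\), traverses only edges labeled by reactions outside \(\varrho(\mathbf{c})\), and discards configurations in which \(\alpha\) is no longer applicable. The Python sketch below does this over a toy configuration digraph (all names and the applicability map are hypothetical illustrations).

```python
from collections import deque

def rho_avoiding_reachable(edges, app, c, rho_c):
    """Configurations c' with c ->*_(rho) c': condition (1) forbids edges
    labeled by reactions in rho(c); condition (2) requires some single
    reaction alpha in rho(c) to be applicable along the entire path."""
    reach = set()
    for alpha in rho_c:                         # try each candidate alpha
        if alpha not in app(c):
            continue
        seen, queue = {c}, deque([c])
        while queue:
            u = queue.popleft()
            reach.add(u)
            for reaction, v in edges(u):
                if reaction not in rho_c and alpha in app(v) and v not in seen:
                    seen.add(v)
                    queue.append(v)
    return reach

# Toy digraph: the edge labeled "alpha" is in rho(c) and hence avoided;
# "c2" is excluded because alpha stops being applicable there.
digraph = {"c": [("r1", "c1"), ("alpha", "good")], "c1": [("r1", "c2")],
           "c2": [], "good": []}
app = lambda u: set() if u == "c2" else {"alpha"}
```

On this toy instance, only `c` and `c1` are reachable from `c` via a \(\varrho\)-avoiding path.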
The usefulness of the notion of reachability via avoiding paths is manifested in the following important lemma. Its proof is fairly straightforward under the continuous time Markov process formulation of the standard stochastic model [24]; for completeness, we provide, in Appendix A, a proof for the discrete scheduler interpretation adopted in the current paper.
Consider a runtime policy \(\varrho\) and a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) and assume that \(\pi_{\mathbf{c}^{\prime}}(\varrho(\mathbf{c}))\geq p\) for every configuration \(\mathbf{c}^{\prime}\in\mathbb{N}^{\mathcal{S}}\) such that \(\mathbf{c}\stackrel{{*}}{{\rightharpoonup}}_{\langle\varrho\rangle} \mathbf{c}^{\prime}\). Then, the temporal cost of \(\mathbf{c}\) under \(\varrho\) is up-bounded as \(\operatorname{TC}^{\varrho}(\mathbf{c})\leq 1/p\).
Employing Lem. 10, we can now establish Lem. 11.
Proof of Lem. 11.: Let \(\mathcal{L}(n)\subset\mathbb{N}^{\mathcal{S}}\) denote the set of configurations \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) of molecular count \(\|\mathbf{c}^{0}\|=n\) that are valid as initial configurations (recall that \(\mathcal{F}(n)\) is the set of weakly fair executions emerging from initial configurations in \(\mathcal{L}(n)\)). Fix an integer \(n\geq 1\) and a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) which is reachable from some (at least one) valid initial configuration in \(\mathcal{L}(n)\) and let \(S\) be the component of \(\mathbf{c}\) in the configuration digraph \(D^{\Pi}\). If \(S\) does not admit any escaping reaction, then Lem. 1 implies that any execution in \(\mathcal{F}(n)\) that reaches \(\mathbf{c}\) has already stabilized (resp., halted). Therefore, we can take \(\varrho(\mathbf{c})\) to be an arbitrary reaction set as this choice does not affect \(\operatorname{RT}^{\varrho}_{\mathrm{stab}}(\eta)\) (resp., \(\operatorname{RT}^{\varrho}_{\mathrm{halt}}(\eta)\)).
So, assume that \(S\) admits a non-empty set \(Q\subseteq\mathcal{R}\) of escaping reactions. By setting \(\varrho(\mathbf{c})=Q^{\prime}\) for an arbitrary reaction set \(\emptyset\subset Q^{\prime}\subseteq Q\), we ensure that for any execution \(\eta\in\mathcal{F}(n)\), if \(\mathbf{c}\) is the effective configuration of a round of \(\eta\), then by the time the next round begins, \(\eta\) no longer resides in \(S\). The assumption that \(\Pi\) respects finite density implies that the number of components of \(D^{\Pi}\) that \(\eta\) goes through before it stabilizes (resp., halts) is up-bounded as a function of \(n\). As the propensity of any non-empty set of applicable reactions is at least \(1/\varphi=\Theta(1/n)\), we conclude by Lem. 10 that each such component contributes \(O(n)\) to \(\operatorname{RT}^{\varrho}_{\mathrm{stab}}(\eta)\) (resp., \(\operatorname{RT}^{\varrho}_{\mathrm{halt}}(\eta)\)), thus establishing the assertion.
#### The Ignition Gadget
It is often convenient to design CRN protocols so that the molecules present in the initial configuration belong to designated species whose role is to set the execution into motion by transforming into the actual species that participate in the protocol. As this design feature is widespread in the protocols presented in the sequel, we introduce it here as a standalone _ignition gadget_ so that it can be used subsequently in a "black box" manner.
Formally, the ignition gadget of a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) is defined over a set \(\mathcal{S}_{\mathrm{ignt}}\subset\mathcal{S}\) of _ignition species_, referring to the species in \(\mathcal{S}-\mathcal{S}_{\mathrm{ignt}}\) as _working species_. Each ignition
species \(A\in\mathcal{S}_{\rm{ignt}}\) is associated with a unimolecular _ignition reaction_\(\iota_{A}:A\to W^{1}_{A}+\cdots+W^{k_{A}}_{A}\), where \(W^{1}_{A}+\cdots+W^{k_{A}}_{A}\in\mathbb{N}^{\mathcal{S}-\mathcal{S}_{\rm{ignt}}}\) is a multiset (or vector) of working species. The ignition gadget requires that besides \(\iota_{A}\), any reaction in which the ignition species \(A\) participates, as a reactant or as a product, is a void reaction; that is, \(\mathbf{r}(A)=\mathbf{p}(A)=0\) for every \((\mathbf{r},\mathbf{p})\in\mathrm{NV}(\mathcal{R})-\{\iota_{A}\}\).
Given a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) of protocol \(\Pi\), we say that the ignition gadget is _mature_ in step \(t\geq 0\) if \(\mathbf{c}^{t}(\mathcal{S}_{\rm{ignt}})=0\), observing that this means that the ignition gadget is mature in any step \(t^{\prime}\geq t\).
Let \(\varrho\) be a runtime policy for protocol \(\Pi\), designed so that \(\varrho(\mathbf{c})=\{\iota_{A}\mid A\in\mathcal{S}_{\rm{ignt}}\}\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) with \(\mathbf{c}(\mathcal{S}_{\rm{ignt}})>0\). Then, for every weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) and skipping policy \(\sigma\), there exists a step \(t_{\rm{ignt}}\geq 0\) such that \(\eta\) is mature in step \(t_{\rm{ignt}}\). Moreover, it is guaranteed that \(\mathrm{RT}^{\varrho,\sigma}(\langle\mathbf{c}^{t},\alpha^{t}\rangle_{0\leq t<t_{\rm{ignt}}})\leq O(\log n)\), where \(n=\|\mathbf{c}^{0}\|\) is \(\eta\)'s initial molecular count.
Proof.: The fact that step \(t_{\rm{ignt}}\) exists follows from weak fairness: the ignition reactions remain applicable as long as the molecular count of the ignition species is positive, and the ignition species are not produced by any non-void reaction.
To bound the runtime of the execution prefix \(\eta_{\rm{ignt}}=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{0\leq t<t_{\rm{ignt}}}\) under \(\varrho\) and \(\sigma\), let \(t(i)\) and \(\mathbf{e}^{i}\) be the initial step and effective configuration, respectively, of round \(i\geq 0\) and let \(i_{\rm{ignt}}=\min\{i\geq 0\mid t(i)\geq t_{\rm{ignt}}\}\). Fix some round \(0\leq i<i_{\rm{ignt}}\) and let \(\ell_{i}=\mathbf{e}^{i}(\mathcal{S}_{\rm{ignt}})\). The definition of the ignition gadget ensures that round \(i\) is target-accomplished with \(\ell_{i+1}<\ell_{i}\) and that \(\pi_{\mathbf{c}}(\varrho(\mathbf{e}^{i}))=\pi_{\mathbf{e}^{i}}(\varrho(\mathbf{e}^{i}))\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) reachable from \(\mathbf{e}^{i}\) via a \(\varrho\)-avoiding path. As \(\pi_{\mathbf{e}^{i}}(\varrho(\mathbf{e}^{i}))=\pi_{\mathbf{e}^{i}}(\{\iota_{A}\mid A\in\mathcal{S}_{\rm{ignt}}\})=\ell_{i}\), we can employ Lem. 9 to conclude that \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq 1/\ell_{i}\). Since \(\ell_{0}\leq n\), it follows that the runtime of \(\eta_{\rm{ignt}}\) under \(\varrho\) and \(\sigma\) is bounded as
\[\mathrm{RT}^{\varrho,\sigma}(\eta_{\rm{ignt}})\,=\,\sum_{i=0}^{i_{\rm{ignt}}-1 }\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\,\leq\,\sum_{\ell=1}^{n}1/\ell\,=\,O( \log n)\,,\]
thus establishing the assertion.
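The harmonic-sum bound in the proof is easy to check numerically: with \(\ell\) ignition molecules left, the target set \(\{\iota_{A}\mid A\in\mathcal{S}_{\rm ignt}\}\) has propensity \(\ell\), so the round is charged \(1/\ell\); summing over \(\ell=n,\dots,1\) yields the \(n\)-th harmonic number \(H_{n}=O(\log n)\). A small Python check (an illustrative computation, not part of the formal proof):

```python
from fractions import Fraction
import math

def ignition_charge(n):
    """Total temporal cost charged to the ignition phase in the worst case:
    one target-accomplished round per remaining ignition molecule, each
    charged the reciprocal of its propensity ell."""
    return sum(Fraction(1, ell) for ell in range(1, n + 1))

# H_n grows logarithmically in n, so the ignition phase stays cheap even
# when the entire initial configuration consists of ignition species.
print(float(ignition_charge(1000)))
```

For instance, \(H_{1000}\approx 7.49\) while \(n=1000\), matching the \(O(\log n)\) bound of the lemma.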
## 5 Predicate Decidability
An important class of CRN protocols is that of _chemical reaction deciders (CRDs)_ whose goal is to determine whether the initial molecular counts of certain species satisfy a given predicate. In its most general form (see [12, 11]), a CRD is a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) augmented with (1) a set \(\Sigma\subset\mathcal{S}\) of _input species_; (2) two disjoint sets \(\Upsilon_{0},\Upsilon_{1}\subset\mathcal{S}\) of _voter species_; (3) a designated _fuel species_\(F\in\mathcal{S}-\Sigma\); and (4) a fixed _initial context_\(\mathbf{k}\in\mathbb{N}^{\mathcal{S}-(\Sigma\cup\{F\})}\). To emphasize that the protocol \(\Pi\) is a CRD, we often write \(\Pi=(\mathcal{S},\mathcal{R},\Sigma,\Upsilon_{0},\Upsilon_{1},F,\mathbf{k})\). The CRD is said to be _leaderless_ if its initial context is the zero vector, i.e., \(\mathbf{k}=\mathbf{0}\).
A configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) is valid as an initial configuration of the CRD \(\Pi\) if \(\mathbf{c}^{0}|_{\mathcal{S}-(\Sigma\cup\{F\})}=\mathbf{k}\); to ensure that the initial molecular count \(\|\mathbf{c}^{0}\|\) is always at least \(1\) (especially when the CRD is leaderless), we also require that \(\mathbf{c}^{0}(F)\geq 1\). In other words, a valid initial configuration \(\mathbf{c}^{0}\) can be decomposed into an _input vector_\(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\in\mathbb{N}^{\Sigma}\), the initial context \(\mathbf{c}^{0}|_{\mathcal{S}-(\Sigma\cup\{F\})}=\mathbf{k}\), and any number \(\mathbf{c}^{0}(F)\geq 1\) of fuel molecules. We emphasize that in contrast to the initial context, the protocol designer has no control over the _exact_ molecular count of the fuel species in the initial configuration.
For \(v\in\{0,1\}\), let
\[\mathcal{D}_{v}\,=\,\left\{\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\mid\mathbf{c}( \Upsilon_{v})>0\wedge\mathbf{c}(\Upsilon_{1-v})=0\right\}\]
be the set of configurations that include a positive molecular count of \(v\)-voters and no \((1-v)\)-voters. An input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) is said to be _stably accepted_ (resp., _haltingly accepted_) by \(\Pi\) if for every valid initial configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) with \(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\), every weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) emerging from \(\mathbf{c}^{0}\) stabilizes (resp., halts) into \(\mathcal{D}_{1}\); the input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) is said to be _stably rejected_ (resp., _haltingly rejected_) by \(\Pi\) if the same holds with \(\mathcal{D}_{0}\). The CRD \(\Pi\) is stably (resp., haltingly) correct if every input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) is either stably (resp., haltingly) accepted or stably (resp., haltingly) rejected by \(\Pi\). In this case, we say that \(\Pi\)_stably decides_ (resp., _haltingly decides_) the predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) defined so that \(\psi(\mathbf{x})=1\) if and only if \(\mathbf{x}\) is stably (resp., haltingly) accepted by \(\Pi\).
By definition, the molecular count of the fuel species \(F\) in the initial configuration \(\mathbf{c}^{0}\) does not affect the computation's outcome in terms of whether the execution stabilizes (resp., halts) with \(0\)- or \(1\)-voters. Consequently, one can increase the molecular count \(\mathbf{c}^{0}(F)\) of the fuel species in the initial configuration \(\mathbf{c}^{0}\), thus increasing the initial (total) molecular count \(n=\|\mathbf{c}^{0}\|\) for any given input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\). Since the runtime of a CRN is expressed in terms of the initial molecular count \(n\), decoupling \(\mathbf{x}\) from \(n\) allows us to measure the asymptotic runtime of the protocol while keeping \(\mathbf{x}\) fixed. In this regard, the CRD \(\Pi\) is said to be _stabilization speed fault free_ (resp., _halting speed fault free_) [12] if for every input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\), the family of valid initial configurations \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) with \(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\) does not admit a stabilization (resp., halting) speed fault (as defined in Sec. 4.1).
Notice though that there is a caveat in the conception that \(\mathbf{c}^{0}(F)\) can be made arbitrarily large: we can artificially drive the runtime of \(\Pi\) (expressed as a function of \(n\)) towards \(\mathrm{RT}^{\Pi}(n)=\Theta(n)\) simply by introducing an inert fuel species \(F\) (i.e., a species that participates only in void reactions) and "pumping up" its initial molecular count \(\mathbf{c}^{0}(F)\). Indeed, this has the effect of (1) scaling the probability for choosing any "meaningful" reaction in a given step as \(1/n^{2}\); and (2) scaling the time span of each step as \(1/n\). Consequently, the temporal cost associated with each round scales linearly with \(n\), whereas the number of rounds necessary for termination is independent of \(n\).
As a remedy, we subsequently allow for arbitrarily large initial molecular counts \(\mathbf{c}^{0}(F)\) of the fuel species \(F\) only when we aim for sub-linear runtime (upper) bounds, that is, \(\mathrm{RT}^{\Pi}(n)=o(n)\). Otherwise, we restrict ourselves to _fuel bounded_ CRDs, namely, CRDs that are subject to the (additional) requirement that \(\mathbf{c}^{0}(F)\leq O(\|\mathbf{x}\|)\), ensuring that the fuel molecular count does not dominate (asymptotically) the initial molecular count \(n=\|\mathbf{c}^{0}\|\).
### Semilinear Predicates
A predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) is _linear_ if there exist a finite set \(\mathcal{A}=\mathcal{A}(\psi)\subset\mathbb{N}^{\Sigma}\) and a vector \(\mathbf{b}=\mathbf{b}(\psi)\in\mathbb{N}^{\Sigma}\) such that \(\psi(\mathbf{x})=1\) if and only if \(\mathbf{x}=\mathbf{b}+\sum_{\mathbf{a}\in\mathcal{A}}k_{\mathbf{a}}\mathbf{a}\) for some coefficients \(k_{\mathbf{a}}=k_{\mathbf{a}}(\mathbf{x})\in\mathbb{N}\), \(\mathbf{a}\in\mathcal{A}\). A predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) is _semilinear_ if it is the disjunction of finitely many linear predicates. The following theorem is established in the seminal work of Angluin et al. [2, 5].
**Theorem 11** ([2, 5]).: _Fix a predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\). If \(\psi\) is semilinear, then \(\psi\) can be haltingly decided under a strongly fair scheduler by a leaderless CRD. If \(\psi\) can be stably decided by a CRD under a strongly fair scheduler, then \(\psi\) is semilinear._
Our goal in this section is to extend Thm. 11 to weak fairness, which allows us to bound the adversarial runtime of the corresponding CRDs and establish the following theorem; notice that the \(O(n)\) runtime bound is asymptotically tight for general semilinear predicates -- see the speed fault freeness discussion in Sec. 5.2.
**Theorem 12**.: _Fix a predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\). If \(\psi\) is semilinear, then \(\psi\) can be haltingly decided under a weakly fair scheduler by a leaderless CRD whose halting runtime is \(O(n)\). If \(\psi\) can be stably decided by a CRD under a weakly fair scheduler, then \(\psi\) is semilinear._
The second claim of Thm. 12 follows immediately from Cor. 4 and Thm. 11. For the first claim, we define the following two predicate families (that also play a crucial role in the proof of Thm. 11 [2]): A predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) is a _threshold_ predicate if there exist a vector \(\mathbf{a}=\mathbf{a}(\psi)\in\mathbb{Z}^{\Sigma}\) and a scalar \(b=b(\psi)\in\mathbb{Z}\) such that \(\psi(\mathbf{x})=1\) if and only if \(\mathbf{a}\cdot\mathbf{x}<b\). A predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) is a _modulo_ predicate if there exist a vector \(\mathbf{a}=\mathbf{a}(\psi)\in\mathbb{Z}^{\Sigma}\), a scalar \(b=b(\psi)\in\mathbb{Z}\), and a scalar \(m=m(\psi)\in\mathbb{Z}_{>0}\) such that \(\psi(\mathbf{x})=1\) if and only if \(\mathbf{a}\cdot\mathbf{x}=b\bmod m\).
A folklore result (see, e.g., [25]) states that a predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) is semilinear if and only if it can be obtained from finitely many threshold and modulo predicates through conjunction, disjunction, and negation operations. Consequently, we establish Thm. 12 by proving the following three propositions.
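For intuition, the two base families and their Boolean closure are easy to express directly. The sketch below (plain Python; the example predicate and species names are hypothetical) evaluates a threshold predicate, a modulo predicate, and their disjunction, which is semilinear by the folklore result above.

```python
def threshold(a, b):
    """psi(x) = 1 iff a.x < b, where a maps species names to integer weights."""
    return lambda x: int(sum(a[A] * x.get(A, 0) for A in a) < b)

def modulo(a, b, m):
    """psi(x) = 1 iff a.x = b (mod m)."""
    return lambda x: int(sum(a[A] * x.get(A, 0) for A in a) % m == b % m)

# Example semilinear predicate: "x_A - x_B < 2  or  x_A is even",
# a disjunction of one threshold and one modulo predicate.
psi = lambda x: threshold({"A": 1, "B": -1}, 2)(x) | modulo({"A": 1}, 0, 2)(x)
```

Conjunction and negation are handled analogously (`&` and `1 - psi(x)`), mirroring the closure operations in the folklore characterization.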
**Proposition 13**.: _For every threshold predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\), there exists a leaderless CRD that haltingly decides \(\psi\) whose halting runtime is \(O(n)\)._
**Proposition 14**.: _For every modulo predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\), there exists a leaderless CRD that haltingly decides \(\psi\) whose halting runtime is \(O(n)\)._
**Proposition 15**.: _For \(j\in\{1,2\}\), let \(\Pi_{j}=(\mathcal{S}_{j},\mathcal{R}_{j},\Sigma,\Upsilon_{j,0},\Upsilon_{j,1}, F_{j},\mathbf{0})\) be a leaderless CRD that haltingly decides the predicate \(\psi_{j}:\mathbb{N}^{\Sigma}\to\{0,1\}\). Let \(\xi:\{0,1\}\times\{0,1\}\to\{0,1\}\) be a Boolean function and let \(\psi_{\xi}:\mathbb{N}^{\Sigma}\to\{0,1\}\) be the predicate defined by setting \(\psi_{\xi}(\mathbf{x})=\xi(\psi_{1}(\mathbf{x}),\psi_{2}(\mathbf{x}))\). Then, there exists a leaderless CRD \(\Pi_{\xi}=(\mathcal{S}_{\xi},\mathcal{R}_{\xi},\Sigma,\Upsilon_{\xi,0}, \Upsilon_{\xi,1},F_{\xi},\mathbf{0})\) that haltingly decides \(\psi_{\xi}\) whose halting runtime satisfies \(\mathrm{RT}^{\Pi_{\xi}}_{\mathrm{halt}}(n)\leq O(\mathrm{RT}^{\Pi_{1}}_{\mathrm{halt}}(n)+\mathrm{RT}^{\Pi_{2}}_{\mathrm{halt}}(n)+n)\). Moreover, \(\Pi_{\xi}\) uses \(|\mathcal{S}_{\xi}|=|\mathcal{S}_{1}|+|\mathcal{S}_{2}|+|\Sigma|+O(1)\) species._
Prop. 13, 14, and 15 are established in Sec. 5.1.1, 5.1.2, and 5.1.3, respectively. The proofs borrow many ideas from the existing literature (particularly [2]) although some adaptations are needed to accommodate the weak fairness condition as well as for the (adversarial) runtime analysis.
#### Threshold Predicates
In this section, we establish Prop. 13 by designing the promised CRD \(\Pi\). Specifically, given a vector \(\mathbf{a}\in\mathbb{Z}^{\Sigma}\) and a scalar \(b\in\mathbb{Z}\), the (leaderless) CRD \(\Pi=(\mathcal{S},\mathcal{R},\Sigma,\Upsilon_{0},\Upsilon_{1},F,\mathbf{0})\) haltingly decides the predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) defined so that \(\psi(\mathbf{x})=1\) if and only if \(\mathbf{a}\cdot\mathbf{x}<b\). Moreover, the halting runtime of \(\Pi\) is \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)=O(n)\).
Taking \(s=\max\left\{|b|+1,\max_{A\in\Sigma}|\mathbf{a}(A)|\right\}\), the species set of protocol \(\Pi\) is defined to be
\[\mathcal{S}\,=\,\Sigma\cup\{F\}\cup\{L_{u}\mid-s\leq u\leq s\}\cup\{Y_{-1},Y_{0},Y_{+1}\}\,.\]
The species in \(\Sigma\cup\{F\}\) are regarded as the ignition species of the ignition gadget presented in Sec. 4.2.2, taking the ignition reaction associated with species \(A\in\Sigma\) to be \(\iota_{A}\): \(A\to L_{\mathbf{a}(A)}\); and the ignition reaction associated with species \(F\) to be \(\iota_{F}\): \(F\to L_{0}\).
Semantically, we think of the molecules of the different species as carrying an abstract charge that may be positive, negative, or neutral (i.e., zero): each molecule of species \(A\in\Sigma\) carries \(\chi(A)=\mathbf{a}(A)\) units of charge; each fuel molecule carries \(\chi(F)=0\) units of charge;
each molecule of species \(L_{u}\), \(-s\leq u\leq s\), carries \(\chi(L_{u})=u\) units of charge; and each molecule of species \(Y_{j}\), \(j\in\{-1,0,+1\}\), carries \(\chi(Y_{j})=j\) units of charge. From this point of view, the ignition reactions can be interpreted as transferring the charge from the ignition species in \(\Sigma\cup\{F\}\) to the working species in \(\{L_{u}\mid-s\leq u\leq s\}\cup\{Y_{-1},Y_{0},Y_{+1}\}\).
We design the reaction set \(\mathcal{R}\) so that the total charge remains invariant throughout the execution (see Obs. 16). Moreover, when the execution halts, there is exactly one \(L\) molecule left (i.e., a leader) and we can determine whether or not the total charge is below the threshold \(b\) based solely on the charge of this \(L\) molecule. Following this logic, the voter species are defined as
\[\Upsilon_{0}\,=\,\{L_{u}\mid u\geq b\}\quad\text{ and }\quad\Upsilon_{1}\,=\,\{L_{u}\mid u<b\}\.\]
Concretely, the non-void reaction set \(\mathrm{NV}(\mathcal{R})\) of protocol \(\Pi\) includes the following reactions on top of the aforementioned ignition reactions:
* \(\beta_{u,u^{\prime}}\): \(L_{u}+L_{u^{\prime}}\to L_{u+u^{\prime}}+Y_{0}\) for every \(-s\leq u,u^{\prime}\leq s\) such that \(|u+u^{\prime}|\leq s\);
* \(\hat{\beta}_{u,u^{\prime}}\): \(L_{u}+L_{u^{\prime}}\to L_{\text{sign}(u+u^{\prime})\cdot s}+(|u+u^{\prime}|- s)\cdot Y_{\text{sign}(u+u^{\prime})}\) for every \(-s\leq u,u^{\prime}\leq s\) such that \(|u+u^{\prime}|>s\);
* \(\gamma\): \(Y_{-1}+Y_{+1}\to 2Y_{0}\); and
* \(\delta_{u,j}\): \(L_{u}+Y_{j}\to L_{u+j}+Y_{0}\) for every \(-s\leq u\leq s\) and \(j\in\{-1,+1\}\) such that \(|u+j|\leq s\).

In other words, the \(\beta\) and \(\hat{\beta}\) reactions decrement the number of \(L\) molecules, where the latter reactions introduce an appropriate number of \(Y_{-1}\) or \(Y_{+1}\) molecules so as to maintain the total charge; reaction \(\gamma\) cancels a negative unit of charge with a positive unit of charge held by the \(Y\) molecules; and the \(\delta\) reactions shift a (negative or positive) unit of charge from the \(Y\) molecules to the \(L\) molecules.
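The protocol can be exercised end to end with a small simulator. The Python sketch below is an illustrative encoding (species represented as strings and tuples; not the paper's artifact): it fires uniformly random applicable non-void reactions, which is weakly fair with probability 1, and reads the verdict off the unique remaining \(L\) molecule. Termination is guaranteed because every non-void reaction strictly decreases the triple (ignition count, \(L\) count, \(Y_{\pm 1}\) count) lexicographically.

```python
import random

def run_threshold_crd(a, b, x, fuel=3, seed=0):
    """Simulate the threshold CRD on input vector x (deciding a.x < b);
    species are the input names, 'F', ('L', u), and ('Y', j)."""
    rng = random.Random(seed)
    s = max(abs(b) + 1, max(abs(v) for v in a.values()))
    sgn = lambda z: (z > 0) - (z < 0)
    c = {A: k for A, k in x.items() if k > 0}
    c['F'] = fuel  # assumes input names are distinct from 'F'

    def add(sp, k):
        c[sp] = c.get(sp, 0) + k
        if c[sp] == 0:
            del c[sp]

    def applicable():  # enumerate applicable non-void reactions
        rs = [('iota', A) for A in list(a) + ['F'] if c.get(A, 0) > 0]
        Ls = sorted(sp[1] for sp in c if isinstance(sp, tuple) and sp[0] == 'L')
        for i, u in enumerate(Ls):                    # beta / beta-hat
            for u2 in Ls[i:]:
                if u != u2 or c[('L', u)] >= 2:
                    rs.append(('beta', u, u2))
        if c.get(('Y', -1), 0) > 0 and c.get(('Y', 1), 0) > 0:
            rs.append(('gamma',))
        for u in Ls:                                  # delta
            for j in (-1, 1):
                if c.get(('Y', j), 0) > 0 and abs(u + j) <= s:
                    rs.append(('delta', u, j))
        return rs

    while True:
        rs = applicable()
        if not rs:                                    # halted
            break
        r = rng.choice(rs)                            # weakly fair w.p. 1
        if r[0] == 'iota':
            add(r[1], -1)
            add(('L', a.get(r[1], 0)), 1)             # iota_F yields L_0
        elif r[0] == 'beta':
            u, u2, v = r[1], r[2], r[1] + r[2]
            add(('L', u), -1)
            add(('L', u2), -1)
            if abs(v) <= s:                           # beta: merge charges
                add(('L', v), 1)
                add(('Y', 0), 1)
            else:                                     # beta-hat: saturate
                add(('L', sgn(v) * s), 1)
                add(('Y', sgn(v)), abs(v) - s)
        elif r[0] == 'gamma':                         # cancel +/- charge
            add(('Y', -1), -1)
            add(('Y', 1), -1)
            add(('Y', 0), 2)
        else:                                         # delta: shift charge
            u, j = r[1], r[2]
            add(('L', u), -1)
            add(('Y', j), -1)
            add(('L', u + j), 1)
            add(('Y', 0), 1)

    leaders = [sp[1] for sp in c if isinstance(sp, tuple) and sp[0] == 'L']
    assert len(leaders) == 1                          # exactly one leader
    return int(leaders[0] < b)                        # accept iff a.x < b
```

Since the halting configuration is determined by the total charge \(\mathbf{a}\cdot\mathbf{x}\), the verdict is independent of the seed and of the fuel count.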
Analysis. For the analysis of protocol \(\Pi\), fix some input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) and let \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) be a valid initial configuration of \(\Pi\) with \(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\). Consider a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) emerging from \(\mathbf{c}^{0}\).
**Observation 16**.: _For every step \(t\geq 0\), we have \(\sum_{A\in\mathcal{S}}\chi(A)\cdot\mathbf{c}^{t}(A)=\mathbf{a}\cdot\mathbf{x}\)._
Proof.: Follows from the design of \(\mathcal{R}\) ensuring that (1) \(\sum_{A\in\mathcal{S}}\chi(A)\cdot\mathbf{c}^{0}(A)=\mathbf{a}\cdot\mathbf{x}\); and (2) \(\sum_{A\in\mathcal{S}}\chi(A)\cdot\mathbf{c}^{t}(A)=\sum_{A\in\mathcal{S}} \chi(A)\cdot\mathbf{c}^{t-1}(A)\) for every \(t>0\).
We make extensive use of the notation
\[\chi^{+}(\mathbf{c})\,=\,\mathbf{c}(Y_{+1})+\sum_{1\leq u\leq s}u\cdot\mathbf{c }(L_{u})\qquad\text{and}\qquad\chi^{-}(\mathbf{c})\,=\,\mathbf{c}(Y_{-1})+ \sum_{-s\leq u\leq-1}-u\cdot\mathbf{c}(L_{u})\,,\]
as well as \(\mathbf{c}(L)=\mathbf{c}(\{L_{u}\mid-s\leq u\leq s\})\), defined for each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\). The following steps play a key role in the analysis:
* let \(t_{\text{ignt}}\geq 0\) be the earliest step in which the ignition gadget matures in \(\eta\) (as promised in Lem. 10);
* let \(t_{\text{sign}}\) be the earliest step \(t\geq t_{\text{ignt}}\) such that \(\chi^{+}(\mathbf{c}^{t})=0\) or \(\chi^{-}(\mathbf{c}^{t})=0\);
* let \(t_{\text{leader}}\) be the earliest step \(t\geq t_{\text{sign}}\) such that \(\mathbf{c}^{t}(L)=1\); and
* let \(t_{\text{halt}}\) be the earliest step \(t\geq t_{\text{leader}}\) such that \(\text{app}(\mathbf{c}^{t})\) does not include any \(\delta\) reaction.

The existence of steps \(t_{\text{sign}}\), \(t_{\text{leader}}\), and \(t_{\text{halt}}\) is established (implicitly) in the sequel as part of the runtime analysis. Here, we prove the following three observations.
**Observation 17**.: \(\chi^{+}(\mathbf{c}^{t})=0\) _or \(\chi^{-}(\mathbf{c}^{t})=0\) for all \(t\geq t_{\text{sign}}\). In particular, reaction \(\gamma\) is inapplicable from step \(t_{\text{sign}}\) onward._
Proof.: Follows by noticing that \(\chi^{+}(\mathbf{c}^{t+1})\leq\chi^{+}(\mathbf{c}^{t})\) and \(\chi^{-}(\mathbf{c}^{t+1})\leq\chi^{-}(\mathbf{c}^{t})\) for every \(t\geq 0\).
**Observation 18**.: \(\mathbf{c}^{t}(L)=1\) _for all \(t\geq t_{\mathrm{leader}}\). In particular, the \(\beta\) and \(\hat{\beta}\) reactions are inapplicable from step \(t_{\mathrm{leader}}\) onward._
Proof.: Reaction \(\gamma\) and the \(\delta\) reactions do not change \(\mathbf{c}^{t}(L)\), whereas each application of a \(\beta\) or \(\hat{\beta}\) reaction decreases \(\mathbf{c}^{t}(L)\) while still producing one \(L\) molecule. The assertion is established by recalling that \(L_{0}\) is produced by the fuel ignition reaction \(\iota_{F}\), hence \(\mathbf{c}^{t_{\mathrm{ignt}}}(L)\geq\mathbf{c}^{t_{\mathrm{ignt}}}(L_{0})\geq 1\).
**Observation 19**.: _Exactly one of the following two properties holds: (1) \(\mathbf{c}^{t_{\mathrm{halt}}}(L_{u})=1\) for some \(-s+1\leq u\leq s-1\) and \(\mathbf{c}^{t_{\mathrm{halt}}}(Y_{-1})=\mathbf{c}^{t_{\mathrm{halt}}}(Y_{+1})=0\); or (2) \(\mathbf{c}^{t_{\mathrm{halt}}}(L_{u})=1\) for some \(u\in\{-s,+s\}\) and \(\mathbf{c}^{t_{\mathrm{halt}}}(Y_{-\mathrm{sign}(u)})=0\). In particular, the \(\delta\) reactions are inapplicable in step \(t_{\mathrm{halt}}\)._
Proof.: By Obs. 17 and 18, from step \(t_{\mathrm{leader}}\) onward, there is a single \(L\) molecule \(L_{u}\) present in the configuration and the only non-void reactions that may still be applicable are the \(\delta_{u,j}\) reactions for \(j=\mathrm{sign}(u)\).
Lem. 10 and Obs. 17, 18, and 19 imply that \(\eta\) halts in step \(t_{\mathrm{halt}}\), thus establishing Cor. 20 due to Obs. 16. The halting correctness of protocol \(\Pi\) follows by the choice of \(\Upsilon_{0}\) and \(\Upsilon_{1}\).
**Corollary 20**.: _Execution \(\eta\) halts in a configuration \(\mathbf{c}\) that includes a single \(L\) molecule \(L_{u}\) whose index \(u\) satisfies: (1) if \(|\mathbf{a}\cdot\mathbf{x}|\leq s\), then \(u=\mathbf{a}\cdot\mathbf{x}\); and (2) if \(|\mathbf{a}\cdot\mathbf{x}|>s\), then \(u=\mathrm{sign}(\mathbf{a}\cdot\mathbf{x})\cdot s\)._
For the halting runtime analysis, let \(n=\|\mathbf{c}^{0}\|\) denote the molecular count of the initial configuration and fix some skipping policy \(\sigma\). We prove that \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)\leq O(n)\) by presenting a runtime policy \(\varrho\) for \(\Pi\) (defined independently of \(\eta\) and \(\sigma\)) and showing that \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\leq O(n)\). Given a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), the runtime policy \(\varrho\) is defined as follows:
* if \(\mathbf{c}(\Sigma\cup\{F\})>0\), then \(\varrho(\mathbf{c})\) consists of the ignition reactions;
* else if \(\chi^{+}(\mathbf{c})>0\) and \(\chi^{-}(\mathbf{c})>0\), then \(\varrho(\mathbf{c})=\{\beta_{u,u^{\prime}}\mid\mathrm{sign}(u)\cdot\mathrm{ sign}(u^{\prime})=-1\}\cup\{\gamma\}\cup\{\delta_{u,j}\mid\mathrm{sign}(u)\cdot \mathrm{sign}(j)=-1\}\);
* else if \(\mathbf{c}(L)>1\), then \(\varrho(\mathbf{c})\) consists of the \(\beta\) and \(\hat{\beta}\) reactions;
* else \(\varrho(\mathbf{c})\) consists of the \(\delta\) reactions.
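The case analysis defining \(\varrho\) is a straightforward priority dispatch on the current configuration. A minimal Python sketch (the constant \(s\), the input alphabet, and the configuration encoding are hypothetical stand-ins; each case returns the reaction family that the policy charges the round against):

```python
S_MAX = 3                   # stands in for the constant s (hypothetical value)
INPUT_SPECIES = {'A', 'B'}  # stands in for the input alphabet Sigma

def chi_plus(c):
    # chi^+ : charge held by Y_{+1} and the positively indexed L species
    return c.get(('Y', +1), 0) + sum(u * c.get(('L', u), 0)
                                     for u in range(1, S_MAX + 1))

def chi_minus(c):
    # chi^- : charge held by Y_{-1} and the negatively indexed L species
    return c.get(('Y', -1), 0) + sum(-u * c.get(('L', u), 0)
                                     for u in range(-S_MAX, 0))

def count_L(c):
    return sum(c.get(('L', u), 0) for u in range(-S_MAX, S_MAX + 1))

def runtime_policy(c):
    """Mirror of the four cases defining rho."""
    if any(c.get(A, 0) > 0 for A in INPUT_SPECIES) or c.get('F', 0) > 0:
        return 'ignition'
    if chi_plus(c) > 0 and chi_minus(c) > 0:
        return 'opposite-sign beta, gamma, opposite-sign delta'
    if count_L(c) > 1:
        return 'beta and beta-hat'
    return 'delta'

assert runtime_policy({'F': 1}) == 'ignition'
assert runtime_policy({('L', 2): 1, ('Y', -1): 3}) == \
    'opposite-sign beta, gamma, opposite-sign delta'
assert runtime_policy({('L', 1): 2}) == 'beta and beta-hat'
assert runtime_policy({('L', 1): 1, ('Y', +1): 4}) == 'delta'
```

Note that the four cases are mutually exclusive by construction: the dispatch is evaluated top to bottom, matching the "if / else if / else" structure of the definition.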
Let \(t_{\mathrm{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). We establish the desired upper bound on \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) by introducing the following four rounds:
* \(i_{\mathrm{ignt}}=\min\{i\geq 0\mid t_{\mathrm{e}}(i)\geq t_{\mathrm{ignt}}\}\);
* \(i_{\mathrm{sign}}=\min\{i\geq i_{\mathrm{ignt}}\mid t_{\mathrm{e}}(i)\geq t_{ \mathrm{sign}}\}\);
* \(i_{\mathrm{leader}}=\min\{i\geq i_{\mathrm{sign}}\mid t_{\mathrm{e}}(i)\geq t_{ \mathrm{leader}}\}\); and
* \(i_{\mathrm{halt}}=\min\{i\geq i_{\mathrm{leader}}\mid t_{\mathrm{e}}(i)\geq t _{\mathrm{halt}}\}\).
Lem. 10 guarantees that the total contribution of rounds \(0\leq i<i_{\mathrm{ignt}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(\log n)\).
For the contribution of the subsequent rounds to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\), we need to define the following notation: Let \(K_{i}^{+}\in\{0,1\}^{s}\) and \(K_{i}^{-}\in\{0,1\}^{s}\) be the binary vectors defined so that \(K_{i}^{+}(u)=\mathbbm{1}_{\mathbf{e}^{i}(u)>0}\) and \(K_{i}^{-}(u)=\mathbbm{1}_{\mathbf{e}^{i}(-u)>0}\) for each \(1\leq u\leq s\). Let \(\prec\) denote the lexicographic (strict) order over \(\{0,1\}^{s}\) in decreasing index significance; that is, for every \(\mathbf{f},\mathbf{g}\in\{0,1\}^{s}\), the relation \(\mathbf{f}\prec\mathbf{g}\) holds if and only if there exists an integer \(1\leq u\leq s\) such that \(\mathbf{f}(u)<\mathbf{g}(u)\) and \(\mathbf{f}(u^{\prime})=\mathbf{g}(u^{\prime})\) for every \(u<u^{\prime}\leq s\). Define the binary relations
\(\succ\) and \(\succeq\) over \(\{0,1\}^{s}\) so that \(\mathbf{f}\succ\mathbf{g}\Longleftrightarrow\mathbf{g}\prec\mathbf{f}\) and \(\mathbf{f}\succeq\mathbf{g}\Longleftrightarrow[\mathbf{f}\succ\mathbf{g}\lor\mathbf{f}=\mathbf{g}]\). We can now establish the following three lemmas.
**Lemma 21**.: _The total contribution of rounds \(i_{\mathrm{ignt}}\leq i<i_{\mathrm{sign}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(n)\)._
Proof.: Fix a round \(i_{\mathrm{ignt}}\leq i<i_{\mathrm{sign}}\) and recall that the runtime policy \(\varrho\) is designed so that \(\varrho(\mathbf{e}^{i})=Q=\{\beta_{u,u^{\prime}}\mid\mathrm{sign}(u)\cdot\mathrm{sign}(u^{\prime})=-1\}\cup\{\gamma\}\cup\{\delta_{u,j}\mid\mathrm{sign}(u)\cdot\mathrm{sign}(j)=-1\}\). Consider a configuration \(\mathbf{c}\) reachable from \(\mathbf{e}^{i}\) via a \(\varrho\)-avoiding path. Since each reactant of a \(Q\) reaction carries at least \(1\) and at most \(s\) units of charge, it follows that the propensity \(\pi_{\mathbf{c}}(Q)\) satisfies
\[\tfrac{\chi^{+}(\mathbf{c})\cdot\chi^{-}(\mathbf{c})}{\varphi\cdot s^{2}}\,\leq\,\pi_{\mathbf{c}}(Q)\,\leq\,\tfrac{\chi^{+}(\mathbf{c})\cdot\chi^{-}(\mathbf{c})}{\varphi}\,,\]
thus \(\pi_{\mathbf{c}}(Q)=\Theta\left(\tfrac{\chi^{+}(\mathbf{c})\cdot\chi^{-}(\mathbf{c})}{n}\right)\) as \(s=O(1)\). Inspecting the reactions in \(\mathcal{R}-Q\), we deduce that \(\chi^{+}(\mathbf{c})=\chi^{+}(\mathbf{e}^{i})\) and \(\chi^{-}(\mathbf{c})=\chi^{-}(\mathbf{e}^{i})\), hence we can employ Lem. 9 to conclude that the contribution of round \(i\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq O\left(\tfrac{n}{\chi^{+}(\mathbf{e}^{i})\cdot\chi^{-}(\mathbf{e}^{i})}\right)\).
If round \(i\) is target-accomplished, then \(\chi^{+}(\mathbf{e}^{i+1})<\chi^{+}(\mathbf{e}^{i})\) and \(\chi^{-}(\mathbf{e}^{i+1})<\chi^{-}(\mathbf{e}^{i})\). This is no longer guaranteed if round \(i\) is target-deprived; however, we argue that if \(\chi^{+}(\mathbf{e}^{i^{\prime}})=\chi^{+}(\mathbf{e}^{i})\) and \(\chi^{-}(\mathbf{e}^{i^{\prime}})=\chi^{-}(\mathbf{e}^{i})\) for some \(i^{\prime}>i\), then \(i^{\prime}-i\leq 2^{s}=O(1)\).20 Indeed, if \(\chi^{+}(\mathbf{e}^{i+1})=\chi^{+}(\mathbf{e}^{i})\) and \(\chi^{-}(\mathbf{e}^{i+1})=\chi^{-}(\mathbf{e}^{i})\), then \(K^{+}_{i+1}\succeq K^{+}_{i}\) and \(K^{-}_{i+1}\succeq K^{-}_{i}\), while at least one of the two relations must be strict.
Footnote 20: Using a more delicate argument, one can improve this bound to \(i^{\prime}-i\leq O(s)\).
Taking \(\ell_{i}=\min\{\chi^{+}(\mathbf{e}^{i}),\chi^{-}(\mathbf{e}^{i})\}\) for each \(i_{\mathrm{ignt}}\leq i<i_{\mathrm{sign}}\), we conclude that
(1) \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/\ell_{i}^{2})\);
(2) \(\ell_{i+1}\leq\ell_{i}\); and
(3) there exists a constant \(h\geq 1\) such that \(\ell_{i+h}<\ell_{i}\).
As \(\ell_{i_{\mathrm{ignt}}}\leq n\cdot s\), we can bound the total contribution of rounds \(i_{\mathrm{ignt}}\leq i<i_{\mathrm{sign}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) by
\[\sum_{j=1}^{n\cdot s}O(n/j^{2})\,\leq\,O(n)\cdot\sum_{j=1}^{\infty}1/j^{2}\, \leq\,O(n)\,,\]
thus establishing the assertion.
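The convergence used in this bound is just the Basel series \(\sum_{j\geq 1}1/j^{2}=\pi^{2}/6\); a quick numeric sanity check of the bound for one (arbitrary) choice of \(n\) and \(s\):

```python
import math

# Check numerically: sum_{j=1}^{n*s} n / j^2  <=  (pi^2 / 6) * n  =  O(n)
n, s = 1000, 3  # arbitrary illustrative values
total = sum(n / j ** 2 for j in range(1, n * s + 1))
assert total <= (math.pi ** 2 / 6) * n
```

The same convergent-series argument is reused verbatim in the subsequent charging lemmas.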
**Lemma 22**.: _The total contribution of rounds \(i_{\mathrm{sign}}\leq i<i_{\mathrm{leader}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(n)\)._
Proof.: Fix a round \(i_{\mathrm{sign}}\leq i<i_{\mathrm{leader}}\) and assume without loss of generality that \(\chi^{-}(\mathbf{e}^{i})=0\) (the case where \(\chi^{+}(\mathbf{e}^{i})=0\) is proved symmetrically). Recall that the runtime policy \(\varrho\) is designed so that \(\varrho(\mathbf{e}^{i})=Q=\{\beta_{u,u^{\prime}},\hat{\beta}_{u,u^{\prime}}\mid -s\leq u,u^{\prime}\leq s\}\). Consider a configuration \(\mathbf{c}\) reachable from \(\mathbf{e}^{i}\) via a \(\varrho\)-avoiding path and notice that the propensity \(\pi_{\mathbf{c}}(Q)\) satisfies \(\pi_{\mathbf{c}}(Q)\geq\Omega((\mathbf{c}(L))^{2}/n)\). Inspecting the reactions in \(\mathcal{R}-Q\), we deduce that \(\mathbf{c}(L)=\mathbf{e}^{i}(L)\), hence we can employ Lem. 9 to conclude that the contribution of round \(i\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/(\mathbf{e}^{i}(L))^{2})\).
If round \(i\) is target-accomplished, then \(\mathbf{e}^{i+1}(L)<\mathbf{e}^{i}(L)\). This is no longer the case if round \(i\) is target-deprived; however, we argue that if \(\mathbf{e}^{i^{\prime}}(L)=\mathbf{e}^{i}(L)\) for some \(i^{\prime}>i\), then \(i^{\prime}-i\leq 2^{s}=O(1)\).21 Indeed, if \(\mathbf{e}^{i+1}(L)=\mathbf{e}^{i}(L)\), then \(K^{+}_{i+1}\succ K^{+}_{i}\).
Footnote 21: Using a more delicate argument, one can improve this bound to \(i^{\prime}-i\leq O(s^{2})\).
Taking \(\ell_{i}=\mathbf{e}^{i}(L)\) for each \(i_{\mathrm{ign}}\leq i<i_{\mathrm{leader}}\), we conclude that
(1) \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/\ell_{i}^{2})\);
(2) \(\ell_{i+1}\leq\ell_{i}\); and
(3) there exists a constant \(h\geq 1\) such that \(\ell_{i+h}<\ell_{i}\).
As \(\ell_{i_{\text{sign}}}\leq n\), we can bound the total contribution of rounds \(i_{\text{sign}}\leq i<i_{\text{leader}}\) to \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)\) by
\[\sum_{j=1}^{n}O(n/j^{2})\,\leq\,O(n)\cdot\sum_{j=1}^{\infty}1/j^{2}\,\leq\,O( n)\,,\]
thus establishing the assertion.
**Lemma 23**.: _The total contribution of rounds \(i_{\text{leader}}\leq i<i_{\text{halt}}\) to \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)\) is up-bounded by \(O(n)\)._
Proof.: Obs. 17 and 18 imply that \(\operatorname{app}(\mathbf{e}^{i})\) consists only of \(\delta\) reactions for each round \(i_{\text{leader}}\leq i<i_{\text{halt}}\). Since \(i_{\text{leader}}\geq i_{\text{sign}}\), it follows that there are at most \(s-1=O(1)\) such rounds. The assertion follows as each round contributes at most \(O(n)\) temporal cost to \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)\).
Combining Lem. 10 with Lem. 21, 22, and 23, we conclude that \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)=O(n)\), which yields Prop. 13.
#### Modulo Predicates
In this section, we establish Prop. 14 by designing the promised CRD \(\Pi\). Specifically, given a vector \(\mathbf{a}\in\mathbb{Z}^{\Sigma}\) and scalars \(b\in\mathbb{Z}\) and \(m\in\mathbb{Z}_{>0}\), the (leaderless) CRD \(\Pi=(\mathcal{S},\mathcal{R},\Sigma,\Upsilon_{0},\Upsilon_{1},F,\mathbf{0})\) haltingly decides the predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) defined so that \(\psi(\mathbf{x})=1\) if and only if \(\mathbf{a}\cdot\mathbf{x}=b\bmod m\). Moreover, the halting runtime of \(\Pi\) is \(\operatorname{RT}_{\text{halt}}^{\Pi}(n)=O(n)\).
The species set of protocol \(\Pi\) is defined to be
\[\mathcal{S}\,=\,\Sigma\cup\{F\}\cup\{L_{u}\mid 0\leq u\leq m-1\}\cup\{Y\}\,.\]
The species in \(\Sigma\cup\{F\}\) are regarded as the ignition species of the ignition gadget presented in Sec. 4.2.2, taking the ignition reaction associated with species \(A\in\Sigma\) to be
\(\iota_{A}\): \(A\to L_{\mathbf{a}(A)\bmod m}\);
and the ignition reaction associated with species \(F\) to be
\(\iota_{F}\): \(F\to L_{0}\).
Semantically, we think of the molecules of species \(A\in\Sigma\) as carrying \(\mathbf{a}(A)\) units of an abstract charge. Each molecule of species \(L_{u}\), \(0\leq u\leq m-1\), encodes the consumption of \(\chi\) units of charge for some \(\chi=u\bmod m\), whereas the \(F\) and \(Y\) molecules carry a neutral charge. From this point of view, the ignition reactions can be interpreted as transferring the charge (modulo \(m\)) from the ignition species to the working species.
We design the reaction set \(\mathcal{R}\) so that the total charge remains invariant modulo \(m\) throughout the execution (see Obs. 24). Moreover, when the execution halts, there is exactly one \(L\) molecule left (i.e., a leader) and we can determine whether or not the total charge modulo \(m\) is \(b\) based solely on the species of the remaining \(L\) molecule. Following this logic, the voter species are defined as
\[\Upsilon_{0}\,=\,\{L_{u}\mid u\neq b\}\quad\text{and}\quad\Upsilon_{1}\,=\,\{L_{b}\}\,.\]
Concretely, the non-void reaction set \(\operatorname{NV}(\mathcal{R})\) of protocol \(\Pi\) includes the following reactions on top of the aforementioned ignition reactions:
\(\beta_{u,u^{\prime}}\): \(L_{u}+L_{u^{\prime}}\to L_{u+u^{\prime}\bmod m}+Y\) for every \(0\leq u,u^{\prime}\leq m-1\).
In other words, the \(\beta\) reactions decrement the number of \(L\) molecules while maintaining the total charge modulo \(m\).
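Since every reaction of the modulo protocol is specified above, the protocol can be simulated end-to-end. The sketch below (with arbitrary choices of \(\mathbf{a}\), \(m\), and \(\mathbf{x}\), and a uniformly random scheduler standing in for the adversary) runs the ignition and \(\beta\) reactions to completion and checks that the surviving \(L\) molecule carries index \(\mathbf{a}\cdot\mathbf{x}\bmod m\):

```python
import random

def run_modulo_protocol(a, x, m, fuel=1, seed=0):
    """a: dict species -> coefficient a(A); x: dict species -> input count.
    Returns the index u of the single surviving L_u molecule."""
    rng = random.Random(seed)
    # Ignition: iota_A: A -> L_{a(A) mod m}; iota_F: F -> L_0.
    L = []  # multiset of L indices
    for sp, cnt in x.items():
        L += [a[sp] % m] * cnt
    L += [0] * fuel  # each fuel molecule F ignites into L_0
    # beta_{u,u'}: L_u + L_{u'} -> L_{(u+u') mod m} + Y, in random order.
    while len(L) > 1:
        i, j = rng.sample(range(len(L)), 2)
        u, v = L[i], L[j]
        L = [w for t, w in enumerate(L) if t not in (i, j)] + [(u + v) % m]
    return L[0]

a, x, m = {'A': 2, 'B': -1}, {'A': 5, 'B': 3}, 4
u = run_modulo_protocol(a, x, m)
assert u == (2 * 5 - 1 * 3) % m  # a.x mod m = 7 mod 4 = 3
```

The final index is independent of the scheduling order because each \(\beta\) application preserves the total charge modulo \(m\) (the invariant of Obs. 24); the protocol then votes \(1\) exactly when the surviving index equals \(b\).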
Analysis. For the analysis of protocol \(\Pi\), fix some input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) and let \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) be a valid initial configuration of \(\Pi\) with \(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\). Consider a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) emerging from \(\mathbf{c}^{0}\).
**Observation 24**.: _For every step \(t\geq 0\), we have \(\sum_{A\in\Sigma}\mathbf{a}(A)\cdot\mathbf{c}^{t}(A)+\sum_{0\leq u\leq m-1}u\cdot\mathbf{c}^{t}(L_{u})=\mathbf{a}\cdot\mathbf{x}\bmod m\)._
Proof.: Follows from the design of \(\mathcal{R}\), ensuring that (1) \(\sum_{A\in\Sigma}\mathbf{a}(A)\cdot\mathbf{c}^{0}(A)=\mathbf{a}\cdot\mathbf{x} \bmod m\); and (2) \(\sum_{A\in\Sigma}\mathbf{a}(A)\cdot\mathbf{c}^{t}(A)+\sum_{0\leq u\leq m-1}u \cdot\mathbf{c}^{t}(L_{u})=\sum_{A\in\Sigma}\mathbf{a}(A)\cdot\mathbf{c}^{t-1 }(A)+\sum_{0\leq u\leq m-1}u\cdot\mathbf{c}^{t-1}(L_{u})\bmod m\) for every \(t>0\).
We subsequently use the notation \(\mathbf{c}(L)=\sum_{0\leq u\leq m-1}\mathbf{c}(L_{u})\) defined for each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\). The following steps play a key role in the analysis:
* let \(t_{\text{ignt}}\geq 0\) be the earliest step in which the ignition gadget matures in \(\eta\) (as promised in Lem. 10); and
* let \(t_{\text{leader}}\) be the earliest step \(t\geq t_{\text{ignt}}\) such that \(\mathbf{c}^{t}(L)=1\).
The existence of step \(t_{\text{leader}}\) is established (implicitly) in the sequel as part of the runtime analysis. Here, we prove the following observation.
**Observation 25**.: \(\mathbf{c}^{t}(L)=1\) _for all \(t\geq t_{\text{leader}}\). In particular, the \(\beta\) reactions are inapplicable from step \(t_{\text{leader}}\) onward._
Proof.: Each application of a \(\beta\) reaction decreases \(\mathbf{c}^{t}(L)\) while still producing one \(L\) molecule. The assertion is established by recalling that \(L_{0}\) is produced by the fuel ignition reaction \(\iota_{F}\), hence \(\mathbf{c}^{t_{\text{ignt}}}(L)\geq\mathbf{c}^{t_{\text{ignt}}}(L_{0})\geq 1\).
Lem. 10 and Obs. 25 imply that \(\eta\) halts in step \(t_{\text{leader}}\), thus establishing Cor. 26 due to Obs. 24. The halting correctness of protocol \(\Pi\) follows by the choice of \(\Upsilon_{0}\) and \(\Upsilon_{1}\).
**Corollary 26**.: _Execution \(\eta\) halts in a configuration \(\mathbf{c}\) that includes a single \(L\) molecule \(L_{u}\) whose index \(u\) satisfies \(u=\mathbf{a}\cdot\mathbf{x}\bmod m\)._
For the halting runtime analysis, let \(n=\|\mathbf{c}^{0}\|\) denote the molecular count of the initial configuration and fix some skipping policy \(\sigma\). We prove that \(\operatorname{RT}^{\Pi}_{\text{halt}}(n)\leq O(n)\) by presenting a runtime policy \(\varrho\) for \(\Pi\) (defined independently of \(\eta\) and \(\sigma\)) and showing that \(\operatorname{RT}^{\varrho,\sigma}_{\text{halt}}(\eta)\leq O(n)\). Given a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), the runtime policy \(\varrho\) is defined as follows:
* if \(\mathbf{c}(\Sigma\cup\{F\})>0\), then \(\varrho(\mathbf{c})\) consists of the ignition reactions;
* else \(\varrho(\mathbf{c})\) consists of the \(\beta\) reactions.
Let \(t_{\text{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\text{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). We establish the desired upper bound on \(\operatorname{RT}^{\varrho,\sigma}_{\text{halt}}(\eta)\) by introducing the following two rounds:
* \(i_{\text{ignt}}=\min\{i\geq 0\ |\ t_{\text{e}}(i)\geq t_{\text{ignt}}\}\); and
* \(i_{\text{leader}}=\min\{i\geq i_{\text{ignt}}\ |\ t_{\text{e}}(i)\geq t_{\text{leader}}\}\).
Lem. 10 guarantees that the total contribution of rounds \(0\leq i<i_{\text{ignt}}\) to \(\operatorname{RT}^{\varrho,\sigma}_{\text{halt}}(\eta)\) is up-bounded by \(O(\log n)\). For the contribution of the subsequent rounds to \(\operatorname{RT}^{\varrho,\sigma}_{\text{halt}}(\eta)\), we establish the following lemma.
**Lemma 27**.: _The total contribution of rounds \(i_{\text{ignt}}\leq i<i_{\text{leader}}\) to \(\operatorname{RT}^{\varrho,\sigma}_{\text{halt}}(\eta)\) is up-bounded by \(O(n)\)._
Proof.: Fix a round \(i_{\text{ignt}}\leq i<i_{\text{leader}}\) and recall that the runtime policy \(\varrho\) is designed so that \(\varrho(\mathbf{e}^{i})=Q=\{\beta_{u,u^{\prime}}\mid 0\leq u,u^{\prime}\leq m-1\}\). Since \(Q=\operatorname{NV}(\operatorname{app}(\mathbf{e}^{i}))\), it follows that \(\mathbf{e}^{i}\) is the only configuration reachable from \(\mathbf{e}^{i}\) via a \(\varrho\)-avoiding path. Let \(\ell_{i}=\mathbf{e}^{i}(L)\). As \(\pi_{\mathbf{e}^{i}}(Q)=\frac{1}{\varphi}\cdot\binom{\ell_{i}}{2}\geq\Omega(\ell_{i}^{2}/n)\), we can employ Lem. 9 to conclude that the contribution of round \(i\) to \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)\) is \(\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/\ell_{i}^{2})\). Observing that \(\ell_{i+1}<\ell_{i}\) and \(\ell_{i_{\text{ignt}}}\leq n\), we can bound the total contribution of rounds \(i_{\text{ignt}}\leq i<i_{\text{leader}}\) to \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)\) by
\[\sum_{\ell=2}^{n}O\left(\tfrac{n}{\ell^{2}}\right)\,\leq\,O(n)\cdot\sum_{\ell =1}^{\infty}\tfrac{1}{\ell^{2}}\,\leq\,O(n)\,,\]
thus establishing the assertion.
Combining Lem. 10 with Lem. 27, we conclude that \(\operatorname{RT}_{\text{halt}}^{\varrho,\sigma}(\eta)=O(n)\), which yields Prop. 14.
#### Closure under Boolean Operations
In this section, we establish Prop. 15 by designing the promised CRD \(\Pi_{\xi}\). Intuitively, we employ the ignition gadget to produce two copies of each input molecule, one for protocol \(\Pi_{1}\) and one for protocol \(\Pi_{2}\); following that, the two protocols run in parallel, each on its own molecules. The ignition gadget is further employed to produce "global voter" molecules whose role is to interact with the "local voter" molecules of \(\Pi_{1}\) and \(\Pi_{2}\), recording their votes. To ensure that the runtime overhead is \(O(1)\), we invoke a leader election process on the global voters so that a single global voter survives.
Formally, for \(j\in\{1,2\}\), let \(\Pi^{\prime}_{j}=(\mathcal{S}^{\prime}_{j},\mathcal{R}^{\prime}_{j}, \Sigma^{\prime}_{j},\Upsilon^{\prime}_{j,0},\Upsilon^{\prime}_{j,1},F^{\prime }_{j},\mathbf{0})\) be the (leaderless) CRD derived from \(\Pi_{j}\) by replacing each species \(A\in\mathcal{S}_{j}\) with a \(\Pi^{\prime}_{j}\) designated species \(A^{\prime}_{j}\in\mathcal{S}^{\prime}_{j}\); in particular, the CRD \(\Pi^{\prime}_{j}\) is defined over the input species \(\Sigma^{\prime}_{j}=\{A^{\prime}_{j}\mid A\in\Sigma\}\).
The species set \(\mathcal{S}_{\xi}\) of protocol \(\Pi_{\xi}\) is defined to be
\[\mathcal{S}_{\xi}\,=\,\Sigma\cup\{F_{\xi}\}\cup\mathcal{S}^{\prime}_{1}\cup \mathcal{S}^{\prime}_{2}\cup\{G_{0,0},G_{0,1},G_{1,0},G_{1,1},W\}\,.\]
The species in \(\Sigma\cup\{F_{\xi}\}\) are regarded as the ignition species of the ignition gadget presented in Sec. 4.2.2, taking the ignition reaction associated with species \(A\in\Sigma\) to be \(\iota_{A}\): \(A\to A^{\prime}_{1}+A^{\prime}_{2}\); and the ignition reaction associated with species \(F_{\xi}\) to be \(\iota_{F_{\xi}}\): \(F_{\xi}\to F^{\prime}_{1}+F^{\prime}_{2}+G_{0,0}\).
On top of the aforementioned ignition reactions, the non-void reaction set \(\operatorname{NV}(\mathcal{R}_{\xi})\) of protocol \(\Pi_{\xi}\) consists of the (non-void) reactions in \(\operatorname{NV}(\mathcal{R}^{\prime}_{1})\cup\operatorname{NV}(\mathcal{R}^ {\prime}_{2})\) as well as the following reactions:
* \(\beta_{u_{1},u_{2},w_{1},w_{2}}\): \(G_{u_{1},u_{2}}+G_{w_{1},w_{2}}\to G_{0,0}+W\) for every \(u_{1},u_{2},w_{1},w_{2}\in\{0,1\}\);
* \(\gamma^{V^{\prime}_{1}}_{u_{1},u_{2}}\): \(G_{u_{1},u_{2}}+V^{\prime}_{1}\to G_{1-u_{1},u_{2}}+V^{\prime}_{1}\) for every \(u_{1},u_{2}\in\{0,1\}\) and \(V^{\prime}_{1}\in\Upsilon^{\prime}_{1,1-u_{1}}\); and
* \(\gamma^{V^{\prime}_{2}}_{u_{1},u_{2}}\): \(G_{u_{1},u_{2}}+V^{\prime}_{2}\to G_{u_{1},1-u_{2}}+V^{\prime}_{2}\) for every \(u_{1},u_{2}\in\{0,1\}\) and \(V^{\prime}_{2}\in\Upsilon^{\prime}_{2,1-u_{2}}\).
Finally, the voter species of \(\Pi_{\xi}\) are defined as
\[\Upsilon_{\xi,0}\,=\,\{G_{u_{1},u_{2}}\mid\xi(u_{1},u_{2})=0\}\qquad\text{and}\qquad\Upsilon_{\xi,1}\,=\,\{G_{u_{1},u_{2}}\mid\xi(u_{1},u_{2})=1\}\,.\]
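Once both subprotocols have halted, the local votes are fixed and the remaining \(\beta\)/\(\gamma\) dynamics amount to a leader election on the \(G\) molecules interleaved with vote recording. The following sketch abstracts the (already stabilized) local voters into a fixed pair \((v_{1},v_{2})\) and simulates this final phase from \(k\) copies of \(G_{0,0}\); the value of \(k\) and the random scheduler are illustrative assumptions:

```python
import random

def combine_votes(v1, v2, k, xi, seed=0):
    """Simulate the beta (merge) and gamma (record) reactions on the G
    molecules until no non-void reaction applies; return xi on the survivor."""
    rng = random.Random(seed)
    G = [(0, 0)] * k  # k copies of G_{0,0}, as produced by the ignition gadget
    while True:
        moves = []
        if len(G) > 1:
            moves.append('beta')   # beta: G + G -> G_{0,0} + W
        if any(g != (v1, v2) for g in G):
            moves.append('gamma')  # gamma: flip a recorded bit toward a local vote
        if not moves:
            break
        if rng.choice(moves) == 'beta':
            i, j = rng.sample(range(len(G)), 2)
            G = [g for t, g in enumerate(G) if t not in (i, j)] + [(0, 0)]
        else:
            i = rng.choice([t for t, g in enumerate(G) if g != (v1, v2)])
            u1, u2 = G[i]
            # flip one disagreeing coordinate toward the corresponding local vote
            G[i] = (v1, u2) if u1 != v1 else (u1, v2)
    assert len(G) == 1 and G[0] == (v1, v2)  # a single, correctly recorded voter
    return xi(*G[0])

assert combine_votes(1, 0, 5, xi=lambda u1, u2: int(u1 and not u2)) == 1
```

Note that merging resets the recorded bits to \((0,0)\), exactly as reaction \(\beta\) does; correctness relies on the \(\gamma\) reactions re-recording the (now stable) local votes onto the surviving \(G\) molecule.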
Analysis. For the analysis of protocol \(\Pi_{\xi}\), fix some input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) and let \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}_{\xi}}\) be a valid initial configuration with \(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\). Consider a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) of \(\Pi_{\xi}\) emerging from \(\mathbf{c}^{0}\).
Let \(t_{\text{ignt}}\geq 0\) be the earliest step in which the ignition gadget matures in \(\eta\) (as promised in Lem. 10). For \(j\in\{1,2\}\), let \(t_{j}\) be the earliest step \(t\geq t_{\text{ignt}}\) such that \(\operatorname{app}(\mathbf{c}^{t})\cap\operatorname{NV}(\mathcal{R}^{\prime}_{j} )=\emptyset\).
The halting correctness of \(\Pi^{\prime}_{j}\) and the fact that the species in \(\mathcal{S}^{\prime}_{j}\) are catalysts for any reaction in \(\mathcal{R}-\mathcal{R}^{\prime}_{j}\) ensure that \(t_{j}\) exists. They also yield the following observation.
**Observation 28**.: _For each \(j\in\{1,2\}\), we have \(\mathbf{c}^{t_{j}}(\Upsilon^{\prime}_{j,v})>0\) and \(\mathbf{c}^{t_{j}}(\Upsilon^{\prime}_{j,1-v})=0\), where \(v=\psi_{j}(\mathbf{x})\). Moreover, \(\mathbf{c}^{t}|_{\mathcal{S}^{\prime}_{j}}=\mathbf{c}^{t_{j}}|_{\mathcal{S}^{\prime}_{j}}\) for every \(t\geq t_{j}\)._
Let \(t_{\max}=\max\{t_{1},t_{2}\}\). The only non-void reactions that can be applicable from step \(t_{\max}\) onward are the \(\beta\) and \(\gamma\) reactions. Moreover, Obs. 28 implies that the number of \(\gamma\) reactions that can be scheduled between any two consecutive \(\beta\) reactions is up-bounded by a linear function of the molecular count of the \(G\) species. Since each \(\beta\) reaction decreases the molecular count of the \(G\) species and since this molecular count never increases, it follows that there exists a step \(t_{\mathrm{leader}}\geq t_{\max}\) such that \(\mathbf{c}^{t_{\mathrm{leader}}}(\{G_{0,0},G_{0,1},G_{1,0},G_{1,1}\})=1\).
From step \(t_{\mathrm{leader}}\) onward, the only non-void reactions that can be applicable are \(\gamma\) reactions and these can be scheduled at most twice (in total) until \(\eta\) reaches a halting configuration in step \(t^{*}\geq t_{\mathrm{leader}}\). Cor. 29 follows by the choice of \(\Upsilon_{\xi,0}\) and \(\Upsilon_{\xi,1}\).
**Corollary 29**.: _Execution \(\eta\) halts in a configuration that includes a single \(G\) molecule that belongs to \(\Upsilon_{\xi,v}\), where \(v=\xi(\psi_{1}(\mathbf{x}),\psi_{2}(\mathbf{x}))\)._
For the halting runtime analysis, let \(n=\|\mathbf{c}^{0}\|\) denote the molecular count of the initial configuration and fix some skipping policy \(\sigma\). For \(j\in\{1,2\}\), let \(\varrho_{j}\) be a runtime policy for the CRD \(\Pi_{j}\) that realizes \(\mathrm{RT}^{\Pi_{j}}_{\mathrm{halt}}(n)\) and let \(\varrho^{\prime}_{j}\) be the runtime policy for \(\Pi^{\prime}_{j}\) derived from \(\varrho_{j}\) by replacing each species \(A\in\mathcal{S}_{j}\) with the \(\Pi^{\prime}_{j}\) designated species \(A^{\prime}_{j}\). We shall bound \(\mathrm{RT}^{\Pi_{\xi}}_{\mathrm{halt}}(n)\) by introducing a runtime policy \(\varrho\) for \(\Pi_{\xi}\) (defined independently of \(\eta\) and \(\sigma\)) and showing that \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\leq O(\mathrm{RT}^{\Pi_{1}}_{\mathrm{halt}}(n)+\mathrm{RT}^{\Pi_{2}}_{\mathrm{halt}}(n)+n)\).
The runtime policy \(\varrho\) is defined as follows for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}_{\xi}}\):
* if \(\mathbf{c}(\Sigma\cup\{F_{\xi}\})>0\), then \(\varrho(\mathbf{c})\) consists of the ignition reactions;
* else if \(\mathbf{c}|_{\mathcal{S}^{\prime}_{1}}\) is not a halting configuration of \(\Pi^{\prime}_{1}\), then \(\varrho(\mathbf{c})=\varrho^{\prime}_{1}(\mathbf{c}|_{\mathcal{S}^{\prime}_{1 }})\);
* else if \(\mathbf{c}|_{\mathcal{S}^{\prime}_{2}}\) is not a halting configuration of \(\Pi^{\prime}_{2}\), then \(\varrho(\mathbf{c})=\varrho^{\prime}_{2}(\mathbf{c}|_{\mathcal{S}^{\prime}_{2 }})\);
* else if \(\mathbf{c}(\{G_{0,0},G_{0,1},G_{1,0},G_{1,1}\})>1\), then \(\varrho(\mathbf{c})\) consists of the \(\beta\) reactions;
* else \(\varrho(\mathbf{c})\) consists of the \(\gamma\) reactions.
Let \(t_{\mathrm{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). We establish the desired upper bound on \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) by introducing the following four rounds:
* \(i_{\mathrm{ignt}}=\min\{i\geq 0\mid t_{\mathrm{e}}(i)\geq t_{\mathrm{ignt}}\}\);
* \(i_{\max}=\min\{i\geq i_{\mathrm{ignt}}\mid t_{\mathrm{e}}(i)\geq t_{\max}\}\);
* \(i_{\mathrm{leader}}=\min\{i\geq i_{\mathrm{max}}\mid t_{\mathrm{e}}(i)\geq t_{ \mathrm{leader}}\}\); and
* \(i^{*}=\min\{i\geq i_{\mathrm{leader}}\mid t_{\mathrm{e}}(i)\geq t^{*}\}\).
Lem. 10 guarantees that the total contribution of rounds \(0\leq i<i_{\mathrm{ignt}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(\log n)\). For the contribution of the subsequent rounds to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\), we establish the following three lemmas.
**Lemma 30**.: _The total contribution of rounds \(i_{\mathrm{ignt}}\leq i<i_{\max}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(\mathrm{RT}^{\Pi_{1}}_{\mathrm{halt}}(n)+\mathrm{RT}^{\Pi_{2}}_{\mathrm{halt}}(n))\)._
Proof.: Follows by the halting runtime bound of \(\Pi_{1}\) and \(\Pi_{2}\) and the fact that the species in \(\mathcal{S}^{\prime}_{1}\) and \(\mathcal{S}^{\prime}_{2}\) are catalysts for any reaction in \(\mathcal{R}-\mathcal{R}^{\prime}_{1}\) and \(\mathcal{R}-\mathcal{R}^{\prime}_{2}\), respectively.
**Lemma 31**.: _The total contribution of rounds \(i_{\max}\leq i<i_{\mathrm{leader}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(n)\)._
Proof.: Given a round \(i_{\max}\leq i<i_{\text{leader}}\), let \(\ell_{i}=\mathbf{e}^{i}(\{G_{0,0},G_{0,1},G_{1,0},G_{1,1}\})\) and recall that \(\varrho(\mathbf{e}^{i})=\{\beta_{u_{1},u_{2},w_{1},w_{2}}\mid u_{1},u_{2},w_{1},w_{2}\in\{0,1\}\}\). Employing Lem. 9, we conclude that the contribution of round \(i\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded as \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/\ell_{i}^{2})\). Notice that \(\ell_{i+1}\leq\ell_{i}\) and that the inequality is strict if round \(i\) is target-accomplished. This is no longer guaranteed if round \(i\) is target-deprived; however, we argue that if \(\ell_{i^{\prime}}=\ell_{i}\) for some \(i^{\prime}>i\), then \(i^{\prime}-i\leq O(1)\). Indeed, this is ensured by Obs. 28 as \(i\geq i_{\max}\). Therefore, the total contribution of rounds \(i_{\max}\leq i<i_{\text{leader}}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by
\[\sum_{j=2}^{n}O\left(\tfrac{n}{j^{2}}\right)\,\leq\,O(n)\cdot\sum_{j=1}^{\infty}\tfrac{1}{j^{2}}\,\leq\,O(n)\,,\]
thus establishing the assertion.
**Lemma 32**.: _The total contribution of rounds \(i_{\text{leader}}\leq i<i^{*}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(n)\)._
Proof.: Follows since there can be at most two such rounds, each contributing a temporal cost of at most \(O(n)\).
Combining Lem. 10 with Lem. 30, 31, and 32, we conclude that \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)=O(\mathrm{RT}^{\Pi_{1}}_{ \mathrm{halt}}(n)+\mathrm{RT}^{\Pi_{2}}_{\mathrm{halt}}(n)+n)\), which yields Prop. 15.
### Detection Predicates
For a vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\), let \(\mathbf{x}_{\downarrow}\in\{0,1\}^{\Sigma}\subset\mathbb{N}^{\Sigma}\) be the vector defined by setting \(\mathbf{x}_{\downarrow}(A)=1\) if \(\mathbf{x}(A)>0\); and \(\mathbf{x}_{\downarrow}(A)=0\) otherwise. A predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) is a _detection_ predicate if \(\psi(\mathbf{x})=\psi(\mathbf{x}_{\downarrow})\) for every vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) (cf. [1, 12, 23]). Chen et al. [12] prove that in the context of the strongly fair adversarial scheduler, a predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) can be stably decided by a stabilization speed fault free CRD if and only if it is a detection predicate. Cor. 4 ensures that the "only if" direction translates to our weakly fair adversarial scheduler; employing Lem. 8, we conclude that a non-detection predicate cannot be decided by a CRD whose stabilization (and hence also halting) runtime is better than \(\Omega(n)\). For the "if" direction, the construction in [12] yields leaderless CRDs that haltingly decide \(\psi\), whose expected halting runtime under the stochastic scheduler is \(O(\log n)\). The following theorem states that the same (asymptotic) runtime upper bound can be obtained under the weakly fair adversarial scheduler.
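The map \(\mathbf{x}\mapsto\mathbf{x}_{\downarrow}\) and the detection property are easy to state in code. The sketch below verifies, over a hypothetical three-species alphabet, that a simple presence predicate ("both \(A\) and \(B\) are present") satisfies \(\psi(\mathbf{x})=\psi(\mathbf{x}_{\downarrow})\) on all small inputs:

```python
import itertools

SIGMA = ('A', 'B', 'C')  # hypothetical three-species input alphabet

def down(x):
    """x -> x_down: record only the presence or absence of each species."""
    return tuple(1 if n > 0 else 0 for n in x)

def psi(x):
    # "both A and B are present" -- depends only on presence, hence detection
    return 1 if x[0] > 0 and x[1] > 0 else 0

# psi is a detection predicate: psi(x) == psi(down(x)) for every input x.
for x in itertools.product(range(4), repeat=len(SIGMA)):
    assert psi(x) == psi(down(x))
```

By contrast, a counting predicate such as "at least two \(A\) molecules are present" fails this test on \(\mathbf{x}=(2,0,0)\), since \(\mathbf{x}_{\downarrow}=(1,0,0)\).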
**Theorem 33**.: _For every detection predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\), there exists a leaderless CRD that haltingly decides \(\psi\) whose halting runtime is \(O(\log n)\). Moreover, the CRD is designed so that all molecules in the halting configuration are voters._
A standard probabilistic argument reveals that in a stochastically scheduled execution, the expected stochastic runtime until every molecule present in the initial configuration reacts at least once is \(\Omega(\log n)\). Employing Lem. 7, we deduce the same asymptotic lower bound for the stabilization (and thus also halting) runtime of any protocol whose outcome may be altered by a single input molecule. Since CRD protocols that decide detection predicates satisfy this property, it follows that the runtime upper bound promised in Thm. 33 is (asymptotically) tight.
We establish Thm. 33 by presenting a (leaderless) CRD \(\Pi=(\mathcal{S},\mathcal{R},\Sigma,\Upsilon_{0},\Upsilon_{1},F,\mathbf{0})\) that haltingly decides a given detection predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\). As promised in the theorem, the halting runtime of \(\Pi\) is \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)=O(\log n)\). We note that both the construction and the runtime analysis of \(\Pi\) are heavily inspired by the construction of [12] for a CRD that decides \(\psi\) (although the runtime in [12] is analyzed assuming a stochastic scheduler).
The species set of protocol \(\Pi\) is defined to be
\[\mathcal{S}\,=\,\Sigma\cup\{F\}\cup\left\{D_{\mathbf{u}}\mid\mathbf{u}\in\{0,1\}^ {\Sigma}\right\}\,.\]
The species in \(\Sigma\cup\{F\}\) are regarded as the ignition species of the ignition gadget presented in Sec. 4.2.2, taking the ignition reaction associated with species \(A\in\Sigma\) to be
\(\iota_{A}\): \(A\to D_{\mathbf{z}^{A}}\),
where \(\mathbf{z}^{A}\in\{0,1\}^{\Sigma}\) denotes the (unit) vector that corresponds to the multiset \(1A\); and the ignition reaction associated with species \(F\) to be
\(\iota_{F}\): \(F\to D_{\mathbf{0}}\).
Semantically, the presence of species \(D_{\mathbf{u}}\) in the configuration indicates that it has already been detected that input species \(A\) was present in the initial configuration for each \(A\in\Sigma\) such that \(\mathbf{u}(A)=1\). Following this semantics, the voter species are defined as
\[\Upsilon_{0}\,=\,\{D_{\mathbf{u}}\mid\psi(\mathbf{u})=0\}\qquad\text{and}\qquad\Upsilon_{1}\,=\,\{D_{\mathbf{u}}\mid\psi(\mathbf{u})=1\}\,.\]
Concretely, given two vectors \(\mathbf{u},\mathbf{u}^{\prime}\in\{0,1\}^{\Sigma}\), let \(\mathbf{u}\vee\mathbf{u}^{\prime}\in\{0,1\}^{\Sigma}\) denote the bitwise logical or of \(\mathbf{u}\) and \(\mathbf{u}^{\prime}\). The non-void reaction set \(\mathrm{NV}(\mathcal{R})\) of protocol \(\Pi\) includes the following reactions on top of the aforementioned ignition reactions:
\(\beta_{\mathbf{u},\mathbf{u}^{\prime}}\): \(D_{\mathbf{u}}+D_{\mathbf{u}^{\prime}}\to 2D_{\mathbf{u}\vee\mathbf{u}^{ \prime}}\) for every distinct \(\mathbf{u},\mathbf{u}^{\prime}\in\{0,1\}^{\Sigma}\).
In other words, the \(\beta\) reactions "spread" the detection of the various input species among all (working) molecules.
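As a sanity check (a sketch of ours; the scheduler below is uniformly random rather than adversarial, and each \(D_{\mathbf{u}}\) molecule is encoded as the frozenset of detected input species), one can simulate the ignition and \(\beta\) reactions and verify that the halting configuration consists solely of \(D_{\mathrm{OR}(\mathbf{c})}\) molecules:

```python
import random

def simulate_detection_crd(inputs, fuel, seed=0):
    """Minimal sketch of the detection CRD over the input counts in
    `inputs`, plus `fuel` F molecules.  A D_u molecule is represented by
    the frozenset u of input species detected so far.  The scheduler is
    uniformly random, except that it prioritizes ignition reactions."""
    rng = random.Random(seed)
    raw = [s for s, c in inputs.items() for _ in range(c)] + ["F"] * fuel
    ds = []  # detector molecules D_u
    while True:
        if raw:  # ignition reactions iota_A: A -> D_{z^A} and iota_F: F -> D_0
            m = raw.pop(rng.randrange(len(raw)))
            ds.append(frozenset() if m == "F" else frozenset({m}))
            continue
        # beta reactions D_u + D_u' -> 2 D_{u | u'} for distinct u, u'
        pairs = [(i, j) for i in range(len(ds)) for j in range(i + 1, len(ds))
                 if ds[i] != ds[j]]
        if not pairs:
            return ds  # halting: all detector molecules carry the same u
        i, j = rng.choice(pairs)
        ds[i] = ds[j] = ds[i] | ds[j]

# From x = 3A (plus 2 fuel molecules) the halting configuration consists
# solely of D_u molecules with u = OR(c) = x_down = {A}.
final = simulate_detection_crd({"A": 3, "B": 0}, fuel=2)
assert all(u == frozenset({"A"}) for u in final)
```

Termination is guaranteed since each \(\beta\) reaction strictly increases the total size of the detection sets, which is bounded by \(n\cdot|\Sigma|\).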
Analysis.For the analysis of protocol \(\Pi\), fix some input vector \(\mathbf{x}\in\mathbb{N}^{\Sigma}\) and let \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) be a valid initial configuration with \(\mathbf{c}^{0}|_{\Sigma}=\mathbf{x}\). Consider a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) of \(\Pi\) emerging from \(\mathbf{c}^{0}\).
Let \(t_{\mathrm{ignt}}\geq 0\) be the earliest step in which the ignition gadget matures in \(\eta\) (as promised in Lem. 4.2). For a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), let \(\mathrm{OR}(\mathbf{c})\) denote the result of the bitwise logical or of all vectors \(\mathbf{u}\in\{0,1\}^{\Sigma}\) such that \(\mathbf{c}(D_{\mathbf{u}})>0\).
**Observation 34**.: _For every step \(t\geq t_{\mathrm{ignt}}\), we have \(\mathrm{OR}(\mathbf{c}^{t})=\mathbf{x}_{\downarrow}\)._
Proof.: Follows from the choice of the ignition reactions together with the fact that the \(\beta\) reactions preserve the bitwise or, so \(\mathrm{OR}(\mathbf{c}^{t+1})=\mathrm{OR}(\mathbf{c}^{t})\) for every \(t\geq t_{\mathrm{ignt}}\).
Let \(n=\|\mathbf{c}^{0}\|\) be the molecular count of the initial configuration and observe that \(\|\mathbf{c}^{t}\|=n\) for all \(t\geq 0\) as \(\Pi\) is density preserving. Given a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) and an input species \(A\in\Sigma\), let
\[w_{A}(\mathbf{c})\,=\,\sum_{\mathbf{u}\in\{0,1\}^{\Sigma}\,:\,\mathbf{u}(A)=1} \mathbf{c}(D_{\mathbf{u}})\qquad\text{and}\qquad w(\mathbf{c})\,=\,\sum_{A \in\Sigma}w_{A}(\mathbf{c})\,.\]
**Observation 35**.: _The following three properties hold for every step \(t\geq t_{\mathrm{ignt}}\) and input species \(A\in\Sigma\):_
(1) \(w_{A}(\mathbf{c}^{t+1})\geq w_{A}(\mathbf{c}^{t})\);
(2) if \(\zeta^{t}=\beta_{\mathbf{u},\mathbf{u}^{\prime}}\) and \(\mathbf{u}(A)\neq\mathbf{u}^{\prime}(A)\), then \(w_{A}(\mathbf{c}^{t+1})=w_{A}(\mathbf{c}^{t})+1\); and
(3) \(0\leq w(\mathbf{c}^{t})\leq n\cdot|\Sigma|\).
As \(\mathrm{app}(\mathbf{c}^{t})\) includes a \(\beta\) reaction if and only if \(\mathbf{c}^{t}(D_{\mathbf{u}})>0\) for some \(\mathbf{u}\neq\mathrm{OR}(\mathbf{c}^{t})\), we obtain the following observation.
**Observation 36**.: _There exists a step \(t^{*}\geq t_{\mathrm{ignt}}\) such that \(\mathbf{c}^{t^{*}}\) is halting and \(\mathbf{c}^{t^{*}}(D_{\mathbf{u}})>0\) implies that \(\mathbf{u}=\mathrm{OR}(\mathbf{c}^{t^{*}})\)._
Cor. 37 now follows by the definition of \(\Upsilon_{0}\) and \(\Upsilon_{1}\) due to Obs. 34 and 36.
**Corollary 37**.: _Protocol \(\Pi\) is haltingly correct._
For the halting runtime analysis, we fix some skipping policy \(\sigma\) and prove that \(\operatorname{RT}^{\Pi}_{\operatorname{halt}}(n)\leq O(\log n)\) by presenting a runtime policy \(\varrho\) for \(\Pi\) (defined independently of \(\eta\) and \(\sigma\)) and showing that \(\operatorname{RT}^{\varrho,\sigma}_{\operatorname{halt}}(\eta)\leq O(\log n)\). Given a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), the runtime policy \(\varrho\) is defined as follows:
* if \(\mathbf{c}(\Sigma\cup\{F\})>0\), then \(\varrho(\mathbf{c})\) consists of the ignition reactions;
* else \(\varrho(\mathbf{c})=\operatorname{NV}(\mathcal{R})\).
**Lemma 38**.: \(\operatorname{RT}^{\varrho,\sigma}_{\operatorname{halt}}(\eta)\leq O(\log n)\)_._
Proof.: Let \(t_{\mathrm{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). Let \(i_{\operatorname{ignt}}=\min\{i\geq 0\mid t_{\mathrm{e}}(i)\geq t_{\operatorname{ignt}}\}\) and let \(i^{*}=\min\{i\geq i_{\operatorname{ignt}}\mid t_{\mathrm{e}}(i)\geq t^{*}\}\). Lem. 10 ensures that \(\sum_{i=0}^{i_{\operatorname{ignt}}-1}\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\leq O(\log n)\), so it remains to prove that \(\sum_{i=i_{\operatorname{ignt}}}^{i^{*}-1}\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\leq O(\log n)\).
Fix a round \(i_{\operatorname{ignt}}\leq i<i^{*}\) and notice that the definition of \(\varrho\) guarantees that round \(i\) is target-accomplished. For an input species \(A\in\Sigma\), denote \(w_{A}(i)=w_{A}(\mathbf{e}^{i})\). Let \(A(i)\) be the first (according to an arbitrary total order on \(\Sigma\)) input species \(A\in\Sigma\) that satisfies \(w_{A}(i+1)>w_{A}(i)\).
The key observation now is that the temporal cost associated with round \(i\) is up-bounded as
\[\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\,\leq\,O\left(\frac{n}{w_{A(i)} (i)\cdot(n-w_{A(i)}(i))}\right)\,.\]
Therefore, we can establish the assertion by developing
\[\sum_{i=i_{\operatorname{ignt}}}^{i^{*}-1}\operatorname{TC}^{ \varrho}(\mathbf{e}^{i}) \,\leq\sum_{A\in\Sigma}\sum_{i_{\operatorname{ignt}}\leq i<i^{* }\,:\,A(i)=A}O\left(\frac{n}{w_{A}(i)\cdot(n-w_{A}(i))}\right)\] \[\,\leq\sum_{A\in\Sigma}\sum_{i_{\operatorname{ignt}}\leq i<i^{* }\,:\,w_{A}(i+1)=w_{A}(i)+1}O\left(\frac{n}{w_{A}(i)\cdot(n-w_{A}(i))}\right)\] \[\,\leq O(n)\cdot|\Sigma|\cdot\sum_{i=1}^{n-1}\frac{1}{i\cdot(n-i)}\] \[\,= O(n)\cdot|\Sigma|\cdot\left(\sum_{i=1}^{\lfloor n/2\rfloor} \frac{1}{i\cdot(n-i)}\,+\,\sum_{i=\lfloor n/2\rfloor+1}^{n-1}\frac{1}{i\cdot( n-i)}\right)\] \[\,\leq O(1)\cdot|\Sigma|\cdot\left(\sum_{i=1}^{\lfloor n/2\rfloor} \frac{1}{i}\right)\,\leq\,O(\log n)\,,\]
where the third transition follows from Obs. 35 and the last transition holds as \(|\Sigma|=O(1)\).
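The final transitions rest on the identity \(\sum_{i=1}^{n-1}\frac{n}{i\,(n-i)}=\sum_{i=1}^{n-1}\left(\frac{1}{i}+\frac{1}{n-i}\right)=2H_{n-1}=\Theta(\log n)\); a quick numerical check (ours) confirms the logarithmic growth:

```python
from math import log

def harmonic_type_sum(n):
    """Compute sum_{i=1}^{n-1} n / (i * (n - i))."""
    return sum(n / (i * (n - i)) for i in range(1, n))

# Partial fractions give n/(i(n-i)) = 1/i + 1/(n-i), so the sum equals
# 2 * H_{n-1} ~ 2 ln n.
for n in (100, 1000, 10000):
    h = sum(1.0 / i for i in range(1, n))
    assert abs(harmonic_type_sum(n) - 2 * h) < 1e-9
    assert abs(harmonic_type_sum(n) / log(n) - 2) < 0.3
```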
Thm. 33 follows from Cor. 37 and Lem. 38.
## 6 Vote Amplification
Recall that CRDs are required to stabilize/halt into configurations \(\mathbf{c}\) that include a positive number of \(v\)-voter molecules and zero \((1-v)\)-voter molecules, where \(v\in\{0,1\}\) is determined
by the decided predicate according to the input vector. This requirement alone does not rule out the possibility of having a small (yet positive) voter molecular count in \(\mathbf{c}\). Indeed, the semilinear predicate CRDs promised in Thm. 12 are designed so that the configuration \(\mathbf{c}\) includes a single voter molecule (this is in contrast to the detection predicate CRDs promised in Thm. 33, where all molecules in \(\mathbf{c}\) are voters).
In practice though, it may be difficult to obtain a meaningful signal from small molecular counts. Consequently, we aim for _vote amplified_ CRDs, namely, CRDs that guarantee to stabilize/halt into configurations in which the voter molecules take all but an \(\epsilon\)-fraction of the total molecular count for an arbitrarily small constant \(\epsilon>0\). These are obtained by means of a "generic compiler" that can be applied, in a black-box manner, to any existing CRD, turning it into a vote amplified CRD while preserving the original stabilization/halting correctness. At the heart of this compiler lies a CRN protocol for a standalone computational task, referred to as _vote amplification (VA)_, whose runtime dominates the runtime overhead of the compiler, as stated in the following theorem (proved in Sec. 6.1).
**Theorem 39**.: _Consider a predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) that can be haltingly decided by a (leaderless) CRD in \(T_{\psi}(n)\) time. The existence of a VA protocol that stabilizes (resp., halts) in \(T_{\mathrm{amp}}(n)\) time implies that for any constant \(\epsilon>0\), there exists a (leaderless) CRD that stably (resp., haltingly) decides \(\psi\) in \(T_{\psi}(O(n))+T_{\mathrm{amp}}(O(n))+O(\log n)\) time so that the non-voter molecules take at most an \(\epsilon\)-fraction of the molecular count of the configuration(s) into which the CRD stabilizes (resp., halts)._
Assuming a stochastic scheduler, Angluin et al. [3] develop a VA protocol that halts in \(O(n)\) time. Unfortunately, the protocol of [3] does not meet the topological conditions of Lem. 2, hence the (weakly fair) adversarial scheduler can prevent this protocol from stabilizing (see Appendix C for more details). Using a completely different technique, we develop a VA protocol whose guarantees are cast in the following theorem.
**Theorem 40**.: _There exists a VA protocol (operating under the weakly fair scheduler) that stabilizes in \(O(n)\) time and halts in \(O(n\log n)\) time._
Combined with Thm. 39, we obtain a compiler whose stabilization and halting runtime overheads are \(O(n)\) and \(O(n\log n)\), respectively. Applying this compiler to the CRDs promised in Thm. 12 results in vote amplified CRDs whose stabilization runtime remains \(O(n)\); however, their halting runtime increases to \(O(n\log n)\). The excessive \(\log n\) factor could be shaved by a VA protocol that halts in \(O(n)\) time, whose existence remains an open question.
Task Formalization.As stated in Thm. 39, our compiler is formalized by means of the VA task. A VA protocol is a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) whose species set \(\mathcal{S}\) is partitioned into the pairwise disjoint sets \(\mathcal{P}_{0}\cup\mathcal{P}_{1}\cup\mathcal{F}_{0}\cup\mathcal{F}_{1}= \mathcal{S}\), where for \(v\in\{0,1\}\), the species in \(\mathcal{P}_{v}\) are referred to as _permanent \(v\)-voters_ and the species in \(\mathcal{F}_{v}\) are referred to as _fluid \(v\)-voters_. The permanent voters are regarded as part of the task specification and can participate in the reactions of \(\Pi\) only as catalysts (which means that the molecular count of each permanent voter remains invariant throughout the execution).
A configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) is valid as an initial configuration for the VA task if there exists a vote \(v\in\{0,1\}\) such that \(\mathbf{c}^{0}(\mathcal{P}_{v})>0\) and \(\mathbf{c}^{0}(\mathcal{P}_{1-v})=0\), in which case we refer to \(\mathbf{c}^{0}\) as a _\(v\)-voting_ initial configuration. For convenience, we further require that \(\mathbf{c}^{0}(\mathcal{P}_{0}\cup\mathcal{P}_{1})\leq\mathbf{c}^{0}(\mathcal{F}_{0}\cup\mathcal{F}_{1})\), which means that the permanent voters (in fact, the voters in \(\mathcal{P}_{v}\)) do not dominate the initial molecular count.22
Footnote 22: This requirement is not inherent to the task formulation and we present it solely for the sake of the proof.
A configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) is said to be an _amplification_ of a \(v\)-voting initial configuration \(\mathbf{c}^{0}\) if (1) \(\mathbf{c}(A)=\mathbf{c}^{0}(A)\) for every \(A\in\mathcal{P}_{0}\cup\mathcal{P}_{1}\); (2) \(\mathbf{c}(\mathcal{F}_{v})=\mathbf{c}^{0}(\mathcal{F}_{0}\cup\mathcal{F}_{1})\); and (3) \(\mathbf{c}(\mathcal{F}_{1-v})=0\). In other words, an amplification of a \(v\)-voting initial configuration keeps the original permanent voter molecules and shifts all fluid voter molecules to the \(v\)-voting side.
The VA protocol \(\Pi\) is stably (resp., haltingly) correct if every weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) stabilizes (resp., halts) into the (set of) amplifications of its initial configuration \(\mathbf{c}^{0}\). The typical scenario involves a small number of permanent \(v\)-voter molecules and the challenge is to ensure that all (asymptotically many) fluid voter molecules "end up" in \(\mathcal{F}_{v}\). We emphasize that for \(\Pi\) to be correct, the protocol should handle any initial configuration \(\mathbf{c}^{0}|_{\mathcal{F}_{0}\cup\mathcal{F}_{1}}\) of the fluid voters.
The VA Protocol.We now turn to develop the VA protocol \(\Pi=(\mathcal{S},\mathcal{R})\) promised in Thm. 40. Fix some sets \(\mathcal{P}_{0}\) and \(\mathcal{P}_{1}\) of permanent \(0\)- and \(1\)-voters, respectively. Protocol \(\Pi\) is defined over the fluid voter sets \(\mathcal{F}_{0}=\{H_{0},L_{0}\}\) and \(\mathcal{F}_{1}=\{H_{1},L_{1}\}\). Semantically, we think of the \(H\) (resp., \(L\)) fluid voters as having a high (resp., low) confidence level in their vote. The reaction set \(\mathcal{R}\) of \(\Pi\) includes the following non-void reactions:
\(\beta^{A}_{v,P_{v}}\colon\,P_{v}+A\to P_{v}+H_{v}\) for every \(v\in\{0,1\}\), \(P_{v}\in\mathcal{P}_{v}\), and \(A\in\{H_{1-v},L_{0},L_{1}\}\);
\(\gamma\colon\,H_{0}+H_{1}\to L_{0}+L_{1}\); and
\(\delta_{v}\colon\,H_{v}+L_{1-v}\to 2L_{v}\) for every \(v\in\{0,1\}\).
In other words, the \(\beta\) reactions let a permanent \(v\)-voter turn any fluid voter other than \(H_{v}\) into a high confidence fluid \(v\)-voter; reaction \(\gamma\) turns two high confidence fluid voters with opposite votes into two low confidence fluid voters with opposite votes; and reaction \(\delta_{v}\) turns a high confidence fluid \(v\)-voter and a low confidence fluid \((1-v)\)-voter into two low confidence fluid \(v\)-voters. Informally, these reactions guarantee that the adversary has little leverage because, as we show soon, _all_ of the non-void reactions make nontrivial progress in their own different ways.
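The following simulation sketch (ours; it tracks molecular counts only and schedules a uniformly random applicable reaction rather than an adversarial one) illustrates that every execution from a 1-voting initial configuration halts with all fluid voters in state \(H_{1}\):

```python
import random

def simulate_va(h0, l0, l1, h1, p1, seed=0):
    """Fluid voter counts (H0, L0, L1, H1) plus p1 > 0 permanent 1-voters.
    Reactions: beta (P1 + A -> P1 + H1 for A in {H0, L0, L1}),
    gamma (H0 + H1 -> L0 + L1), delta_v (Hv + L_{1-v} -> 2 Lv)."""
    rng = random.Random(seed)
    c = {"H0": h0, "L0": l0, "L1": l1, "H1": h1}
    while True:
        applicable = []
        if p1 > 0:
            applicable += [("beta", a) for a in ("H0", "L0", "L1") if c[a] > 0]
        if c["H0"] > 0 and c["H1"] > 0:
            applicable.append(("gamma", None))
        if c["H1"] > 0 and c["L0"] > 0:
            applicable.append(("delta1", None))
        if c["H0"] > 0 and c["L1"] > 0:
            applicable.append(("delta0", None))
        if not applicable:
            return c  # halting configuration
        kind, a = rng.choice(applicable)
        if kind == "beta":
            c[a] -= 1; c["H1"] += 1
        elif kind == "gamma":
            c["H0"] -= 1; c["H1"] -= 1; c["L0"] += 1; c["L1"] += 1
        elif kind == "delta1":
            c["H1"] -= 1; c["L0"] -= 1; c["L1"] += 2
        else:  # delta0
            c["H0"] -= 1; c["L1"] -= 1; c["L0"] += 2

final = simulate_va(h0=20, l0=10, l1=5, h1=5, p1=2)
assert final == {"H0": 0, "L0": 0, "L1": 0, "H1": 40}
```

Termination of the loop is exactly the score argument presented in the runtime analysis: every scheduled non-void reaction strictly increases the total score, which is bounded.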
For the runtime analysis of protocol \(\Pi\), consider a weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) of initial molecular count \(\|\mathbf{c}^{0}\|=n\). Assume for simplicity that the initial configuration \(\mathbf{c}^{0}\) is \(1\)-voting which means that all permanent voters present in \(\mathbf{c}^{0}\) (and in \(\mathbf{c}^{t}\) for any \(t\geq 0\)) belong to \(\mathcal{P}_{1}\); the case where \(\mathbf{c}^{0}\) is \(0\)-voting is analyzed symmetrically. Let \(m=\mathbf{c}^{0}(\{H_{0},L_{0},L_{1},H_{1}\})\) be the initial molecular count of the fluid voters and observe that \(\mathbf{c}^{t}(\{H_{0},L_{0},L_{1},H_{1}\})=m\) for every \(t\geq 0\).
To capture progress made as execution \(\eta\) advances, we assign an integral score \(s(\cdot)\) to each fluid voter by setting
\[s(H_{0})=-4,\quad s(L_{0})=-1,\quad s(L_{1})=1,\quad\text{and}\quad s(H_{1})= 2\,.\]
Substituting the \(s(\cdot)\) scores into each reaction \(\alpha\in\mathrm{NV}(\mathcal{R})\) that may be scheduled in \(\eta\) (recall that no \(\mathcal{P}_{0}\) molecules are present) reveals that the sum of scores of \(\alpha\)'s fluid reactants is strictly smaller than the sum of scores of \(\alpha\)'s fluid products. Denoting the total score in a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) by \(s(\mathbf{c})=\sum_{A\in\{H_{0},L_{0},L_{1},H_{1}\}}\mathbf{c}(A)\cdot s(A)\), we deduce that \(s(\mathbf{c}^{t+1})\geq s(\mathbf{c}^{t})\) and that \(\zeta^{t}\in\mathrm{NV}(\mathcal{R})\Longrightarrow s(\mathbf{c}^{t+1})>s(\mathbf{c}^{t})\) for every \(t\geq 0\). Since \(-4m\leq s(\mathbf{c}^{t})\leq 2m\) for every \(t\geq 0\), it follows that \(\eta\) includes, in total, at most \(O(m)\leq O(n)\) non-void reactions until it halts.
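The score bookkeeping can be verified mechanically (a sketch of ours, listing only the reactions schedulable when no \(\mathcal{P}_{0}\) molecules are present; permanent voters are catalysts and carry no score):

```python
score = {"H0": -4, "L0": -1, "L1": 1, "H1": 2}

# Fluid reactants and products of each non-void reaction that can be
# scheduled in a 1-voting execution (no P0 molecules present).
reactions = {
    "beta_H0": (["H0"], ["H1"]),          # P1 + H0 -> P1 + H1
    "beta_L0": (["L0"], ["H1"]),          # P1 + L0 -> P1 + H1
    "beta_L1": (["L1"], ["H1"]),          # P1 + L1 -> P1 + H1
    "gamma":   (["H0", "H1"], ["L0", "L1"]),
    "delta0":  (["H0", "L1"], ["L0", "L0"]),
    "delta1":  (["H1", "L0"], ["L1", "L1"]),
}

for name, (reactants, products) in reactions.items():
    gain = sum(score[p] for p in products) - sum(score[r] for r in reactants)
    assert gain >= 1, name  # every scheduled non-void reaction gains >= 1
```

Since \(-4m\leq s(\mathbf{c}^{t})\leq 2m\) and every non-void step gains at least \(1\), at most \(6m\) non-void reactions can occur, matching the \(O(m)\) bound.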
The last bound ensures that progress is made whenever a non-void reaction is scheduled. Accordingly, we choose the runtime policy \(\varrho\) so that \(\varrho(\mathbf{c})=\mathrm{NV}(\mathcal{R})\) for all configurations \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\).23 This means in particular that for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), the only configuration
reachable from \(\mathbf{c}\) via a \(\varrho\)-avoiding path is \(\mathbf{c}\) itself.
Fix some skipping policy \(\sigma\) and let \(\mathbf{e}^{i}\) be the effective configuration of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). Let \(i^{*}=\min\{i\geq 0\mid\mathbf{e}^{i}(\{H_{0},L_{0}\})=0\}\) be the first round whose effective step appears after \(\eta\) stabilizes and let \(i^{**}=\min\{i\geq 0\mid\mathbf{e}^{i}(\{H_{0},L_{0},L_{1}\})=0\}\) be the first round whose effective step appears after \(\eta\) halts. Since the choice of \(\varrho\) ensures that each round \(0\leq i<i^{**}\) is target-accomplished, ending with a non-void reaction, it follows that \(i^{*}\leq i^{**}\leq O(n)\).
To bound the stabilization runtime of execution \(\eta\) under \(\varrho\) and \(\sigma\), we argue that \(\pi_{\mathbf{e}^{i}}(\operatorname{NV}(\mathcal{R}))\geq\Omega(1)\) for every \(0\leq i<i^{*}\); this allows us to employ Lem. 9 and conclude that \(\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\leq O(1)\) for every \(0\leq i<i^{*}\). To this end, notice that if \(\mathbf{e}^{i}(H_{1})\geq m/2\), then
\[\pi_{\mathbf{e}^{i}}\left(\{\gamma,\delta_{1}\}\right)\,=\,\tfrac{1}{\varphi} \cdot\mathbf{e}^{i}(H_{1})\cdot\mathbf{e}^{i}(\{H_{0},L_{0}\})\,\geq\,\Omega( m/n)\,=\,\Omega(1)\,.\]
Otherwise (\(\mathbf{e}^{i}(H_{1})<m/2\)), we know that \(\mathbf{e}^{i}(\{H_{0},L_{0},L_{1}\})>m/2\), hence
\[\pi_{\mathbf{e}^{i}}\left(\{\beta^{A}_{1,P_{1}}\mid P_{1}\in \mathcal{P}_{1},A\in\{H_{0},L_{0},L_{1}\}\}\right)\,= \,\tfrac{1}{\varphi}\cdot\mathbf{e}^{i}(\{H_{0},L_{0},L_{1}\}) \cdot\mathbf{e}^{i}(\mathcal{P}_{1})\] \[\geq \,\Omega(m/n)\,=\,\Omega(1)\,,\]
thus establishing the argument. Therefore, the stabilization runtime of \(\eta\) satisfies
\[\operatorname{RT}^{\varrho,\sigma}_{\mathrm{stab}}(\eta)\,=\,\sum_{i=0}^{i^{*} -1}\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\,\leq\,\sum_{i=0}^{O(n)}O(1)\,= \,O(n)\,.\]
To bound the halting runtime of \(\eta\) under \(\varrho\) and \(\sigma\), fix some round \(i^{*}\leq i<i^{**}\) and observe that \(\mathbf{e}^{i}(\{H_{0},L_{0}\})=0\) and that \(\operatorname{app}(\mathbf{e}^{i})\cap\operatorname{NV}(\mathcal{R})=\{\beta^ {L_{1}}_{1,P_{1}}\mid P_{1}\in\mathcal{P}_{1}\}\). Let \(\ell_{i}=\mathbf{e}^{i}(L_{1})\) and notice that \(\pi_{\mathbf{e}^{i}}(\{\beta^{L_{1}}_{1,P_{1}}\mid P_{1}\in\mathcal{P}_{1}\}) \geq\ell_{i}/\varphi\geq\Omega(\ell_{i}/n)\). Therefore, we can employ Lem. 9 to conclude that \(\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/\ell_{i})\). Since \(\ell_{i+1}<\ell_{i}\) for every \(i^{*}\leq i<i^{**}\) and since \(\ell_{i^{*}}\leq m\leq n\) and \(\ell_{i^{**}}=0\), it follows that the halting runtime of \(\eta\) satisfies
\[\operatorname{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\,= \,\operatorname{RT}^{\varrho,\sigma}_{\mathrm{stab}}(\eta)+\sum_{i =i^{*}}^{i^{**}-1}\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\] \[\leq \,O(n)+\sum_{\ell=1}^{n}O(n/\ell)\,=\,O(n)\cdot\sum_{\ell=1}^{n}1 /\ell\,=\,O(n\log n)\,,\]
thus establishing Thm. 40.
### Obtaining Vote Amplified CRDs
Let \(\Pi_{\psi}=(\mathcal{S}_{\psi},\mathcal{R}_{\psi},\Sigma,\Upsilon_{\psi,0}, \Upsilon_{\psi,1},F_{\psi},\mathbf{k}_{\psi})\) be a CRD protocol that haltingly decides the predicate \(\psi:\mathbb{N}^{\Sigma}\to\{0,1\}\) in \(T_{\psi}(n)\) time using the runtime policy \(\varrho_{\psi}\). Let \(\Pi^{\prime}_{\psi}=(\mathcal{S}^{\prime}_{\psi},\mathcal{R}^{\prime}_{\psi}, \Sigma^{\prime},\Upsilon^{\prime}_{\psi,0},\Upsilon^{\prime}_{\psi,1},F^{\prime} _{\psi},\mathbf{k}^{\prime}_{\psi})\) be the CRD derived from \(\Pi_{\psi}\) by replacing each species \(A\in\mathcal{S}_{\psi}\) with a \(\Pi^{\prime}_{\psi}\) designated species \(A^{\prime}\in\mathcal{S}^{\prime}_{\psi}\); in particular, the CRD \(\Pi^{\prime}_{\psi}\) is defined over the input species \(\Sigma^{\prime}=\{A^{\prime}\mid A\in\Sigma\}\). Let \(\varrho^{\prime}_{\psi}\) be the runtime policy for \(\Pi^{\prime}_{\psi}\) derived from \(\varrho_{\psi}\) by replacing each species \(A\in\mathcal{S}_{\psi}\) with the corresponding species \(A^{\prime}\in\mathcal{S}^{\prime}_{\psi}\). Consider a VA protocol \(\Pi_{\mathrm{amp}}=(\mathcal{S}_{\mathrm{amp}},\mathcal{R}_{\mathrm{amp}})\) that stabilizes (resp., halts) in \(T_{\mathrm{amp}}(n)\) time using the runtime policy \(\varrho_{\mathrm{amp}}\) and assume that the permanent \(v\)-voters of \(\Pi_{\mathrm{amp}}\) are identified with the species in \(\Upsilon^{\prime}_{\psi,v}\) for \(v\in\{0,1\}\).
We construct the CRD \(\Pi=(\mathcal{S},\mathcal{R},\Sigma,\Upsilon_{0},\Upsilon_{1},F,\mathbf{k})\) promised in Thm. 39 as follows: The species set of \(\Pi\) is taken to be \(\mathcal{S}=\mathcal{S}_{\psi}\cup\mathcal{S}^{\prime}_{\psi}\cup\mathcal{S}_{ \mathrm{amp}}\). The species in \(\mathcal{S}_{\psi}\) are regarded as the ignition species of the ignition gadget presented in Sec. 4.2.2, taking the ignition reaction associated with species \(A\in\mathcal{S}_{\psi}\) to be
\(\iota_{A}\): \(A\to A^{\prime}+\lceil 1/\epsilon\rceil\cdot B\),
where \(B\in\mathcal{S}_{\mathrm{amp}}\) is an arbitrary fluid voter of \(\Pi_{\mathrm{amp}}\). The remaining non-void reactions of \(\Pi\) are the non-void reactions of \(\Pi^{\prime}_{\psi}\) and \(\Pi_{\mathrm{amp}}\) so that \(\mathrm{NV}(\mathcal{R})=\mathrm{NV}(\mathcal{R}^{\prime}_{\psi})\cup\mathrm{NV}(\mathcal{R}_{\mathrm{amp}})\cup\{\iota_{A}\mid A\in\mathcal{S}_{\psi}\}\). For \(v\in\{0,1\}\), the \(v\)-voters of \(\Pi\) are taken to be the (permanent and fluid) \(v\)-voters of \(\Pi_{\mathrm{amp}}\). Finally, the context \(\mathbf{k}\) of \(\Pi\) is taken to be \(\mathbf{k}=\mathbf{k}_{\psi}\).
Intuitively, the construction of \(\Pi\) ensures that once the ignition gadget has matured, all but an \(\epsilon\)-fraction of the molecules in the test tube are fluid voters (of \(\Pi_{\mathrm{amp}}\)) and that this remains the case subsequently. The fluid voters may interact with the voters of \(\Pi^{\prime}_{\psi}\) -- playing the role of the permanent voters of \(\Pi_{\mathrm{amp}}\) -- and consequently "switch side" back and forth. However, once the (projected) execution of \(\Pi^{\prime}_{\psi}\) halts, all permanent voters present in the test tube have the same vote, so \(\Pi_{\mathrm{amp}}\) can now run in accordance with the definition of the VA task.
Formally, to establish Thm. 39, we construct the runtime policy \(\varrho\) for \(\Pi\) by setting \(\varrho(\mathbf{c})\) as follows for each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\):
* if \(\mathbf{c}(\mathcal{S}_{\psi})>0\), then \(\varrho(\mathbf{c})=\{\iota_{A}\mid A\in\mathcal{S}_{\psi}\}\);
* else if \(\mathbf{c}|_{\mathcal{S}^{\prime}_{\psi}}\) is not a halting configuration of \(\Pi^{\prime}_{\psi}\), then \(\varrho(\mathbf{c})=\varrho^{\prime}_{\psi}(\mathbf{c}|_{\mathcal{S}^{\prime}_ {\psi}})\);
* else \(\varrho(\mathbf{c})=\varrho_{\mathrm{amp}}(\mathbf{c}|_{\mathcal{S}_{\mathrm{ amp}}})\).
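The three bullets translate into a simple dispatch; the sketch below (ours, with stub sub-policies and a stub halting test, all names hypothetical) shows the intended layering:

```python
def rho(c, S_psi, iota, rho_psi_prime, rho_amp, is_halting_psi):
    """Layered runtime policy for the compiled CRD; the configuration c is
    a dict mapping species names to molecular counts."""
    if any(c.get(A, 0) > 0 for A in S_psi):
        return {iota[A] for A in S_psi}   # (1) ignition phase
    if not is_halting_psi(c):
        return rho_psi_prime(c)           # (2) drive Pi'_psi to halt
    return rho_amp(c)                     # (3) then drive Pi_amp

# Toy instantiation (all names hypothetical stubs):
S_psi = {"A"}
iota = {"A": "iota_A"}
halting = lambda c: c.get("A'", 0) == 0
rho_p = lambda c: {"alpha'"}
rho_a = lambda c: {"beta_amp"}

assert rho({"A": 2}, S_psi, iota, rho_p, rho_a, halting) == {"iota_A"}
assert rho({"A'": 1}, S_psi, iota, rho_p, rho_a, halting) == {"alpha'"}
assert rho({"B": 3}, S_psi, iota, rho_p, rho_a, halting) == {"beta_amp"}
```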
Consider a weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) of \(\Pi\) of initial molecular count \(\|\mathbf{c}^{0}\|=n\) and fix a skipping policy \(\sigma\).
By Lemma 10, the construction of the runtime policy \(\varrho\) guarantees that the ignition gadget matures in \(\eta\) within \(O(\log n)\) time; following that, the configurations of \(\eta\) consist only of \(\Pi^{\prime}_{\psi}\) and \(\Pi_{\mathrm{amp}}\) molecules. Recall that if a \(\Pi^{\prime}_{\psi}\) species \(A^{\prime}\in\mathcal{S}^{\prime}_{\psi}\) participates in a \(\Pi_{\mathrm{amp}}\) reaction \(\alpha=(\mathbf{r},\mathbf{p})\in\mathcal{R}_{\mathrm{amp}}\), then \(A^{\prime}\) is a catalyst for \(\alpha\), i.e., \(\mathbf{r}(A^{\prime})=\mathbf{p}(A^{\prime})\), hence the execution of \(\Pi^{\prime}_{\psi}\) is not affected by that of \(\Pi_{\mathrm{amp}}\). In particular, the construction of \(\varrho\) guarantees that \(\Pi^{\prime}_{\psi}\) halts within \(T_{\psi}(O(n))\) time. Once the execution of \(\Pi^{\prime}_{\psi}\) halts, that of \(\Pi_{\mathrm{amp}}\) is set into motion, exploiting the fact that the molecular counts of the voters of \(\Pi^{\prime}_{\psi}\) (that play the role of the permanent voters in \(\Pi_{\mathrm{amp}}\)) remain fixed. The construction of \(\varrho\) then guarantees that \(\Pi_{\mathrm{amp}}\) stabilizes (resp., halts) within \(T_{\mathrm{amp}}(O(n))\) time.
## 7 Justifying the Runtime Policy Definition
Consider a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\). Recall that as defined in Sec. 4, a runtime policy \(\varrho\) for \(\Pi\) can map a given configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) to any subset \(\varrho(\mathbf{c})\subseteq\mathrm{NV}(\mathcal{R})\) of non-void target reactions. This definition is fairly general and the reader may wonder whether it can be restricted without hurting the (asymptotic) runtime efficiency of \(\Pi\). In the current section, we show that this is not the case for four "natural" such restrictions as stated in Prop. 41, 42, 43, and 44.
**Proposition 41**.: _There exists a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) such that \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)=O(n)\), however if we restrict the runtime policies \(\varrho\) so that \(\varrho(\mathbf{c})=Q\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), where \(Q\) is a fixed subset of \(\mathrm{NV}(\mathcal{R})\), then \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)\) cannot be bounded as a function of \(n\)._
**Proposition 42**.: _There exists a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) such that \(\mathrm{RT}^{\Pi}_{\mathrm{stab}}(n)=O(\log n)\), however if we restrict the runtime policies \(\varrho\) so that \(|\varrho(\mathbf{c})|\leq 1\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), then \(\mathrm{RT}^{\Pi}_{\mathrm{stab}}(n)=\Omega(n)\)._
The following two propositions rely on the notation \(\mathcal{E}(\mathbf{c})\), denoting the set of reactions that escape from the component of configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) in the configuration digraph \(D^{\Pi}\).
**Proposition 43**.: _There exists a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) such that \(\mathrm{RT}^{\Pi}_{\mathrm{stab}}(n)=O(\log n)\), however if we restrict the runtime policies \(\varrho\) so that \(\varrho(\mathbf{c})\subseteq\mathcal{E}(\mathbf{c})\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), then \(\mathrm{RT}^{\Pi}_{\mathrm{stab}}(n)=\Omega(n)\)._
**Proposition 44**.: _There exists a CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) such that \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)=O(n)\), however if we restrict the runtime policies \(\varrho\) so that \(\varrho(\mathbf{c})\supseteq\mathcal{E}(\mathbf{c})\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\), then \(\mathrm{RT}^{\Pi}_{\mathrm{halt}}(n)=\Omega(n\log n)\)._
Prop. 41, 42, 43, and 44 are proved in Sec. 7.1, 7.2, 7.3, and 7.4, respectively. Each proof is followed by a short discussion explaining why the adversarial runtime obtained with our general definition is intuitively more plausible than that obtained with the more restricted definition.
### Fixed Policies
In this section, we prove Prop. 41. To this end, consider the CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) defined over the species set \(\mathcal{S}=\{A_{0},A_{1},X_{0},X_{1},W\}\) and the following non-void reactions:
\(\beta\): \(X_{0}+X_{1}\to 2W\);
\(\gamma_{0}\): \(A_{1}+X_{0}\to A_{0}+X_{0}\); and
\(\gamma_{1}\): \(A_{0}+X_{1}\to A_{1}+X_{1}\).
A configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) is valid as an initial configuration of \(\Pi\) if \(\mathbf{c}^{0}(\{A_{0},A_{1}\})=1\) and \(\mathbf{c}^{0}(X_{0})\neq\mathbf{c}^{0}(X_{1})\).
**Observation 45**.: _For every weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\), there exists a step \(\hat{t}\geq 0\) such that (1) \(\min\{\mathbf{c}^{t}(X_{0}),\mathbf{c}^{t}(X_{1})\}>0\) for every \(0\leq t<\hat{t}\); and (2) \(\min\{\mathbf{c}^{t}(X_{0}),\mathbf{c}^{t}(X_{1})\}=0\) for every \(t\geq\hat{t}\)._
**Observation 46**.: _Every weakly fair execution emerging from a valid initial configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) with \(\mathbf{c}^{0}(X_{j})>\mathbf{c}^{0}(X_{1-j})\), \(j\in\{0,1\}\), halts into a configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) that satisfies_
* \(\mathbf{c}(A_{j})=1\)_;_
* \(\mathbf{c}(X_{j})=\mathbf{c}^{0}(X_{j})-\mathbf{c}^{0}(X_{1-j})\)_; and_
* \(\mathbf{c}(X_{1-j})=\mathbf{c}(A_{1-j})=0\)_._
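Obs. 45 and 46 can be checked on small instances with a simulation sketch (ours; a uniformly random scheduler stands in for the adversary, which suffices here because the halting configuration is unique):

```python
import random

def simulate(x0, x1, a, seed=0):
    """Simulate the protocol of Prop. 41 under a uniformly random
    scheduler.  State: counts of X0 and X1, plus a in {0, 1} recording
    whether the single A molecule is currently A0 or A1."""
    rng = random.Random(seed)
    while True:
        applicable = []
        if x0 > 0 and x1 > 0:
            applicable.append("beta")    # beta:   X0 + X1 -> 2W
        if a == 1 and x0 > 0:
            applicable.append("gamma0")  # gamma0: A1 + X0 -> A0 + X0
        if a == 0 and x1 > 0:
            applicable.append("gamma1")  # gamma1: A0 + X1 -> A1 + X1
        if not applicable:
            return x0, x1, a             # halting configuration
        r = rng.choice(applicable)
        if r == "beta":
            x0 -= 1; x1 -= 1
        elif r == "gamma0":
            a = 0
        else:
            a = 1

# With c0(X0) = 7 > c0(X1) = 3, every run halts with A0 present,
# c(X0) = 7 - 3 = 4, and c(X1) = 0, matching Obs. 46.
assert simulate(7, 3, a=1) == (4, 0, 0)
```

Note that the final counts are independent of the schedule: the \(\gamma\) reactions only toggle the \(A\) molecule, while each \(\beta\) reaction removes one \(X_{0}\) and one \(X_{1}\) molecule.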
Consider the runtime policy \(\varrho\) defined as
\[\varrho(\mathbf{c})\,=\,\begin{cases}\{\beta\}\,,&\mathbf{c}(X_{0})>0\,\wedge \,\mathbf{c}(X_{1})>0\\ \{\gamma_{0},\gamma_{1}\}\,,&\text{otherwise}\end{cases}\,.\]
Fix a weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) of initial molecular count \(\|\mathbf{c}^{0}\|=n\) and a skipping policy \(\sigma\).
**Lemma 47**.: \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\leq O(n)\)_._
Proof.: Let \(\mathpzc{t}_{\mathrm{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{\mathpzc{t}_{\mathrm{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). Let \(\hat{i}=\min\{i\geq 0\mid\mathpzc{t}_{\mathrm{e}}(i)\geq\hat{t}\}\), where \(\hat{t}\) is the step promised in Obs. 45, and let \(i^{*}=\min\{i\geq 0\mid\mathpzc{t}_{\mathrm{e}}(i)\geq t^{*}\}\), where \(t^{*}\) is the halting step promised in Obs. 46. We establish the assertion by proving the following two claims:
(C1) the total contribution of rounds \(0\leq i<\hat{i}\) to \(\mathrm{RT}^{\varrho,\sigma}_{\mathrm{halt}}(\eta)\) is up-bounded by \(O(n)\); and
(C2) \(i^{*}-\hat{i}\leq 1\).
Indeed, the assertion follows as the temporal cost charged to a single round is at most \(O(n)\).
To establish claim (C1), let \(\ell_{i}=\min\{\mathbf{e}^{i}(X_{0}),\mathbf{e}^{i}(X_{1})\}\) for each round \(0\leq i<\hat{i}\). Notice that \(\pi_{\mathbf{e}^{i}}(\beta)=\frac{\mathbf{e}^{i}(X_{0})\cdot\mathbf{e}^{i}(X_{1})}{\varphi}\geq\Omega(\ell_{i}^{2}/n)\) and that \(\pi_{\mathbf{c}}(\beta)=\pi_{\mathbf{e}^{i}}(\beta)\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) reachable from \(\mathbf{e}^{i}\) via a \(\varrho\)-avoiding path. Employing Lem. 9, we conclude that \(\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\leq O(n/\ell_{i}^{2})\). Since \(\ell_{i+1}<\ell_{i}\) for every \(0\leq i<\hat{i}\), it follows that
\[\sum_{i=0}^{\hat{i}-1}\mathrm{TC}^{\varrho}(\mathbf{e}^{i})\,\leq\,\sum_{i=0}^{\hat{i}-1}O(n/\ell_{i}^{2})\,\leq\,O(n)\cdot\sum_{\ell=1}^{\infty}\frac{1}{\ell^{2}}\,=\,O(n)\,.\]
To establish claim (C2), it suffices to observe that from step \(\hat{t}\) onward, only one of the reactions \(\gamma_{0}\) and \(\gamma_{1}\) may be applicable and that the execution halts once this reaction is scheduled.
Now, consider a runtime policy \(\varrho_{Q}\) with a fixed target reaction set \(Q\subseteq\{\beta,\gamma_{0},\gamma_{1}\}\), that is, \(\varrho_{Q}(\mathbf{c})=Q\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\). We argue that \(\gamma_{j}\in Q\) for each \(j\in\{0,1\}\). Indeed, if \(\mathbf{c}(A_{1-j})=1\), \(\mathbf{c}(X_{j})>0\), and \(\mathbf{c}(X_{1-j})=0\), then \(\operatorname{app}(\mathbf{c})=\{\gamma_{j}\}\), hence \(\varrho_{Q}(\mathbf{c})\) must include \(\gamma_{j}\) in order to bound the halting runtime.
However, if \(\{\gamma_{0},\gamma_{1}\}\subseteq Q\), then the adversarial scheduler can construct a weakly fair valid execution \(\eta\) of initial molecular count \(n\) in a manner that forces the protocol to go through arbitrarily many rounds under \(\varrho_{Q}\) and the identity skipping policy \(\sigma_{\mathrm{id}}\) before the execution halts, charging an \(\Omega(1/n)\) temporal cost to each one of them. This is done simply by starting with a configuration that includes both \(X_{0}\) and \(X_{1}\) molecules and then scheduling reactions \(\gamma_{0}\) and \(\gamma_{1}\) in alternation. We conclude that \(\operatorname{RT}^{\varrho_{Q},\sigma_{\mathrm{id}}}_{\mathrm{halt}}(\eta)\) is unbounded (as a function of \(n\)), thus establishing Prop. 41.
In summary, a policy with a fixed target reaction set can inappropriately reward an adversarial scheduler that delays progress indefinitely: the longer the delay, the larger the runtime. This runs counter to the philosophy of adversarial runtimes that are standard in distributed computing, discussed in Sec. 1.
### Singleton Target Reaction Sets
In this section, we prove Prop. 42. To this end, consider the CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) defined over the species set \(\mathcal{S}=\{A,B,B^{\prime},X,X^{\prime},Y\}\) and the following non-void reactions:
\(\beta\): \(A+A\to 2B\);
\(\gamma\): \(A+X\to A+X^{\prime}\);
\(\gamma^{\prime}\): \(A+X^{\prime}\to A+X\);
\(\delta\): \(X+Y\to 2Y\);
\(\delta^{\prime}\): \(X^{\prime}+Y\to 2Y\);
\(\chi\): \(B+B\to 2B^{\prime}\); and
\(\chi^{\prime}\): \(B^{\prime}+B^{\prime}\to 2B\).
A configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) is valid as an initial configuration of \(\Pi\) if \(\mathbf{c}^{0}(A)=2\), \(\mathbf{c}^{0}(Y)=1\), and \(\mathbf{c}^{0}(\{B,B^{\prime}\})=0\).
The following properties hold for every weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) and for every \(t\geq 0\):
* \(\mathbf{c}^{t}(A),\mathbf{c}^{t}(B),\mathbf{c}^{t}(B^{\prime})\in\{0,2\}\);
* \(\mathbf{c}^{t+1}(A)\leq\mathbf{c}^{t}(A)\);
* if \(\mathbf{c}^{t}(A)=0\), then \(\mathbf{c}^{t}(\{B,B^{\prime}\})=2\);
* \(\mathbf{c}^{t+1}(\{X,X^{\prime}\})\leq\mathbf{c}^{t}(\{X,X^{\prime}\})\); and
* \(\mathbf{c}^{t+1}(Y)\geq\mathbf{c}^{t}(Y)\).
Every weakly fair execution emerging from a valid initial configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) of molecular count \(\|\mathbf{c}^{0}\|=n\) stabilizes into the configurations \(\mathbf{c}\) satisfying \(\mathbf{c}(A)=\mathbf{c}(\{X,X^{\prime}\})=0\) and \(\mathbf{c}(Y)=n-2\).
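As a sanity check of this stabilization claim, the protocol \(\Pi\) can be simulated under a randomized scheduler. The sketch below is illustrative, assuming the reaction list above; the reactions \(\chi\) and \(\chi^{\prime}\) are omitted since they only exchange \(B\) and \(B^{\prime}\) molecules and play no role in stabilization, and the species names are ASCII renderings of those in the text:

```python
import random

def simulate(n, seed=0):
    """Run protocol Pi from a valid initial configuration of molecular
    count n (A=2, Y=1, X=n-3) until A and all X/X' molecules are gone."""
    rng = random.Random(seed)
    c = {"A": 2, "B": 0, "Bp": 0, "X": n - 3, "Xp": 0, "Y": 1}
    reactions = {
        "beta":   ({"A": 2}, {"B": 2}),                   # A + A  -> 2B
        "gamma":  ({"A": 1, "X": 1}, {"A": 1, "Xp": 1}),  # A + X  -> A + X'
        "gammap": ({"A": 1, "Xp": 1}, {"A": 1, "X": 1}),  # A + X' -> A + X
        "delta":  ({"X": 1, "Y": 1}, {"Y": 2}),           # X + Y  -> 2Y
        "deltap": ({"Xp": 1, "Y": 1}, {"Y": 2}),          # X' + Y -> 2Y
    }
    while c["A"] > 0 or c["X"] + c["Xp"] > 0:
        # schedule a uniformly random applicable reaction
        app = [r for r, (react, _) in reactions.items()
               if all(c[s] >= k for s, k in react.items())]
        react, prod = reactions[rng.choice(app)]
        for s, k in react.items():
            c[s] -= k
        for s, k in prod.items():
            c[s] += k
    return c

c = simulate(50)
assert c["A"] == 0 and c["X"] + c["Xp"] == 0 and c["Y"] == 50 - 2
```

The final count \(\mathbf{c}(Y)=n-2\) follows from conservation of the total molecular count together with \(\mathbf{c}(\{B,B^{\prime}\})=2\) once \(\beta\) has fired.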
Consider the runtime policy \(\varrho\) defined as
\[\varrho(\mathbf{c})\,=\,\begin{cases}\{\beta,\delta,\delta^{\prime}\}\,,& \mathbf{c}(A)=2\\ \{\delta,\delta^{\prime}\}\,,&\mathbf{c}(A)=0\end{cases}\,.\]
Fix a weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle_{t\geq 0}\) of initial molecular count \(\|\mathbf{c}^{0}\|=n\) and let \(t^{*}\) be the stabilization step of \(\eta\). Fix a skipping policy \(\sigma\) and let \(t_{\text{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\text{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\). Let \(i^{*}=\min\{i\geq 0\mid t_{\text{e}}(i)\geq t^{*}\}\).
The total contribution of rounds \(0\leq i<i^{*}\) to \(\operatorname{RT}^{\varrho,\sigma}_{\text{stab}}(\eta)\) is up-bounded by \(O(\log n)\).
Proof.: Let \(\ell_{i}=\mathbf{e}^{i}(Y)\) for each round \(0\leq i<i^{*}\). The key observation here is that every round \(0\leq i<i^{*}\) is target-accomplished with \(\ell_{i+1}\geq\ell_{i}\); moreover, there exists at most one round \(0\leq i<i^{*}\) such that \(\ell_{i+1}=\ell_{i}\). Since \(\pi_{\mathbf{c}}(\varrho(\mathbf{e}^{i}))=\pi_{\mathbf{e}^{i}}(\varrho(\mathbf{e}^{i}))\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) reachable from \(\mathbf{e}^{i}\) via a \(\varrho\)-avoiding path and since \(\pi_{\mathbf{e}^{i}}(\varrho(\mathbf{e}^{i}))\geq\frac{\ell_{i}\cdot(n-\ell_{i}-2)}{\varphi}\geq\Omega\left(\frac{\ell_{i}\cdot(n-\ell_{i})}{n}\right)\), it follows by Lem. 9 that \(\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\leq O\left(\frac{n}{\ell_{i}\cdot(n-\ell_{i})}\right)\). As \(\ell_{0}\geq 1\) and \(\ell_{i^{*}-1}\leq n-3\), we can bound the runtime of \(\eta\) under \(\varrho\) and \(\sigma\) as
\[\operatorname{RT}^{\varrho,\sigma}_{\text{stab}}(\eta)\,=\,\sum_{i=0}^{i^{*}-1}\operatorname{TC}^{\varrho}(\mathbf{e}^{i})\,\leq\,\sum_{\ell=1}^{n-3}O\left(\frac{n}{\ell\cdot(n-\ell)}\right)\,\leq\,O(n)\cdot\sum_{\ell=1}^{n-1}\,\frac{1}{\ell\cdot(n-\ell)}\,=\,O(\log n)\,,\]
thus establishing the assertion.
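The final equality rests on the partial-fraction identity \(\frac{1}{\ell(n-\ell)}=\frac{1}{n}\left(\frac{1}{\ell}+\frac{1}{n-\ell}\right)\), which turns the sum into \(\frac{2}{n}H_{n-1}=\Theta\left(\frac{\log n}{n}\right)\). A quick numerical confirmation of this identity:

```python
def lhs(n):
    # n * sum_{l=1}^{n-1} 1/(l*(n-l)), the quantity bounded in the proof
    return n * sum(1.0 / (l * (n - l)) for l in range(1, n))

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    # identity: n * sum_l 1/(l(n-l)) = 2 * H_{n-1}, hence O(log n)
    assert abs(lhs(n) - 2 * harmonic(n - 1)) < 1e-9
```
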
Next, let us examine the efficiency of the runtime policies all of whose target reaction sets are of size (at most) \(1\). To this end, consider such a runtime policy \(\varrho\) and the configuration set
\[S_{0}\,=\,\left\{\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\mid\mathbf{c}(A)=2 \wedge\mathbf{c}(Y)=1\wedge\mathbf{c}(\{B,B^{\prime}\})=0\wedge\|\mathbf{c}\| =n\right\}\,.\]
By definition, every configuration in \(S_{0}\) is a valid initial configuration of \(\Pi\). Moreover, the set \(S_{0}\) forms a component of the configuration digraph \(D^{\Pi}\). As \(\beta\) is the only reaction that escapes \(S_{0}\), we deduce that there must exist a configuration \(\mathbf{c}\in S_{0}\) such that \(\varrho(\mathbf{c})=\{\beta\}\); indeed, if \(\beta\notin\varrho(\mathbf{c})\), then the adversarial scheduler can generate an arbitrarily long sequence of rounds all of whose effective configurations are \(\mathbf{c}\), thus pumping up the stabilization runtime of \(\Pi\). Let \(\hat{\mathbf{c}}\in S_{0}\) be such a configuration.
To low-bound the stabilization runtime of \(\Pi\), construct a weakly fair valid execution \(\eta\) and a skipping policy \(\sigma\) such that the effective configuration of round \(0\) is \(\mathbf{e}^{0}=\hat{\mathbf{c}}\). Notice that \(\pi_{\hat{\mathbf{c}}}(\beta)=2/\varphi\) and that \(\pi_{\mathbf{c}}(\beta)=\pi_{\hat{\mathbf{c}}}(\beta)\) for every configuration \(\mathbf{c}\) reachable from \(\hat{\mathbf{c}}\) via a \(\varrho\)-avoiding path. Therefore, we can employ Lem. 9 to conclude that \(\operatorname{TC}^{\varrho}(\hat{\mathbf{c}})=\varphi/2=\Omega(n)\). Prop. 42 follows as the temporal cost of round \(0\) is clearly (weakly) dominated by the stabilization runtime of the entire execution.
In this example, restricting the runtime policy to a singleton target set limits the meaning of "progress" in an artificial way. Specifically, it does not give the protocol designer credit for ensuring that progress is indeed possible from the effective configuration identified by the adversary.
### Target Reactions \(\subseteq\) Escaping Reactions
Our goal in this section is to prove Prop. 43. To this end, consider the CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) introduced in Sec. 7.2 and recall that
\[\operatorname{RT}^{\Pi}_{\text{stab}}(n)\,=\,O(\log n)\,.\]
Consider the configuration set \(S_{0}\) introduced in Sec. 7.2 and recall that \(S_{0}\) forms a component of the configuration digraph \(D^{\Pi}\) and that \(\beta\in\mathcal{R}\) is the only reaction that escapes from \(S_{0}\).
Moreover, in Sec. 7.2, we prove that if a runtime policy \(\varrho\) satisfies \(\varrho(\mathbf{c})=\{\beta\}\) for some configuration \(\mathbf{c}\in S_{0}\), then there exist a weakly fair valid execution \(\eta\) of initial molecular count \(n\) and a skipping policy \(\sigma\) such that
\[\operatorname{RT}^{\varrho,\sigma}_{\operatorname{stab}}(\eta)=\Omega(n)\,,\]
thus establishing Prop. 43.
### Target Reactions \(\supseteq\) Escaping Reactions
This section is dedicated to proving Prop. 44. To this end, consider the CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) defined over the species set \(\mathcal{S}=\{L,X,W\}\) and the following non-void reactions:
\(\alpha\): \(L+X\to L+W\); and
\(\beta\): \(L+L\to 2W\).
A configuration \(\mathbf{c}^{0}\in\mathbb{N}^{\mathcal{S}}\) is valid as an initial configuration of \(\Pi\) if \(\mathbf{c}^{0}(L)=2\) and \(\mathbf{c}^{0}(W)=0\).
**Observation 51**.: _For every weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\), there exists a step \(t^{*}>0\) such that_
_(1)_ \(\beta\in\operatorname{app}(\mathbf{c}^{t})\) _for every_ \(0\leq t<t^{*}\)_;_
_(2)_ \(\zeta^{t^{*}-1}=\beta\)_; and_
_(3)_ \(\mathbf{c}^{t^{*}}\) _is a halting configuration._
Notice that each configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) forms a singleton component in the configuration digraph \(D^{\Pi}\) and that every applicable non-void reaction is escaping; that is, if \(\mathbf{c}(L)=2\), then \(\mathcal{E}(\mathbf{c})=\{\alpha,\beta\}\). Therefore, the only runtime policy that satisfies the restriction presented in Prop. 44 is the runtime policy \(\varrho_{\supseteq}\) defined so that \(\varrho_{\supseteq}(\mathbf{c})=\{\alpha,\beta\}\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) with \(\mathbf{c}(L)=2\).
**Lemma 52**.: _For every sufficiently large \(n\), there exist a weakly fair valid execution \(\eta\) of initial molecular count \(n\) and a skipping policy \(\sigma\) such that \(\operatorname{RT}^{\varrho_{\supseteq},\sigma}_{\operatorname{halt}}(\eta)= \Omega(n\log n)\)._
Proof.: Construct the execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) by scheduling \(\zeta^{t}=\alpha\) for \(t=0,1,\ldots,n-3\), i.e., as long as \(\mathbf{c}^{t}(X)>0\), and then scheduling \(\zeta^{n-2}=\beta\). This means that \(\mathbf{c}^{t}=2L+(n-2-t)X+tW\) for every \(0\leq t\leq n-2\) and that \(\mathbf{c}^{t}=nW\) for every \(t\geq n-1\).
Let \(\sigma\) be the identity skipping policy mapping each step \(t\geq 0\) to \(\sigma(t)=t\). Under \(\varrho_{\supseteq}\) and \(\sigma\), each step constitutes a full round, so each configuration \(\mathbf{c}^{t}\) is the effective configuration of its own round. Since \(\pi_{\mathbf{c}^{t}}(\{\alpha,\beta\})=\frac{1+2(n-2-t)}{\varphi}\), it follows that \(\operatorname{TC}^{\varrho_{\supseteq}}(\mathbf{c}^{t})=\frac{\varphi}{1+2(n -2-t)}\) for each \(0\leq t\leq n-2\). We conclude that
\[\operatorname{RT}^{\varrho_{\supseteq},\sigma}_{\operatorname{halt}}(\eta)\, \geq\,\sum_{t=0}^{n-2}\frac{\varphi}{1+2(n-2-t)}\,=\,\Omega(n)\cdot\sum_{t= 1}^{n}\frac{1}{t}\,=\,\Omega(n\log n)\,,\]
thus establishing the assertion.
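The lower-bound sum above is an odd-index harmonic series: substituting \(m=n-2-t\), it equals \(\sum_{m=0}^{n-2}\frac{\varphi}{1+2m}\), and since \(\varphi=\Theta(n)\), the \(\Omega(n\log n)\) bound reduces to the fact that \(\sum_{m=0}^{n-2}\frac{1}{1+2m}=\Theta(\log n)\). A numerical sanity check of the harmonic part:

```python
import math

def odd_harmonic(n):
    # sum_{m=0}^{n-2} 1/(1 + 2m): reciprocals of the first n-1 odd numbers
    return sum(1.0 / (1 + 2 * m) for m in range(0, n - 1))

for n in (100, 10**4, 10**5):
    # the sum grows like (1/2) ln n, so the ratio stays bounded away
    # from 0 and from 1
    ratio = odd_harmonic(n) / math.log(n)
    assert 0.45 < ratio < 0.75
```
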
Next, consider the runtime policy \(\varrho\) defined so that \(\varrho(\mathbf{c})=\{\beta\}\) for every configuration \(\mathbf{c}\in\mathbb{N}^{\mathcal{S}}\) with \(\mathbf{c}(L)=2\). Fix a weakly fair valid execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) of initial molecular count \(n\) and a skipping policy \(\sigma\) and let \(t_{\mathrm{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma\).
The key observation now is that when round \(0\) ends, the execution must have halted, i.e., \(t_{\mathrm{e}}(1)\geq t^{*}\), where \(t^{*}\) is the step promised in Obs. 51. As the temporal cost charged to a single round is always \(O(n)\), we conclude that
\[\operatorname{RT}^{\varrho,\sigma}_{\operatorname{halt}}(\eta)\,\leq\,O(n)\,,\]
thus establishing Prop. 44.
The intuition behind this example is that by repeatedly scheduling \(\alpha\), the adversary can postpone halting, but not indefinitely. Should the protocol designer be charged the temporal cost of _every_ adversarially-scheduled postponing step? It is reasonable to argue that the answer is no: The adversary drives the execution to a pitfall, and the protocol designer should pay for that (to the tune of \(O(n)\)), but should not have to pay for every step of the execution that leads to the pitfall.
## 8 Large Adversarial Runtime vs. Small Expected Stochastic Runtime
This section focuses on the ability of the weakly fair adversarial scheduler to slow down the execution of CRN protocols. In particular, we show that the stabilization/halting runtime of a CRN protocol \(\Pi\) operating under the weakly fair adversarial scheduler may be significantly larger than the expected stochastic stabilization/halting runtime of the same protocol \(\Pi\) when it operates under the stochastic scheduler. This phenomenon is demonstrated by two CRN protocols in which the aforementioned gap is obtained using different strategies: In Sec. 8.1, we present a protocol designed so that the (weakly fair) adversarial scheduler can lead the execution through a sequence of (asymptotically) many pitfall configurations before it stabilizes; a stochastic execution, on the other hand, avoids all those pitfall configurations with high probability and thus, stabilizes much faster. In contrast, the protocol presented in Sec. 8.2 does not admit any pitfall configurations; rather, this protocol is designed so that the adversarial scheduler can "extend" the execution far beyond what one would expect from a stochastically generated execution, thus charging the protocol's runtime for an inflated number of rounds.
### Reaching Multiple Pitfall Configurations
This section presents a CRN protocol for which there exists a weakly fair execution that reaches \(\Theta(n)\) pitfall configurations before stabilization. However, under a stochastic scheduler, the execution reaches stabilization with a high probability of avoiding any pitfall configurations.24 Consider the CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) presented in Fig. 1(a) and the valid initial configuration \(\mathbf{c}^{0}=L_{0}+C+(n-2)X\). In Fig. 1(c), we devise a runtime policy \(\varrho\) demonstrating that the stabilization runtime of \(\Pi\) is \(\mathrm{RT}^{\Pi}_{\mathrm{stab}}(n)=\Theta(n^{2})\). Next, we show that on a stochastically generated execution \(\eta_{r}=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{0\leq t}\) of initial molecular count \(n\), the expected (stochastic) stabilization runtime of \(\Pi\) is \(O(n)\).
Footnote 24: We say that an event occurs with high probability if its probability is at least \(1-n^{-c}\) for an arbitrarily large constant \(c\).
Notice that after every consumption of an \(X\) molecule (and production of a \(Y\) molecule), the resulting configuration \(\mathbf{c}\) satisfies \(\mathbf{c}(L_{0})=1\). Recalling that \(\mathbf{c}^{0}(L_{0})=1\) and that \(\mathbf{c}^{0}(X)=n-2\), we conclude that until stabilizing, the stochastic execution reaches exactly \(\frac{1}{2}n\) configurations \(\mathbf{c}^{0}_{L_{0}},\mathbf{c}^{1}_{L_{0}},\ldots,\mathbf{c}^{\frac{1}{2}n-1}_{L_{0}}\) such that \(\mathbf{c}^{j}_{L_{0}}(L_{0})=1\) for every \(0\leq j\leq\frac{1}{2}n-1\).
For every \(0\leq j\leq\frac{1}{2}n-1\) it holds that \(\mathbf{c}^{j}_{L_{0}}(X)=n-2-j\).
Fix some \(0\leq j\leq\frac{1}{2}n-2\). We analyze the contribution to the expected stochastic runtime of the step interval \([t_{j},t_{j+1}-1]\) where \(\mathbf{c}^{t_{j}}=\mathbf{c}^{j}_{L_{0}}\) and \(\mathbf{c}^{t_{j+1}}=\mathbf{c}^{j+1}_{L_{0}}\). Observe that exactly one of the following holds: (I) \(1\leq t_{j+1}-t_{j}\leq k\); or (II) \(t_{j+1}-t_{j}=k+2\) (refer to the configuration digraph \(D^{\Pi}\) for \(k=2\) shown in Fig. 1(b)). Recalling that
for every \(t\geq 0\), combined with Obs. 53, we conclude that the probability of \(t_{j+1}-t_{j}=q\) for \(1\leq q\leq k\) (event (I)) is \(\left(\frac{1}{n-1-j}\right)^{q-1}\cdot\frac{n-2-j}{n-1-j}\), and the probability of event (II) is \(\left(\frac{1}{n-1-j}\right)^{k}\). Therefore, we get
\[\mathbb{E}(t_{j+1}-t_{j})\] \[= n\left(\sum_{q=1}^{k}\left(\left(\frac{1}{n-1-j}\right)^{q-1} \cdot\frac{n-2-j}{n-1-j}\cdot\left((q-1)n+\frac{n}{n-2-j}\right)\right)+(k+1) n\cdot\left(\frac{1}{n-1-j}\right)^{k}\right)\] \[= \,\Theta\left(\frac{n^{2}}{n-j}\right)\,.\]
Note that we can take \(k\) to be any arbitrarily large constant. Thus, it can be shown that event (I) occurs for every \(0\leq j\leq\frac{1}{2}n-2\) with high probability, which means that the execution does not reach a stabilizing pitfall configuration.
Let
\[\hat{t}\,=\,\min\{t>0\mid\mathbf{c}^{t}(X)\leq\mathbf{c}^{t}(Y)\}\]
be the stabilization step of \(\eta_{r}\). We can bound \(\mathbb{E}(\hat{t})\) as
\[\mathbb{E}(\hat{t})\,\leq\,\left(\sum_{j=0}^{\frac{1}{2}n-2}O\left(\frac{n^{2} }{n-j}\right)\right)\,=\,O\left(n^{2}\right)\,.\]
The upper bound follows by recalling that the time span of each step is \(\Theta(1/n)\). Notice that in order to reach a halting configuration, the final non-void reaction must be \(\gamma_{k}\), which shifts the \(L_{k}\) molecule into \(L_{k+1}\). It can be shown that the expected stochastic halting runtime is \(O(n\log n)\), which is still asymptotically faster than the adversarial (stabilization and halting) runtime.
### Round Inflation
In this section, we present a CRN protocol whose halting runtime is asymptotically larger than the stochastic halting runtime due to a larger number of rounds. Consider the CRN protocol \(\Pi=(\mathcal{S},\mathcal{R})\) presented in Fig. 2(a). Intuitively, an execution of \(\Pi\) halts once a \(\beta\) reaction is scheduled. The adversary can delay halting by scheduling \(\gamma\) reactions again and again until all of the \(X\) molecules are used up. In particular, the adversary can continue to schedule \(\gamma\) reactions when few \(X\) molecules remain, even though such reactions have low propensity and are not likely to be scheduled stochastically. By doing so, the adversary inflates the number of rounds, thus increasing the execution's runtime.
Consider the valid initial configuration \(\mathbf{c}^{0}=L_{0}+((n/2)-1)X+(n/2)Y\) for some sufficiently large even integer \(n\) and fix a weakly fair execution \(\eta=\langle\mathbf{c}^{t},\zeta^{t}\rangle_{t\geq 0}\) emerging from \(\mathbf{c}^{0}\).
For every \(t\geq 0\), we have
(1) \(\mathbf{c}^{t+1}(X)\leq\mathbf{c}^{t}(X)\); and
(2) \(\mathbf{c}^{t+1}(Y)\geq\mathbf{c}^{t}(Y)\).
There exists a step \(t^{*}>0\) such that \(\mathbf{c}^{t}(\{L_{0},L_{1}\})=1\) for every \(0\leq t<t^{*}\) and \(\mathbf{c}^{t}(\{L_{0},L_{1}\})=0\) for every \(t\geq t^{*}\). Moreover, \(t^{*}\) is the halting step of \(\eta\).
We first argue that if \(\eta\) is generated by the stochastic scheduler, then the expected stochastic runtime of the execution prefix \(\langle\mathbf{c}^{t},\zeta^{t}\rangle_{0\leq t<t^{*}}\) is \(O(1)\), where \(t^{*}>0\) is the halting
step promised in Obs. 55. Indeed, since the molecular count of \(Y\) remains \(\Omega(n)\) throughout the execution, it follows that at any step \(0\leq t<t^{*}\), a \(\beta\) reaction is scheduled, which causes \(\eta\) to halt, with probability \(\Omega(1/n)\). Thus, in expectation, it takes \(O(n)\) steps until \(\eta\) halts, whereas the time span of each step is \(O(1/n)\).
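The arithmetic behind the \(O(1)\) bound: if each step halts with probability \(p=\Omega(1/n)\), the halting step is dominated by a geometric random variable with mean \(1/p=O(n)\), and multiplying by the \(O(1/n)\) per-step time span gives \(O(1)\) expected runtime. A deterministic check of the geometric mean (the choice \(p=1/n\) is an illustrative value, not taken from the protocol):

```python
def geometric_mean(p, cutoff=10000):
    # E[T] = sum_{k>=1} k * p * (1-p)^(k-1), truncated at a large cutoff;
    # the tail beyond the cutoff is negligible for the p used below
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, cutoff + 1))

n = 100
p = 1.0 / n
expected_steps = geometric_mean(p)        # ~ n steps until halting
assert abs(expected_steps - n) < 1e-2
assert expected_steps * (1.0 / n) < 1.5   # O(n) steps * O(1/n) time = O(1)
```
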
On the other hand, we argue that a weakly fair adversarial scheduler can construct the execution \(\eta\) so that its (adversarial) halting runtime is \(\Omega(n)\). To this end, denote \(x=\mathbf{c}^{0}(X)\), recalling that \(x=\Omega(n)\). The adversarial scheduler constructs the execution prefix \(\langle\mathbf{c}^{t},\zeta^{t}\rangle_{0\leq t<x}\) by setting
\[\zeta^{t}\,=\,\begin{cases}\gamma_{0}\,,&t=0\bmod 2\\ \gamma_{1}\,,&t=1\bmod 2\end{cases}\,,\]
and chooses the skipping policy to be the identity skipping policy \(\sigma_{\mathrm{id}}\), which leads to the following observation.
The execution \(\eta\) satisfies
\[\mathrm{app}(\mathbf{c}^{t})\,=\,\begin{cases}\{\beta_{0},\gamma_{0}\}\,,&t=0 \bmod 2\\ \{\beta_{1},\gamma_{1}\}\,,&t=1\bmod 2\end{cases}\]
for every \(0\leq t<x\) and \(\mathrm{app}(\mathbf{c}^{x})=\{\beta_{0},\beta_{1}\}\setminus\mathrm{app}(\mathbf{c}^{x-1})\).
Fix a runtime policy \(\varrho\) for \(\Pi\) and let \(t_{\mathrm{e}}(i)\) and \(\mathbf{e}^{i}=\mathbf{c}^{t_{\mathrm{e}}(i)}\) be the effective step and effective configuration, respectively, of round \(i\geq 0\) under \(\varrho\) and \(\sigma_{\mathrm{id}}\). Obs. 56 implies that \(t_{\mathrm{e}}(i)=i\) for every \(0\leq i<x\), hence \(\eta\) includes (at least) \(x=\Omega(n)\) rounds before it halts. The key observation now is that the temporal cost of each round \(0\leq i<x\) is \(\Omega(1)\); this holds since \(\mathbf{c}(\{L_{0},L_{1}\})\leq 1\) for every configuration \(\mathbf{c}\) reachable from \(\mathbf{e}^{i}\), whereas \(L_{0}\) and \(L_{1}\) are reactants of each reaction in \(\mathrm{NV}(\mathcal{R})\) and, in particular, in the target reaction set \(\varrho(\mathbf{e}^{i})\) of round \(i\). Therefore, the halting runtime of \(\eta\) under \(\varrho\) and \(\sigma_{\mathrm{id}}\) is \(\Omega(n)\), as promised.
In Fig. 2(c), we devise a runtime policy \(\varrho\) demonstrating that the \(\Omega(n)\) halting runtime lower bound of \(\Pi\) is tight. The reader might question why the temporal cost of the "late rounds" under \(\varrho\) is \(\Theta(1)\) although the propensity of the \(\gamma\) reactions (scheduled by the adversarial scheduler) in those rounds is \(\Theta(1/n)\). Intuitively, this captures the fact that the temporal cost associated with a round is independent of the reactions scheduled by the adversarial scheduler in that round; rather, it is determined by the reactions targeted by the protocol designer (for that specific round). Put differently, while the adversary has the power to determine which reactions are actually scheduled, the adversary has no control over the expected time for the target reactions to be scheduled.
## 9 Additional Related Work and Discussion
The runtime of stochastically scheduled CRNs is the focus of a vast body of literature, mainly under the umbrella of population protocols. When it comes to stochastically scheduled executions \(\eta=\langle\mathbf{c}^{t},\alpha^{t}\rangle\), an important distinction is made between stabilizing into a desired configuration set \(Z\) in step \(t^{*}\), which means that \(\mathbf{c}\in Z\) for all configurations \(\mathbf{c}\) reachable from \(\mathbf{c}^{t^{*}}\) (as defined in Sec. 2), and _converging_ into \(Z\) in step \(t^{*}\), which means that \(\mathbf{c}^{t}\in Z\) for all \(t\geq t^{*}\). The latter notion is weaker as it applies only to the configurations visited (after step \(t^{*}\)) by the specific execution \(\eta\) and does not rule out the existence of some configurations \(\mathbf{c}\notin Z\) that can be reached. Moreover, in contrast to stabilization (and halting), the notion
of convergence does not make much sense once we switch to the (weakly or strongly fair) adversarial scheduler.
Angluin et al. [3] prove that if a context is available (which is equivalent to having a designated leader in the initial configuration), then any semilinear predicate can be decided by a protocol whose expected convergence runtime (under the stochastic scheduler) is \(\mathrm{polylog}(n)\). However, the expected stabilization runtime of these protocols is \(\Omega(n)\) (see the discussion in [8]). For leaderless protocols, Belleville et al. [8] establish an \(\Omega(n)\) lower bound on the expected stabilization runtime (under the stochastic scheduler) of protocols that decide any predicate which is not _eventually constant_ -- a restricted subclass of semilinear predicates (that includes the detection predicates). This leaves an open question regarding the expected runtime of leaderless protocols that decide non-detection eventually constant predicates and another open question regarding the expected runtime of protocols that decide non-eventually constant predicates with a non-zero context -- see Table 2. Notice that both these questions are answered once we switch to the adversarial scheduler and runtime definition of the current paper -- see Table 1.
Our goal of measuring the runtime of protocols in scenarios that go beyond the (perhaps less realistic) assumption of "purely" stochastically scheduled executions is shared with other papers, again, mainly in the context of population protocols. In particular, Schwartzman and Sudo [29] study population protocols in which an adversary chooses which agents interact at each step, but with probability \(p\), the adversary's choice is replaced by a randomly chosen interaction (the population protocols analog of _smoothed analysis_[32]). Angluin et al. [4] consider population protocols in which a small proportion of agents can assume any identity, and can change that identity over time, thereby skewing the rates at which reactions occur. Both of these runtime definitions seem quite specific in their modeling choices relative to those of the current paper that considers arbitrary (weakly fair) executions. Yet other models of faulty interactions are studied by Di Luna et al. [18, 19], but runtime analysis is still done using the stochastic model, so the emphasis is more on protocol correctness.
The fundamental work of Chen et al. [14] on rate-independent computations in continuous CRNs introduces and relies critically on notions of adversarial schedulers and fairness. In their continuous CRN model, configurations are vectors of concentrations of chemical species (rather than vectors of integral species counts as in the discrete model described in our paper). Trajectories describe how configurations change over time as a result of reaction "fluxes". Roughly, in a fair continuous-time CRN trajectory, an adversary determines how flux is pushed through the CRN's reactions, but must ensure that applicable reactions eventually occur. This notion of a fair adversarial scheduler corresponds quite naturally to the weakly fair adversarial scheduler in our paper, and is used by Chen et al. to characterize what functions can be stably computed by continuous CRNs. Chen et al. do not provide results on the time complexity of continuous-time CRNs, and note that this can be challenging [30, 14, 33].
The current paper leaves several open questions, starting with the ones that stick out from Table 1: Do there exist vote amplified CRDs for (non-detection) semilinear predicates whose halting runtime (under an adversarial scheduler) is \(O(n)\)? If so, can these CRDs be leaderless? Next, while this paper focuses on the adversarial runtime of predicate decidability tasks, much attention has been devoted in the CRN literature also to function computation tasks [13, 22, 8], which calls for a rigorous study of their adversarial runtime. Finally, perhaps the framework of our paper, namely partitioning executions into rounds by means of policies chosen by the adversary and protocol designer, could be useful in formulating a runtime notion of rate-independent continuous CRNs.
arXiv:2303.03861v1, 2023-03-07, http://arxiv.org/abs/2303.03861v1

# Semigroups of (linear) transformations whose restrictions belong to a given semigroup

Mosarof Sarkar, Shubh N. Singh
###### Abstract.
Let \(T(X)\) (resp. L(V)) be the semigroup of all transformations (resp. linear transformations) of a set \(X\) (resp. vector space \(V\)). For a subset \(Y\) of \(X\) and a subsemigroup \(\mathbb{S}(Y)\) of \(T(Y)\), consider the subsemigroup \(T_{\mathbb{S}(Y)}(X)=\{f\in T(X)\colon f_{\restriction_{Y}}\in\mathbb{S}(Y)\}\) of \(T(X)\), where \(f_{\restriction_{Y}}\in T(Y)\) agrees with \(f\) on \(Y\). We give a new characterization for \(T_{\mathbb{S}(Y)}(X)\) to be a regular semigroup [inverse semigroup]. For a subspace \(W\) of \(V\) and a subsemigroup \(\mathbb{S}(W)\) of \(L(W)\), we define an analogous subsemigroup \(L_{\mathbb{S}(W)}(V)=\{f\in L(V)\colon f_{\restriction_{W}}\in\mathbb{S}(W)\}\) of \(L(V)\). We describe regular elements in \(L_{\mathbb{S}(W)}(V)\) and determine when \(L_{\mathbb{S}(W)}(V)\) is a regular semigroup [inverse semigroup, completely regular semigroup]. If \(\mathbb{S}(Y)\) (resp. \(\mathbb{S}(W)\)) contains the identity of \(T(Y)\) (resp. \(L(W)\)), we describe unit-regular elements in \(T_{\mathbb{S}(Y)}(X)\) (resp. \(L_{\mathbb{S}(W)}(V)\)) and determine when \(T_{\mathbb{S}(Y)}(X)\) (resp. \(L_{\mathbb{S}(W)}(V)\)) is a unit-regular semigroup.
Key words and phrases: Semigroups of transformations; Regular and unit-regular elements; Regular semigroups; Unit-regular semigroups; Completely regular semigroups; Inverse semigroups.
## 1. Introduction
We write \(\operatorname{ureg}(S)\) for the set of all unit-regular elements in \(S\). If \(\operatorname{ureg}(S)=S\), then \(S\) is said to be a _unit-regular semigroup_.
Let \(V\) be a vector space over a field. We denote by \(0\) the zero vector of \(V\). The subspaces \(\{0\}\) and \(V\) of \(V\) are called _trivial subspaces_. By \(\langle T\rangle\), we mean the subspace spanned by a subset \(T\) of \(V\). For any subspace \(U\) of \(V\), we write \(\dim(U)\) and \(\operatorname{codim}(U)\) to denote the dimensions of \(U\) and the quotient space \(V/U\), respectively. Let \(f\in L(V)\). Denote by \(N(f)\) the null space \(\{v\in V\colon vf=0\}\) of \(f\). We write \(\operatorname{nullity}(f)\) (resp. \(\operatorname{corank}(f)\)) to denote \(\dim(N(f))\) (resp. \(\dim(V/R(f))\)), and \(V\approx U\) if vector spaces \(V\) and \(U\) are isomorphic. We write \(\operatorname{Aut}(V)\) for the group under composition consisting of all automorphisms of \(V\). We write \(\Omega(V)\) for the subsemigroup of \(L(V)\) consisting of all surjective linear transformations of \(V\); it can be regarded as a linear version of the semigroup \(\Omega(X)\) of surjective transformations of a set \(X\).
We also refer the reader to [5] and [11] for undefined definitions and notation of semigroup theory and linear algebra, respectively.
## 3. The semigroup \(T_{\mathbb{S}(Y)}(X)\)
In this section, we give a new characterization for the regularity of \(T_{\mathbb{S}(Y)}(X)\). Next, we give a new characterization for \(T_{\mathbb{S}(Y)}(X)\) to be an inverse semigroup. If \(I_{Y}\in\mathbb{S}(Y)\), we describe unit-regular elements in \(T_{\mathbb{S}(Y)}(X)\) and then determine when \(T_{\mathbb{S}(Y)}(X)\) is a unit-regular semigroup. Throughout this section, we assume that \(Y\) is a nonempty subset of \(X\) and \(\mathbb{S}(Y)\) is a subsemigroup of \(T(Y)\).
We begin with the following proposition that gives a sufficient condition for \(T_{\mathbb{S}(Y)}(X)\) to be regular.
**Proposition 3.1**.: _If \(\mathbb{S}(Y)\) is a subgroup of \(\text{Sym}(Y)\), then \(T_{\mathbb{S}(Y)}(X)\) is regular._
_Proof_. Let \(f\in T_{\mathbb{S}(Y)}(X)\). Then \(f_{\restriction_{Y}}\in\mathbb{S}(Y)\). If \(\mathbb{S}(Y)\) is a subgroup of \(\text{Sym}(Y)\), then it is obvious that \(f_{\restriction_{Y}}\in\operatorname{reg}(\mathbb{S}(Y))\) and \(R(f_{\restriction_{Y}})=R(f)\cap Y\). Therefore \(f\in\operatorname{reg}(T_{\mathbb{S}(Y)}(X))\) by [6, Theorem 2.2 (1)], and hence \(T_{\mathbb{S}(Y)}(X)\) is regular. \(\square\)
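Proposition 3.1 can also be verified exhaustively on small instances. The sketch below uses a hypothetical instance (not taken from the paper): \(X=\{0,1,2\}\), \(Y=\{0,1\}\), and \(\mathbb{S}(Y)=\operatorname{Sym}(Y)\). It builds \(T_{\mathbb{S}(Y)}(X)\) with maps written as tuples and composes left-to-right, matching the right-action convention \(xf\) used in the paper:

```python
from itertools import product

X, Y = (0, 1, 2), (0, 1)
sym_Y = {(0, 1), (1, 0)}            # Sym(Y), given as restrictions f|_Y

def comp(f, g):                     # x(fg) = (xf)g: maps act on the right
    return tuple(g[f[x]] for x in X)

# T_S(Y)(X): all f in T(X) whose restriction to Y lies in Sym(Y)
T = [f for f in product(X, repeat=3) if (f[0], f[1]) in sym_Y]

def is_regular(f, S):               # f is regular iff f g f = f for some g in S
    return any(comp(comp(f, g), f) == f for g in S)

assert len(T) == 6
assert all(is_regular(f, T) for f in T)  # Prop. 3.1: the semigroup is regular
```
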
The following auxiliary lemma is also needed in the sequel.
**Lemma 3.2**.: _Let \(S\) be a subsemigroup of a group \(G\). Then \(S\) is a subgroup of \(G\) if and only if \(S\) is regular._
Janusz Konieczny [6, Theorem 2.2(2)] characterized the regularity of \(T_{\mathbb{S}(Y)}(X)\). The following theorem provides a new characterization of the regularity of \(T_{\mathbb{S}(Y)}(X)\).
**Theorem 3.3**.: _Let \(\mathbb{S}(Y)\) be a subsemigroup of \(T(Y)\). Then \(T_{\mathbb{S}(Y)}(X)\) is regular if and only if one of the following holds:_
1. \(\mathbb{S}(Y)\) _is a subgroup of_ \(\text{Sym}(Y)\)_._
2. \(\mathbb{S}(Y)\) _is regular and_ \(Y=X\)_._
Proof.: Suppose that \(T_{\mathbb{S}(Y)}(X)\) is regular. If (i) holds, we are done; so assume that (i) does not hold.
Since \(T_{\mathbb{S}(Y)}(X)\) is regular, it follows that \(\mathbb{S}(Y)\) is regular by [6, Proposition 2.1] and the fact that any homomorphic image of a regular semigroup is regular (cf. [5, Lemma 2.4.4]).
To show \(Y=X\), suppose to the contrary that \(Y\neq X\). Observe that \(|Y|\geq 2\), since \(\mathbb{S}(Y)\) is not a subgroup of \(\text{Sym}(Y)\). Now we claim that \(\mathbb{S}(Y)\setminus\Omega(Y)\neq\varnothing\). Suppose to the contrary that \(\mathbb{S}(Y)\setminus\Omega(Y)=\varnothing\). Then \(\mathbb{S}(Y)\subseteq\Omega(Y)\). Since \(\mathbb{S}(Y)\)
is regular, we see that \(\mathbb{S}(Y)\subseteq\operatorname{Sym}(Y)\), and so \(\mathbb{S}(Y)\) is a subgroup of \(\operatorname{Sym}(Y)\) by Lemma 3.2. This contradicts the assumption that (i) does not hold. Hence \(\mathbb{S}(Y)\setminus\Omega(Y)\neq\varnothing\). Choose \(\alpha\in\mathbb{S}(Y)\) such that \(\alpha\notin\Omega(Y)\). Fix \(y_{0}\in Y\setminus R(\alpha)\) and define \(f\in T(X)\) by
\[xf=\begin{cases}x\alpha&\text{ if }x\in Y,\\ y_{0}&\text{ if }x\in X\setminus Y.\end{cases}\]
Clearly \(f\in T_{\mathbb{S}(Y)}(X)\). However, we see that \(R(f)\cap Y\neq R(f_{\restriction_{Y}})\). Therefore \(f\notin\operatorname{reg}(T_{\mathbb{S}(Y)}(X))\) by [6, Theorem 2.2(1)], which contradicts the regularity of \(T_{\mathbb{S}(Y)}(X)\). Hence \(Y=X\).
For the converse, suppose first that (i) holds. Then \(T_{\mathbb{S}(Y)}(X)\) is regular by Proposition 3.1. Next, suppose that (ii) holds. Since \(Y=X\), we have \(T_{\mathbb{S}(Y)}(X)=\mathbb{S}(Y)\), and so \(T_{\mathbb{S}(Y)}(X)\) is regular by hypothesis.
Janusz Konieczny [6, Theorem 2.3] determined when \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup. The following theorem provides a new characterization for \(T_{\mathbb{S}(Y)}(X)\) to be an inverse semigroup.
**Theorem 3.4**.: _Let \(\mathbb{S}(Y)\) be a subsemigroup of \(T(Y)\). Then \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup if and only if_
* \(\mathbb{S}(Y)\) _is an inverse semigroup;_
* _either_ \(Y=X\) _or_ \(|X|=2\)_._
Proof.: Suppose that \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup. Then (i) is true by [6, Proposition 2.1] and the fact that any homomorphic image of an inverse semigroup is an inverse semigroup (cf. [5, Theorem 5.1.4]).
To show (ii), assume that \(Y\neq X\); we prove that \(|X|=2\). First, we claim that \(|X\setminus Y|=1\). Suppose to the contrary that there exist distinct \(x_{1},x_{2}\in X\setminus Y\). Choose \(\alpha\in E(\mathbb{S}(Y))\) and define \(f,g\in T(X)\) by
\[xf=\begin{cases}x\alpha&\text{ if }x\in Y,\\ x_{1}&\text{ if }x\in X\setminus Y\end{cases}\qquad\text{ and }\qquad xg=\begin{cases}x\alpha&\text{ if }x\in Y,\\ x_{2}&\text{ if }x\in X\setminus Y.\end{cases}\]
It is routine to verify that \(f,g\in E(T_{\mathbb{S}(Y)}(X))\). However, we see that \(x_{1}(fg)\neq x_{1}(gf)\), and so \(fg\neq gf\). This leads to a contradiction, because \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup. Hence \(|X\setminus Y|=1\). Write \(X\setminus Y=\{z\}\).
Next, we claim that \(|Y|=1\). Suppose to the contrary that \(|Y|\geq 2\). Let \(\beta\in E(\mathbb{S}(Y))\). There are two cases to consider.
**Case 1:**\(|R(\beta)|=1\). Write \(R(\beta)=\{y_{1}\}\). Since \(|Y|\geq 2\), there exists \(y_{2}\in Y\) such that \(y_{2}\neq y_{1}\). Define \(h\in T(X)\) by
\[xh=\begin{cases}x\beta&\text{ if }x\in Y,\\ y_{2}&\text{ if }x\in X\setminus Y.\end{cases}\]
Clearly \(h\in T_{\mathbb{S}(Y)}(X)\). However, we see that \(R(\beta)\neq Y\cap R(h)\). Therefore \(h\notin\operatorname{reg}(T_{\mathbb{S}(Y)}(X))\) by [6, Theorem 2.2 (1)], and so \(T_{\mathbb{S}(Y)}(X)\) is not regular. This leads to a contradiction, because \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup.
**Case 2:**\(|R(\beta)|\geq 2\). Choose distinct \(y_{1},y_{2}\in R(\beta)\). Define \(f,g\in T(X)\) by
\[xf=\begin{cases}x\beta&\text{if }x\in Y,\\ y_{1}&\text{if }x\in X\setminus Y\end{cases}\qquad\text{ and }\qquad xg=\begin{cases}x\beta&\text{if }x\in Y,\\ y_{2}&\text{if }x\in X\setminus Y.\end{cases}\]
It is routine to verify that \(f,g\in E(T_{\mathbb{S}(Y)}(X))\). However, we see that \(z(fg)\neq z(gf)\), and so \(fg\neq gf\). This leads to a contradiction, because \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup.
Thus, in either case, we get a contradiction. Hence \(|Y|=1\). Since \(|X\setminus Y|=1\), we conclude that \(|X|=2\).
Conversely, suppose that the given conditions hold. If \(Y=X\), then \(T_{\mathbb{S}(Y)}(X)=\mathbb{S}(Y)\), and so \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup by (i). Assume that \(Y\neq X\). Then \(|X|=2\) by (ii), and so \(Y\) is a singleton. Therefore \(\mathbb{S}(Y)\) is a subgroup of \(\operatorname{Sym}(Y)\), and so \(T_{\mathbb{S}(Y)}(X)\) is regular by Theorem 3.3. Further, since \(|X|=2\) and \(|Y|=1\), it is clear that \(T_{\mathbb{S}(Y)}(X)\) has exactly two idempotents, which commute. Hence \(T_{\mathbb{S}(Y)}(X)\) is an inverse semigroup.
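Both directions of the theorem can be sanity-checked by brute force on tiny instances. The sketch below is illustrative only; the sets and subsemigroups are our own choices, not from the text.

```python
from itertools import product, permutations

def comp(f, g):                      # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

def build_T(n, Y, SY):               # T_{S(Y)}(X) for X = {0, ..., n-1}
    return [f for f in product(range(n), repeat=n)
            if tuple(f[y] for y in Y) in SY]

def is_inverse(T):                   # regular, with commuting idempotents
    E = [e for e in T if comp(e, e) == e]
    return (all(any(comp(comp(f, g), f) == f for g in T) for f in T)
            and all(comp(a, b) == comp(b, a) for a in E for b in E))

# |X| = 3 and Y ≠ X: not an inverse semigroup, as Theorem 3.4 predicts
assert not is_inverse(build_T(3, (0, 1), set(permutations((0, 1)))))
# |X| = 2, Y = {0}, S(Y) = {I_Y}: an inverse semigroup
assert is_inverse(build_T(2, (0,), {(0,)}))
```

The failing witness in the \(|X|=3\) case is exactly the pair of non-commuting idempotents constructed in the proof.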
The following lemma is used in the next two results.
**Lemma 3.5**.: _Let \(f\in T_{\mathbb{S}(Y)}(X)\). Then there exists a subset \(T\) of \(X\) such that \(T\) and \(T\cap Y\) are transversals of \(\ker(f)\) and \(\ker(f_{\restriction_{Y}})\), respectively._
Proof.: Observe that \(xf^{-1}\cap Y\neq\varnothing\) for all \(x\in R(f_{\restriction_{Y}})\). Therefore for each \(x\in R(f_{\restriction_{Y}})\), fix \(x^{\prime}\in xf^{-1}\cap Y\), and let \(T_{1}=\{x^{\prime}\colon x\in R(f_{\restriction_{Y}})\}\). By construction of \(T_{1}\), it is clear that \(T_{1}\) is a transversal of \(\ker(f_{\restriction_{Y}})\). Now we consider \(R(f)\setminus R(f_{\restriction_{Y}})\). There are two cases.
**Case 1:**\(R(f)\setminus R(f_{\restriction_{Y}})=\varnothing\). Take \(T=T_{1}\). Clearly \(T\cap Y=T_{1}\). Thus \(T\) and \(T\cap Y\) are transversals of \(\ker(f)\) and \(\ker(f_{\restriction_{Y}})\), respectively.
**Case 2:**\(R(f)\setminus R(f_{\restriction_{Y}})\neq\varnothing\). Clearly \(xf^{-1}\neq\varnothing\) for all \(x\in R(f)\setminus R(f_{\restriction_{Y}})\). Therefore for each \(x\in R(f)\setminus R(f_{\restriction_{Y}})\), fix \(x^{\prime\prime}\in xf^{-1}\), and let \(T=T_{1}\cup\{x^{\prime\prime}\colon x\in R(f)\setminus R(f_{\restriction_{Y}})\}\). It is routine to verify that \(T\) is a transversal of \(\ker(f)\). Now, we see that \(T_{1}=T\cap Y\). Thus \(T\) and \(T\cap Y\) are transversals of \(\ker(f)\) and \(\ker(f_{\restriction_{Y}})\), respectively.
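The choice of representatives in the proof can be mirrored computationally: pick preimages inside \(Y\) first, then fill in the remaining fibres. The sketch below uses a hypothetical \(f\), \(X\), and \(Y\) chosen purely for illustration, and breaks ties with `min`.

```python
def transversal(f, X, Y):
    """Lemma 3.5 construction: choose preimages inside Y first."""
    RfY = {f[y] for y in Y}                                  # R(f|_Y)
    T1 = {min(y for y in Y if f[y] == v) for v in RfY}       # inside Y
    extra = {min(x for x in X if f[x] == v) for v in set(f) - RfY}
    return T1 | extra                                        # T, with T ∩ Y = T1

X, Y = range(4), (0, 1)
f = (1, 0, 0, 3)          # illustrative f with Yf ⊆ Y; f(2) = 0, f(3) = 3
T = transversal(f, X, Y)

# T meets each kernel class of f exactly once ...
assert all(sum(1 for t in T if f[t] == v) == 1 for v in set(f))
# ... and T ∩ Y meets each kernel class of f|_Y exactly once
assert all(sum(1 for t in T & set(Y) if f[t] == v) == 1
           for v in {f[y] for y in Y})
```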
If \(I_{Y}\in\mathbb{S}(Y)\), then it is clear that \(I_{X}\in T_{\mathbb{S}(Y)}(X)\). The following theorem characterizes unit-regular elements in \(T_{\mathbb{S}(Y)}(X)\) when \(I_{Y}\in\mathbb{S}(Y)\).
**Theorem 3.6**.: _Let \(\mathbb{S}(Y)\) be a subsemigroup of \(T(Y)\) such that \(I_{Y}\in\mathbb{S}(Y)\), and let \(f\in T_{\mathbb{S}(Y)}(X)\). Then \(f\in\text{ureg}(T_{\mathbb{S}(Y)}(X))\) if and only if_
* \(f_{\restriction_{Y}}\in\text{ureg}(\mathbb{S}(Y))\)_;_
* \(R(f)\cap Y=R(f_{\restriction_{Y}})\)_;_
* \(|C(f)\setminus C(f_{\restriction_{Y}})|=|D(f)\setminus D(f_{\restriction_{Y}})|\)_, where_ \(C(f)=X\setminus T_{f}\) _and_ \(C(f_{\restriction_{Y}})=Y\setminus T_{(f_{\restriction_{Y}})}\) _for some transversals_ \(T_{f}\) _and_ \(T_{(f_{\restriction_{Y}})}\) _of_ \(\ker(f)\) _and_ \(\ker(f_{\restriction_{Y}})\)_, respectively, such that_ \(Y\cap T_{f}=T_{(f_{\restriction_{Y}})}\)_._
Proof.: Suppose that \(f\in\text{ureg}(T_{\mathbb{S}(Y)}(X))\). Then there exists \(g\in U(T_{\mathbb{S}(Y)}(X))\) such that \(fgf=f\). It follows that \(g_{\restriction_{Y}}\in U(\mathbb{S}(Y))\) and \(f_{\restriction_{Y}}g_{\restriction_{Y}}f_{\restriction_{Y}}=f_{\restriction_{Y}}\), and so \(f_{\restriction_{Y}}\in\text{ureg}(\mathbb{S}(Y))\). Thus (i) holds.
To show (ii) and (iii), we notice that \(T_{T(Y)}(X)\) is the semigroup \(\overline{T}(X,Y)=\{f\in T(X)\colon Yf\subseteq Y\}\) under composition. Since \(f\in\text{ureg}(T_{\mathbb{S}(Y)}(X))\) and \(T_{\mathbb{S}(Y)}(X)\subseteq T_{T(Y)}(X)\), we have \(f\in\text{ureg}(\overline{T}(X,Y))\). Hence (ii) and (iii) follow directly from [12, Theorem 4.2].
Conversely, suppose that \(f\) satisfies (i)-(iii). By (i), there exists \(g_{0}\in U(\mathbb{S}(Y))\) such that \(f_{\restriction_{Y}}g_{0}f_{\restriction_{Y}}=f_{\restriction_{Y}}\).
Now, we observe that \(R(f)\setminus Y\neq\varnothing\) if and only if \(T_{f}\setminus Y\neq\varnothing\). If \(R(f)\setminus Y\neq\varnothing\), then we show that \(g_{1}\colon R(f)\setminus Y\to T_{f}\setminus Y\) defined by \(xg_{1}=x^{\prime}\), where \(x^{\prime}\in T_{f}\cap xf^{-1}\), is bijective. Clearly \(g_{1}\) is injective, since \(x_{1}f^{-1}\cap x_{2}f^{-1}=\varnothing\) for distinct \(x_{1},x_{2}\in R(f)\). To show \(g_{1}\) is surjective, let \(x\in T_{f}\setminus Y\) and write \(z=xf\). Certainly \(z\in R(f)\). By (ii), note that \(R(f)\cap Y=R(f_{\restriction_{Y}})\). Therefore, since \(T_{(f_{\restriction_{Y}})}=Y\cap T_{f}\), we have \(z\in R(f)\setminus Y\). Also, we see that \(zg_{1}=x\), and so \(g_{1}\) is surjective.
By (iii), there is a bijection \(g_{2}\colon D(f)\setminus D(f_{\restriction_{Y}})\to C(f)\setminus C(f_{ \restriction_{Y}})\). Notice that \(X=Y\cup\big{(}R(f)\setminus Y\big{)}\cup\big{(}D(f)\setminus D(f_{\restriction _{Y}})\big{)}\) and also \(X=Y\cup\big{(}T_{f}\setminus Y\big{)}\cup\big{(}C(f)\setminus C(f_{ \restriction_{Y}})\big{)}\). Define \(g\in T(X)\) by
\[xg=\begin{cases}xg_{0}&\text{if }x\in Y,\\ xg_{1}&\text{if }x\in R(f)\setminus Y,\\ xg_{2}&\text{if }x\in D(f)\setminus D(f_{\restriction_{Y}}).\end{cases}\]
It is routine to verify that \(g\) is bijective, \(g\in T_{\mathbb{S}(Y)}(X)\), and \(fgf=f\). Hence \(f\in\operatorname{ureg}(T_{\mathbb{S}(Y)}(X))\).
The following proposition gives a sufficient condition for \(T_{\mathbb{S}(Y)}(X)\) to be unit-regular.
**Proposition 3.7**.: _If \(\mathbb{S}(Y)\) is a subgroup of \(\text{Sym}(Y)\) and \(X\setminus Y\) is finite, then \(T_{\mathbb{S}(Y)}(X)\) is unit-regular._
Proof.: Let \(f\in T_{\mathbb{S}(Y)}(X)\). Then \(f_{\restriction_{Y}}\in\mathbb{S}(Y)\). Since \(\mathbb{S}(Y)\) is a subgroup of \(\text{Sym}(Y)\), it is clear that \(f_{\restriction_{Y}}\in\operatorname{ureg}(\mathbb{S}(Y))\) and \(R(f)\cap Y=R(f_{\restriction_{Y}})\). Thus \(f\) satisfies the conditions (i) and (ii) of Theorem 3.6.
By Lemma 3.5, let \(T_{f}\) and \(T_{(f_{\restriction_{Y}})}\) be transversals of \(\ker(f)\) and \(\ker(f_{\restriction_{Y}})\), respectively, such that \(T_{f}\cap Y=T_{(f_{\restriction_{Y}})}\). Since \(f_{\restriction_{Y}}\in\mathbb{S}(Y)\) and \(\mathbb{S}(Y)\) is a subgroup of \(\text{Sym}(Y)\), we have \(T_{(f_{\restriction_{Y}})}=Y=R(f_{\restriction_{Y}})\). Therefore \(Y\subseteq T_{f}\) and \(Y\subseteq R(f)\). Since \(X\setminus Y\) is finite, it follows that both \(T_{f}\setminus Y\) and \(R(f)\setminus Y\) are finite.
Claim that \(|T_{f}\setminus Y|=|R(f)\setminus Y|\). Notice that \(R(f)\setminus Y\neq\varnothing\) if and only if \(T_{f}\setminus Y\neq\varnothing\). If \(R(f)\setminus Y\neq\varnothing\), then we show that \(\alpha\colon R(f)\setminus Y\to T_{f}\setminus Y\) defined by \(x\alpha=x^{\prime}\), where \(x^{\prime}\in T_{f}\cap xf^{-1}\), is bijective. Clearly \(\alpha\) is injective, since \(x_{1}f^{-1}\cap x_{2}f^{-1}=\varnothing\) for distinct \(x_{1},x_{2}\in R(f)\). To show \(\alpha\) is surjective, let \(x\in T_{f}\setminus Y\) and write \(z=xf\). Certainly \(z\in R(f)\). Since \(R(f)\cap Y=R(f_{\restriction_{Y}})\) and \(T_{(f_{\restriction_{Y}})}=Y\cap T_{f}\), it follows that \(z\in R(f)\setminus Y\). Also, we see that \(z\alpha=x\), and so \(\alpha\) is surjective. Thus \(|T_{f}\setminus Y|=|R(f)\setminus Y|\).
Observe that \(D(f)=(X\setminus Y)\setminus(R(f)\setminus Y)\). Write \(C(f)=X\setminus T_{f}\) and \(C(f_{\restriction_{Y}})=Y\setminus T_{(f_{\restriction_{Y}})}\). It is clear that \(C(f)=(X\setminus Y)\setminus(T_{f}\setminus Y)\). Since \(X\setminus Y\), \(T_{f}\setminus Y\), and \(R(f)\setminus Y\) are finite, and \(|T_{f}\setminus Y|=|R(f)\setminus Y|\), we obtain
\[|C(f)|=|X\setminus Y|-|T_{f}\setminus Y|=|X\setminus Y|-|R(f)\setminus Y|=|(X \setminus Y)\setminus\big{(}R(f)\setminus Y\big{)}|=|D(f)|.\]
Since \(f_{\restriction_{Y}}\) is a bijection on \(Y\), we have \(C(f_{\restriction_{Y}})=\varnothing\) and \(D(f_{\restriction_{Y}})=\varnothing\). Therefore \(|C(f)\setminus C(f_{\restriction_{Y}})|=|C(f)|=|D(f)|=|D(f)\setminus D(f_{ \restriction_{Y}})|\). Thus \(f\) also satisfies the condition (iii) of Theorem 3.6, and so \(f\in\operatorname{ureg}(T_{\mathbb{S}(Y)}(X))\) by Theorem 3.6. Hence, since \(f\) is arbitrary, we conclude that \(T_{\mathbb{S}(Y)}(X)\) is a unit-regular semigroup.
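As with regularity, the proposition can be sanity-checked by brute force on a small instance (the parameters are again our illustrative choices): with \(\mathbb{S}(Y)=\operatorname{Sym}(Y)\) and \(X\setminus Y\) finite, every element admits a bijective pseudo-inverse.

```python
from itertools import product, permutations

X, Y = range(3), (0, 1)              # X \ Y = {2} is finite
SY = set(permutations(Y))            # S(Y) = Sym(Y), a subgroup of Sym(Y)
T = [f for f in product(X, repeat=3) if tuple(f[y] for y in Y) in SY]

def comp(f, g):                      # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

# U(T_{S(Y)}(X)): the bijections in the semigroup
units = [g for g in T if sorted(g) == list(X)]

# unit-regularity: every f has a *unit* g with fgf = f
assert all(any(comp(comp(f, g), f) == f for g in units) for f in T)
```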
The next theorem gives a necessary and sufficient condition for \(T_{\mathbb{S}(Y)}(X)\) to be unit-regular if \(I_{Y}\in\mathbb{S}(Y)\).
**Theorem 3.8**.: _Let \(\mathbb{S}(Y)\) be a subsemigroup of \(T(Y)\) such that \(I_{Y}\in\mathbb{S}(Y)\). Then \(T_{\mathbb{S}(Y)}(X)\) is unit-regular if and only if one of the following holds:_
1. \(\mathbb{S}(Y)\) _is a subgroup of_ \(\text{Sym}(Y)\) _and_ \(X\setminus Y\) _is finite._
2. \(\mathbb{S}(Y)\) _is unit-regular and_ \(Y=X\)_._
Proof.: Suppose that \(T_{\mathbb{S}(Y)}(X)\) is unit-regular. If (i) holds, we are done; so assume that (i) does not hold.
Since \(T_{\mathbb{S}(Y)}(X)\) is unit-regular, it follows that \(\mathbb{S}(Y)\) is unit-regular by [6, Proposition 2.1] and the fact that any homomorphic image of a unit-regular semigroup is unit-regular (cf. [3, Proposition 2.7]).
To show \(Y=X\), suppose to the contrary that \(Y\neq X\). First, we claim that \(\mathbb{S}(Y)\) is not a subgroup of \(\text{Sym}(Y)\). Suppose to the contrary that \(\mathbb{S}(Y)\) is a subgroup of \(\text{Sym}(Y)\). Then we next claim that \(X\setminus Y\) is finite. Suppose to the contrary that \(X\setminus Y\) is infinite. Then there exists \(\alpha\colon X\setminus Y\to X\setminus Y\) which is injective but not surjective. Define \(f\in T(X)\) by
\[xf=\begin{cases}x&\text{if }x\in Y,\\ x\alpha&\text{if }x\in X\setminus Y.\end{cases}\]
Clearly \(f_{\restriction_{Y}}=I_{Y}\) and \(f\in T_{\mathbb{S}(Y)}(X)\). Moreover, we see that \(f\) is injective. It follows that \(Y\) and \(X\) are the only transversals of \(\ker(f_{\restriction_{Y}})\) and \(\ker(f)\), respectively. Therefore \(C(f)=X\setminus T_{f}=\varnothing\) and \(C(f_{\restriction_{Y}})=Y\setminus T_{(f_{\restriction_{Y}})}=\varnothing\), and so \(|C(f)\setminus C(f_{\restriction_{Y}})|=0\). Notice that \(f_{\restriction_{Y}}\) is surjective but \(f\) is not. Therefore \(D(f_{\restriction_{Y}})=\varnothing\) but \(D(f)\neq\varnothing\), and so \(|D(f)\setminus D(f_{\restriction_{Y}})|\neq 0\). Thus \(f\) does not satisfy Theorem 3.6(iii), and hence \(f\notin\text{ureg}(T_{\mathbb{S}(Y)}(X))\) by Theorem 3.6, which contradicts the unit-regularity of \(T_{\mathbb{S}(Y)}(X)\). Thus \(X\setminus Y\) is finite, and so (i) holds. This contradicts the assumption that (i) does not hold. Therefore \(\mathbb{S}(Y)\) is not a subgroup of \(\text{Sym}(Y)\). Hence, since \(T_{\mathbb{S}(Y)}(X)\) is regular, we have \(Y=X\) by Theorem 3.3.
For the converse, suppose first that (i) holds. Then \(T_{\mathbb{S}(Y)}(X)\) is unit-regular by Proposition 3.7. Next, suppose that (ii) holds. Since \(Y=X\), we have \(T_{\mathbb{S}(Y)}(X)=\mathbb{S}(X)\), and so \(T_{\mathbb{S}(Y)}(X)\) is a unit-regular semigroup by hypothesis.
## 4. The semigroup \(L_{\mathbb{S}(W)}(V)\)
In this section, we characterize the regular elements of \(L_{\mathbb{S}(W)}(V)\) and determine when \(L_{\mathbb{S}(W)}(V)\) is a regular semigroup. If \(I_{W}\in\mathbb{S}(W)\), we characterize the unit-regular elements of \(L_{\mathbb{S}(W)}(V)\) and determine when \(L_{\mathbb{S}(W)}(V)\) is a unit-regular semigroup. We also determine when \(L_{\mathbb{S}(W)}(V)\) is a completely regular semigroup, and when it is an inverse semigroup. Throughout this section, we assume that \(W\) is a subspace of \(V\) and \(\mathbb{S}(W)\) is a subsemigroup of \(L(W)\). We begin with the following proposition.
**Proposition 4.1**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\). Then \(\varphi\colon L_{\mathbb{S}(W)}(V)\to\mathbb{S}(W)\) defined by \(f\varphi=f_{\restriction_{W}}\) is an epimorphism. Moreover, if \(I_{V}\in L_{\mathbb{S}(W)}(V)\), then \((I_{V})\varphi=I_{W}\)._
Proof.: Observe that \((fg)_{\restriction_{W}}=f_{\restriction_{W}}g_{\restriction_{W}}\) for all \(f,g\in L_{\mathbb{S}(W)}(V)\). Clearly \(\varphi\colon L_{\mathbb{S}(W)}(V)\to\mathbb{S}(W)\) is a homomorphism, since
\[(fg)\varphi=(fg)_{\restriction_{W}}=f_{\restriction_{W}}g_{\restriction_{W}}=(f \varphi)(g\varphi)\]
for all \(f,g\in L_{\mathbb{S}(W)}(V)\). To show \(\varphi\) is surjective, let \(\alpha\in\mathbb{S}(W)\). Extend \(\alpha\) to a linear operator \(f\) on \(V\), e.g., by letting \(f\) act as \(\alpha\) on \(W\) and as \(0\) on a complement of \(W\); then \(f\in L_{\mathbb{S}(W)}(V)\) and \(f\varphi=f_{\restriction_{W}}=\alpha\), and so \(\varphi\) is surjective. Thus \(\varphi\) is an epimorphism. Further, if \(I_{V}\in L_{\mathbb{S}(W)}(V)\), then it is clear that \((I_{V})\varphi=I_{W}\).
The next theorem gives a characterization of regular elements in \(L_{\mathbb{S}(W)}(V)\).
**Theorem 4.2**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\) and \(f\in L_{\mathbb{S}(W)}(V)\). Then \(f\in\text{reg}(L_{\mathbb{S}(W)}(V))\) if and only if_
* \(f_{\restriction_{W}}\in\text{reg}(\mathbb{S}(W))\)_;_
* \(R(f)\cap W=R(f_{\restriction_{W}})\)_._
Proof.: Suppose that \(f\in\text{reg}(L_{\mathbb{S}(W)}(V))\). Then there exists \(g\in L_{\mathbb{S}(W)}(V)\) such that \(fgf=f\). It follows that \(g_{\restriction_{W}}\in\mathbb{S}(W)\) and \(f_{\restriction_{W}}g_{\restriction_{W}}f_{\restriction_{W}}=f_{\restriction_{W}}\), and so \(f_{\restriction_{W}}\in\text{reg}(\mathbb{S}(W))\). Thus (i) holds. To show (ii), note that \(L_{\mathbb{S}(W)}(V)\subseteq L_{L(W)}(V)\) and \(L_{L(W)}(V)=\overline{L}(V,W)\). Since \(f\in\text{reg}(L_{\mathbb{S}(W)}(V))\), we have \(f\in\text{reg}(\overline{L}(V,W))\), and hence (ii) holds by [8, Proposition 3.1].
Conversely, suppose that \(f\) satisfies the given conditions. By (i), there exists \(\alpha\in\mathbb{S}(W)\) such that \(f_{\restriction_{W}}\alpha f_{\restriction_{W}}=f_{\restriction_{W}}\). Now, let \(B_{1}\) be a basis for \(R(f)\cap W\). Extend \(B_{1}\) to bases \(B_{1}\cup B_{2}\) and \(B_{1}\cup B_{3}\) for \(W\) and \(R(f)\), respectively, where \(B_{2}\subseteq W\setminus\langle B_{1}\rangle\) and \(B_{3}\subseteq R(f)\setminus\langle B_{1}\rangle\). Then \(B_{1}\cup B_{2}\cup B_{3}\) is a basis for \(W+R(f)\). Extend \(B_{1}\cup B_{2}\cup B_{3}\) to a basis \(B_{1}\cup B_{2}\cup B_{3}\cup B_{4}\) for \(V\), where \(B_{4}\subseteq V\setminus(W+R(f))\). Since \(B_{3}\subseteq R(f)\), we have \(vf^{-1}\neq\varnothing\) for all \(v\in B_{3}\). Therefore for each \(v\in B_{3}\), choose \(v^{\prime}\in vf^{-1}\). Define \(h\in L(V)\) by
\[vh=\begin{cases}v\alpha&\text{ if }v\in B_{1}\cup B_{2},\\ v^{\prime}&\text{ if }v\in B_{3},\\ 0&\text{ if }v\in B_{4}.\end{cases}\]
It is routine to verify that \(h\in L_{\mathbb{S}(W)}(V)\) and \(fhf=f\). Hence \(f\in\text{reg}(L_{\mathbb{S}(W)}(V))\).
The following proposition gives a sufficient condition for \(L_{\mathbb{S}(W)}(V)\) to be regular.
**Proposition 4.3**.: _If \(\mathbb{S}(W)\) is a subgroup of \(\text{Aut}(W)\), then \(L_{\mathbb{S}(W)}(V)\) is regular._
Proof.: Let \(f\in L_{\mathbb{S}(W)}(V)\). Then \(f_{\restriction_{W}}\in\mathbb{S}(W)\). If \(\mathbb{S}(W)\) is a subgroup of \(\text{Aut}(W)\), then we certainly have \(f_{\restriction_{W}}\in\text{reg}(\mathbb{S}(W))\) and \(R(f_{\restriction_{W}})=W=R(f)\cap W\). Therefore \(f\in\text{reg}(L_{\mathbb{S}(W)}(V))\) by Theorem 4.2. Hence \(L_{\mathbb{S}(W)}(V)\) is regular.
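A linear analogue of the earlier brute-force check can be run over a small field. In the sketch below (our illustrative choice, not from the text), \(V=\mathrm{GF}(2)^{2}\), \(W=\langle e_{1}\rangle\), and \(\mathbb{S}(W)=\{I_{W}\}\), a subgroup of \(\operatorname{Aut}(W)\); with row vectors acting on the right, \(f\in L_{\mathbb{S}(W)}(V)\) exactly when its matrix has first row \((1,0)\).

```python
from itertools import product

def matmul(A, B):                    # 2x2 matrix product over GF(2);
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

# L_{S(W)}(V): matrices preserving W = <e1> with f|_W = I_W,
# i.e. first row (1, 0); the second row is arbitrary
L = [((1, 0), r2) for r2 in product((0, 1), repeat=2)]

# regularity: every f has some g in the semigroup with fgf = f
assert all(any(matmul(matmul(f, g), f) == f for g in L) for f in L)
```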
The following theorem gives a necessary and sufficient condition for \(L_{\mathbb{S}(W)}(V)\) to be regular.
**Theorem 4.4**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\). Then \(L_{\mathbb{S}(W)}(V)\) is regular if and only if one of the following holds:_
* \(\mathbb{S}(W)\) _is a subgroup of_ \(\text{Aut}(W)\)_._
* \(\mathbb{S}(W)\) _is regular and_ \(W=V\)_._
Proof.: Suppose that \(L_{\mathbb{S}(W)}(V)\) is regular. If (i) holds, we are done; so assume that (i) does not hold.
Since \(L_{\mathbb{S}(W)}(V)\) is regular, it follows that \(\mathbb{S}(W)\) is regular by Proposition 4.1 and the fact that any homomorphic image of a regular semigroup is regular (cf. [5, Lemma 2.4.4]).
To show \(W=V\), suppose to the contrary that \(W\neq V\). Observe that \(W\neq\{0\}\), since \(\mathbb{S}(W)\) is not a subgroup of \(\operatorname{Aut}(W)\). Now we claim that \(\mathbb{S}(W)\setminus\Omega(W)\neq\varnothing\). Suppose to the contrary that \(\mathbb{S}(W)\setminus\Omega(W)=\varnothing\). Then \(\mathbb{S}(W)\subseteq\Omega(W)\). Since \(\mathbb{S}(W)\) is regular, we simply get \(\mathbb{S}(W)\subseteq\operatorname{Aut}(W)\). It follows that \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\) by Lemma 3.2, a contradiction to the assumption that (i) does not hold. Hence \(\mathbb{S}(W)\setminus\Omega(W)\neq\varnothing\). Thus we choose \(\alpha\in\mathbb{S}(W)\) such that \(\alpha\notin\Omega(W)\). Clearly \(W\setminus R(\alpha)\neq\varnothing\). Let \(B_{1}\) be a basis for \(R(\alpha)\). Extend \(B_{1}\) to a basis \(B_{1}\cup B_{2}\) for \(W\), where \(B_{2}\subseteq W\setminus R(\alpha)\). Next, extend \(B_{1}\cup B_{2}\) to a basis \(B_{1}\cup B_{2}\cup B_{3}\) for \(V\), where \(B_{3}\subseteq V\setminus W\). Fix \(v_{0}\in B_{2}\), and define \(f\in L(V)\) by
\[vf=\begin{cases}v\alpha&\text{ if }v\in B_{1}\cup B_{2},\\ v_{0}&\text{ if }v\in B_{3}.\end{cases}\]
Clearly \(f\in L_{\mathbb{S}(W)}(V)\). However, we see that \(R(f_{\upharpoonright W})=R(\alpha)\) and \(R(f)\cap W=\langle R(\alpha)\cup\{v_{0}\}\rangle\), and so \(R(f)\cap W\neq R(f_{\upharpoonright W})\). Therefore \(f\notin\operatorname{reg}(L_{\mathbb{S}(W)}(V))\) by Theorem 4.2, which contradicts the regularity of \(L_{\mathbb{S}(W)}(V)\). Hence \(W=V\).
For the converse, suppose first that (i) holds. Then \(L_{\mathbb{S}(W)}(V)\) is regular by Proposition 4.3. Next, we suppose that (ii) holds. Since \(W=V\), we have \(L_{\mathbb{S}(W)}(V)=\mathbb{S}(V)\), and hence \(L_{\mathbb{S}(W)}(V)\) is regular by hypothesis.
The following lemma is used in the next two results.
**Lemma 4.5**.: _Let \(f\in L_{\mathbb{S}(W)}(V)\). Then there exists a subspace \(U\) of \(V\) such that \(U\) and \(U\cap W\) are transversals of \(\ker(f)\) and \(\ker(f_{\upharpoonright W})\), respectively._
Proof.: Let \(B_{1}\) be a basis for \(R(f_{\upharpoonright W})\). If \(B_{1}\neq\varnothing\), then we see that \(uf^{-1}\cap W\neq\varnothing\) for all \(u\in B_{1}\). Therefore for each \(u\in B_{1}\), fix \(u^{\prime}\in uf^{-1}\cap W\) and let \(C_{1}=\{u^{\prime}\colon u\in B_{1}\}\). If \(B_{1}=\varnothing\), then we take \(C_{1}=\varnothing\).
In either case, we claim that \(U_{1}=\langle C_{1}\rangle\) is a transversal of \(\ker(f_{\upharpoonright W})\). If \(C_{1}=\varnothing\), then \(U_{1}=\{0\}\), and so we are done. Assume that \(C_{1}\neq\varnothing\) and let \(w\in R(f_{\upharpoonright W})\). Since \(B_{1}\) is a basis for \(R(f_{\upharpoonright W})\), we have \(w=c_{1}u_{1}+\cdots+c_{m}u_{m}\) for some \(u_{1},\ldots,u_{m}\in B_{1}\), where \(m\geq 0\). Consider \(w^{\prime}=c_{1}u_{1}^{\prime}+\cdots+c_{m}u_{m}^{\prime}\in U_{1}\), where \(u_{1}^{\prime},\ldots,u_{m}^{\prime}\in C_{1}\). Observe that \(w^{\prime}f=w\), and so \(w^{\prime}\in wf^{-1}\cap U_{1}\). Recall that \(C_{1}\) is a basis for \(U_{1}\). Therefore, by construction of \(C_{1}\), we see that \(wf^{-1}\cap U_{1}=\{w^{\prime}\}\). Hence, since \(w\in R(f_{\upharpoonright W})\) is arbitrary, the subspace \(U_{1}\) of \(V\) is a transversal of \(\ker{(f_{\upharpoonright W})}\). Now we consider \(R(f)\setminus R(f_{\upharpoonright W})\). There are two cases.
**Case 1:** Suppose \(R(f)\setminus R(f_{\upharpoonright W})=\varnothing\). Take \(U=U_{1}\). Clearly \(U\cap W=U_{1}\). Thus \(U\) and \(U\cap W\) are transversals of \(\ker(f)\) and \(\ker(f_{\upharpoonright W})\), respectively.
**Case 2:** Suppose \(R(f)\setminus R(f_{\upharpoonright W})\neq\varnothing\). Extend \(B_{1}\) to a basis \(B_{1}\cup B_{2}\) for \(R(f)\), where \(B_{2}\subseteq R(f)\setminus R(f_{\upharpoonright W})\). Note that \(B_{2}\neq\varnothing\) and \(vf^{-1}\neq\varnothing\) for all \(v\in B_{2}\). Therefore for each \(v\in B_{2}\), fix \(\bar{v}\in vf^{-1}\). Write \(C=C_{1}\cup\{\bar{v}\colon v\in B_{2}\}\) and let \(U=\langle C\rangle\). Clearly \(U\cap W=U_{1}\), and so \(U\cap W\) is a transversal of \(\ker(f_{\upharpoonright W})\). It remains to show that \(U\) is a transversal of \(\ker(f)\). For this, let \(w\in R(f)\). Since \(B_{1}\cup B_{2}\) is a basis for \(R(f)\), we have \(w=c_{1}u_{1}+\cdots+c_{m}u_{m}+d_{1}v_{1}+\cdots+d_{n}v_{n}\) for some \(u_{1},\ldots,u_{m}\in B_{1}\) and \(v_{1},\ldots,v_{n}\in B_{2}\), where \(m,n\geq 0\). Let \(w^{\prime}=c_{1}u_{1}^{\prime}+\cdots+c_{m}u_{m}^{\prime}+d_{1}\bar{v}_{1}+ \cdots+d_{n}\bar{v}_{n}\in U\), where \(u_{1}^{\prime},\ldots,u_{m}^{\prime},\bar{v}_{1},\ldots,\bar{v}_{n}\in C\). Observe that \(w^{\prime}f=w\), and so \(w^{\prime}\in wf^{-1}\cap U\). Recall that \(C\) is a basis for \(U\). Therefore, by construction of \(C\), we see that \(wf^{-1}\cap U=\{w^{\prime}\}\). Hence, since \(w\in R(f)\) is arbitrary, the subspace \(U\) of \(V\) is a transversal of \(\ker(f)\).
Observe that \(I_{V}\) is not an element of \(L_{\mathbb{S}(W)}(V)\), in general. However, if \(I_{W}\in\mathbb{S}(W)\), then it is clear that \(I_{V}\in L_{\mathbb{S}(W)}(V)\). The following theorem provides a characterization of unit-regular elements in \(L_{\mathbb{S}(W)}(V)\) when \(I_{W}\in\mathbb{S}(W)\).
**Theorem 4.6**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\) such that \(I_{W}\in\mathbb{S}(W)\), and let \(f\in L_{\mathbb{S}(W)}(V)\). Then \(f\in\text{\rm ureg}(L_{\mathbb{S}(W)}(V))\) if and only if_
1. \(f_{\restriction_{W}}\in\text{\rm ureg}(\mathbb{S}(W))\)_;_
2. \(R(f)\cap W=R(f_{\restriction_{W}})\)_;_
3. \(\text{codim}(W+T_{f})=\text{codim}(W+R(f))\) _for some subspace_ \(T_{f}\) _of_ \(V\) _such that_ \(T_{f}\) _and_ \(W\cap T_{f}\) _are transversals of_ \(\ker(f)\) _and_ \(\ker(f_{\restriction_{W}})\)_, respectively._
Proof.: Suppose that \(f\in\text{\rm ureg}(L_{\mathbb{S}(W)}(V))\). Then there exists \(g\in U(L_{\mathbb{S}(W)}(V))\) such that \(fgf=f\). This simply gives \(g_{\restriction_{W}}\in U(\mathbb{S}(W))\) and \(f_{\restriction_{W}}g_{\restriction_{W}}f_{\restriction_{W}}=f_{\restriction_{W}}\), and so \(f_{\restriction_{W}}\in\text{\rm ureg}(\mathbb{S}(W))\). Thus (i) holds.
To show (ii) and (iii), note that \(L_{L(W)}(V)=\overline{L}(V,W)\). Since \(f\in\text{\rm ureg}(L_{\mathbb{S}(W)}(V))\) and \(L_{\mathbb{S}(W)}(V)\subseteq L_{L(W)}(V)\), we have \(f\in\text{\rm ureg}(\overline{L}(V,W))\). Hence (ii) and (iii) follow directly from [12, Theorem 5.6].
Conversely, suppose that \(f\) satisfies (i)-(iii). By (i), there exists \(g_{0}\in U(\mathbb{S}(W))\) such that \(f_{\restriction_{W}}g_{0}f_{\restriction_{W}}=f_{\restriction_{W}}\). Now, let \(B_{1}\) be a basis for \(R(f)\cap W\). Extend \(B_{1}\) to bases \(B_{1}\cup B_{2}\) and \(B_{1}\cup B_{3}\) for \(W\) and \(R(f)\), respectively, where \(B_{2}\subseteq W\setminus\langle B_{1}\rangle\) and \(B_{3}\subseteq R(f)\setminus\langle B_{1}\rangle\). Then \(B_{1}\cup B_{2}\cup B_{3}\) is a basis for \(W+R(f)\). Extend \(B_{1}\cup B_{2}\cup B_{3}\) to a basis \(B:=B_{1}\cup B_{2}\cup B_{3}\cup B_{4}\) for \(V\), where \(B_{4}\subseteq V\setminus\big{(}W+R(f)\big{)}\).
Write \(C_{1}=B_{1}g_{0}\) and \(C_{2}=B_{2}g_{0}\). Since \(B_{1}\cup B_{2}\) is a basis for \(W\) and \(g_{0}\) is an automorphism of \(W\), it follows that \(C_{1}\cup C_{2}\) is also a basis for \(W\) (cf. [11, Theorem 2.4(3)]).
By (iii), since the subspace \(T_{f}\) of \(V\) is a transversal of \(\ker(f)\), we see that the corestriction of \(f_{\restriction_{T_{f}}}\) to \(R(f)\) is an isomorphism. Denote the inverse of this corestriction map by \(g_{1}\). It is clear that \(g_{1}\colon R(f)\to T_{f}\) is an isomorphism. Write \(C^{\prime}_{1}=B_{1}g_{1}\) and \(C_{3}=B_{3}g_{1}\). Since \(B_{1}\cup B_{3}\) is a basis for \(R(f)\) and \(g_{1}\colon R(f)\to T_{f}\) is an isomorphism, it follows that \(C^{\prime}_{1}\cup C_{3}\) is a basis for \(T_{f}\) (cf. [11, Theorem 2.4(3)]).
Write \(W\cap T_{f}=T_{(f_{\restriction_{W}})}\). By (ii), it is clear that \(C^{\prime}_{1}\) is a basis for the subspace \(T_{(f_{\restriction_{W}})}\) of \(W\), and so \(C^{\prime}_{1}\subseteq\langle C_{1}\cup C_{2}\rangle\). Therefore \(C_{1}\cup C_{2}\cup C_{3}\) is a basis for \(W+T_{f}\). Extend \(C_{1}\cup C_{2}\cup C_{3}\) to a basis \(C:=C_{1}\cup C_{2}\cup C_{3}\cup C_{4}\) for \(V\), where \(C_{4}\subseteq V\setminus(W+T_{f})\). Since \(\text{codim}(W+R(f))=\text{codim}(W+T_{f})\), we get \(|B_{4}|=|C_{4}|\). Therefore there exists a bijection \(g_{2}\colon B_{4}\to C_{4}\). Define \(g\in L(V)\) by
\[vg=\begin{cases}vg_{0}&\text{if $v\in B_{1}\cup B_{2}$},\\ vg_{1}&\text{if $v\in B_{3}$},\\ vg_{2}&\text{if $v\in B_{4}$}.\end{cases}\]
It is routine to verify that \(g\) is bijective, \(g\in L_{\mathbb{S}(W)}(V)\), and \(fgf=f\). Hence \(f\in\text{\rm ureg}(L_{\mathbb{S}(W)}(V))\).
If \(I_{W}\in\mathbb{S}(W)\), then the following proposition gives a sufficient condition for \(L_{\mathbb{S}(W)}(V)\) to be unit-regular.
**Proposition 4.7**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\) such that \(I_{W}\in\mathbb{S}(W)\). If \(\mathbb{S}(W)\) is a subgroup of \(\text{Aut}(W)\) and \(\text{codim}(W)\) is finite, then \(L_{\mathbb{S}(W)}(V)\) is unit-regular._
Proof.: Let \(f\in L_{\mathbb{S}(W)}(V)\). Then \(f_{\upharpoonright_{W}}\in\mathbb{S}(W)\). Since \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\), we have \(f_{\upharpoonright_{W}}\in\operatorname{Aut}(W)\). Therefore both (i) and (ii) of Theorem 4.6 trivially hold.
Now, by Lemma 4.5, let \(T_{f}\) be a subspace of \(V\) such that \(T_{f}\) and \(T_{f}\cap W\) are transversals of \(\ker(f)\) and \(\ker(f_{\upharpoonright_{W}})\), respectively. Observe that \(f\colon T_{f}\to R(f)\) is an isomorphism and \(Wf=W\). By [12, Lemma 5.2], we therefore have \(\dim(T_{f}/W)=\dim(R(f)/W)\). Since \(f_{\upharpoonright_{W}}\in\operatorname{Aut}(W)\), we simply have \(W\subseteq T_{f}\) and \(W\subseteq R(f)\). Then \(W+T_{f}=T_{f}\), \(W+R(f)=R(f)\), and both \(T_{f}/W\) and \(R(f)/W\) are subspaces of \(V/W\). Therefore we obtain
\[\begin{aligned}\operatorname{codim}(W+T_{f})&=\operatorname{codim}(T_{f})\\ &=\dim(V/T_{f})\\ &=\dim\big((V/W)/(T_{f}/W)\big)\\ &=\dim(V/W)-\dim(T_{f}/W)&&(\text{since }\operatorname{codim}(W)<\infty)\\ &=\dim(V/W)-\dim(R(f)/W)\\ &=\dim\big((V/W)/(R(f)/W)\big)&&(\text{since }\operatorname{codim}(W)<\infty)\\ &=\dim(V/R(f))\\ &=\operatorname{codim}(R(f))\\ &=\operatorname{codim}(W+R(f)),\end{aligned}\]
and so \(f\) satisfies Theorem 4.6(iii). Thus \(f\in\operatorname{ureg}(L_{\mathbb{S}(W)}(V))\) by Theorem 4.6. Hence, since \(f\) is arbitrary, the semigroup \(L_{\mathbb{S}(W)}(V)\) is unit-regular.
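The unit-regular version of the \(\mathrm{GF}(2)\) check can be run the same way (again an illustrative choice of \(V\), \(W\), and \(\mathbb{S}(W)\), not from the text): here \(\operatorname{codim}(W)=1\) is finite, and a pseudo-inverse can always be taken invertible.

```python
from itertools import product

def matmul(A, B):                    # 2x2 matrix product over GF(2)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

# L_{S(W)}(V) for V = GF(2)^2, W = <e1>, S(W) = {I_W}; codim(W) = 1
L = [((1, 0), r2) for r2 in product((0, 1), repeat=2)]
# units: invertible elements, i.e. determinant 1 over GF(2)
units = [g for g in L if (g[0][0] * g[1][1] + g[0][1] * g[1][0]) % 2 == 1]

# unit-regularity: every f has an invertible g with fgf = f
assert all(any(matmul(matmul(f, g), f) == f for g in units) for f in L)
```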
If \(I_{W}\in\mathbb{S}(W)\), then the following theorem determines when \(L_{\mathbb{S}(W)}(V)\) is unit-regular.
**Theorem 4.8**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\) such that \(I_{W}\in\mathbb{S}(W)\). Then \(L_{\mathbb{S}(W)}(V)\) is unit-regular if and only if one of the following holds:_
1. \(\mathbb{S}(W)\) _is a subgroup of_ \(\operatorname{Aut}(W)\) _and_ \(\operatorname{codim}(W)\) _is finite._
2. \(\mathbb{S}(W)\) _is unit-regular and_ \(W=V\)_._
Proof.: Suppose that \(L_{\mathbb{S}(W)}(V)\) is unit-regular. By Proposition 4.7, we see that (i) may hold. Let us assume that (i) does not hold.
Since \(L_{\mathbb{S}(W)}(V)\) is unit-regular, it follows that \(\mathbb{S}(W)\) is unit-regular by Proposition 4.1 and the fact that any homomorphic image of a unit-regular semigroup is unit-regular (cf. [3, Proposition 2.7]).
To show \(W=V\), suppose to the contrary that \(W\neq V\). Then we claim that \(\mathbb{S}(W)\) is not a subgroup of \(\operatorname{Aut}(W)\). Suppose to the contrary that \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\). Then we further claim that \(\operatorname{codim}(W)\) is finite. Suppose to the contrary that \(\operatorname{codim}(W)\) is infinite. Then there exists a linear map \(\psi\colon V/W\to V/W\) which is injective but not surjective. Take a basis \(B_{1}\) for \(W\). Extend \(B_{1}\) to a basis \(B_{1}\cup B_{2}\) for \(V\), where \(B_{2}\subseteq V\setminus\langle B_{1}\rangle\). Define \(f\in L(V)\) by
\[vf=\begin{cases}v&\text{if $v\in B_{1}$,}\\ v^{\prime}&\text{if $v\in B_{2}$, where $(v+W)\psi=v^{\prime}+W$.}\end{cases}\]
It is easy to see that \(f\) is injective but not surjective. Therefore \(\operatorname{nullity}(f)\neq\operatorname{corank}(f)\), and so \(f\notin\operatorname{ureg}(L(V))\) by [12, Corollary 5.7]. Since clearly \(f\in L_{\mathbb{S}(W)}(V)\) and \(L_{\mathbb{S}(W)}(V)\subseteq L(V)\), it follows that \(f\notin\operatorname{ureg}(L_{\mathbb{S}(W)}(V))\), which contradicts the unit-regularity of \(L_{\mathbb{S}(W)}(V)\). Hence \(\operatorname{codim}(W)\) is finite, and so (i) holds. This contradicts the assumption that (i) does not hold. Thus \(\mathbb{S}(W)\) is not a subgroup of \(\operatorname{Aut}(W)\). Hence, since \(L_{\mathbb{S}(W)}(V)\) is regular, we have \(W=V\) by Theorem 4.4.
For the converse, suppose first that (i) holds. Then \(L_{\mathbb{S}(W)}(V)\) is unit-regular by Proposition 4.7. Next, suppose that (ii) holds. If \(W=V\), then \(L_{\mathbb{S}(W)}(V)=\mathbb{S}(V)\), and so \(L_{\mathbb{S}(W)}(V)\) is unit-regular by hypothesis.
The following theorem gives a necessary and sufficient condition for \(L_{\mathbb{S}(W)}(V)\) to be an inverse semigroup.
**Theorem 4.9**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\). Then \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup if and only if_
* \(\mathbb{S}(W)\) _is an inverse semigroup;_
* _either_ \(W=V\) _or_ \(\dim(V)=1\)_._
Proof.: Suppose that \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup. Then (i) is true by Proposition 4.1 and the fact that any homomorphic image of an inverse semigroup is an inverse semigroup (cf. [5, Theorem 5.1.4]).
The case \(W=V\) may be possible, since \(L_{\mathbb{S}(W)}(V)=\mathbb{S}(W)\) if \(W=V\). Assume that \(W\neq V\). Let \(\alpha\in E(\mathbb{S}(W))\). Let \(B_{1}\) and \(B_{2}\) be bases for \(N(\alpha)\) and \(R(\alpha)\), respectively. Then \(W=N(\alpha)\oplus R(\alpha)\), and so \(B_{1}\cup B_{2}\) is a basis for \(W\). Extend \(B_{1}\cup B_{2}\) to a basis \(B:=B_{1}\cup B_{2}\cup B_{3}\) for \(V\), where \(B_{3}\subseteq V\setminus W\). Claim that \(|B_{3}|=1\). Suppose to the contrary that there are distinct \(v_{1},v_{2}\in B_{3}\). Define \(f,g\in L(V)\) by
\[vf=\begin{cases}v\alpha&\text{if }v\in B_{1}\cup B_{2},\\ v_{1}&\text{if }v\in B_{3}\end{cases}\qquad\text{ and }\qquad vg=\begin{cases}v\alpha&\text{if }v\in B_{1}\cup B_{2},\\ v_{2}&\text{if }v\in B_{3}.\end{cases}\]
Clearly \(f,g\in L_{\mathbb{S}(W)}(V)\). Since \(\alpha\) is idempotent, it is routine to verify that \(f\) and \(g\) are idempotents. However, we have \(v_{1}(fg)\neq v_{1}(gf)\), and so \(fg\neq gf\). This leads to a contradiction, because \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup. Hence \(|B_{3}|=1\), and thus \(\operatorname{codim}(W)=1\). Write \(B_{3}=\{u\}\).
Finally, we show that \(W=\{0\}\). Suppose to the contrary that \(W\neq\{0\}\). There are two cases to consider.
**Case 1:**\(\alpha=I_{W}\). Then \(B_{2}\neq\varnothing\). Fix \(w\in B_{2}\), and define \(f,g\in L(V)\) by
\[vf=\begin{cases}v\alpha&\text{if }v\in B_{1}\cup B_{2},\\ 0&\text{if }v\in B_{3}\end{cases}\qquad\text{ and }\qquad vg=\begin{cases}v\alpha&\text{if }v\in B_{1}\cup B_{2},\\ w&\text{if }v\in B_{3}.\end{cases}\]
Clearly \(f,g\in L_{\mathbb{S}(W)}(V)\). Since \(\alpha\) is idempotent, it is routine to verify that \(f\) and \(g\) are idempotents. However, we have \(u(fg)\neq u(gf)\), and so \(fg\neq gf\). This leads to a contradiction, because \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup.
**Case 2:**\(\alpha\neq I_{W}\). Then \(B_{1}\neq\varnothing\). Fix \(w^{\prime}\in B_{1}\), and write \(w^{\prime}+u=u^{\prime}\). Obviously \(u^{\prime}\neq u\) and \(u^{\prime}\in V\setminus W\). Define \(f,g\in L(V)\) by
\[vf=\begin{cases}v\alpha&\text{if }v\in B_{1}\cup B_{2},\\ u&\text{if }v\in B_{3}\end{cases}\quad\text{ and }\quad vg=\begin{cases}v\alpha&\text{if }v\in B_{1}\cup B_{2},\\ u^{\prime}&\text{if }v\in B_{3}.\end{cases}\]
Clearly \(f,g\in L_{\mathbb{S}(W)}(V)\). Since \(\alpha\) is idempotent, it is routine to verify that \(f\) and \(g\) are idempotents. However, we have \(u(fg)=(uf)g=ug=u^{\prime}\) and \(u(gf)=(ug)f=u^{\prime}f=(w^{\prime}+u)f=w^{\prime}f+uf=u\), and so \(fg\neq gf\). This leads to a contradiction, since \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup.
Thus, in either case, we get a contradiction. Hence \(W=\{0\}\). Thus, since \(\operatorname{codim}(W)=1\), we conclude that \(\dim(V)=1\).
Conversely, suppose that the given conditions hold. If \(W=V\), then \(L_{\mathbb{S}(W)}(V)=\mathbb{S}(W)\), and so \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup by (i). Assume that \(W\neq V\). Then \(\dim(V)=1\), and so we get \(W=\{0\}\). It follows that \(L_{\mathbb{S}(W)}(V)=L(V)\), and so \(L_{\mathbb{S}(W)}(V)\) is regular (cf. [5, p.63, Exercise 19]). Since \(\dim(V)=1\), we also see that \(L_{\mathbb{S}(W)}(V)\) has only two idempotents, namely the identity and the zero linear transformations on \(V\). Since these idempotents commute, we conclude that \(L_{\mathbb{S}(W)}(V)\) is an inverse semigroup.
The next proposition is a consequence of Theorem 4.9, which determines when \(L(V)\) is an inverse semigroup.
**Proposition 4.10**.: \(L(V)\) _is an inverse semigroup if and only if \(\dim(V)\leq 1\)._
Proof.: Suppose that \(L(V)\) is an inverse semigroup. The result is obvious if \(V=\{0\}\). Let us assume that \(V\neq\{0\}\). Take \(W=\{0\}\). Then \(L_{L(W)}(V)=L(V)\), and hence \(\dim(V)=1\) by Theorem 4.9.
Conversely, suppose that \(\dim(V)\leq 1\). The result is obvious if \(\dim(V)=0\). Let us assume that \(\dim(V)=1\). Take \(W=\{0\}\). Then \(L_{L(W)}(V)=L(V)\) and \(L(W)\) is an inverse semigroup. Hence, since \(\dim(V)=1\), we conclude that \(L(V)\) is an inverse semigroup by Theorem 4.9.
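To see concretely why higher dimensions fail, one can exhibit non-commuting idempotents (idempotents in an inverse semigroup must commute). The following display is our own illustration, not taken from the paper, written with maps acting on the right on row vectors in \(V=F^{2}\):

```latex
% Two idempotents of L(V), dim(V) = 2, acting on the right on row vectors:
%   E: (a,b) -> (a,0)   (projection onto the first coordinate),
%   F: (a,b) -> (a,a)   (projection onto the diagonal along (0,1)).
% Both satisfy E^2 = E and F^2 = F, yet they do not commute:
\[(a,b)E=(a,0),\qquad (a,b)F=(a,a),\]
\[(a,b)(EF)=(a,a)\neq(a,0)=(a,b)(FE)\quad\text{whenever }a\neq 0,\]
% so the idempotents of L(V) fail to commute and L(V) is not inverse.
```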
The following theorem gives a necessary and sufficient condition for \(L_{\mathbb{S}(W)}(V)\) to be completely regular.
**Theorem 4.11**.: _Let \(\mathbb{S}(W)\) be a subsemigroup of \(L(W)\). Then \(L_{\mathbb{S}(W)}(V)\) is a completely regular semigroup if and only if \(\mathbb{S}(W)\) is a completely regular semigroup and one of the following holds:_
1. \(W=V\)_._
2. \(\text{codim}(W)=1\) _and_ \(\mathbb{S}(W)\) _is a subgroup of_ \(\text{Aut}(W)\)_._
Proof.: Suppose that \(L_{\mathbb{S}(W)}(V)\) is completely regular. Then \(\mathbb{S}(W)\) is completely regular by Proposition 4.1 and the fact that any homomorphic image of a completely regular semigroup is completely regular (cf. [10, Lemma II.2.4]).
The case \(W=V\) may be possible, since \(L_{\mathbb{S}(W)}(V)=\mathbb{S}(W)\) if \(W=V\). Assume that \(W\neq V\). First, we show that \(\operatorname{codim}(W)=1\). Let \(B_{1}\) be a basis for \(W\). Extend \(B_{1}\) to a basis \(B_{1}\cup B_{2}\) for \(V\), where \(B_{2}\subseteq V\setminus W\). Claim that \(|B_{2}|=1\). Suppose to the contrary that there are distinct \(v_{1},v_{2}\in B_{2}\). Take \(\alpha\in\mathbb{S}(W)\), and then fix \(w\in R(\alpha)\). Define \(f\in L(V)\) by
\[vf=\begin{cases}v\alpha&\text{if $v\in B_{1}$,}\\ v_{1}&\text{if $v=v_{2}$,}\\ w\alpha&\text{if $v\in B_{2}\setminus\{v_{2}\}$.}\end{cases}\]
Clearly \(f\in L_{\mathbb{S}(W)}(V)\). Moreover, we have \(v_{1},w\in R(f)\) and \(v_{1}f=wf\). Therefore \(R(f)\) is not a transversal of \(\ker(f)\), and so \(f\) does not belong to any subgroup of \(T(V)\) (cf. [2, Theorem 2.10]). Thus \(f\) is not contained in any subgroup of \(L_{\mathbb{S}(W)}(V)\), which contradicts the complete regularity of \(L_{\mathbb{S}(W)}(V)\). Hence \(|B_{2}|=1\), and thus \(\operatorname{codim}(W)=1\).
Finally, we show that \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\). It is obvious if \(W=\{0\}\). Assume that \(W\neq\{0\}\). Let \(\beta\in\mathbb{S}(W)\). Claim that \(\beta\) is injective. Suppose to the contrary that \(w_{1}\beta=w_{2}\beta\) for some distinct \(w_{1},w_{2}\in W\). Recall that \(\mathbb{S}(W)\) is completely regular. It follows that \(\beta\) is contained in some subgroup of \(\mathbb{S}(W)\), and so \(\beta\) is contained in some subgroup of \(T(W)\). Therefore \(R(\beta)\) is a transversal of \(\ker(\beta)\) (cf. [2, Theorem 2.10]). Choose \(w_{0}\in R(\beta)\) such that \(w_{0}\) belongs to the \(\ker(\beta)\)-class that contains \(w_{1}\) and \(w_{2}\). We may assume that \(w_{0}\neq w_{1}\). Define \(g\in L(V)\) by
\[vg=\begin{cases}v\beta&\text{if $v\in B_{1}$,}\\ w_{1}&\text{if $v\in B_{2}$.}\end{cases}\]
Clearly \(g\in L_{\mathbb{S}(W)}(V)\). However, we have \(w_{0},w_{1}\in R(g)\) and \(w_{0}g=w_{1}g\). Therefore \(R(g)\) is not a transversal of \(\ker(g)\), and so \(g\) does not belong to any subgroup of \(T(V)\) (cf. [2, Theorem 2.10]). Thus \(g\) is not contained in any subgroup of \(L_{\mathbb{S}(W)}(V)\), which contradicts the complete regularity of \(L_{\mathbb{S}(W)}(V)\). Hence \(\beta\) is injective. Further, we see that \(\beta\) is surjective, since \(\beta\) is injective and \(R(\beta)\) is a transversal of \(\ker(\beta)\). Thus \(\mathbb{S}(W)\) is a subsemigroup of \(\operatorname{Aut}(W)\). Hence, since \(\mathbb{S}(W)\) is completely regular, we conclude that \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\) by Lemma 3.2.
Conversely, suppose that the given conditions hold. If \(W=V\), then \(L_{\mathbb{S}(W)}(V)=\mathbb{S}(W)\), and so \(L_{\mathbb{S}(W)}(V)\) is completely regular. Assume that \(W\neq V\). Then, by (ii), \(\operatorname{codim}(W)=1\) and \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\). Take a basis \(B\) for \(W\). Then \(B\cup\{x\}\) is a basis for \(V\), where \(x\in V\setminus W\). For \(z\in V\) and \(\lambda\in\mathbb{S}(W)\), let \(\alpha_{z}^{\lambda}\in L_{\mathbb{S}(W)}(V)\) be such that \((\alpha_{z}^{\lambda})_{\upharpoonright W}=\lambda\) and \(x\alpha_{z}^{\lambda}=z\). Notice that \(L_{\mathbb{S}(W)}(V)=\{\alpha_{z}^{\lambda}\colon z\in V\text{ and }\lambda\in\mathbb{S}(W)\}\), and for all \(y\in W\), \(z\in V\), and \(\lambda,\delta\in\mathbb{S}(W)\),
\[\alpha_{x}^{\lambda}\alpha_{x}^{\delta}=\alpha_{x}^{\lambda\delta}\quad\text{ and}\quad\alpha_{y}^{\lambda}\alpha_{z}^{\delta}=\alpha_{y\delta}^{\lambda\delta}.\]
Let \(f\in L_{\mathbb{S}(W)}(V)\). Then \(f=\alpha_{y}^{\lambda}\) for some \(y\in V\) and \(\lambda\in\mathbb{S}(W)\). There are two cases to consider.
**Case 1:**\(y\in W\). Recall that \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\). Define \(\varepsilon=\alpha_{y\lambda^{-1}}^{I_{W}}\) and \(g=\alpha_{y\lambda^{-2}}^{\lambda^{-1}}\), where \(\lambda^{-1}\) denotes the inverse of \(\lambda\) in \(\mathbb{S}(W)\). Obviously \(\varepsilon,g\in L_{\mathbb{S}(W)}(V)\). Moreover, we obtain
\[\varepsilon\varepsilon =\alpha_{y\lambda^{-1}}^{I_{W}}\alpha_{y\lambda^{-1}}^{I_{W}}= \alpha_{y\lambda^{-1}}^{I_{W}}=\varepsilon,\] \[f\varepsilon =\alpha_{y}^{\lambda}\alpha_{y\lambda^{-1}}^{I_{W}}=\alpha_{y}^{ \lambda}=f,\] \[\varepsilon f =\alpha_{y\lambda^{-1}}^{I_{W}}\alpha_{y}^{\lambda}=\alpha_{y}^{ \lambda}=f,\] \[g\varepsilon =\alpha_{y\lambda^{-2}}^{\lambda^{-1}}\alpha_{y\lambda^{-1}}^{I_{ W}}=\alpha_{y\lambda^{-2}}^{\lambda^{-1}}=g,\] \[\varepsilon g =\alpha_{y\lambda^{-1}}^{I_{W}}\alpha_{y\lambda^{-2}}^{\lambda^{-1 }}=\alpha_{y\lambda^{-2}}^{\lambda^{-1}}=g,\] \[fg =\alpha_{y}^{\lambda}\alpha_{y\lambda^{-2}}^{\lambda^{-1}}=\alpha _{y\lambda^{-1}}^{I_{W}}=\varepsilon,\] \[gf =\alpha_{y\lambda^{-2}}^{\lambda^{-1}}\alpha_{y}^{\lambda}=\alpha _{y\lambda^{-1}}^{I_{W}}=\varepsilon.\]
Hence the subsemigroup \(\langle f,g\rangle\) of \(L_{\mathbb{S}(W)}(V)\) generated by \(\{f,g\}\) is a subgroup of \(L_{\mathbb{S}(W)}(V)\).
**Case 2:**\(y\in V\setminus W\). Then, since \(\mathbb{S}(W)\) is a subgroup of \(\operatorname{Aut}(W)\) and \(\lambda\in\mathbb{S}(W)\), we see that \(f\) is bijective. Therefore \(f\in U(L_{\mathbb{S}(W)}(V))\).
Thus, in either case, we see that \(f\) belongs to a subgroup of \(L_{\mathbb{S}(W)}(V)\). Hence, since \(f\) is arbitrary, we conclude that \(L_{\mathbb{S}(W)}(V)\) is completely regular.
The next proposition is a consequence of Theorem 4.11, which determines when \(L(V)\) is a completely regular semigroup.
**Proposition 4.12**.: \(L(V)\) _is a completely regular semigroup if and only if \(\dim(V)\leq 1\)._
Proof.: Suppose that \(L(V)\) is completely regular. The result is obvious if \(V=\{0\}\). Let us assume that \(V\neq\{0\}\). Take \(W=\{0\}\). Then \(L_{L(W)}(V)=L(V)\), and so \(\operatorname{codim}(W)=1\) by Theorem 4.11. Hence \(\dim(V)=1\).
Conversely, suppose that \(\dim(V)\leq 1\). The result is obvious if \(\dim(V)=0\). Let us assume that \(\dim(V)=1\). Take \(W=\{0\}\). Then obviously \(L(W)\) is a group, \(\operatorname{codim}(W)=1\), and \(L_{L(W)}(V)=L(V)\). Hence \(L(V)\) is completely regular by Theorem 4.11.
2307.11992 | On selectively highly divergent spaces | Carlos David Jiménez-Flores, Alejandro Ríos-Herrejón, Alejandro Darío Rojas-Sánchez, Elmer Enrique Tovar-Acosta | 2023-07-22T06:24:59Z | http://arxiv.org/abs/2307.11992v4

# On selectively highly divergent spaces
###### Abstract
We say that a topological space \(X\) is selectively highly divergent (SHD) if for every sequence of non-empty open sets \(\{U_{n}\mid n\in\mathbb{N}\}\) of \(X\), we can find \(x_{n}\in U_{n}\) such that the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) has no convergent subsequences. We investigate the basic topological properties of SHD spaces and exhibit that this class of spaces is full of variety. We present an example of a SHD space which has a non-trivial convergent sequence and a dense set with no convergent sequences. Also, we prove that if \(X\) is a regular space such that \(\psi(x,X)>\omega\) holds for all \(x\in X\), then \(X_{\delta}\) (the \(G_{\delta}\) modification of \(X\)) is a SHD space and, moreover, if \(X\) is homogeneous, then \(X_{\delta}\) is also homogeneous. Finally, given a Hausdorff space \(X\) without isolated points, we construct a new space denoted by \(sX\) such that \(sX\) is an extremally disconnected, zero-dimensional Hausdorff SHD space with \(|X|=|sX|\), \(\pi w(X)=\pi w(sX)\), and \(c(X)=c(sX)\), where \(\pi w\) and \(c\) are the cardinal functions \(\pi\)-weight and cellularity, respectively.
keywords: sequences, compactifications, remainder, realcompact, absolutes, \(F\)-spaces, isolated points

_2010 MSC:_ 54A20, 54A25, 54B20, 54D40, 54G05
## 1 Introduction
The properties of sequences have always been a subject of study, although they do not generally completely characterize the topology of a space. A couple of relevant properties, such as highly divergent sequences, appear in [3]. On the other hand, A. Dorantes-Aldama and D. Shakhmatov define a
topological property for sequences as it appears in [1, Definition 2.1], and based on this, they stated the term selectively \(S\) space in [1, Definition 2.3(iii)] with \(S\) a topological property for sequences. The property of being selectively highly divergent arises from considering the "highly divergent" property together with the definition of a selectively \(S\) space. It should be noted that the highly divergent property does not satisfy [1, Definition 2.1]. As a result, a very diverse class of spaces with quite relevant properties was obtained, which will be the object of study throughout this article.
All topological spaces are assumed to satisfy no separation axioms unless otherwise stated. Additionally, the notation and definitions we use can be found in [7], [9], and [12]. The notation and definitions for cardinal functions are as in [8, Chapter 1]. Here, as usual, \(X^{*}\) denotes the space \(\beta X\setminus X\), i.e., the remainder of the Stone–Čech compactification of \(X\).
## 2 Basic properties
Inspired by [1, Definition 2.3] and [3, Definition 2], we say that a topological space \(X\) is _selectively highly divergent_ (SHD from here for short) if for every sequence of non-empty open sets \(\{U_{n}\mid n\in\mathbb{N}\}\) of \(X\), we can find \(x_{n}\in U_{n}\) such that the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) has no convergent subsequences. Clearly, this is a topological property that is inherited by open subspaces. Furthermore, it follows from the definition that if a topological space \(X\) has a point of countable character, then it cannot be SHD and thus, no metrizable space is SHD. The following remark will be relevant for the subsequent development of this work:
**Remark 1**: _Let \((X,\tau)\) be a topological space and \(\mathcal{B}\) a \(\pi\)-base for \(X\). If for any sequence of open sets \(\{U_{n}\mid n\in\mathbb{N}\}\subseteq\mathcal{B}\), it is true that for each \(n\in\mathbb{N}\) there exists \(x_{n}\in U_{n}\) such that the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) has no convergent subsequences, then \(X\) is SHD._
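The observation above that a point of countable character rules out SHD can be spelled out; the following sketch is our own paraphrase of that argument:

```latex
% If x has a countable local base {V_n : n in N}, set
\[U_{n}:=V_{0}\cap\cdots\cap V_{n}\neq\varnothing.\]
% Every neighborhood of x contains some V_m, hence contains U_n for all n >= m,
% so ANY selection of points from these open sets converges to x:
\[x_{n}\in U_{n}\ \text{for all }n\in\mathbb{N}\ \Longrightarrow\ x_{n}\to x.\]
% Thus no divergent selection exists for this sequence of open sets,
% and X is not SHD.
```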
The following theorem is easy to prove and establishes the relationship between the property of being SHD and topological products:
**Theorem 1**: _If \(\{X_{i}\mid i\in I\}\) is a family of topological spaces and at least one of them is SHD, then \(\prod\{X_{i}\mid i\in I\}\) is SHD._
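A sketch of the standard argument (our own wording, not the paper's proof): if some factor \(X_{i_{0}}\) is SHD, divergence can be forced in the \(i_{0}\)-th coordinate and pulled back through the projection.

```latex
% Given non-empty open sets U_n in the product, the projections
% \pi_{i_0}[U_n] are non-empty open in X_{i_0}; since X_{i_0} is SHD, choose
\[a_{n}\in\pi_{i_{0}}[U_{n}]\ \text{with no convergent subsequence, and }
  x_{n}\in U_{n}\ \text{with }\pi_{i_{0}}(x_{n})=a_{n}.\]
% If some subsequence x_{n_k} converged to z, continuity of \pi_{i_0} would give
\[a_{n_{k}}=\pi_{i_{0}}(x_{n_{k}})\to\pi_{i_{0}}(z),\]
% contradicting the choice of (a_n). Hence (x_n) has no convergent subsequence.
```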
We obtain an analogous result for the topological sum:
**Theorem 2**: _If \(\{X_{i}\mid i\in J\}\) is a family of topological spaces, and all of them are SHD, then \(\bigoplus\{X_{i}\mid i\in J\}\) is SHD._
Proof. Let \(X=\bigoplus\{X_{i}\mid i\in J\}\). First note that if a sequence \(\{z_{n}\}_{n\in\mathbb{N}}\) converges in \(X\), then there exists some \(j\in J\) such that a tail of the sequence is contained in \(X_{j}\), i.e., convergent sequences in \(X\) are eventually contained in one and only one of the spaces. Let \(\{U_{n}\mid n\in\mathbb{N}\}\) be a family consisting of basic open sets, i.e., every \(U_{n}\) is an open set of some \(X_{i}\). Let \(\alpha\) be the function that takes a natural number \(n\in\mathbb{N}\) and assigns an element \(j\in J\) with the property that
\(U_{n}\subseteq X_{j}\). Thanks to the fact that the spaces of the topological sum are pairwise disjoint, the index obtained by \(\alpha\) is unique. Then, for every \(n\in\mathbb{N}\), we'll write \(\alpha(n)\) instead of the index \(j\in J\) and, from the previous argument, \(\alpha(n)\in J\) is the only element in \(J\) with the property that \(U_{n}\subseteq X_{\alpha(n)}\). Consider the following cases:
**Case 1.** For every \(j\in J\), the set \(\alpha^{-1}[j]\) is finite. In this case, simply select any point \(x_{n}\in U_{n}\). Note that \(\{x_{n}\}_{n\in\mathbb{N}}\) doesn't have convergent sequences, since every subsequence is infinitely oscillating between the different \(X_{\alpha(n)}\), and so it can't converge by the observation made at the start of the proof.
**Case 2.** There exists \(j\in J\) such that \(\alpha^{-1}[j]\) is infinite. Let \(A=\{j\in J\mid|\alpha^{-1}[j]|=\aleph_{0}\}\). For each \(i\in A\), \(\alpha^{-1}[i]\) is an infinite subset of \(\mathbb{N}\), say \(\{n(k,i)\mid k\in\mathbb{N}\}=\alpha^{-1}[i]\), where \(n(k,i)<n(k+1,i)\) for every \(k\in\mathbb{N}\). In this notation, we have that \(\{U_{n(k,i)}\mid k\in\mathbb{N}\}\) is a countable family of open sets in \(X_{i}\), and since \(X_{i}\) is a SHD space, we can take \(x_{n(k,i)}\in U_{n(k,i)}\) such that \(\{x_{n(k,i)}\}_{k\in\mathbb{N}}\) is a sequence with no convergent subsequences.
Let \(B=\mathbb{N}\setminus\bigcup_{i\in A}\alpha^{-1}[i]\). For every \(n\in\mathbb{N}\) simply take \(x_{n}\in U_{n}\). Define a new sequence in the following way:
\[y_{n}=\left\{\begin{array}{ccc}x_{n(k,i)}&\mbox{if}&n=n(k,i)\mbox{ for some }k\in\mathbb{N},i\in A\\ \\ x_{n}&\mbox{if}&n\in B\end{array}\right.\]
Note that since the family \(\mathcal{U}=\{\alpha^{-1}[j]\mid j\in A\}\cup\{B\}\) is made up of disjoint sets and \(\bigcup\mathcal{U}=\mathbb{N}\), this is a well defined sequence. Let's prove that none of the subsequences of \(\{y_{n}\}_{n\in\mathbb{N}}\) converges. Take a subsequence \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\). We have the following cases:
**Case 1.** If \(|\{n_{k}\mid k\in\mathbb{N}\}\cap B|=\aleph_{0}\), we can regard the set \(\{n_{k}\mid k\in\mathbb{N}\}\cap B\) as a sequence, say \(\{n_{k_{\ell}}\}_{\ell\in\mathbb{N}}\). Then \(y_{n_{k_{\ell}}}=x_{n_{k_{\ell}}}\), so \(\{y_{n_{k_{\ell}}}\}_{\ell\in\mathbb{N}}=\{x_{n_{k_{\ell}}}\}_{\ell\in\mathbb{N}}\) is a non convergent subsequence of \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\), and so \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\) can't converge.
**Case 2.** If there exists some \(i\in A\) such that \(|\{n_{k}\mid k\in\mathbb{N}\}\cap\alpha^{-1}[i]|=\aleph_{0}\), then, in the same way as the previous case, we can regard the set \(\{n_{k}\mid k\in\mathbb{N}\}\cap\alpha^{-1}[i]\) as \(\{n_{k_{\ell}}\}_{\ell\in\mathbb{N}}\). Then \(\{y_{n_{k_{\ell}}}\}_{\ell\in\mathbb{N}}\) is a subsequence of both \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\) and \(\{x_{n(k,i)}\}_{k\in\mathbb{N}}\), and since the last one doesn't have convergent subsequences, we conclude that \(\{y_{n_{k_{\ell}}}\}_{\ell\in\mathbb{N}}\) does not converge, and thus \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\) also does not converge.
**Case 3.** If for all \(i\in A\), the set \(\{n_{k}\mid k\in\mathbb{N}\}\cap\alpha^{-1}[i]\) is finite, and \(\{n_{k}\mid k\in\mathbb{N}\}\cap B\) is also finite, then, \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\) can't converge by the observation made at the start of the proof.
Therefore, we conclude that every subsequence of \(\{y_{n}\}_{n\in\mathbb{N}}\) is divergent, and so \(X\) is a SHD space.
Once we have analyzed whether a topological property is hereditary, productive, or additive, it is natural to consider under which class of functions it is preserved. To show that our property of being SHD is in fact rarely preserved, let us consider the following example:
**Example 1**: _Let \(X\) be an infinite, compact and SHD space5 and \(Y\) to be a non-SHD space (for example, \([0,1]\) as it is first countable). Thus, we know that \(X\times Y\) is a SHD space. However, the projection \(\Pi_{Y}:X\times Y\to Y\) is not only continuous, open, and onto, but also a perfect map by Kuratowski's Theorem. But interestingly, despite being almost a homeomorphism, \(Y\) is not SHD. This demonstrates that the property of being SHD is not preserved under continuous, surjective, open, and perfect mappings._
Footnote 5: In Proposition 3 we will present a wide range of SHD spaces with these properties.
Clearly if \(X\) is SHD and \(f:X\to Y\) is bijective, continuous and \(f^{-1}\) is sequentially continuous, then \(Y\) is SHD. Also notice that the SHD space \(X\) constructed in Example 2 and the non-SHD space constructed in Example 3, here denoted by \(Y\), have the property that the identity function condenses \(X\) onto \(Y\).
The following result is not difficult to prove and will be useful in some of our next constructions:
**Proposition 1**: _Let \(X\) be a topological space. If every non-empty open subset of \(X\) is infinite and \(X\) does not admit non-trivial convergent sequences, then \(X\) is SHD._
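A sketch of the proof (our own wording), using that the open sets are infinite to select pairwise distinct points:

```latex
% Recursively pick pairwise distinct points, possible since each U_n is infinite:
\[x_{n}\in U_{n}\setminus\{x_{0},\dots,x_{n-1}\}.\]
% Any subsequence of (x_n) is injective, hence not eventually constant.
% If it converged, it would be a non-trivial convergent sequence in X,
% contradicting the hypothesis; so (x_n) has no convergent subsequence
% and X is SHD.
```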
From [13, Fact 3.2], it follows that if \(X\) is an \(F^{\prime}\)-space without isolated points, the only convergent sequences in \(X\) are eventually constant and the open sets are infinite. By Proposition 1, \(X\) is SHD. These arguments prove the following statement:
**Proposition 2**: _Let \(X\) be a Tychonoff topological space. If \(X\) is an \(F^{\prime}\)-space without isolated points then \(X\) is SHD._
Since any \(F\)-space is an \(F^{\prime}\)-space, any \(F\)-space without isolated points is SHD. It is worth noticing that not all SHD spaces are \(F\)-spaces. To illustrate this, we can refer to a result from [9, 14Q2], which states that if \(X\) and \(Y\) are two infinite pseudocompact spaces, then \(X\times Y\) is not an \(F\)-space. Therefore, we can take \(Z\) to be a pseudocompact, infinite, SHD space, and consider \(Z\times Z\). It follows that \(Z\times Z\) is SHD, but it is not an \(F\)-space.
## 3 Constructions and examples
Considering the definition of a SHD space, one might intuitively assume that a space of this class does not admit non-trivially convergent sequences. However, surprisingly, it is possible to construct a SHD space with non-trivially convergent sequences, along with a couple of other characteristics. An easy way to construct an example is using Theorem 1 and the space \(\mathbb{N}^{*}\times[0,1]\). But the next example has more interesting properties, as \(X\) is a SHD space which has non-trivial convergent sequences and also has a dense open set in which the only convergent sequences are the trivial ones.
**Example 2**.: _We know that \(Y=\mathbb{N}^{*}\) is a SHD space since it is an \(F\)-space with no isolated points. Let \(\mathcal{U}=\{U_{n}\ |\ n\in\mathbb{N}\}\) be a cellular family consisting of clopen sets in \(Y\). Note that \(\bigcup_{n\in\mathbb{N}}U_{n}\) is SHD as it is an open subset of \(Y\). For each \(n\in\mathbb{N}\), select a point \(z_{n}\in U_{n}\). Furthermore, for each \(x\in Y\), fix a local base for that point, denoted by \(\mathcal{V}_{x}=\{V_{i}\ |\ i\in I\}\), such that \(\bigcup\mathcal{V}_{x}\subseteq U_{n}\), where \(I\) is a sufficiently large set and \(x\in U_{n}\). To simplify notation, we will write \(\mathcal{V}_{z_{n}}\) as \(\mathcal{V}_{n}\). Let \(p\notin\beta\mathbb{N}\). Our space of interest will be \(X=\bigcup_{n\in\mathbb{N}}U_{n}\cup\{p\}\), endowed with the following topology. Let us first consider \(\mathcal{B}_{n}=\{\bigcup_{m\geq n}V_{m}\ |V_{m}\in\mathcal{V}_{m}\}\). Now, let \(\mathcal{B}=\bigcup_{n\in\mathbb{N}}\mathcal{B}_{n}\). We define the topology by specifying a neighborhood base at each point in the following way:_
1. _If_ \(x\in\bigcup_{n\in\mathbb{N}}U_{n}\)_, then the set_ \(\mathcal{V}_{x}\) _remains a neighborhood base for_ \(x\)_._
2. _A neighborhood base for_ \(p\) _is given by the family_ \(\{B\cup\{p\}\ |\ B\in\mathcal{B}\}\)_._
_This defines a Hausdorff topology on \(X\). Furthermore, this topology is Lindelof. Now let's examine a couple of important properties of \(X\):_
1. _The set_ \(\bigcup_{n\in\mathbb{N}}U_{n}\) _inherits its original topology. Moreover,_ \(U_{n}\) _remains a clopen set in_ \(X\)_._
2. _The sequence_ \(\{z_{n}\}_{n\in\mathbb{N}}\) _converges to_ \(p\) _in_ \(X\)_. This follows from the structure of neighborhoods of_ \(p\)_. From this, we conclude that_ \(Z=\{z_{n}\ |n\in\mathbb{N}\}\cup\{p\}\) _is a closed set in_ \(X\)_. Also, note that this set cannot be open, nor is it a neighborhood of_ \(p\)_._
3. _Given a non-trivial sequence_ \(\{x_{n}\}_{n\in\mathbb{N}}\subseteq\bigcup_{n\in\mathbb{N}}U_{n}\)_, this sequence cannot converge to any_ \(y\in\bigcup_{n\in\mathbb{N}}U_{n}\)_. This follows from property a) and the fact that in_ \(\bigcup_{n\in\mathbb{N}}U_{n}\) _there are no non-trivial convergent sequences, as it is a subspace of_ \(\mathbb{N}^{*}\)_._
4. _If_ \(\{x_{n}\}_{n\in\mathbb{N}}\subseteq\bigcup_{n\in\mathbb{N}}U_{n}\) _is such that for every_ \(n\in\mathbb{N}\)_,_ \(x_{n}\notin\{z_{n}\ |\ n\in\mathbb{N}\}\)_, then_ \(\{x_{n}\}_{n\in\mathbb{N}}\) _does not converge to_ \(p\) _in_ \(X\)_._ _For each_ \(n\in\mathbb{N}\)_, let_ \(i(n)\) _be the unique natural number such that_ \(x_{n}\in U_{i(n)}\)_. We have the following cases:_ _Case 1._ _The set_ \(\{i(n)\ |\ n\in\mathbb{N}\}\) _is finite. Under these conditions, the sequence is contained in a finite number of the_ \(U_{n}\) _sets and is therefore forced to converge to someone within this finite union, which is a closed set in_ \(X\)_. Therefore,_ \(\{x_{n}\}_{n\in\mathbb{N}}\) _does not converge to_ \(p\)_._ _Case 2._ _There exists_ \(m\in\mathbb{N}\) _such that the set_ \(A=\{n\in\mathbb{N}\ |\ i(n)=m\}\) _is infinite. If_ \(\{x_{n}\}_{n\in\mathbb{N}}\) _were to converge to_ \(p\)_, it would follow that_ \(\{x_{n}\}_{n\in A}\) _would also converge to_ \(p\)_, which is impossible since this sequence is contained in the closed set_ \(U_{m}\)_._ _Case 3._ _The set_ \(B=\{i(n)\ |\ n\in\mathbb{N}\}\) _is infinite, and for each_ \(m\in\mathbb{N}\)_,_ \(i^{-1}[m]\) _is finite. For_ \(m\notin B\)_, we define_ \(W_{m}=U_{m}\)_. On the other hand, if_ \(m\in B\)_, for each_ \(n\in i^{-1}[m]\)_, we choose_ \(V_{n}\in\mathcal{V}_{m}\) _such that_ \(x_{n}\notin V_{n}\)_, and_
_, finally we define \(W_{m}=\bigcap_{i(n)=m}V_{n}\), which is still an open set containing \(z_{m}\) as it is the finite intersection of elements from \(\mathcal{V}_{m}\). To conclude, we define \(W=\bigcup_{m\in\mathbb{N}}W_{m}\cup\{p\}\). \(W\) is an open set in \(X\) containing \(p\), and it is disjoint from the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\). Therefore, \(\{x_{n}\}_{n\in\mathbb{N}}\) does not converge to \(p\)._
* \(X\) _is SHD. Let_ \(\{W_{n}\mid n\in\mathbb{N}\}\) _be a collection of non-empty open sets in_ \(X\)_. Without loss of generality, we assume that each_ \(W_{n}\) _is a basic neighborhood. We define_ \(G_{n}=W_{n}\setminus Z\)_, and thus the collection_ \(\{G_{n}\mid n\in\mathbb{N}\}\) _consists of non-empty open sets in_ \(\bigcup_{n\in\mathbb{N}}U_{n}\)_. As this space is SHD, we can choose_ \(x_{n}\in G_{n}\) _such that_ \(\{x_{n}\}_{n\in\mathbb{N}}\) _does not have convergent subsequences in_ \(\bigcup_{n\in\mathbb{N}}U_{n}\)_. Combining this with the previous result, we conclude that none of its subsequences can converge to the point_ \(p\)_._
**Example 3**.: _In a similar way to the previous example, we consider the set \(X=\bigcup_{n\in\mathbb{N}}U_{n}\cup\{p\}\), where the \(U_{n}\) are clopen sets in \(\mathbb{N}^{*}\) and for all \(x\in\mathbb{N}^{*}\) we consider \(\mathcal{V}_{x}\) a local base for \(x\) such that \(\bigcup\mathcal{V}_{x}\subseteq U_{n}\) when \(x\in U_{n}\) and \(p\notin\beta\mathbb{N}\). We define the topology by specifying a neighborhood base at each point in the following way:_
* _For each_ \(x\in\bigcup_{n\in\mathbb{N}}U_{n}\)_, the set_ \(\mathcal{V}_{x}\) _remains a neighborhood base for_ \(x\)_._
* _For_ \(p\)_, a neighborhood base is given by the family_ \(\{\bigcup_{m\geq n}U_{m}\cup\{p\}\mid n\in\mathbb{N}\}\)_._
_This defines a compact Hausdorff topology, thus making \(X\) normal. Once again, \(\bigcup_{n\in\mathbb{N}}U_{n}\) inherits the original topology. Therefore, this latter set is a SHD dense subset of \(X\), but \(X\) is not SHD because \(p\) has countable character._
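To make the failure of SHD at \(p\) explicit, note that the neighborhood base at \(p\) given above is countable, so every selection of points from the family \(\{U_{n}\mid n\in\mathbb{N}\}\) converges to \(p\). A short verification (with \(W_{N}\) denoting the basic neighborhood indexed by \(N\)):

```latex
% Fix any choice of points x_n in U_n. For the basic neighborhood
% W_N = \bigcup_{m \geq N} U_m \cup \{p\} of p, the whole tail of
% the sequence lies in W_N:
x_{m}\in U_{m}\subseteq\bigcup_{k\geq N}U_{k}\cup\{p\}=W_{N}
\qquad\text{for every }m\geq N .
% Hence x_n converges to p, so no selection from the countable
% family {U_n : n in N} can witness the SHD property.
```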
Note that a necessary condition for a space to be SHD is the absence of isolated points, since every isolated point has a countable local base. Therefore, the first thing to verify when determining whether a space is SHD is that it has no isolated points.
One of the approaches we considered was to investigate whether any compactification, or remainder of a compactification, of a space is SHD. For example, \(\beta\mathbb{N}\) is not SHD because every point of \(\mathbb{N}\) is isolated in it. However, as we saw previously, \(\mathbb{N}^{*}\) is a SHD space. The key properties of \(\mathbb{N}^{*}\) are that its only convergent sequences are eventually constant and that it has no isolated points. One way to generalize this property relies on the following well-known result:
**Theorem 3**.: _Let \(X\) be a realcompact non-compact space. Then \(X^{*}\) has no isolated points._
Unfortunately, the previous theorem is not sharp enough in the sense that the converse implication is not true. The following example illustrates this:
**Example 4**.: _By [10, Theorem 8.6.2] there exists a maximal almost disjoint family \(\mathscr{A}\) such that the associated Mrowka space, denoted as \(\Psi(\mathscr{A})\), satisfies that \(\Psi(\mathscr{A})^{*}\) is homeomorphic to \([0,1]\), which is a compact metric space without isolated points. However, \(\Psi(\mathscr{A})\) is not realcompact, as shown in [9, 8H.6]._
A very useful class of examples is based on the following known result:
**Proposition 3**.: _Let \(X\) be a non-compact, locally compact, \(\sigma\)-compact and Tychonoff space. Then \(X^{*}\) is a compact SHD space._
This result presents a new way to prove that \(\mathbb{N}^{*}\) is a SHD space. Also, \(\left(\mathbb{R}^{n}\right)^{*}\) is a connected SHD space when \(n\geq 2\) by [5, Theorem 1]. This contrasts with the previous examples because they are disconnected SHD spaces. Another result of considerable significance is the following:
**Theorem 4**.: _Let \(X\) be a non-compact realcompact space such that the only convergent sequences in \(X\) are those that are eventually constant. Then \(X^{*}\) is SHD._
Proof.: Let \(p\in X^{*}\), and suppose that \(\{x_{n}\}_{n\in\mathbb{N}}\subseteq X^{*}\) is an injective sequence converging to \(p\). Since \(X\) is realcompact, there exists a zero set \(Z\) in \(\beta X\) with \(Z\subseteq X^{*}\) and \(Z\cap(\{x_{n}\mid n\in\mathbb{N}\}\cup\{p\})=\{p\}\). Let \(\{U_{n}\mid n\in\mathbb{N}\}\) be a sequence of cozero sets such that \(x_{n}\in U_{n}\), \(U_{n}\cap U_{m}=\emptyset\) for \(n\neq m\), and \(p\notin\bigcup_{n\in\mathbb{N}}U_{n}\). Hence, \(p\in Z\setminus\bigcup_{n\in\mathbb{N}}U_{n}=\bigcap_{n\in\mathbb{N}}(Z\setminus U_{n})\), and the latter set is a zero set in \(\beta X\). Since \(\beta X\setminus Z\) is normal (as it is \(\sigma\)-compact), the function \(f:\{x_{n}\mid n\in\mathbb{N}\}\to\mathbb{R}\) defined as
\[f(x_{n})=\left\{\begin{array}{ll}0,&\mbox{if $n$ is even}\\ 1,&\mbox{if $n$ is odd}\end{array}\right.\]
is continuous since \(\{x_{n}\mid n\in\mathbb{N}\}\) is closed in \(X^{*}\). Then there exists \(F:\beta X\to\mathbb{R}\) that extends \(f\). This is impossible as \(F\) is continuous and \(\{x_{n}\}_{n\in\mathbb{N}}\) converges to \(p\). Hence, the only convergent sequences in \(X^{*}\) are those that are eventually constant, and since \(X\) is realcompact, \(X^{*}\) has no isolated points. It follows that \(X^{*}\) is SHD.
In the search for more examples of SHD spaces, we considered the \(G_{\delta}\) modification of a topological space \(X\), which consists of taking the \(G_{\delta}\) sets of the usual topology of \(X\) as a base for a new topology. We denote the space \(X\) with this topology by \(X_{\delta}\). For more information on this topic, see [2]. The next theorem summarizes the work on the \(G_{\delta}\) modification:
**Theorem 5**.: _Let \(X\) be a regular topological space such that for each \(x\in X\), we have \(\psi(x,X)>\omega\). If \(X_{\delta}\) denotes the \(G_{\delta}\) modification of \(X\), then \(X_{\delta}\) is a SHD, zero-dimensional, and Tychonoff space. Furthermore, if \(X\) is also homogeneous, then \(X_{\delta}\) is also homogeneous._
Proof.: A routine argument shows that \(X_{\delta}\) is a \(P\)-space without isolated points. Moreover, since \(X_{\delta}\) is a regular \(P\)-space, it is zero-dimensional (see [7, 1W(1)]), and thus Tychonoff. By [7, 6L(3)], we have that \(X_{\delta}\) is an \(F\)-space without isolated points, and therefore \(X_{\delta}\) is a SHD space.
Finally, if \(x,y\in X\) satisfy \(x\neq y\), then the homogeneity of \(X\) produces a homeomorphism \(f:X\to X\) such that \(f(x)=y\). It follows that \(f:X_{\delta}\to X_{\delta}\) is a homeomorphism with \(f(x)=y\). Thus, \(X_{\delta}\) is homogeneous.
From this theorem, we obtain some interesting examples:
**Example 5**: _Let \(\mathfrak{m}>\omega\). It is well known that if \(X=D(2)^{\mathfrak{m}}\), then for every \(x\in X\), we have \(\psi(x,X)>\omega\). By Theorem 5, it follows that \(X_{\delta}\) is a SHD, Tychonoff, zero-dimensional, and homogeneous space. But we can say more. Note that \(X\) is a topological group, and it can be easily shown using [7, 1W.(7)] that \(X_{\delta}\) is also a topological group. Also, if \(X_{\delta}\) were pseudocompact, then by [7, 4AG(6)(e)], \(X_{\delta}\) would be finite, which is absurd. Therefore, \(X_{\delta}\) is a SHD space that is not pseudocompact._
If we examine the construction in Theorem 5 in detail, we would expect that if we change the condition "for every \(x\in X\) it holds that \(\psi(x,X)>\omega\)" to "for every \(x\in X\) it holds that \(\chi(x,X)>\omega\)", the conclusion would still hold. However, the following example shows that this is not possible:
**Example 6**: _By [8, Chapter 1 11.4], there exists a countable regular space \(X\) such that for every \(x\in X\) we have \(\chi(x,X)=2^{\omega}\). Since \(X\) is countable and regular, it follows that \(\psi(x,X)=\omega\) for each \(x\in X\). As a result, \(X_{\delta}\) is a discrete space, and therefore not SHD._
A classical and very interesting space in general topology is the Pixley-Roy hyperspace (see [4] and [11]) associated with a \(T_{1}\) topological space \(X\), which is the set \(\mathscr{F}[X]=\{A\subseteq X:0<|A|<\omega\}\), i.e., the set of non-empty finite subsets of \(X\), equipped with the topology generated by local bases for \(F\in\mathscr{F}[X]\) of the form \(\{[F,V]:V\in\tau_{X}\text{ and }F\subseteq V\}\), where \([F,V]=\{H\in\mathscr{F}[X]:F\subseteq H\subseteq V\}\). We now present some results relating the SHD property of \(X\) to that of \(\mathscr{F}[X]\). The proof of the next lemma is straightforward:
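As a concrete illustration of these basic open sets, take \(X=\mathbb{R}\) (a choice made purely for this example), \(F=\{0,1\}\) and \(V=(-1,2)\):

```latex
[F,V]=\bigl\{H\in\mathscr{F}[\mathbb{R}]\ :\ \{0,1\}\subseteq H\subseteq(-1,2)\bigr\}.
% For instance, {0,1} and {0, 1/2, 1} belong to [F,V],
% while {0,1,2} does not, since 2 is not an element of V.
```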
**Lemma 6**: _Let \(X\) be a \(T_{1}\) space in which every countable set is closed. If \(\{x_{n}\}_{n\in\mathbb{N}}\) is a sequence with no convergent subsequences and \(x\in X\), then there exists an open set \(U\) in \(X\) such that \(x\in U\) and moreover, \(|U\cap\{x_{n}\mid n\in\mathbb{N}\}|<\omega\)._
Notice that the previous lemma also works for \(P\)-spaces.
**Proposition 4**: _Let \(X\) be a \(T_{1}\) space where countable sets are closed. If \(X\) is SHD, then \(\mathscr{F}[X]\) is also SHD._
Proof.: Let \(\{W_{n}\mid n\in\mathbb{N}\}\) be a countable collection of non-empty open sets in \(\mathscr{F}[X]\). Since they are non-empty, for each \(n\in\mathbb{N}\) we can choose \(F_{n}\in W_{n}\). As \(W_{n}\) is open, for each \(n\in\mathbb{N}\) there exists a non-empty open set \(U_{n}\) in \(X\) such that \(F_{n}\in[F_{n},U_{n}]\subseteq W_{n}\). Note that \(\{U_{n}\mid n\in\mathbb{N}\}\) is a countable collection of open sets in \(X\), and since \(X\) is SHD, for each \(n\in\mathbb{N}\) there exists \(x_{n}\in U_{n}\) such that \(\{x_{n}\}_{n\in\mathbb{N}}\) is a sequence without convergent subsequences. For each \(n\in\mathbb{N}\), define \(H_{n}=F_{n}\cup\{x_{n}\}\). Note that \(H_{n}\in\mathscr{F}[X]\). Moreover, since \(F_{n}\in[F_{n},U_{n}]=\{H\in\mathscr{F}[X]\mid F_{n}\subseteq H\subseteq U_{n}\}\), we have \(F_{n}\subseteq U_{n}\); as \(x_{n}\in U_{n}\), it follows that \(F_{n}\subseteq F_{n}\cup\{x_{n}\}\subseteq U_{n}\), i.e., \(H_{n}\in[F_{n},U_{n}]\subseteq W_{n}\). So, for all \(n\in\mathbb{N}\), it holds that \(H_{n}\in W_{n}\).
We now prove that the sequence \(\{H_{n}\}_{n\in\mathbb{N}}\subseteq\mathscr{F}[X]\) has no convergent subsequences. To this end, consider a subsequence \(\{H_{n_{k}}\}_{k\in\mathbb{N}}\) of \(\{H_{n}\}_{n\in\mathbb{N}}\) along with a fixed \(H\in\mathscr{F}[X]\). Note that the subsequence \(\{H_{n_{k}}\}_{k\in\mathbb{N}}\) induces a subsequence \(\{x_{n_{k}}\}_{k\in\mathbb{N}}\) of \(\{x_{n}\}_{n\in\mathbb{N}}\); since the latter sequence has no convergent subsequences, neither does \(\{x_{n_{k}}\}_{k\in\mathbb{N}}\). Since \(H\) is finite, we can label its elements without repetitions so that \(H=\{z_{1},\ldots,z_{\ell}\}\). Thus, for each \(i\in\{1,\ldots,\ell\}\), by Lemma 6, there exists an open set \(O_{i}\) in \(X\) such that \(z_{i}\in O_{i}\) and \(|O_{i}\cap\{x_{n_{k}}\mid k\in\mathbb{N}\}|<\omega\). If we consider \(O=\bigcup\{O_{i}\mid i\in\{1,\ldots,\ell\}\}\), then \(O\) is an open set such that \(H\subseteq O\). Consider the neighborhood \([H,O]\). No tail of the sequence \(\{H_{n_{k}}\}_{k\in\mathbb{N}}\) is contained in \([H,O]\), since \(O\) contains only finitely many terms of \(\{x_{n_{k}}\}_{k\in\mathbb{N}}\). Thus, \(\{H_{n_{k}}\}_{k\in\mathbb{N}}\) does not converge to \(H\). Therefore, \(\mathscr{F}[X]\) is a SHD space.
**Corollary 1**: _Let \(X\) be a \(T_{1}\) space and a \(P\)-space. If \(X\) is SHD, then \(\mathscr{F}[X]\) is SHD._
The following result can be proven with standard arguments:
**Proposition 5**: _Let \(X\) be a \(T_{1}\) space. Then \(X\) is a \(P\)-space if and only if the Pixley-Roy hyperspace \(\mathscr{F}[X]\) is also a \(P\)-space._
**Theorem 7**: _Let \(X\) be a regular topological space. Then, the Pixley-Roy hyperspace \(\mathscr{F}[X]\) is a \(P\)-space and SHD if and only if \(X\) is a \(P\)-space and SHD._
Proof.: If \(X\) is a \(P\)-space and SHD, by Corollary 1 and Proposition 5, it follows that \(\mathscr{F}[X]\) is a \(P\)-space and SHD. Conversely, if \(\mathscr{F}[X]\) is a \(P\)-space, then \(X\) is a \(P\)-space by Proposition 5. Since \(\mathscr{F}[X]\) is SHD, it does not have isolated points. Hence, \(X\) does not have isolated points either. Thus, \(X\) is a \(P\)-space without isolated points, and being regular, it is an \(F\)-space without isolated points by [7, 6L]; therefore \(X\) is SHD.
## 4 SHD spaces of all cardinalities
The purpose of this section is to show that there exist SHD spaces of all infinite cardinalities with various topological properties. We will be using the following result constantly (see Proposition 2).
**Theorem 8**: _If \(X\) is an extremally disconnected Tychonoff space with no isolated points, then \(X\) is SHD._
Let \(X\) be a Hausdorff space. The symbol \(\theta X\) will stand for the collection of all \(\mathsf{R}(X)\)-ultrafilters in \(X\). Furthermore, for each \(A\in\mathsf{R}(X)\) let us denote by \(\lambda(A):=\{\mathscr{U}\in\theta X:A\in\mathscr{U}\}\). It can be verified that the family \(\{\lambda(A):A\in\mathsf{R}(X)\}\) is a basis for a topology on \(\theta X\).
Let us use the symbol \(EX\) to refer to the subspace \(\{\mathscr{U}\in\theta X:\bigcap\mathscr{U}\neq\emptyset\}\) (i.e., the _Iliadis absolute_ of \(X\)) and, for each \(x\in X\), let \(F(x):=\{A\in\mathsf{R}(X):x\in\operatorname{int}_{X}A\}\). It is known that the function \(k_{X}:EX\to X\) determined by \(k_{X}(\mathscr{U})\in\bigcap\mathscr{U}\) is well defined and has the following properties which can be found in [7, Theorem (e), p. 459].
**Theorem 9**.: _Let \(X\) be a Hausdorff space._
1. \(EX\) _is an extremally disconnected zero-dimensional Hausdorff space._
2. _If_ \(\mathscr{U}\in\theta X\) _and_ \(x\in X\)_, then_ \(\mathscr{U}\in EX\) _and_ \(k_{X}(\mathscr{U})=x\) _if and only if_ \(F(x)\subseteq\mathscr{U}\)_._
All that remains is to mention the following lemma that appears in [7, Theorem (b), p. 445].
**Lemma 10**.: _If \(D\) is a dense subspace of an extremally disconnected space, then \(D\) is extremally disconnected._
With this background we can prove the following result.
**Theorem 11**.: _If \(X\) is a Hausdorff space with no isolated points, then there exists a space \(sX\) that satisfies the following conditions:_
1. \(sX\) _is an extremally disconnected zero-dimensional Hausdorff space (consequently, SHD)._
2. \(|X|=|sX|\)_,_ \(\pi w(X)=\pi w(sX)\) _and_ \(c(X)=c(sX)\)_._
Proof. For each \(x\in X\) fix \(\mathscr{U}_{x}\in k_{X}^{-1}\{x\}\) and consider \(sX:=\{\mathscr{U}_{x}:x\in X\}\) with the topology that it inherits as a subspace of \(EX\). Clearly, \(k_{X}\) is a bijection between \(sX\) and \(X\).
**Claim.**\(sX\) is a dense subspace of \(EX\) with no isolated points.
To verify the previous statement, it is enough to prove that if \(A\in\mathsf{R}(X)\setminus\{\emptyset\}\), then \(|\lambda(A)\cap sX|\geq 2\). Let \(A\) be a regular non-empty closed subset of \(X\). First, since \(A\) is regular closed, \(\operatorname{int}_{X}A\) is not empty and therefore, since \(X\) has no isolated points, distinct \(x,y\in\operatorname{int}_{X}A\) exist. Then, since \(k_{X}(\mathscr{U}_{x})=x\) and \(k_{X}(\mathscr{U}_{y})=y\), Theorem 9(2) implies that \(F(x)\subseteq\mathscr{U}_{x}\) and \(F(y)\subseteq\mathscr{U}_{y}\). Thus, since \(A\in F(x)\cap F(y)\), we deduce that \(A\in\mathscr{U}_{x}\cap\mathscr{U}_{y}\) and hence \(\{\mathscr{U}_{x},\mathscr{U}_{y}\}\subseteq\lambda(A)\); in particular, \(|\lambda(A)\cap sX|\geq 2\).
Now, since \(sX\) is a dense subspace of \(EX\), we obtain that \(sX\) is Hausdorff, zero-dimensional and extremally disconnected (see Theorem 9(1) and Lemma 10). Thus, Theorem 8 and our Claim guarantee that \(sX\) is SHD.
Finally, to verify that \(\pi w(X)=\pi w(sX)\) and \(c(X)=c(sX)\), we only have to remember a couple of results. Recall that if \(D\) is a dense subspace of a \(T_{3}\) space \(Y\), then \(\pi w(D)=\pi w(Y)\) and \(c(D)=c(Y)\) (see [6, 2.6(a), p. 14] and [6, 2.7(a), p. 15]). Furthermore, for any Hausdorff space \(Y\) it is satisfied that \(\pi w(Y)=\pi w(EY)\) and \(c(Y)=c(EY)\) (see [7, 6B(4), p. 496]). In short, since \(X\) is \(T_{2}\), \(sX\) is a dense subspace of \(EX\) and \(EX\) is \(T_{3}\), the relations
\[\pi w(X)=\pi w(EX)=\pi w(sX)\quad\text{and}\quad c(X)=c(EX)=c(sX)\]
are verified.
Since for any infinite cardinal \(\kappa\) the free topological sum \(\bigoplus_{\alpha<\kappa}\mathbb{Q}\) is a Hausdorff space with no isolated points whose cardinality, \(\pi\)-weight and cellularity are all equal to \(\kappa\), Theorem 11 implies:
**Corollary 2**: _For any infinite cardinal \(\kappa\) there exists a Hausdorff space \(X_{\kappa}\) that is zero-dimensional, SHD and has cardinality, \(\pi\)-weight and cellularity \(\kappa\)._
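For completeness, the cardinal computations behind the sum \(\bigoplus_{\alpha<\kappa}\mathbb{Q}\) invoked above are routine (here \(\kappa\) is an infinite cardinal):

```latex
\Bigl|\bigoplus_{\alpha<\kappa}\mathbb{Q}\Bigr|=\kappa\cdot\omega=\kappa,
\qquad
\pi w\Bigl(\bigoplus_{\alpha<\kappa}\mathbb{Q}\Bigr)
  =\kappa\cdot\pi w(\mathbb{Q})=\kappa\cdot\omega=\kappa,
\qquad
c\Bigl(\bigoplus_{\alpha<\kappa}\mathbb{Q}\Bigr)
  =\kappa\cdot c(\mathbb{Q})=\kappa\cdot\omega=\kappa.
% Moreover, a topological sum of Hausdorff spaces without isolated
% points is again Hausdorff and has no isolated points.
```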
In particular, a fundamental example that we will constantly mention and that is obtained from what is stated in Theorem 11 is the following:
**Example 7**: _The space \(s\mathbb{Q}\) is \(T_{2}\), zero-dimensional, extremally disconnected (consequently, SHD), countable, and has countable \(\pi\)-weight._
What follows is to expose the details to obtain a result similar to Theorem 11 related to the cardinal function known as density.
If \(X\) is a Hausdorff space, the space \(\theta X\) is compact, \(T_{2}\), extremally disconnected and zero-dimensional, and if \(X\) has no isolated points, then \(\theta X\) also has no isolated points (see [7, SS6.3]). Furthermore, if \(X\) is infinite, a consequence of [16, Theorem B.14, p. 270] is that \(w(\theta X)=|\operatorname{\mathsf{R}}(X)|\).
In the case of compact Hausdorff spaces, the density of the space \(\theta X\) coincides with the density of the original space. This fact is proven in [16, Corollary B.20, p. 272].
The previous remarks combined with Theorem 8 imply the following result.
**Theorem 12**: _If \(X\) is an infinite Hausdorff space with no isolated points, then \(\theta X\) is a zero-dimensional SHD compact Hausdorff space with \(w(\theta X)=|\operatorname{\mathsf{R}}(X)|\). Also, if \(X\) is compact, \(d(X)=d(\theta X)\)._
With the above theorem we are in a position to obtain a result similar to Theorem 11 with respect to density.
**Theorem 13**: _For any infinite cardinal \(\kappa\), there exists a space \(Y_{\kappa}\) that is Hausdorff, compact, zero-dimensional and SHD with \(d(Y_{\kappa})=2^{\kappa}\leq w(Y_{\kappa})\leq 2^{2^{\kappa}}\)._
Proof.: Let \(\lambda\) stand for the cardinal \(2^{\kappa}\). We shall use the symbol \(\lambda^{*}\) to denote the remainder of the Stone-Čech compactification of the discrete space of cardinality \(\lambda\). By Theorem 12 it is only necessary to argue that the space \(Y_{\kappa}:=\theta\left(\lambda^{*}\right)\) satisfies the relations \(d(Y_{\kappa})=\lambda\leq w(Y_{\kappa})\leq 2^{\lambda}\).
On the one hand, [14, Theorem, p. 229] implies that \(d(Y_{\kappa})=d\left(\lambda^{*}\right)=\lambda^{\omega}=\lambda\). On the other hand, by virtue of the equality \(w(Y_{\kappa})=|\operatorname{\mathsf{R}}(\lambda^{*})|\), it is enough to notice that usual arguments with cardinal functions (see [8]) show that \(\lambda=d\left(\lambda^{*}\right)\leq w\left(\lambda^{*}\right)\leq| \operatorname{\mathsf{R}}\left(\lambda^{*}\right)|\leq 2^{d(\lambda^{*})}=2^{\lambda}\).
For any infinite cardinal \(\kappa\), let us denote by \(D(2)^{\kappa}\) the Cantor cube of weight \(\kappa\). A routine argument shows that \(D(2)^{\kappa}\) is a compact Hausdorff space with no isolated points. Furthermore, a consequence of the Generalized Continuum Hypothesis, \(\mathsf{GCH}\), and of [8, Example 11.8, p. 44] is that
\[d\left(D(2)^{2^{\kappa}}\right)=\kappa\quad\text{and}\quad\left|\operatorname{ \mathsf{R}}\left(D(2)^{2^{\kappa}}\right)\right|=2^{\kappa}.\]
**Theorem 14** [\(\mathsf{GCH}\)]: _For any infinite cardinal \(\kappa\), there exists a space \(Z_{\kappa}\) that is compact, Hausdorff, zero-dimensional and SHD with \(d(Z_{\kappa})=\kappa\) and \(w(Z_{\kappa})=2^{\kappa}\)._
Proof. By virtue of Theorem 12 and the observations in the previous paragraph, if \(\kappa\geq\omega\), it is clear that \(Z_{\kappa}:=\theta\left(D(2)^{2^{\kappa}}\right)\) satisfies the desired properties.
## 5 Non-semiregular SHD spaces
Although topological constructions are usually designed so that the resulting space possesses rich properties, in this section we show that SHD spaces are also very versatile: SHD spaces can be found, again of all possible infinite cardinalities, that fail to be semiregular (i.e., whose regular open sets do not form a basis).
Recall that a space \(X\) admits **no** non-trivial convergent sequences if and only if the only convergent sequences in \(X\) are semiconstant (i.e., eventually constant). We now recall Proposition 1, which has the following corollary; both results will be used repeatedly throughout this section.
**Corollary 3**: **.** _If \(X\) is a \(T_{1}\) space without isolated points that does not admit non-trivial convergent sequences, then \(X\) is SHD._
Next, we will present a general way of producing non-semiregular SHD topological spaces from topological spaces with certain characteristics.
**Definition 1**: **.** _If \(X\) is a topological space, we will denote by \(mX\) the set \(X\) equipped with the topology whose base is the collection_
\[\mathscr{B}:=\left\{U\setminus A:U\in\tau_{X}\ \wedge\ A\in[X]^{\leq\omega} \right\}.\]
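For instance, taking \(X=\mathbb{R}\) purely as an illustration, removing a countable set from an ordinary open set produces a basic open set of \(m\mathbb{R}\) that is not open in \(\mathbb{R}\):

```latex
(0,1)\setminus\mathbb{Q}\ \in\ \tau_{m\mathbb{R}}\setminus\tau_{\mathbb{R}},
\qquad\text{since }(0,1)\in\tau_{\mathbb{R}}
\ \text{ and }\ \mathbb{Q}\in[\mathbb{R}]^{\leq\omega}.
% Every non-empty open subset of R is uncountable, so Theorem 15
% applies to this example and mR is SHD.
```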
**Theorem 15**: **.** _If \(X\) is a topological space such that, for any \(U\in\tau_{X}^{+}\), it is satisfied that \(|U|>\omega\), then \(mX\) is SHD. Furthermore, if \(X\) is \(T_{0}\), \(T_{1}\), \(T_{2}\), Urysohn, or completely Hausdorff, then so is \(mX\)._
Proof. First note that if \(U\in\tau_{mX}^{+}\), then there exist \(V\in\tau_{X}^{+}\) and \(A\in[X]^{\leq\omega}\) with \(V\setminus A\subseteq U\). Then, since \(|V|>\omega\) we deduce that \(|U|\geq|V\setminus A|>\omega\); in particular, \(U\) is infinite.
To prove that \(mX\) does not admit non-trivial convergent sequences, we note that if \(x\in mX\) and \(\{x_{n}\}_{n\in\mathbb{N}}\) is a sequence in \(mX\) with \(x\not\in\{x_{n}:n<\omega\}\), then \(U:=X\setminus\{x_{n}:n<\omega\}\) is an element of \(\tau_{mX}(x)\) such that \(U\cap\{x_{n}:n<\omega\}=\emptyset\); consequently, \(\{x_{n}\}_{n\in\mathbb{N}}\) does not converge to \(x\) in \(mX\) and, therefore, any convergent sequence in \(mX\) is necessarily semiconstant. Consequently, Proposition 1 guarantees that \(mX\) is SHD.
For the second part it is enough to observe that if \(X\) has any of the separation properties mentioned in the statement of our theorem, then the inclusion \(\tau_{X}\subseteq\tau_{mX}\) guarantees that \(mX\) satisfies the same property.
**Lemma 16**: _If \(X\) is a topological space and for any \(U\in\tau_{X}^{+}\) we have that \(|U|>\omega\), then \(\mathsf{RO}(mX)\subseteq\tau_{X}\). In particular, if there exists \(V\in\tau_{mX}\setminus\tau_{X}\), then \(mX\) is not semiregular._
Proof.: Let \(U\in\mathsf{RO}(mX)\), \(x\in U\), \(V\in\tau_{X}\) and \(A\in[X]^{\leq\omega}\) be such that \(x\in V\setminus A\subseteq U\). We will prove first that \(\operatorname{cl}_{mX}(V\setminus A)=\operatorname{cl}_{X}V\).
Start by noticing that the relations \(\tau_{X}\subseteq\tau_{mX}\) and \(V\setminus A\subseteq V\) imply that \(\operatorname{cl}_{mX}(V\setminus A)\subseteq\operatorname{cl}_{mX}V\subseteq \operatorname{cl}_{X}V\). On the other hand, if \(y\in\operatorname{cl}_{X}V\), \(W\in\tau_{X}\) and \(B\in[X]^{\leq\omega}\) are such that \(y\in W\setminus B\), then \(W\cap V\in\tau_{X}^{+}\). Thus, it is satisfied that \(|W\cap V|>\omega\) and, therefore, we obtain that \(V\cap(W\setminus B)\neq\emptyset\). Hence, \(y\in\operatorname{cl}_{mX}V\).
To finish the argument we note that \(V\in\tau_{X}(x)\) (in particular, \(V\in\tau_{mX}(x)\)) and \(V\subseteq\operatorname{cl}_{X}V=\operatorname{cl}_{mX}(V\setminus A)\). Thus, \(x\in V\subseteq\operatorname{int}_{mX}\operatorname{cl}_{mX}(V\setminus A) \subseteq\operatorname{int}_{mX}\operatorname{cl}_{mX}U=U\); in short, \(x\in V\subseteq U\).
The second part is simple: if \(mX\) is semiregular, then any open set of \(mX\) is a union of elements of \(\mathsf{RO}(mX)\) which, by the inclusion we proved above, turns out to be a union of elements of \(\tau_{X}\), i.e., it is open in \(X\).
**Lemma 17**: _There is a subspace \(X\) of \(\mathbb{R}\) with the following characteristics:_
1. \(|X|=\omega_{1}\)_;_
2. _for any_ \(U\in\tau_{X}^{+}\)_,_ \(|U|=\omega_{1}\) _and_ \(U\cap\mathbb{Q}\neq\emptyset\)_; and_
3. \(X\setminus\mathbb{Q}\) _is not an open subset of_ \(X\)_._
Proof.: Let \(\mathscr{B}:=\{(a,b):a,b\in\mathbb{Q}\ \wedge\ a<b\}\). For each \(B\in\mathscr{B}\) choose \(X_{B}\in[B]^{\omega_{1}}\) with \(X_{B}\cap\mathbb{Q}\neq\emptyset\), and consider the subspace \(X:=\bigcup\{X_{B}:B\in\mathscr{B}\}\). Clearly, \(|X|=\omega_{1}\). Furthermore, if \(U\in\tau_{\mathbb{R}}^{+}\) and \(B\in\mathscr{B}\) are such that \(B\subseteq U\), then we have the relations
\[\omega_{1}=|X_{B}\cap B|\leq|X\cap B|\leq|X\cap U|\leq|X|\leq|\mathscr{B}| \cdot\sup\left\{|X_{B}|:B\in\mathscr{B}\right\}\leq\omega_{1}.\]
In conclusion, \(|X\cap U|=\omega_{1}\) and \((X\cap U)\cap\mathbb{Q}\neq\emptyset\). Finally, since (1) and (2) ensure that \(X\cap\mathbb{Q}\) is a proper and dense subset of \(X\), we infer that \(X\setminus\mathbb{Q}\) does not belong to the collection \(\tau_{X}\).
**Theorem 18**: _For any uncountable cardinal \(\kappa\) there exists a space \(X_{\kappa}\) that is completely Hausdorff, not semiregular, and SHD of cardinality \(\kappa\)._
Proof.: Let \(X\) be as in Lemma 17. For each \(\alpha<\kappa\) let us denote by \(Y_{\alpha}\) the space \(X\times\{\alpha\}\). We will use the symbol \(Y\) to refer to the topological sum \(\bigoplus_{\alpha<\kappa}Y_{\alpha}\). Finally, let \(X_{\kappa}:=mY\). Since Lemma 17(1) ensures the equality \(|X|=\omega_{1}\) and \(\kappa\geq\omega_{1}\), the space \(X_{\kappa}\) has cardinality \(\kappa\).
Now, since \(X\) is a subspace of \(\mathbb{R}\), \(\mathbb{R}\) is completely Hausdorff, and this property is hereditary and preserved under topological sums, we get that \(Y\) is a completely Hausdorff space. On the other hand, Lemma 17(2) guarantees that \(|U|>\omega\) whenever \(U\in\tau_{Y}^{+}\). Thus, Theorem 15 certifies that \(X_{\kappa}\) is completely Hausdorff and SHD.
To verify that \(X_{\kappa}\) is not semiregular, let us first observe that, by Lemma 17(3), \((X\setminus\mathbb{Q})\times\{0\}\) is not an open subset of \(Y_{0}\); in particular, \((X\setminus\mathbb{Q})\times\{0\}\) is not an element of \(\tau_{Y}\). However, since
\[(X\setminus\mathbb{Q})\times\{0\}=(X\times\{0\})\setminus(\mathbb{Q}\times\{0 \}),\]
the membership \((X\setminus\mathbb{Q})\times\{0\}\in\tau_{mY}\) is satisfied. Thus, Lemma 16 implies that \(X_{\kappa}\) is not semiregular.
The rest of the section is dedicated to constructing a countable space which is SHD and not semiregular with a method completely different from the construction exposed in Theorem 18.
Recall that a Hausdorff space \(X\) is _\(H\)-closed_ if it is closed in any Hausdorff space that contains it as a subspace. On the other hand, a function between topological spaces \(f:X\to Y\) is _\(\theta\)-continuous_ if for any \(x\in X\) and \(V\in\tau_{Y}(f(x))\), there exists \(U\in\tau_{X}(x)\) such that \(f[\operatorname{cl}_{X}U]\subseteq\operatorname{cl}_{Y}V\). Our next lemma requires familiarity with the construction of Theorem 11.
**Lemma 19**.: _Let \(X\) be a topological space. If \(A\) is a clopen subspace of \(X\) that is not \(H\)-closed, then \(\lambda(A)\cap sX\) is not \(H\)-closed._
Proof. We will first argue that \(k_{X}[\lambda(A)\cap sX]=A\). Note that [7, Theorem (e)(3), p. 459] guarantees the relations \(k_{X}[\lambda(A)\cap sX]\subseteq k_{X}[\lambda(A)\cap EX]=A\). On the other hand, if \(x\in A\), then \(x\in\operatorname{int}_{X}A\) and therefore \(A\in F(x)\). Then, since \(k_{X}(\mathscr{U}_{x})=x\), Theorem 9(2) guarantees that \(F(x)\subseteq\mathscr{U}_{x}\). In this way, \(A\in\mathscr{U}_{x}\), that is, \(\mathscr{U}_{x}\in\lambda(A)\cap sX\) and thus, \(x\in k_{X}[\lambda(A)\cap sX]\). In conclusion, \(k_{X}[\lambda(A)\cap sX]=A\).
Finally, since \(k_{X}\) is a \(\theta\)-continuous function (see [7, Theorem (e)(5), p. 459]) and \(A\) is not \(H\)-closed, [7, Proposition (h), p. 302] implies that \(\lambda(A)\cap sX\) is not \(H\)-closed.
If \(X\) is a Hausdorff space and \(\mathscr{F}\) is an open filter base on \(X\), then \(\mathscr{F}\) is free if and only if the _adherence of_ \(\mathscr{F}\), \(a_{X}(\mathscr{F}):=\bigcap\{\operatorname{cl}_{X}F:F\in\mathscr{F}\}\), is empty (see [7, SS2.3]). Additionally, we will use the symbol \(\kappa X\) to denote the _Katětov extension of_ \(X\), that is, \(\kappa X\) is the set \(\kappa X:=X\cup\{\mathscr{U}:\mathscr{U}\) is a free open ultrafilter on \(X\}\) whose topology is determined by the basis \(\tau_{X}\cup\{U\cup\{\mathscr{U}\}:\mathscr{U}\in\kappa X\setminus X\text{ and }U\in\mathscr{U}\}\) (see [7, SS4.8]).
**Lemma 20**.: _If \(X\) is a Hausdorff space and \(U\in\tau_{X}\), then \(\operatorname{cl}_{X}U\) is not \(H\)-closed if and only if there exists \(\mathscr{U}\in\kappa X\setminus X\) with \(U\in\mathscr{U}\)._
Proof. Let \(Y\) denote the space \(\operatorname{cl}_{X}U\) and suppose first that there exists \(\mathscr{U}\in\kappa X\setminus X\) with \(U\in\mathscr{U}\). Consider the collection \(\mathscr{F}:=\{U\cap V:V\in\mathscr{U}\}\). A routine argument shows that \(\mathscr{F}\) is an open filter base on \(Y\). Also, since \(a_{Y}(\mathscr{F})\subseteq\bigcap\{\operatorname{cl}_{X}(U\cap V):V\in \mathscr{U}\}\subseteq a_{X}(\mathscr{U})=\emptyset\), we deduce that \(\mathscr{F}\) is free, and therefore any open filter on \(Y\) that extends \(\mathscr{F}\) must be free. By [7, Proposition (b), p. 298], we deduce that \(Y\) is not \(H\)-closed.
Suppose now that \(Y\) is not \(H\)-closed and use [7, Proposition (b), p. 298] to find an ultrafilter \(\mathscr{W}\) that is open and free in \(Y\). Note that \(U\in\mathscr{W}\) because, otherwise, since \(\mathscr{W}\) is an open ultrafilter in \(Y\), we would have \(\emptyset=Y\setminus\operatorname{cl}_{Y}U\in\mathscr{W}\), which is absurd. Furthermore, since \(Y\) is a closed subset of \(X\), for any \(W\in\mathscr{W}\) the equality \(\operatorname{cl}_{Y}W=\operatorname{cl}_{X}W\) is satisfied (see [7, 1A(2), p. 55]).
Now, consider the collection \(\mathscr{F}:=\{V\in\tau_{X}:V\cap Y\in\mathscr{W}\}\). A simple reasoning shows \(\mathscr{F}\) is an open filter on \(X\) that satisfies the membership \(U\in\mathscr{F}\). For this reason, there exists an open ultrafilter \(\mathscr{U}\) in \(X\) such that \(\mathscr{F}\subseteq\mathscr{U}\) (see [7, Proposition (d)(2), p. 93]). In order to see that \(a_{X}(\mathscr{U})=\emptyset\) suppose, in search of a contradiction, that \(x\in a_{X}(\mathscr{U})\). Given \(W\in\mathscr{W}\) there exists \(V\in\tau_{X}\) such that \(W=V\cap Y\). In particular, \(V\in\mathscr{F}\subseteq\mathscr{U}\). Then, as \(U\in\mathscr{U}\) we have \(V\cap U\in\mathscr{U}\) and therefore, \(x\in\operatorname{cl}_{X}(V\cap U)\). Finally, we can use [7, Proposition (a)(3), p. 81] to ensure the equalities
\[\operatorname{cl}_{X}(V\cap U)=\operatorname{cl}_{X}(V\cap\operatorname{cl}_{ X}U)=\operatorname{cl}_{X}(V\cap Y)=\operatorname{cl}_{X}W=\operatorname{cl}_{Y}W.\]
This shows that for any \(W\in\mathscr{W}\), \(x\in\operatorname{cl}_{Y}W\); in other words, \(x\in a_{Y}(\mathscr{W})\), which contradicts that \(\mathscr{W}\) is free. In sum, \(\mathscr{U}\in\kappa X\setminus X\) and \(U\in\mathscr{U}\).
In what follows we will be working with the space \(s\mathbb{Q}\) (see Example 7).
**Lemma 21**.: _There exists a collection \(\{\mathscr{V}_{n}:n<\omega\}\subseteq\kappa\left(s\mathbb{Q}\right)\setminus s \mathbb{Q}\) such that, for any \(U\in\tau_{s\mathbb{Q}}^{+}\), there exists \(n<\omega\) with \(U\in\mathscr{V}_{n}\)._
Proof.: Since \(\{(a,b)\cap\mathbb{Q}:a,b\in\mathbb{R}\setminus\mathbb{Q}\ \wedge\ a<b\}\) is a basis for \(\mathbb{Q}\) and this space is second countable, the family contains a faithfully enumerated subfamily \(\{B_{n}:n<\omega\}\) that is a basis for \(\mathbb{Q}\). Note that if \(n<\omega\), then \(B_{n}\) is a clopen subset of \(\mathbb{Q}\). Furthermore, since \(B_{n}\) is not closed in \(\mathbb{R}\), we deduce that \(B_{n}\) is not \(H\)-closed. For this reason, for each \(n<\omega\) we can use Lemma 19 to verify that \(\lambda(B_{n})\cap s\mathbb{Q}\) is not \(H\)-closed. Thus, since \(\lambda(B_{n})\cap s\mathbb{Q}\) is a clopen subspace of \(s\mathbb{Q}\) and \(\operatorname{cl}_{s\mathbb{Q}}\left(\lambda(B_{n})\cap s\mathbb{Q}\right)\) is not \(H\)-closed, Lemma 20 implies the existence of \(\mathscr{V}_{n}\in\kappa\left(s\mathbb{Q}\right)\setminus s\mathbb{Q}\) with \(\lambda(B_{n})\cap s\mathbb{Q}\in\mathscr{V}_{n}\).
Finally, if \(U\in\tau_{s\mathbb{Q}}^{+}\), then there exists \(A\in\mathsf{R}(\mathbb{Q})\setminus\{\emptyset\}\) such that \(\lambda(A)\cap s\mathbb{Q}\subseteq U\). Then, since \(\operatorname{int}_{\mathbb{Q}}A\) is non-empty, there exists \(n<\omega\) with \(B_{n}\subseteq\operatorname{int}_{\mathbb{Q}}A\). Thus, the inclusion \(\lambda(B_{n})\cap s\mathbb{Q}\subseteq\lambda(A)\cap s\mathbb{Q}\) certifies that \(U\in\mathscr{V}_{n}\).
In Example 7 it was noted that the space \(s\mathbb{Q}\) is Hausdorff, extremally disconnected, and has no isolated points. We will use these facts in the proof of Theorem 22. For the purposes of the following result, the symbol \(\mathbb{Q}_{*}\) will denote the subspace \(s\mathbb{Q}\cup\{\mathscr{V}_{n}:n<\omega\}\) of \(\kappa\left(s\mathbb{Q}\right)\), where \(\{\mathscr{V}_{n}:n<\omega\}\) is as in Lemma 21.
**Theorem 22**.: \(\mathbb{Q}_{*}\) _is Hausdorff, extremally disconnected, without isolated points and not semiregular. In particular, \(\mathbb{Q}_{*}\) is a countable space, SHD and not semiregular._
Proof.: First, since \(s\mathbb{Q}\) has no isolated points and is dense in \(\mathbb{Q}_{*}\), \(\mathbb{Q}_{*}\) also does not have isolated points. On the other hand, since \(s\mathbb{Q}\) is extremally disconnected, [7, Theorem (b)(7), p. 445] guarantees that \(\kappa\left(s\mathbb{Q}\right)\) is too. Thus, since
\(\mathbb{Q}_{*}\) is dense in \(\kappa\left(s\mathbb{Q}\right)\), [7, Theorem (b)(2), p. 445] ensures that \(\mathbb{Q}_{*}\) is extremally disconnected and has the Hausdorff property.
To argue that \(\mathbb{Q}_{*}\) is not semiregular, let us recall that in extremally disconnected spaces semiregularity and zero-dimensionality coincide (see [7, Theorem, p. 451]). Since zero-dimensional Hausdorff spaces are completely regular, it is enough to verify that \(\mathbb{Q}_{*}\) is not completely regular.
Suppose, toward a contradiction, that \(\mathbb{Q}_{*}\) is a completely regular space. Let us fix \(x\in s\mathbb{Q}\) and use that \(\left\{\mathscr{V}_{n}:n<\omega\right\}\) is a closed subspace of \(\mathbb{Q}_{*}\) (see [7, Proposition (p)(1), p. 309]) to find a continuous function \(f:\mathbb{Q}_{*}\to[0,1]\) such that \(f(x)=0\) and \(f\left[\left\{\mathscr{V}_{n}:n<\omega\right\}\right]=\left\{1\right\}\). Let us use the continuity of \(f\) to find \(U\in\tau_{s\mathbb{Q}}(x)\) with \(f[U]\subseteq[0,1/2)\). Hence, since \(\mathbb{Q}_{*}\) is regular and \(U\in\tau_{\mathbb{Q}_{*}}(x)\), there exists \(V\in\tau_{\mathbb{Q}_{*}}(x)\) with \(\operatorname{cl}_{\mathbb{Q}_{*}}V\subseteq U\).
Now, note that the inclusion \(V\subseteq U\) guarantees that, in fact, \(V\in\tau_{s\mathbb{Q}}^{+}\). Thus, Lemma 21 produces \(n<\omega\) with \(V\in\mathscr{V}_{n}\). Also, as \(\mathscr{V}_{n}\in\operatorname{cl}_{\kappa(s\mathbb{Q})}V\) (see [7, Proposition (p)(2), p. 309 ]), we deduce that \(\mathscr{V}_{n}\in\operatorname{cl}_{\mathbb{Q}_{*}}V\). Finally, since \(f\left[\operatorname{cl}_{\mathbb{Q}_{*}}V\right]\subseteq[0,1/2)\) and \(f(\mathscr{V}_{n})=1\), we have obtained the desired contradiction. Consequently, \(\mathbb{Q}_{*}\) is not semiregular.
## 6 Open questions
**Question 1**.: _Is it possible to obtain a result similar to Theorem 14 without additional axioms?_
**Question 2**.: _Is it true that if \(\kappa\) is an uncountable cardinal, then \(X=D(2)^{\kappa}\) is a SHD space?_
In the same spirit as Theorem 3, we have the following question:
**Question 3**.: _If \(X\) is realcompact, Tychonoff and non-compact, is it true that \(X^{*}\) is SHD?_
As we have seen in 3, the property of being SHD is not preserved under normal extensions; however, the Stone-Čech compactification is a very particular extension with strong properties. In this regard, we have the following questions:
**Question 4**.: _If \(X\) is Tychonoff, non-compact and SHD, does it hold that \(\beta X\) is SHD?_
**Question 5**.: _Is the property of being SHD dense hereditary?_
**Question 6**.: _If \(X\) is Tychonoff and non-compact, is it always true that if \(\beta X\) is SHD then \(X\) is SHD?_
**Question 7**.: _Is \(\mathscr{F}[X]\) SHD whenever \(X\) is SHD and \(T_{1}\)?_
**Question 8**.: _Is it true that if \(X\) is \(T_{1}\) and \(\mathscr{F}[X]\) is SHD, then \(X\) is SHD?_ |
# Data-driven Polytopic Output Synchronization of Heterogeneous Multi-agent Systems from Noisy Data

Yifei Li, Wenjie Liu, Jian Sun, Gang Wang, Lihua Xie, Jie Chen

2023-07-14 | arXiv:2307.07128v1 | http://arxiv.org/abs/2307.07128v1
###### Abstract
This paper proposes a novel approach to addressing the output synchronization problem in unknown heterogeneous multi-agent systems (MASs) using noisy data. Unlike existing studies that focus on noiseless data, we introduce a distributed data-driven controller that enables all heterogeneous followers to synchronize with a leader's trajectory. To handle the noise in the state-input-output data, we develop a data-based polytopic representation for the MAS. We tackle the issue of infeasibility in the set of output regulator equations caused by the noise by seeking approximate solutions via constrained fitting error minimization. This method utilizes measured data and a noise-matrix polytope to ensure near-optimal output synchronization. Stability conditions in the form of data-dependent semidefinite programs are derived, providing stabilizing controller gains for each follower. The proposed distributed data-driven control protocol achieves near-optimal output synchronization by ensuring the convergence of the tracking error to a bounded polytope, with the polytope size positively correlated with the noise bound. Numerical tests validate the practical merits of the proposed data-driven design and theory.
Data-driven control, heterogeneous MAS, output synchronization, noisy data, polytope
## I Introduction
The field of distributed control, particularly consensus control of multi-agent systems (MASs), has garnered significant attention in recent decades. Numerous research efforts have focused on achieving state consensus in homogeneous MASs, where all agents share identical dynamics, as evidenced in e.g., [1, 2, 3, 4, 5, 6] and their associated references. However, in real-world scenarios, the presence of inevitable variations between agents and system uncertainties stemming from physical characteristics introduce heterogeneity in the dynamics of MASs. Consequently, there is a growing need to investigate the problem of output synchronization in heterogeneous MASs, which consist of agents with different dimensions and dynamics and find extensive applications.
A well-documented approach for addressing the output synchronization problem is based on the internal model principle [7]. This principle has been widely applied to design protocols for output synchronization in various settings, including heterogeneous linear MASs [8, 9, 10], nonlinear MASs [11], and their generalizations with additional considerations [12, 13, 14].
However, these protocols rely on knowledge of the system dynamics model of each agent, rendering them inapplicable when first-principle models are unavailable or system identification is time-consuming or inaccurate. To overcome this challenge, several results have explored model-free approaches based on reinforcement learning (RL) [15, 16, 17]. Nevertheless, model-free RL-based approaches often require a large amount of training data, demanding considerable computing resources. Alternatively, the fundamental lemma introduced by _Willems et al._ in [18] provides an alternative avenue for designing controllers for unknown systems by using pre-collected data, such as input, output, and/or state data. This line of research has gained significant attention due to its advantages in terms of theoretical certification and computational tractability compared to other data-driven approaches [19, 20, 21]. Moreover, a data-based representation for linear time-invariant systems has been proposed in [22], which enables the design of stabilizing controllers using data-dependent linear matrix inequalities.
The fundamental lemma has lately found applications in diverse areas of data-driven control design and analysis, including data-driven model predictive control [23, 24, 25], data-enabled policy optimization [26], robust control [27, 28], event-triggered control [29, 30, 31, 32], and distributed control of network systems [33, 34, 35], among others.
The problem of data-driven control design and analysis for output synchronization in unknown heterogeneous MASs remains unexplored. While some progress has been made, such as the work presented in [34] where the process noise during offline data collection was assumed to be measurable and perfectly known, the general case of having unknown noise has not been addressed. This limitation arises from the fact that process noise cannot be accurately measured in practical scenarios. Consequently, there is a pressing need to revisit the data-driven output synchronization problem in heterogeneous MASs, considering the presence of unknown noise. Moreover, accurate system identification is hindered by noisy data, necessitating the development of robust data-based methods for controller design that can handle uncertainty. The fundamental premise of such methods is to impose reasonable constraints on the noise.
In the field of data-driven control, two primary approaches are commonly used to model noise in the data acquisition phase, namely zonotopic constraints and quadratic constraints.
The former approach, utilizing data-driven zonotopic reachability analysis [36], has led to the development of robust data-driven predictive controllers as demonstrated in [37, 38]. On the other hand, the latter approach, introduced as a general framework in [27] leveraging the matrix \(S\)-lemma, has been widely employed in designing controllers from data subject to noise modeled by quadratic constraints. Notable applications include robust event-triggered control [29], and distributed control [35]. Furthermore, the feasibility of the output regulator equations, a crucial component for achieving output synchronization using the internal model principle, is compromised by noisy data. To the best of our knowledge, no data-driven methods have been reported for solving the output regulator equations from noisy data.
Motivated by these observations, this paper aims to develop data-driven polytopic controllers for output synchronization of unknown heterogeneous MASs using noisy data obtained offline. The first step involves describing the noise using polytopes, which serves as the basis for a novel data-based polytopic representation of MASs. To address the infeasibility issue of output regulator equations, we seek approximate solutions by tackling a norm minimization problem that incorporates noise using matrix polytopes. Building on the polytopic MAS representation, we derive sufficient conditions for stabilizing feedback controllers in the form of data-dependent semidefinite programs. Further, we consider a new synchronization measure, termed \(\Delta\)-optimal output synchronization, which accounts for regulator equation errors in the presence of noisy data and where \(\Delta\) is proportional to the size of noise. We demonstrate that the proposed data-driven control protocol achieves \(\Delta\)-optimal output synchronization and exhibits robustness against the noise in data collected offline. This is achieved by ensuring the convergence of tracking error to a \(\Delta\)-bounded polytope. Notably, the synchronization recovers exact output synchronization when the noise vanishes.
The contributions of this paper are summarized as follows:
1. We propose a data-based polytopic representation for heterogeneous MASs based on state-input-output data corrupted by bounded noise;
2. We derive approximate solutions to the data-driven output synchronization problem by minimizing the norm of fitting error matrix polytopes; and,
3. We establish the near-optimality and robustness of the data-driven polytopic design of distributed output synchronization controllers against noise.
_Notation._ We adopt the following notation conventions throughout the paper. The set of non-negative integers (real numbers) is denoted by \(\mathbb{N}\) (\(\mathbb{R}\)). The sets of \(n\)-dimensional real vectors and \(n\times m\) real matrices are represented by \(\mathbb{R}^{n}\) and \(\mathbb{R}^{n\times m}\), respectively. For a vector \(x\in\mathbb{R}^{n}\), the notation \(x>0\) indicates that each entry of \(x\) is positive. The symbol \((\cdot)^{T}\) denotes the transpose operation, while \(\otimes\) represents the Kronecker product. The identity matrix of appropriate dimensions is denoted by \(I\), and the zero matrix is denoted by \(\mathbf{0}\). The Frobenius norm of a real matrix \(X\in\mathbb{R}^{n\times m}\) is denoted by \(\|X\|_{F}\). A symmetric matrix \(P\) is said to be positive definite (positive semi-definite) if \(P\succ\mathbf{0}\) (\(P\succeq\mathbf{0}\)). The expression \(\mathrm{diag}_{i=1}^{N}\{a_{i}\}\) represents a diagonal matrix with \(a_{1},a_{2},\ldots,a_{N}\) as its main diagonal elements.
## II Preliminaries and Problem Formulation
### _Graph Theory_
Consider a weighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) that represents the interactions among a set of agents. The graph consists of two components: a nonempty set of nodes \(\mathcal{V}=\{v_{1},\ldots,v_{N}\}\) and a set of edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\). An element in \(\mathcal{E}\), denoted as \((v_{i},v_{j})\), represents a link from node \(v_{j}\) to node \(v_{i}\).
The adjacency matrix \(\mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N}\) is defined such that \(a_{ij}>0\) if \((v_{j},v_{i})\in\mathcal{E}\), and \(a_{ij}=0\) otherwise. The in-degree of node \(v_{i}\) in graph \(\mathcal{G}\) is given by \(d_{i}=\sum_{j=1}^{N}a_{ij}\), and can be represented by the diagonal matrix \(D=\mathrm{diag}_{i=1}^{N}\{d_{i}\}\). The Laplacian matrix \(\mathcal{L}=[l_{ij}]\in\mathbb{R}^{N\times N}\) associated with \(\mathcal{G}\) is defined as \(\mathcal{L}=D-\mathcal{A}\).
The neighbor set of node \(v_{i}\) is denoted as \(\mathcal{N}_{i}=\{j\in\mathcal{V}|(i,j)\in\mathcal{E}\}\). An extended graph is represented by \(\bar{\mathcal{G}}=(\bar{\mathcal{V}},\bar{\mathcal{E}})\), where \(\bar{\mathcal{V}}=\mathcal{V}\cup\{v_{0}\}\) and \(v_{0}\) corresponds to the leader node. The set \(\bar{\mathcal{E}}\) includes all the arcs in \(\mathcal{E}\) as well as the arcs between \(v_{0}\) and the nodes in \(\mathcal{V}\).
A graph \(\bar{\mathcal{G}}\) is said to contain a directed spanning tree if there exists a node, known as the root, from which every other node in \(\bar{\mathcal{V}}\) can be reached through a directed path. The pinning matrix \(G=\mathrm{diag}_{i=1}^{N}\{g_{i}\}\) describes the accessibility of the leader node \(v_{0}\) to the remaining nodes \(v_{i}\in\mathcal{V}\). The pinning gain \(g_{i}>0\) if \((v_{0},v_{i})\in\bar{\mathcal{E}}\), and \(g_{i}=0\) otherwise.
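To make these graph-theoretic objects concrete, the sketch below builds the adjacency matrix, in-degree matrix, Laplacian \(\mathcal{L}=D-\mathcal{A}\), and pinning matrix \(G\) for a hypothetical three-follower path graph with the leader pinned to the first follower; the topology and weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical topology: directed path v1 -> v2 -> v3, leader pinned to v1.
# Entry Adj[i, j] > 0 iff there is an arc from v_j to v_i (unit weights).
Adj = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [0., 1., 0.]])
D = np.diag(Adj.sum(axis=1))   # in-degree matrix D = diag(d_i)
L = D - Adj                    # Laplacian  L = D - Adj
G = np.diag([1., 0., 0.])      # pinning matrix: only follower 1 sees the leader

# The extended graph contains a directed spanning tree rooted at the leader,
# which is reflected in L + G being nonsingular (rows of L sum to zero).
print(np.linalg.matrix_rank(L + G))  # 3
```

For this topology \(L+G\) is lower triangular with unit diagonal, so its invertibility (and hence Assumption 1) can be read off directly.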
### _Output Synchronization of Discrete-time MASs_
Consider a discrete-time heterogeneous leader-following MAS consisting of a leader indexed by \(0\) and \(N\) followers indexed by \(1,2,\ldots,N\). The dynamics of follower \(i\in\{1,2,\ldots,N\}\) is described by
\[\begin{split} x_{i}(t+1)&=\bar{A}_{i}x_{i}(t)+\bar{B }_{i}u_{i}(t)\\ y_{i}(t)&=\bar{C}_{i}x_{i}(t),\quad\forall t\in \mathbb{N}\end{split} \tag{1}\]
where \(x_{i}(t)\in\mathbb{R}^{n_{i}}\) represents the state, \(u_{i}(t)\in\mathbb{R}^{p_{i}}\) denotes the control input, and \(y_{i}(t)\in\mathbb{R}^{q}\) is the measurement output.
In this paper, the true system matrices \(\bar{A}_{i}\in\mathbb{R}^{n_{i}\times n_{i}}\), \(\bar{B}_{i}\in\mathbb{R}^{n_{i}\times p_{i}}\), and \(\bar{C}_{i}\in\mathbb{R}^{q\times n_{i}}\) are assumed unknown.
The dynamics of the leader is given by
\[\begin{split} x_{0}(t+1)&=Sx_{0}(t)\\ y_{0}(t)&=Hx_{0}(t)\end{split} \tag{2}\]
where \(x_{0}(t)\in\mathbb{R}^{n_{0}}\) and \(y_{0}(t)\in\mathbb{R}^{q}\) represent the state and output of the leader, respectively. The leader's system matrices \(S\) and \(H\) are assumed real, constant, and known. The pair \((S,H)\) is assumed to be observable.
The dynamics and state dimensions are allowed to differ across agents, while the output dimensions must be identical for synchronization. The objective is to synchronize the outputs of all followers with that of the leader by implementing a distributed feedback control protocol for the MAS described by (1)-(2), such that \(\lim_{t\to\infty}\|y_{i}(t)-y_{0}(t)\|=0\) holds for all \(i\in\{1,2,\ldots,N\}\). To address this problem, the following assumptions are made.
**Assumption 1** (Communication topology).: _The graph \(\bar{\mathcal{G}}\) contains a directed spanning tree with the leader node as the root._
**Assumption 2** (Stabilizability and detectability).: _The pair \((\bar{A}_{i},\bar{B}_{i})\) is stabilizable, and \((\bar{C}_{i},\bar{A}_{i})\) is detectable for all \(i\in\{1,2,\ldots,N\}\)._
**Assumption 3** (Oscillating leader dynamics).: _The leader dynamics matrix \(S\) has all its eigenvalues on the unit circle, and they are non-repeated._
Regarding these assumptions, we make the following remark.
_Remark 1_.: Assumptions 1-3 are standard for achieving output synchronization in linear heterogeneous MASs and have been utilized in several existing results, such as [15, 34]. It is important to note that the state of the leader is bounded and does not converge to zero under Assumption 3.
Based on these assumptions, we consider a distributed feedback control protocol for each follower in (1) as follows
\[u_{i}(t)=K_{i}[x_{i}(t)-\Pi_{i}\eta_{i}(t)]+\Gamma_{i}\eta_{i}(t) \tag{3}\]
where \(K_{i}\in\mathbb{R}^{p_{i}\times n_{i}}\) is the feedback gain matrix to be designed, and \(\Pi_{i}\in\mathbb{R}^{n_{i}\times n_{0}}\) and \(\Gamma_{i}\in\mathbb{R}^{p_{i}\times n_{0}}\) are the solutions to output regulator equations given by
\[\bar{A}_{i}\Pi_{i}+\bar{B}_{i}\Gamma_{i} =\Pi_{i}S \tag{4}\] \[\bar{C}_{i}\Pi_{i} =H.\]
An observer \(\eta_{i}(t)\in\mathbb{R}^{n_{0}}\) is employed to estimate the state of the leader, governed by the following distributed observer
\[\eta_{i}(t+1) =S\eta_{i}(t)+(1+d_{i}+g_{i})^{-1}F \tag{5}\] \[\quad\times\Big{[}\sum_{j=1}^{N}a_{ij}(\eta_{j}(t)-\eta_{i}(t))+g _{i}(x_{0}(t)-\eta_{i}(t))\Big{]}\]
where \(d_{i}\) and \(g_{i}\) are the in-degree and pinning gain of node \(i\), respectively, and \(F\in\mathbb{R}^{n_{0}\times n_{0}}\) is a gain matrix to be designed.
Next, we define the observer's estimation error \(\delta_{i}(t):=\eta_{i}(t)-x_{0}(t)\). It follows from (2) and (5) that the dynamics of \(\delta_{i}(t)\) satisfies
\[\delta_{i}(t+1) =S\delta_{i}(t)+(1+d_{i}+g_{i})^{-1}F \tag{6}\] \[\quad\times\Big{[}\sum_{j=1}^{N}a_{ij}(\delta_{j}(t)-\delta_{i}( t))-g_{i}\delta_{i}(t)\Big{]}.\]
For the entire system, (6) can be expressed in a compact form as follows
\[\delta(t+1)=\big{[}I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F \big{]}\delta(t) \tag{7}\]
where \(\delta(t)=[\delta_{1}^{\top}(t)\ \delta_{2}^{\top}(t)\ \cdots\ \delta_{N}^{\top}(t)]^{\top}\).
Next, we introduce the virtual tracking error \(\xi_{i}(t):=x_{i}(t)-\Pi_{i}\eta_{i}(t)\). By substituting (1), (3), and (5) into the definition of \(\xi_{i}(t)\), we obtain its dynamics as follows
\[\xi_{i}(t+1)=(\bar{A}_{i}+\bar{B}_{i}K_{i})\xi_{i}(t)+(1+d_{i}+g_{i})^{-1}\Pi_ {i}Fz_{i}(t) \tag{8}\]
where \(z_{i}(t)=\sum_{j=1}^{N}a_{ij}(\delta_{i}(t)-\delta_{j}(t))+g_{i}\delta_{i}(t)\).
When the true system matrices \((\bar{A}_{i},\bar{B}_{i},\bar{C}_{i})\) are known, the next lemma provides necessary and sufficient conditions for achieving output synchronization, see e.g., [15, Theorem 1].
**Lemma 1** ([15, Theorem 1]).: _Suppose Assumptions 1-3 hold. The output synchronization of the MAS (1)-(2) is achieved for all initial conditions under the distributed feedback control strategy (3)-(5), if and only if there exist matrices \(F\) and \(K_{i}\) such that \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) and \(\bar{A}_{i}+\bar{B}_{i}K_{i}\) are Schur stable for all \(i\in\{1,2,\ldots,N\}\)._
### _Pre-collecting Noisy Data_
In order to address the challenge of unknown system matrices for each follower, we propose a distributed data-driven approach. Specifically, we excite each follower with some control inputs, obtaining a set of data denoted by \(\mathbb{D}_{i}=\{(x_{i}(T),u_{i}(T),y_{i}(T)):T\in\{0,1,\ldots,\rho\}\}\) for each follower \(i\). The set \(\mathbb{D}_{i}\) consists of state, input, and output measurements, and is obtained through an open-loop experiment on the following perturbed system
\[x_{i}(T+1) =\bar{A}_{i}x_{i}(T)+\bar{B}_{i}u_{i}(T)+w_{i}(T) \tag{9}\] \[y_{i}(T) =\bar{C}_{i}x_{i}(T)+v_{i}(T)\]
where \(w_{i}(T)\in\mathbb{R}^{n_{i}}\) and \(v_{i}(T)\in\mathbb{R}^{q}\) represent unknown process and measurement noise, respectively. These noises satisfy the following assumption.
**Assumption 4** (Polytopic noise).: _For every \(T\) and \(i\), the process noise \(w_{i}(T)\) and measurement noise \(v_{i}(T)\) belong respectively to polytopic sets \(\mathcal{P}_{w_{i}}\) and \(\mathcal{P}_{v_{i}}\) given by_
\[\mathcal{P}_{w_{i}} =\Big{\{}w_{i}\Big{|}w_{i}=\sum_{k=1}^{\gamma_{w_{i}}}\beta_{w,i}^{(k)} \hat{w}_{i}^{(k)},\beta_{w,i}^{(k)}\geq 0,\sum_{k=1}^{\gamma_{w_{i}}}\beta_{w,i}^{(k)}=1 \Big{\}} \tag{10}\] \[\mathcal{P}_{v_{i}} =\Big{\{}v_{i}\Big{|}v_{i}=\sum_{k=1}^{\gamma_{v_{i}}}\beta_{v,i}^{(k)} \hat{v}_{i}^{(k)},\beta_{v,i}^{(k)}\geq 0,\sum_{k=1}^{\gamma_{v_{i}}}\beta_{v,i}^{(k)}=1 \Big{\}}\]
_where \(\hat{w}_{i}^{(k)}\) and \(\hat{v}_{i}^{(k)}\) represent the \(k\)-th vertex of polytopes \(\mathcal{P}_{w_{i}}\) and \(\mathcal{P}_{v_{i}}\), respectively, and \(\gamma_{w_{i}}\) and \(\gamma_{v_{i}}\) denote the number of vertices._
To store the collected data, we define the following matrices per agent
\[\begin{split} X_{i+}&:=[x_{i}(1)\ x_{i}(2)\ \cdots\ x_{i}(\rho)]\\ X_{i}&:=[x_{i}(0)\ x_{i}(1)\ \cdots\ x_{i}(\rho-1)]\\ U_{i}&:=[u_{i}(0)\ u_{i}(1)\ \cdots\ u_{i}(\rho-1)]\\ Y_{i}&:=[y_{i}(0)\ y_{i}(1)\ \cdots\ y_{i}(\rho-1)]\,.\end{split} \tag{11}\]
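The offline data-collection step can be sketched as follows: excite one follower of the perturbed system (9) in open loop and stack the noisy samples into the matrices \((X_{i+},X_{i},U_{i},Y_{i})\). The system matrices, noise bounds, and horizon below are hypothetical values for illustration.

```python
import numpy as np

# Open-loop experiment on x(T+1) = A x(T) + B u(T) + w(T),  y(T) = C x(T) + v(T).
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0., 0.8]])
B_true = np.array([[0.], [1.]])
C_true = np.array([[1., 0.]])
rho, wbar, vbar = 20, 0.01, 0.01         # horizon and box-noise bounds (assumed)

x = np.zeros((2, rho + 1))
u = rng.uniform(-1, 1, (1, rho))          # random exciting input
y = np.zeros((1, rho))
for T in range(rho):
    w = rng.uniform(-wbar, wbar, 2)       # w(T) in a box polytope P_w
    v = rng.uniform(-vbar, vbar, 1)       # v(T) in a box polytope P_v
    y[:, T] = C_true @ x[:, T] + v
    x[:, T + 1] = A_true @ x[:, T] + B_true @ u[:, T] + w

X_plus, X, U, Y = x[:, 1:], x[:, :-1], u, y   # data matrices (X_{i+}, X_i, U_i, Y_i)
```

By construction every residual column of \(X_{i+}-\bar{A}_{i}X_{i}-\bar{B}_{i}U_{i}\) lies in the assumed box \(\mathcal{P}_{w_i}\).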
The unknown process noise of length \(\rho\) is denoted as \(\{w_{i}(T)\}_{T=0}^{\rho-1}\). Consequently, the stacked matrix per agent, \(W_{i}=[w_{i}(0)\ w_{i}(1)\ \cdots\ w_{i}(\rho-1)]\), belongs to the matrix polytope \(\mathcal{M}_{W_{i}}\), given by
\[\mathcal{M}_{W_{i}}=\Big{\{}W_{i}\Big{|}W_{i}=\sum_{k=1}^{\gamma_{w_{i}}\rho} \beta_{W,i}^{(k)}\hat{W}_{i}^{(k)},\beta_{W,i}^{(k)}\geq 0,\sum_{k=1}^{\gamma_{w_{i}}\rho} \beta_{W,i}^{(k)}=1\Big{\}} \tag{14}\]
which results from the concatenation of multiple disturbance polytopes \(\mathcal{P}_{w_{i}}\) as follows
\[\hat{W}_{i}^{(1+(k-1)\rho)} =\big{[}\hat{w}_{i}^{(k)}\ \ \ \ \ \mathbf{0}_{n_{i}\times(\rho-1)}\big{]} \tag{15}\] \[\hat{W}_{i}^{(m+(k-1)\rho)} =\big{[}\mathbf{0}_{n_{i}\times(m-1)}\ \ \ \hat{w}_{i}^{(k)}\ \ \ \ \mathbf{0}_{n_{i}\times(\rho-m)}\big{]}\] (16) \[\hat{W}_{i}^{(\rho+(k-1)\rho)} =\big{[}\mathbf{0}_{n_{i}\times(\rho-1)}\ \ \ \hat{w}_{i}^{(k)}\big{]} \tag{17}\]
for each \(k\in\{1,2,\ldots,\gamma_{w_{i}}\}\), \(m\in\{2,3,\ldots,\rho-1\}\), and \(i\in\{1,2,\ldots,N\}\).
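The vertex construction above lifts the \(\gamma_{w_i}\) vertex vectors of the per-step polytope \(\mathcal{P}_{w_i}\) into \(\gamma_{w_i}\rho\) vertex matrices, each placing one vertex \(\hat{w}_{i}^{(k)}\) in a single column and zeros elsewhere. The sketch below instantiates this for an assumed two-dimensional box noise (so \(\gamma_{w_i}=4\)) and a short horizon; all numbers are illustrative.

```python
import numpy as np

# Enumerate the vertex matrices W_hat^(m + (k-1) rho) of the matrix polytope M_W.
n_i, rho, wbar = 2, 5, 0.01
w_hat = [np.array([sx * wbar, sy * wbar])     # 4 vertices of the box P_w
         for sx in (-1., 1.) for sy in (-1., 1.)]

W_vertices = []
for wk in w_hat:                              # k = 1, ..., gamma_w
    for m in range(rho):                      # column position m+1
        Wk = np.zeros((n_i, rho))
        Wk[:, m] = wk                         # w_hat^(k) in one column, zeros elsewhere
        W_vertices.append(Wk)

print(len(W_vertices))  # gamma_w * rho = 20
```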
Similarly, upon denoting the sequence of measurement noise \(\{v_{i}(T)\}_{T=0}^{\rho-1}\) as \(V_{i}=[v_{i}(0)\ v_{i}(1)\ \cdots\ v_{i}(\rho-1)]\), we can deduce that \(V_{i}\) belongs to the following matrix polytope
\[\mathcal{M}_{V_{i}}=\Big{\{}V_{i}\Big{|}V_{i}=\sum_{k=1}^{\gamma_{v_{i}}\rho} \beta_{V,i}^{(k)}\hat{V}_{i}^{(k)},\beta_{V,i}^{(k)}\geq 0,\sum_{k=1}^{ \gamma_{v_{i}}\rho}\beta_{V,i}^{(k)}=1\Big{\}} \tag{12}\]
with the vertices \(\hat{V}_{i}^{(k)}\) defined analogously to the vertices \(\hat{W}_{i}^{(k)}\) of \(\mathcal{M}_{W_{i}}\).
### _Problem Statement_
Having introduced the output synchronization of discrete-time heterogeneous MASs and the pre-collected data, we now formally state the problem of interest.
**Problem 1** (Data-driven output synchronization).: _Given the state-input-output measurements \(\mathbb{D}_{i}\) for each follower \(i\), and under Assumptions 1-3, the objective is to design a distributed controller of the form (3)-(5) such that approximate output synchronization of the heterogeneous MAS (1)-(2) is achieved for any initial states._
The main challenge in addressing Problem 1 lies in solving the output regulator equations in (4), designing the controller gain \(K_{i}\), and performing synchronization analysis without knowledge of the true system matrices, but using only available data. To tackle this challenge, we propose a data-driven polytopic reachability analysis technique in this paper, inspired by the zonotopic reachability analysis presented in [36].
## III Distributed Data-driven Output Synchronization of Heterogeneous MASs
This section addresses the challenging problem of output synchronization in the unknown heterogeneous MAS (1)-(2). Due to the presence of noisy data, achieving asymptotic output synchronization for the unknown heterogeneous MAS is impractical, unlike the model-based scenario depicted in Lemma 1. Instead, we propose achieving \(\Delta\)-optimal output synchronization by ensuring the stability of reachable error trajectories, which will be formalized in the following sections.
Our approach begins by introducing a data-based polytopic representation that characterizes an open-loop MAS using noisy data \(\mathbb{D}_{i}\). Next, we address the output regulator equations by solving a data-dependent norm minimization problem. Subsequently, a polytopic controller is designed directly from the data. Leveraging this controller and the approximate solution to the output regulator equations, we propose a data-driven output synchronization algorithm and prove its ultimate boundedness property, effectively addressing Problem 1. See Fig. 1 for an illustration of the leader-following heterogeneous MAS architecture.
### _Data-based Polytopic Representation of MASs_
We first impose the following rank condition on the richness of the data in \(\mathbb{D}_{i}\).
**Assumption 5**.: _The data matrix \(\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}\) has full row rank for each \(i\in\{1,2,\ldots,N\}\)._
The verification of Assumption 5 can be easily performed for a given dataset \(\mathbb{D}_{i}\). Note that this rank condition is satisfied if the input \(u_{i}(T)\) is persistently exciting of order \(n_{i}+1\)[22]. We now proceed with representing the heterogeneous MAS using data.
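Checking Assumption 5 amounts to a rank computation on the stacked data matrix. The sketch below verifies it for data generated by a hypothetical two-state follower driven by a random input (for which the condition holds generically when \((\bar{A}_i,\bar{B}_i)\) is controllable).

```python
import numpy as np

# Verify that [U; X] has full row rank n_i + p_i for a collected dataset.
rng = np.random.default_rng(1)
n_i, p_i, rho = 2, 1, 10
A_true = np.array([[0.9, 0.2], [0., 0.8]])
B_true = np.array([[0.], [1.]])

x = np.zeros((n_i, rho + 1))
u = rng.uniform(-1, 1, (p_i, rho))        # random exciting input
for T in range(rho):
    x[:, T + 1] = A_true @ x[:, T] + B_true @ u[:, T]

UX = np.vstack([u, x[:, :-1]])            # stacked matrix [U; X]
print(np.linalg.matrix_rank(UX) == n_i + p_i)  # True: Assumption 5 holds
```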
Since the actual realization of noise \((w_{i}(T),v_{i}(T))\) is unknown, there generally exist many systems \((A_{i},B_{i},C_{i})\) that are consistent with the data \((X_{i},X_{i+},U_{i},Y_{i})\). We denote this set by \(\Sigma_{i}\) as follows:
\[\begin{split}\Sigma_{i}:=\big{\{}&(A_{i},\,B_{i},\,C_{ i})|X_{i+}=A_{i}X_{i}+B_{i}U_{i}+W_{i},\\ & Y_{i}=C_{i}X_{i}+V_{i},W_{i}\in\mathcal{M}_{W_{i}},V_{i}\in \mathcal{M}_{V_{i}}\big{\}}.\end{split} \tag{13}\]
As mentioned in Sec. II-C, we assume knowledge of the polytopes \(\mathcal{P}_{w_{i}}\) and \(\mathcal{P}_{v_{i}}\), which bound the noise \(w_{i}(T)\) and \(v_{i}(T)\), along with their associated matrix polytopes \(\mathcal{M}_{W_{i}}\) and \(\mathcal{M}_{V_{i}}\), respectively. Our objective now is to compute a set \(\mathcal{M}_{i}\) that provides an overapproximation of all possible \((A_{i},B_{i},C_{i})\) consistent with the state-input-output data and given noise bounds. To achieve this, we construct a data-based polytopic representation of \((A_{i},B_{i},C_{i})\) in the following lemma, inspired by [37]. This representation yields a matrix polytope \(\mathcal{M}_{i}\supseteq\Sigma_{i}\).
**Lemma 2** (Data-based polytopic representation of MASs).: _Suppose Assumptions 4-5 hold. For each \(i\in\{1,2,\ldots,N\}\), given state-input-output data \((X_{i},X_{i+},U_{i},Y_{i})\) of the MAS (9), the matrix polytope is defined as:_
\[\begin{split}\mathcal{M}_{i}=\Big{\{}&(\mathcal{M}_{Z _{i}},\mathcal{M}_{C_{i}})\Big{|}\mathcal{M}_{Z_{i}}=(X_{i+}-\mathcal{M}_{W_{i}} )\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger},\\ &\mathcal{M}_{C_{i}}=(Y_{i}-\mathcal{M}_{V_{i}})X_{i}^{\dagger} \Big{\}}.\end{split} \tag{14}\]
_This matrix polytope characterizes all matrices \((A_{i},B_{i},C_{i})\) consistent with the data \((X_{i},X_{i+},U_{i},Y_{i})\) and noise bounds, i.e., \(\Sigma_{i}\subseteq\mathcal{M}_{i}\)._
Proof.: Consider any \((A_{i},B_{i},C_{i})\in\Sigma_{i}\). From (13), we can find \(W_{i}\in\mathcal{M}_{W_{i}}\) and \(V_{i}\in\mathcal{M}_{V_{i}}\) such that:
\[\begin{split} A_{i}X_{i}+B_{i}U_{i}&=X_{i+}-W_{i},\\ C_{i}X_{i}&=Y_{i}-V_{i}.\end{split} \tag{15}\]
Fig. 1: Distributed data-driven output synchronization.
By using (10), we can express every \(W_{i}\in\mathcal{M}_{W_{i}}\) as \(W_{i}=\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}\hat{W}_{i}^{(k)}\) with coefficients \(\{\beta_{W,i}^{(k)}\}\). Furthermore, multiplying \(\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}\) from the right on both sides of the first equation in (15) yields:
\[Z_{i}:=[B_{i}\ A_{i}]=\left(X_{i+}-\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W_{i} }^{(k)}\hat{W}_{i}^{(k)}\right)\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}. \tag{16}\]
Similarly, it can be easily deduced that:
\[C_{i}=\Big{(}Y_{i}-\sum_{k=1}^{\gamma_{v_{i}}\rho}\beta_{V,i}^{(k)}\hat{V}_{i} ^{(k)}\Big{)}X_{i}^{\dagger}. \tag{17}\]
Hence, for every \((A_{i},B_{i},C_{i})\in\Sigma_{i}\), there exist coefficients \(\beta_{W,i}^{(k)}\), \(k=1,2,\ldots,\gamma_{w_{i}}\rho\), and \(\beta_{V,i}^{(k)}\), \(k=1,2,\ldots,\gamma_{v_{i}}\rho\), such that \(\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}=1\) and \(\sum_{k=1}^{\gamma_{v_{i}}\rho}\beta_{V,i}^{(k)}=1\), satisfying (16) and (17). Therefore, for every \((A_{i},B_{i},C_{i})\in\Sigma_{i}\), it also holds that \((A_{i},B_{i},C_{i})\in\mathcal{M}_{i}\) for \(i=1,2,\ldots,N\), with \(\mathcal{M}_{i}\) as defined in Lemma 2, which concludes the proof.
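The pseudoinverse formula (16) can be exercised numerically: evaluating it at the center of the noise polytope (here \(W_{i}=0\), since a symmetric box noise is assumed) gives one candidate \([B_{i}\ A_{i}]\) inside \(\mathcal{M}_{Z_i}\), which is close to the true matrices when the noise is small. The system and noise level below are hypothetical.

```python
import numpy as np

# Candidate [B A] = (X_+ - W) [U; X]^dagger evaluated at the polytope center W = 0.
rng = np.random.default_rng(2)
A_true = np.array([[0.9, 0.2], [0., 0.8]])
B_true = np.array([[0.], [1.]])
rho, wbar = 30, 1e-3

x = np.zeros((2, rho + 1))
u = rng.uniform(-1, 1, (1, rho))
for T in range(rho):
    w = rng.uniform(-wbar, wbar, 2)
    x[:, T + 1] = A_true @ x[:, T] + B_true @ u[:, T] + w

X_plus, X, U = x[:, 1:], x[:, :-1], u
Z = X_plus @ np.linalg.pinv(np.vstack([U, X]))   # candidate [B_i  A_i]
B_hat, A_hat = Z[:, :1], Z[:, 1:]
print(np.linalg.norm(A_hat - A_true))            # small for small noise bound
```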
It is worth noting that a prevalent approach in prior literature for modeling unknown, yet bounded noise is to use the energy form, typically in terms of a quadratic full-block bound; see, e.g., [22, 24, 27, 29, 35]. In contrast, we propose a novel approach by describing unknown noise using polytopes. We then develop a data-based polytopic representation of MASs in Lemma 2, which allows us to characterize all possible system matrices. This representation paves the way for addressing Problem 1 in the subsequent analysis. Notably, compared with the quadratic matrix inequality-based representation in [27, 29, 35], our proposed polytopic representation maintains the simplicity and compactness, while providing a more precise characterization of the system and resulting in less conservative data-based stability conditions.
### _Solution to Output Regulation Equations with Noisy Data_
In this subsection, we introduce a data-driven approach to solve the output regulator equations by minimizing the norm of noise-matrix polytopes. This approach yields approximate solutions \((\Pi_{i},\Gamma_{i})\) for each follower directly from noisy data, without requiring exact knowledge of the true system matrices \((\bar{A}_{i},\bar{B}_{i},\bar{C}_{i})\).
We point out that the presence of \(w_{i}(T)\) and \(v_{i}(T)\) in data \(\mathbb{D}_{i}\) prevents us from obtaining accurate solutions to the original output regulator equations (4). Thus, we define \(\Delta_{i1}\) and \(\Delta_{i2}\) as the errors in the regulator equations caused by the noise-corrupted data. As an intermediate step, we modify the output regulator equations (4) for any \((A_{i},B_{i},C_{i})\in\mathcal{M}_{i}\) as follows:
\[\begin{split}\Delta_{i1}&=A_{i}\Pi_{i}+B_{i}\Gamma_{ i}-\Pi_{i}S,\\ \Delta_{i2}&=C_{i}\Pi_{i}-H.\end{split} \tag{18}\]
Next, we make a crucial observation that directly finding the solutions \(\Pi_{i}\) and \(\Gamma_{i}\) from (18) is infeasible due to the presence of unknown terms \(\Delta_{i1}\) and \(\Delta_{i2}\). Accordingly, we formulate the problem of determining the gains \(\Pi_{i}\) and \(\Gamma_{i}\) for any \((A_{i},B_{i},C_{i})\in\mathcal{M}_{i}\) as an optimization problem that minimizes \(\Delta_{i1}\) and \(\Delta_{i2}\) with respect to a chosen norm. According to the data-based polytopic representation in Lemma 2, we define the following optimization problem:
\[\begin{split}\min_{\Pi_{i},\Gamma_{i}}\ \Big{\|}\mathcal{M}_{Z_{i}} \begin{bmatrix}\Gamma_{i}\\ \Pi_{i}\end{bmatrix}-\Pi_{i}S\Big{\|}_{F}+\|\mathcal{M}_{C_{i}}\Pi_{i}-H\|_{F}. \end{split} \tag{19}\]
Overall, Problem (19) is convex and can be efficiently solved using off-the-shelf solvers. Let \((\Pi_{i}^{*},\Gamma_{i}^{*})\) denote any optimal solution of (19) and \((\Delta_{i1}^{*},\Delta_{i2}^{*})\) denote the resulting error of output regulation equations in (18).
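To make (19) concrete, note that in the noise-free, single-model special case it reduces to a linear least-squares problem via column-major vectorization: \(\operatorname{vec}(A\Pi+B\Gamma-\Pi S)=(I\otimes A-S^{\top}\otimes I)\operatorname{vec}(\Pi)+(I\otimes B)\operatorname{vec}(\Gamma)\) and \(\operatorname{vec}(C\Pi-H)=(I\otimes C)\operatorname{vec}(\Pi)\). The numpy-only sketch below uses follower 1's true matrices from the example section purely for illustration; a full implementation of (19) would instead optimize over all polytope vertices with a convex solver:

```python
import numpy as np

# Leader (exosystem) and follower-1 matrices from the example section
S = np.array([[0., 1.], [-1., 0.]]); H = np.array([[1., 0.]])
A = np.array([[2.]]); B = np.array([[3.]]); C = np.array([[1.]])
n, m, q = A.shape[0], B.shape[1], S.shape[0]

# Stack vec(A Pi + B Gamma - Pi S) = 0 and vec(C Pi - H) = 0 as one
# least-squares problem in [vec(Pi); vec(Gamma)] (column-major vec).
top = np.hstack((np.kron(np.eye(q), A) - np.kron(S.T, np.eye(n)),
                 np.kron(np.eye(q), B)))
bot = np.hstack((np.kron(np.eye(q), C), np.zeros((H.size, m * q))))
M   = np.vstack((top, bot))
rhs = np.concatenate((np.zeros(n * q), H.flatten('F')))

sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
Pi    = sol[:n * q].reshape((n, q), order='F')   # -> [1, 0]
Gamma = sol[n * q:].reshape((m, q), order='F')   # -> [-2/3, 1/3]
```

The recovered \((\Pi,\Gamma)\) matches \((\Pi_{1}^{*},\Gamma_{1}^{*})\) reported in Section IV up to the noise-induced perturbation.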
The following result provides upper bounds on the unknown regulator equation errors \(\Delta_{i1}^{*}\) and \(\Delta_{i2}^{*}\), which serves as a solid basis for the subsequent analysis and design of data-driven output synchronization. For simplicity, we omit the subscript \(F\) in the sequel, using \(\|\cdot\|\) for \(\|\cdot\|_{F}\).
**Lemma 3** (Bounded regulator equation errors).: _Consider the MAS (1)-(2) with the relaxed output regulator equations (18). Suppose that Assumptions 1-5 hold. For any \((A_{i},B_{i},C_{i})\in\mathcal{M}_{i}\) and \(i\in\{1,2,\ldots,N\}\), there exist two bounded matrix polytopes \(\mathcal{M}_{\Delta_{i1}}\) and \(\mathcal{M}_{\Delta_{i2}}\) such that the regulator equation errors \(\Delta_{i1}\in\mathcal{M}_{\Delta_{i1}}\) and \(\Delta_{i2}\in\mathcal{M}_{\Delta_{i2}}\)._
Proof.: It follows from the data-based polytopic representation in Lemma 2 that the true system matrices \((\bar{A}_{i},\bar{B}_{i},\bar{C}_{i})\in\mathcal{M}_{i}\) can be expressed by a unique set of \(\bar{\beta}_{W,i}^{(k)}\) and \(\bar{\beta}_{V,i}^{(k)}\), respectively, as follows:
\[\begin{split}[\bar{B}_{i}\ \bar{A}_{i}]&=\Big{(}X_{i+}-\sum_{k=1}^{\gamma_{w_{i}}\rho}\bar{\beta}_{W,i}^{(k)}\hat{W}_{i}^{(k)}\Big{)}\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}\\ \bar{C}_{i}&=\Big{(}Y_{i}-\sum_{k=1}^{\gamma_{v_{i}}\rho}\bar{\beta}_{V,i}^{(k)}\hat{V}_{i}^{(k)}\Big{)}X_{i}^{\dagger}.\end{split} \tag{20}\]
When \(\bar{\beta}_{W,i}^{(k)}\) and \(\bar{\beta}_{V,i}^{(k)}\) are known, we denote the solution of (4) associated with \((\bar{A}_{i},\bar{B}_{i},\bar{C}_{i})\) by \((\Pi_{i}^{*},\Gamma_{i}^{*})\), which also serves as a candidate solution for the optimization problem (19). Nonetheless, it may not necessarily yield the optimal objective value. By utilizing (4), (14), and (20), it can be shown that the first term of (19) satisfies
\[\begin{split}&\Big{\|}\mathcal{M}_{Z_{i}}\begin{bmatrix}\Gamma_{i}^{*}\\ \Pi_{i}^{*}\end{bmatrix}-\Pi_{i}^{*}S\Big{\|}\\ &=\Big{\|}(X_{i+}-\mathcal{M}_{W_{i}})\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}\begin{bmatrix}\Gamma_{i}^{*}\\ \Pi_{i}^{*}\end{bmatrix}-[\bar{B}_{i}\ \bar{A}_{i}]\begin{bmatrix}\Gamma_{i}^{*}\\ \Pi_{i}^{*}\end{bmatrix}\Big{\|}\\ &=\Big{\|}\sum_{k=1}^{\gamma_{w_{i}}\rho}\big{[}(\bar{\beta}_{W,i}^{(k)}-\beta_{W,i}^{(k)})\hat{W}_{i}^{(k)}\big{]}\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}\begin{bmatrix}\Gamma_{i}^{*}\\ \Pi_{i}^{*}\end{bmatrix}\Big{\|}\\ &\leq 2\gamma_{w_{i}}\rho\tilde{W}_{i}\Big{\|}\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}\begin{bmatrix}\Gamma_{i}^{*}\\ \Pi_{i}^{*}\end{bmatrix}\Big{\|}\end{split} \tag{21}\]
with \(\tilde{W}_{i}:=\max_{k\in[1,\gamma_{w_{i}}\rho]}\|\hat{W}_{i}^{(k)}\|\).
Similarly, for the second term of (19), we have
\[\begin{split}\|\mathcal{M}_{C_{i}}\Pi_{i}^{*}-H\|&=\Big{\|}\sum_{k=1}^{\gamma_{v_{i}}\rho}\big{[}(\bar{\beta}_{V,i}^{(k)}-\beta_{V,i}^{(k)})\hat{V}_{i}^{(k)}\big{]}X_{i}^{\dagger}\Pi_{i}^{*}\Big{\|}\\ &\leq 2\gamma_{v_{i}}\rho\tilde{V}_{i}\|X_{i}^{\dagger}\Pi_{i}^{*}\|\end{split} \tag{22}\]
with \(\tilde{V}_{i}:=\max_{k\in[1,\gamma_{v_{i}}\rho]}\|\hat{V}_{i}^{(k)}\|\).
By substituting the optimal solution \((\Pi_{i}^{*},\Gamma_{i}^{*})\) of problem (19) into (18), it follows from (21) and (22) that the regulator equation errors \(\Delta_{i1}^{*}\) and \(\Delta_{i2}^{*}\) adhere to
\[\Delta_{i1}^{*}\in\sum_{k=1}^{\gamma_{w_{i}}\rho}\left[(\bar{\beta}_{W,i}^{(k)}-\beta_{W,i}^{(k)})\hat{W}_{i}^{(k)}\right]\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}^{\dagger}\begin{bmatrix}\Gamma_{i}^{*}\\ \Pi_{i}^{*}\end{bmatrix}\triangleq\mathcal{M}_{\Delta_{i1}} \tag{23a}\] \[\Delta_{i2}^{*}\in\sum_{k=1}^{\gamma_{v_{i}}\rho}\left[(\bar{\beta}_{V,i}^{(k)}-\beta_{V,i}^{(k)})\hat{V}_{i}^{(k)}\right]X_{i}^{\dagger}\Pi_{i}^{*}\triangleq\mathcal{M}_{\Delta_{i2}} \tag{23b}\]
where \(\mathcal{M}_{\Delta_{i1}}\) and \(\mathcal{M}_{\Delta_{i2}}\) are bounded polytopes, implying the boundedness of the regulator equation errors \(\Delta_{i1}^{*}\) and \(\Delta_{i2}^{*}\).
It is worth emphasizing that (19) provides a data-driven method to compute the gain matrices \(\Pi_{i}\) and \(\Gamma_{i}\) independently of system matrices. Furthermore, Lemma 3 ensures the practicability of the obtained \(\Pi_{i}\) and \(\Gamma_{i}\) for the subsequent analysis and design of data-driven output synchronization by constraining the regulator equation errors \(\Delta_{i1}\) and \(\Delta_{i2}\) within the bounded matrix polytopes \(\mathcal{M}_{\Delta_{i1}}\) and \(\mathcal{M}_{\Delta_{i2}}\).
_Remark 2_ (Relationship between noise and output regulator equations).: In fact, problem (19) addresses a variant of the output regulation problem that incorporates unknown noise. This implies that the solutions \(\Pi_{i}\) and \(\Gamma_{i}\) of (19) satisfy the relaxed equations (18) when the noise \(w_{i}(T)\) and \(v_{i}(T)\) is not identically equal to zero. According to (23), the size of the matrix polytopes \(\mathcal{M}_{\Delta_{i1}}\) and \(\mathcal{M}_{\Delta_{i2}}\) depends on the upper bound of noise, indicating that \(\Delta_{i1}\) and \(\Delta_{i2}\) increase with the noise levels \(w_{i}(T)\) and \(v_{i}(T)\). However, in the noise-free case where \(w_{i}(T)=0\) and \(v_{i}(T)=0\), the solution of (19) achieves zero cost (i.e., \(\Delta_{i1}=0\) and \(\Delta_{i2}=0\)) and satisfies (4).
### _Controller Design and Output Synchronization Analysis_
In the following, we focus on Problem 1, which involves learning stabilizing controllers \(K_{i}\) for each follower and analyzing the output synchronization of the MAS (1)-(2) under the proposed distributed control protocol ((3), (5), and (18)). To begin with, we reconstruct the dynamics of the virtual tracking error \(\xi_{i}(t)\) as follows:
\[\xi_{i}(t+1)=(\bar{A}_{i}+\bar{B}_{i}K_{i})\xi_{i}(t)+\tilde{\delta}_{i}(t) \tag{24}\]
where we utilize (1), (3), (5), (18), and define \(\tilde{\delta}_{i}(t):=\Delta_{i1}(x_{0}(t)+\delta_{i}(t))+\Pi_{i}(1+d_{i}+g_{ i})^{-1}Fz_{i}(t)\).
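For intuition, recursion (24) with a Schur-stable \(\bar{A}_{i}+\bar{B}_{i}K_{i}\) and a bounded \(\tilde{\delta}_{i}(t)\) drives \(\xi_{i}(t)\) into a small neighborhood of the origin. A minimal simulation sketch follows; the closed-loop matrix and disturbance bound are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
Acl = np.array([[0.5, 0.1], [0.0, 0.4]])      # stand-in for Abar + Bbar K (Schur stable)
xi = np.array([1.0, -1.0])                    # initial virtual tracking error
norms = []
for t in range(200):
    delta_t = 0.02 * (2 * rng.random(2) - 1)  # bounded aggregate disturbance delta~_i(t)
    xi = Acl @ xi + delta_t                    # recursion (24)
    norms.append(np.linalg.norm(xi, np.inf))

# After the transient, xi stays inside a small invariant neighborhood
assert max(norms[100:]) < 0.1
```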
We observe that \(\xi_{i}(t)\) belongs to a well-defined polytope \(\mathcal{P}_{\xi_{i},t}\), i.e., \(\xi_{i}(t)\in\mathcal{P}_{\xi_{i},t}\) for \(t\in\mathbb{N}\) and \(i\in\{1,2,\ldots,N\}\). The next lemma provides the definition and boundedness guarantee of the polytope \(\mathcal{P}_{\xi_{i},t}\).
**Lemma 4** (Virtual tracking error polytope).: _Under Assumptions 1-5, let \(\mathcal{P}_{\xi_{i},0}=\xi_{i}(0)\). At time \(t\in\mathbb{N}\), the reachable set of the virtual tracking error is given by:_
\[\mathcal{P}_{\xi_{i},t}:=(\bar{A}_{i}+\bar{B}_{i}K_{i})^{t}\xi_{i}(0)+\sum_{ \ell=0}^{t-1}(\bar{A}_{i}+\bar{B}_{i}K_{i})^{\ell}\mathcal{P}_{\delta_{i},t- \ell-1} \tag{25}\]
_where \(\mathcal{P}_{\tilde{\delta}_{i},t}:=\Delta_{i1}^{*}\mathcal{P}_{x_{0}}+\Delta_{i1}^{*}\delta_{i}(t)+\Pi_{i}(1+d_{i}+g_{i})^{-1}Fz_{i}(t)\), and \(\mathcal{P}_{x_{0}}\) represents a well-defined polytope. Moreover, if there exist matrices \(K_{i}\) and \(F\) such that \(\bar{A}_{i}+\bar{B}_{i}K_{i}\) and \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) are Schur stable, then \(\mathcal{P}_{\xi_{i},t}\) is a uniformly bounded set for any \(t\in\mathbb{N}\) and \(i\in\{1,2,\ldots,N\}\)._
Proof.: First, we invoke Assumption 3 to guarantee the boundedness of the leader's state \(x_{0}(t)\) for any \(t\in\mathbb{N}\). By constructing a well-defined polytope \(\mathcal{P}_{x_{0}}\), we ensure that \(x_{0}(t)\in\mathcal{P}_{x_{0}}\) holds for all \(t\in\mathbb{N}\). Moreover, considering the dynamics of the observer estimation error \(\delta_{i}(t)\) in (7), we observe that if \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) is Schur stable, the observer convergence is assured, i.e., \(\lim_{t\rightarrow\infty}\delta_{i}(t)=0\). As a result, \(\delta_{i}(t)\) and, consequently, \(z_{i}(t)\) defined in (8), are uniformly bounded. Building upon these findings, we deduce from (24) that \(\tilde{\delta}_{i}(t)\) forms a uniformly bounded sequence. Thus, there exists a bounded polytope \(\mathcal{P}_{\tilde{\delta}_{i},t}\) defined as \(\mathcal{P}_{\tilde{\delta}_{i},t}:=\Delta_{i1}^{*}\mathcal{P}_{x_{0}}+\Delta_{i1}^{*}\delta_{i}(t)+\Pi_{i}(1+d_{i}+g_{i})^{-1}Fz_{i}(t)\) such that \(\tilde{\delta}_{i}(t)\in\mathcal{P}_{\tilde{\delta}_{i},t}\) holds for \(t\in\mathbb{N}\).
Furthermore, due to the Schur stability of \(\bar{A}_{i}+\bar{B}_{i}K_{i}\) and the boundedness of \(\xi_{i}(0)\), there exists a uniformly bounded set, defined as in (25), denoted by \(\mathcal{P}_{\xi_{i},t}\). This ensures that \(\xi_{i}(t)\in\mathcal{P}_{\xi_{i},t}\) for any \(t\in\mathbb{N}\) and \(i\in\{1,2,\ldots,N\}\), thereby completing the proof.
_Remark 3_ (Design of a stabilizing matrix \(F\)).: We would like to emphasize that there exist techniques for computing a matrix \(F\) that renders \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) Schur stable. An effective approach is to solve discrete-time Riccati equations, as described in [39], which yields a matrix \(F\) satisfying the desired stability condition.
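As a sanity check rather than a design procedure, the Schur-stability condition on \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) can be verified numerically for a candidate \(F\). The two-follower topology and the gain \(F=S\) below are illustrative guesses, not the paper's design:

```python
import numpy as np

S = np.array([[0., 1.], [-1., 0.]])      # leader dynamics (marginally stable)

# Illustrative 2-follower topology: follower 1 is pinned to the leader,
# and followers 1 and 2 communicate with each other (all weights 1).
L = np.array([[1., -1.], [-1., 1.]])     # Laplacian of the follower graph
G = np.diag([1., 0.])                     # pinning gains g_i
D = np.diag([1., 1.])                     # in-degrees d_i
N = 2

F = S.copy()                              # candidate observer gain (a guess)
M0 = np.linalg.inv(np.eye(N) + D + G) @ (L + G)
A_obs = np.kron(np.eye(N), S) - np.kron(M0, F)

rho = max(abs(np.linalg.eigvals(A_obs)))
assert rho < 1.0                          # Schur stable -> observers converge
print(f"spectral radius = {rho:.4f}")     # prints "spectral radius = 0.8333"
```

Here \(F=S\) makes the error matrix equal \((I_{N}-M_{0})\otimes S\), so its spectral radius is that of \(I_{N}-M_{0}\), which is below one for this topology.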
At this stage, two challenges need to be addressed. First, the polytope \(\mathcal{P}_{\xi_{i},t}\) derived in Lemma 4 cannot be directly applied in practical scenarios due to the unknown true system matrices \(\bar{A}_{i}\) and \(\bar{B}_{i}\). Second, it is crucial to ensure the stability of \(\bar{A}_{i}+\bar{B}_{i}K_{i}\). To tackle the former, we aim to construct a conservative approximation \(\bar{\mathcal{P}}_{\xi_{i},t}\) of \(\mathcal{P}_{\xi_{i},t}\) such that \(\mathcal{P}_{\xi_{i},t}\subseteq\bar{\mathcal{P}}_{\xi_{i},t}\). The latter issue can be addressed by designing a controller gain \(K_{i}\) that stabilizes all \((A_{i},B_{i})\) in \(\Sigma_{i}\).
As previously mentioned, we proceed to construct an approximation \(\bar{\mathcal{P}}_{\xi_{i},t}\) of agent \(i\). First, we observe from Lemma 4 that
\[\mathcal{P}_{\xi_{i},t}=(\bar{A}_{i}+\bar{B}_{i}K_{i})\mathcal{P}_{\xi_{i},t-1}+ \mathcal{P}_{\tilde{\delta}_{i},t}. \tag{26}\]
To this end, we define the matrix polytope as
\[\mathcal{M}_{Z_{i}}^{K}:=\mathcal{M}_{Z_{i}}\begin{bmatrix}K_{i}\\ I\end{bmatrix}. \tag{27}\]
The approximation \(\bar{\mathcal{P}}_{\xi_{i},t}\) is obtained through the following lemma.
**Lemma 5** (Virtual tracking error polytope approximation).: _For \(i\in\{1,2,\ldots,N\}\), let \(\bar{\mathcal{P}}_{\xi_{i},t}\) be defined as_
\[\bar{\mathcal{P}}_{\xi_{i},t}:=\mathcal{M}_{Z_{i}}^{K}\bar{\mathcal{P}}_{\xi_{ i},t-1}+\mathcal{P}_{\tilde{\delta}_{i},t} \tag{28}\]
_with \(\bar{\mathcal{P}}_{\xi_{i},0}=\xi_{i}(0)\). Then, it holds that \(\mathcal{P}_{\xi_{i},t}\subseteq\bar{\mathcal{P}}_{\xi_{i},t}\) for all \(t\in\mathbb{N}\)._

Proof.: The claim follows by induction on \(t\). The base case \(t=0\) holds since \(\bar{\mathcal{P}}_{\xi_{i},0}=\mathcal{P}_{\xi_{i},0}=\xi_{i}(0)\). For the induction step, note that
\[\mathcal{P}_{\xi_{i},t-1}\subseteq\bar{\mathcal{P}}_{\xi_{i},t-1}\text{ and }\bar{A}_{i}+\bar{B}_{i}K_{i}=Z_{i}\begin{bmatrix}K_{i}\\ I\end{bmatrix}\in\mathcal{M}_{Z_{i}}\begin{bmatrix}K_{i}\\ I\end{bmatrix}\text{.}\]
Consequently, we obtain \((\bar{A}_{i}+\bar{B}_{i}K_{i})\mathcal{P}_{\xi_{i},t-1}\subseteq\mathcal{M}_{Z _{i}}^{K}\bar{\mathcal{P}}_{\xi_{i},t-1}\), which completes the proof.
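A naive numerical realization of the recursion (28) tracks candidate vertices of \(\bar{\mathcal{P}}_{\xi_{i},t}\) by multiplying out the vertex matrices of \(\mathcal{M}_{Z_{i}}^{K}\) with the current vertex set and adding disturbance vertices. The matrices and disturbance vertices below are illustrative stand-ins:

```python
import numpy as np

# Vertex matrices of M_{Z_i}^K (illustrative stand-ins, both Schur stable)
Q_vertices = [np.array([[0.5, 0.1], [0.0, 0.4]]),
              np.array([[0.45, 0.0], [0.05, 0.5]])]
# Vertices of the disturbance polytope P_{delta~,t} (held constant here)
d_vertices = [np.array([0.02, 0.02]), np.array([-0.02, -0.02])]

# Propagate candidate vertices of \bar P_{xi,t} per (28): every product of a
# Q-vertex with a previous vertex, plus a disturbance vertex.
# (Exponential growth; a practical code would prune to the convex hull.)
verts = [np.array([1.0, -1.0])]          # \bar P_{xi,0} = xi(0)
for t in range(4):
    verts = [Q @ v + d for Q in Q_vertices for v in verts for d in d_vertices]

radius = max(np.linalg.norm(v, np.inf) for v in verts)
assert radius < 1.0                       # the error polytope is contracting
```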
Drawing on the aforementioned lemma, we can build on the result in [38] and establish the following lemma to ensure the stability of the virtual tracking error polytope \(\bar{\mathcal{P}}_{\xi_{i},t}\).
**Lemma 6** (Stability of the virtual tracking error polytope).: _Given Assumptions 1-5 and an initial value \(\xi_{i}(0)\) for agent \(i\in\{1,2,\ldots,N\}\), there exists a polytope \(\bar{\mathcal{P}}_{i}\subset\mathbb{R}^{n_{i}}\) satisfying the following properties: i) \(\bar{\mathcal{P}}_{\xi_{i},t}\subset\bar{\mathcal{P}}_{i}\) for all \(t\in\mathbb{N}\); and ii) \(\bar{\mathcal{P}}_{i}\) is an invariant set, i.e., for \(\xi_{i}(t)\in\bar{\mathcal{P}}_{i}\), it holds that \(\mathcal{M}_{Z_{i}}^{K}\xi_{i}(t)+\tilde{\delta}_{i}(t)\in\bar{\mathcal{P}}_{i}\) for all \(\tilde{\delta}_{i}(t)\in\mathcal{P}_{\tilde{\delta}_{i},t}\) and \(t\in\mathbb{N}\)._
Proof.: The proof consists of two steps. In the first step, we compute the reachable set of \(\bar{\mathcal{P}}_{\xi_{i},t}\). In the second step, we ensure the stability of this reachable set by proving its invariance, which in turn guarantees the stability of \(\bar{\mathcal{P}}_{\xi_{i},t}\).
First, recalling Lemma 4, we refer to the bounded and compact set \(\mathcal{P}_{\bar{\delta}_{i},t}\) as the disturbance set. Building on this set, \(\bar{\mathcal{P}}_{\xi_{i},t}\), and \(\mathcal{M}_{Z_{i}}^{K}\) defined in (27), the reachable set for the virtual tracking error \(\xi_{i}(t)\) at time \(t\) can be formulated as
\[\Xi_{i,t}=\Big{\{}Q_{i}\Xi_{i,t-1}+\mathcal{P}_{\bar{\delta}_{i},t}:Q_{i}\in \mathcal{M}_{Z_{i}}^{K}\Big{\}} \tag{29}\]
with \(\xi_{i}(0)=\Xi_{i,0}\). Hence, it follows from Lemma 5 that \(\bar{\mathcal{P}}_{\xi_{i},t}\subset\Xi_{i,t}\).
Next, we prove the stability of the reachable set \(\Xi_{i,t}\). Assumption 2 states that the system is stabilizable, implying the existence of a positive definite symmetric matrix \(P_{i}\) such that \(Q_{i}^{\top}P_{i}Q_{i}-P_{i}\prec 0\) for all \(Q_{i}\in\mathcal{M}_{Z_{i}}^{K}\). Therefore, for sufficiently small \(\mathcal{P}_{\tilde{\delta}_{i},t}\), we have \(\Xi_{i,t+1}\subseteq\Xi_{i,t}\) for \(t\in\mathbb{N}\), ensuring the stability of the reachable trajectories \(\Xi_{i,t}\). Consequently, according to Lemma 5, the set \(\bar{\mathcal{P}}_{i}:=\Xi_{i,0}\) is invariant and satisfies \(\bar{\mathcal{P}}_{i}\supseteq\Xi_{i,t}\supseteq\bar{\mathcal{P}}_{\xi_{i},t}\) for all \(t\in\mathbb{N}\). The proof is complete.
Lemma 6 provides a stability guarantee for \(\bar{\mathcal{P}}_{\xi_{i},t}\), which also implies the stability of the virtual tracking error polytope \(\mathcal{P}_{\xi_{i},t}\) since \(\mathcal{P}_{\xi_{i},t}\subseteq\bar{\mathcal{P}}_{\xi_{i},t}\). With this result in hand, we proceed to the problem of identifying a gain matrix \(K_{i}\) for follower \(i\) that ensures \(A_{i}+B_{i}K_{i}\) is Schur stable for all \((A_{i},B_{i})\in\mathcal{M}_{Z_{i}}\). We derive a convex program, specifically an SDP, in the next theorem. This program searches for a stabilizing matrix \(K_{i}\) based on \((X_{i},X_{i+},U_{i},Y_{i})\).
**Theorem 1**.: _Consider the MAS (1)-(2) under the distributed data-driven feedback protocol (3) and (5) over graph \(\bar{\mathcal{G}}\). Let Assumptions 1-5 hold. Define \(\Omega_{i}=X_{i+}-\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}\hat{W}_{i}^{(k)}\). Then, the following SDP is feasible, and the gain matrix \(K_{i}=U_{i}M_{i}(X_{i}M_{i})^{-1}\) with any \(M_{i}\in\mathbb{R}^{p\times n_{i}}\) satisfying (30) renders \(A_{i}+B_{i}K_{i}\) Schur stable for all \((A_{i},B_{i})\in\mathcal{M}_{Z_{i}}\)_
\[X_{i}M_{i}-\Omega_{i}M_{i}(X_{i}M_{i})^{-1}(\Omega_{i}M_{i})^{ \top} \succ 0 \tag{30}\] \[X_{i}M_{i} \succ 0.\]
Proof.: Consider any matrix \(K_{i}\) that makes \(A_{i}+B_{i}K_{i}\) Schur stable. Based on the proof of Lemma 6, for all \((A_{i},B_{i})\in\mathcal{M}_{Z_{i}}\), there exists a matrix \(P_{i}\succ 0\) such that
\[(A_{i}+B_{i}K_{i})^{\top}P_{i}(A_{i}+B_{i}K_{i})-P_{i}\prec 0. \tag{31}\]
According to Assumption 5, the stacked data matrix \(\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}\) has full row rank, so any matrix with \(n_{i}+m_{i}\) rows can be expressed as a linear combination of its columns. That is, there exists a matrix \(G_{i}\in\mathbb{R}^{p\times n_{i}}\) satisfying
\[\begin{bmatrix}K_{i}\\ I\end{bmatrix}=\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}G_{i} \tag{32}\]
which implies
\[\begin{split} A_{i}+B_{i}K_{i}&=Z_{i}\begin{bmatrix}U_{i}\\ X_{i}\end{bmatrix}G_{i}\\ &=\Big{(}X_{i+}-\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}\hat{W}_{i}^{(k)}\Big{)}G_{i}\end{split} \tag{33}\]
with the second identity following from (16). Substituting (33) into (31), we have
\[\begin{split}\Big{[}\Big{(}X_{i+}&-\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}\hat{W}_{i}^{(k)}\Big{)}G_{i}\Big{]}^{\top}P_{i}\Big{[}\Big{(}X_{i+}\\ &-\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}\hat{W}_{i}^{(k)}\Big{)}G_{i}\Big{]}-P_{i}\prec 0.\end{split} \tag{34}\]
Let \(G_{i}=M_{i}P_{i}\) and recall \(\Omega_{i}=X_{i+}-\sum_{k=1}^{\gamma_{w_{i}}\rho}\beta_{W,i}^{(k)}\hat{W}_{i}^{(k)}\). By a Schur complement argument, condition (34) implies
\[P_{i}^{-1}-\Omega_{i}M_{i}P_{i}(\Omega_{i}M_{i})^{\top}\succ 0. \tag{35}\]
Recall from (32) that \(X_{i}G_{i}=I\) and \(K_{i}=U_{i}G_{i}\). By exploiting \(P_{i}=(X_{i}M_{i})^{-1}\), seeking stability is equivalent to finding a matrix \(M_{i}\) such that
\[X_{i}M_{i}-\Omega_{i}M_{i}(X_{i}M_{i})^{-1}(\Omega_{i}M_{i})^{\top}\succ 0\]
with the choice \(K_{i}=U_{i}M_{i}(X_{i}M_{i})^{-1}\). Then, it can be concluded that \(A_{i}+B_{i}K_{i}\) is stable as \(P_{i}\) is symmetric and \(P_{i}\succ 0\) for all \((A_{i},\,B_{i})\in\mathcal{M}_{Z_{i}}\).
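In practice the SDP (30) would be handed to a semidefinite solver; once a candidate \((K_{i},P_{i})\) is available, the vertex argument above can be checked numerically. Since \(Q^{\top}PQ\) is convex in \(Q\) for \(P\succ 0\), verifying the Lyapunov inequality at the vertices of \(\mathcal{M}_{Z_{i}}\) certifies the whole polytope. A scalar sketch with made-up data-consistent vertices (loosely around follower 1 of the example):

```python
import numpy as np

# Vertices of the data-consistent polytope M_{Z_i} = {[B_i  A_i]} (scalar
# follower; the interval bounds are illustrative, not computed from data).
Z_vertices = [np.array([[b, a]]) for a in (1.8, 2.2) for b in (2.8, 3.2)]

K = np.array([[-0.6]])                   # candidate feedback gain
P = np.array([[1.0]])                    # candidate Lyapunov matrix
KI = np.vstack((K, np.eye(1)))           # stacked [K; I]

# Check Q_k' P Q_k - P < 0 at every vertex; by convexity of Q -> Q'PQ this
# certifies A_i + B_i K Schur stable over the entire polytope.
for Zk in Z_vertices:
    Qk = Zk @ KI                         # A_i + B_i K at this vertex
    lam = np.linalg.eigvalsh(Qk.T @ P @ Qk - P).max()
    assert lam < 0
```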
Before presenting our stability result, we define the tracking error \(e_{i}(t)\) of agent \(i\) as \(e_{i}(t):=y_{i}(t)-y_{0}(t)\). Our objective is to analyze the convergence of the tracking error, ensuring that the induced closed-loop system exhibits desired stability properties. Leveraging previous results, including the optimization problem (19), Lemmas 3-6, and Theorem 1, we present our distributed data-driven output synchronization solution for unknown heterogeneous MASs (1)-(2) in Algorithm 1, accompanied by the stability guarantees detailed below.
**Theorem 2**.: _Consider the MAS described by (1)-(2) and the graph \(\bar{\mathcal{G}}\). Suppose Assumptions 1-5 hold. Denoting the optimal solution of Problem (19) by \((\Pi_{i}^{*},\Gamma_{i}^{*})\), the tracking errors \(e_{i}(t)\) are uniformly ultimately bounded (UUB) under the distributed data-driven feedback protocol (3) and (5) for any initial state and all \(i\in\{1,2,\ldots,N\}\), if the following two conditions hold:_

_1) Compute the gain matrix \(K_{i}=U_{i}M_{i}(X_{i}M_{i})^{-1}\) with \(M_{i}\) satisfying the SDP (30);_

_2) Choose matrix \(F\) such that \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) is Schur stable._
Proof.: It can be observed that the solutions \(\Pi_{i}\) and \(\Gamma_{i}\) to the optimization problem (19) satisfy (18) for all \((A_{i},B_{i},C_{i})\in\mathcal{M}_{i}\). Considering the true system matrices \((\bar{A}_{i},\bar{B}_{i},\bar{C}_{i})\in\mathcal{M}_{i}\), we deduce from (1), (2), and (18) that the tracking error \(e_{i}(t)\) can be expressed as
\[e_{i}(t)=\bar{C}_{i}x_{i}(t)-(\bar{C}_{i}\Pi_{i}^{*}-\Delta_{i2}^{*})x_{0}(t). \tag{36}\]
Moreover, based on (7), if we select a stabilizing matrix \(F\) such that \(I_{N}\otimes S-(I_{N}+D+G)^{-1}(\mathcal{L}+G)\otimes F\) is Schur stable, the observer's state asymptotically converges to the leader's state, i.e., \(\lim_{t\to\infty}\delta_{i}(t)=\lim_{t\to\infty}(\eta_{i}(t)-x_{0}(t))=0\). Therefore, as \(t\to\infty\), we have
\[\lim_{t\to\infty}e_{i}(t)=\lim_{t\to\infty}\big{(}\bar{C}_{i}\xi_{i}(t)+\Delta_{i2}^{*}x_{0}(t)\big{)}, \tag{37}\]
where the fact that \(\xi_{i}(t):=x_{i}(t)-\Pi_{i}^{*}\eta_{i}(t)\) has been used.
Furthermore, we can recursively obtain from (28) that for \(t\in\mathbb{N}\), the polytope \(\bar{\mathcal{P}}_{\xi_{i},t}\) satisfies
\[\bar{\mathcal{P}}_{\xi_{i},t}=(\mathcal{M}_{Z_{i}}^{K})^{t}\bar{\mathcal{P}}_ {\xi_{i},0}+\sum_{\ell=0}^{t-1}(\mathcal{M}_{Z_{i}}^{K})^{\ell}\mathcal{P}_{ \bar{\delta}_{i},t-\ell-1} \tag{38}\]
As \(\lim_{t\to\infty}\delta_{i}(t)=0\), it follows that \(\lim_{t\to\infty}z_{i}(t)=0\). Consequently, as \(t\to\infty\), the set \(\bar{\mathcal{P}}_{\xi_{i},t}\) converges to the following set:
\[(\mathcal{M}_{Z_{i}}^{K})^{t}\bar{\mathcal{P}}_{\xi_{i},0}+\sum_{\ell=0}^{t-1 }(\mathcal{M}_{Z_{i}}^{K})^{\ell}\Delta_{i1}^{*}\mathcal{P}_{x_{0}}. \tag{39}\]
Based on Lemmas 2-6, it can be concluded that matrices \(\bar{C}_{i}\), \(\Delta_{i1}\), and \(\Delta_{i2}\), as well as the sequences \(\xi_{i}(t)\) and \(x_{0}(t)\), can be bounded and constrained within compact polytopes, respectively. Specifically, we have \(\bar{C}_{i}\in\mathcal{M}_{C_{i}}\), \(\Delta_{i1}^{*}\in\mathcal{M}_{\Delta_{i1}}\), \(\Delta_{i2}^{*}\in\mathcal{M}_{\Delta_{i2}}\), \(\xi_{i}(t)\in\mathcal{P}_{\xi_{i},t}\subseteq\bar{\mathcal{P}}_{\xi_{i},t}\), and \(x_{0}(t)\in\mathcal{P}_{x_{0}}\) for all \(t\in\mathbb{N}\). By combining (37) and (39), we can compute the reachable set of the tracking error \(e_{i}(t)\) at time \(t\) (as \(t\to\infty\)) as follows:
\[\mathcal{P}_{e_{i},t} :=\mathcal{M}_{C_{i}}(\mathcal{M}_{Z_{i}}^{K})^{t}\bar{\mathcal{P} }_{\xi_{i},0}\] \[+\big{[}\mathcal{M}_{C_{i}}\sum_{\ell=0}^{t-1}(\mathcal{M}_{Z_{i} }^{K})^{\ell}\mathcal{M}_{\Delta_{i1}}+\mathcal{M}_{\Delta_{i2}}\big{]} \mathcal{P}_{x_{0}}. \tag{40}\]
According to Theorem 1, the controller gain matrix \(K_{i}=U_{i}M_{i}(X_{i}M_{i})^{-1}\) renders \(A_{i}+B_{i}K_{i}\) Schur stable for all \((A_{i},B_{i})\in\mathcal{M}_{Z_{i}}\), which implies the existence of \(P_{i}:=(X_{i}M_{i})^{-1}\) such that any \(Q_{i}\in\mathcal{M}_{Z_{i}}^{K}\) is also Schur stable. This means that there exist constants \(\mu>0\) and \(\beta=\max_{Q_{i}\in\mathcal{M}_{Z_{i}}^{K}}\max_{j}|\lambda_{j}(Q_{i})|\in(0,1)\) such that \((\mathcal{M}_{Z_{i}}^{K})^{t}\bar{\mathcal{P}}_{\xi_{i},0}\subseteq\mu\beta^{t}\bar{\mathcal{P}}_{\xi_{i},0}\) holds. Consequently, (40) obeys
\[\begin{split}\mathcal{P}_{e_{i},t}&\subseteq\mu\beta^{t}\mathcal{M}_{C_{i}}\bar{\mathcal{P}}_{\xi_{i},0}\\ &+\Big{[}\mu\mathcal{M}_{C_{i}}\sum_{\ell=0}^{t-1}\beta^{\ell}\mathcal{M}_{\Delta_{i1}}+\mathcal{M}_{\Delta_{i2}}\Big{]}\mathcal{P}_{x_{0}}.\end{split} \tag{41}\]
Hence, as \(t\to\infty\), \(e_{i}(t)\) converges to the adjustable bounded set given by (41).
Therefore, it can be concluded that the tracking error \(e_{i}(t)\) is UUB for all \(i\in\{1,2,\ldots,N\}\) when considering the proposed distributed data-driven feedback protocol given by (3) and (5). This property enables the outputs of all followers to approximately synchronize with the output of the leader.
_Remark 4_ (\(\Delta\)-optimal output synchronization).: The proposed distributed data-driven feedback control protocol (3)-(5) achieves \(\Delta\)-optimal output synchronization for the leader-following MAS (1)-(2) with unknown system matrices. This result, presented in Theorem 2, addresses Problem 1 effectively. Specifically, for any \((A_{i},B_{i},C_{i})\in\Sigma_{i}\) with \(i\in\{1,2,\ldots,N\}\), the tracking error \(e_{i}(t)\) converges to a bounded and compact reachable set as given in (41). The size of this set is influenced by the size of the noise polytopes \(\mathcal{M}_{\Delta_{i1}}\) and \(\mathcal{M}_{\Delta_{i2}}\), as defined in (23). Notably, the magnitude of the tracking error \(e_{i}(t)\) is positively correlated with the vertices of the noise polytopes \(\mathcal{P}_{w_{i}}\) and \(\mathcal{P}_{v_{i}}\), corresponding to the noise levels \(w_{i}(T)\) and \(v_{i}(T)\), respectively. Interestingly, when \(w_{i}(T)=0\) and \(v_{i}(T)=0\), the regulator equation errors satisfy \(\Delta_{i1}=0\) and \(\Delta_{i2}=0\), so that the virtual tracking error \(\xi_{i}(t)\) asymptotically converges to zero, as shown in Lemma 6. In this case, the tracking error \(e_{i}(t)\) asymptotically converges to zero as well.
Based on the above remark, we can derive the following corollary for the noise-free case.
**Corollary 1**.: _Consider the leader-following MAS (9) with \(w_{i}(T)=0\) and \(v_{i}(T)=0\). Under the same conditions as in Theorem 2, the tracking error \(e_{i}(t)\) asymptotically converges to zero for all \((A_{i},B_{i},C_{i})\in\Sigma_{i}\) and \(i\in\{1,2,\ldots,N\}\)._
_Remark 5_ (Solvability).: It is important to highlight that the solution to Problem (19) and the SDP (30) only needs to consider the vertices of the matrix polytopes \(\mathcal{M}_{Z_{i}}\) and \(\mathcal{M}_{C_{i}}\). This is due to the convexity of the polytope, which implies
that any point within the polytope can be expressed as a convex combination of its vertices. Therefore, if the problem is addressed at all vertices of the polytope, it covers the entire polytope, including its interior.
_Remark 6_ (Comparison).: Several existing approaches have explored data-driven output synchronization for unknown MASs, e.g., the behavioral approach in [34] and the model-free RL-based approaches in [15, 16, 17]. In comparison to these works, our approach exhibits the following key distinctions. Firstly, the proposed data-driven method is robust to unknown noisy data, eliminating the requirement for accurately measurable noise as in [34]. Secondly, in contrast to the large amount of training data required in [16, 17], our approach achieves output synchronization using only limited data, i.e., as long as Assumption 5 is satisfied. Furthermore, we propose a static data-driven design method based on historical data that ensures system stability while circumventing the real-time iterative computations with online data required in [16, 17]. In this sense, the proposed method significantly reduces the computational burden, making it more efficient and suitable for real-world implementation.
## IV Numerical Examples
In this section, we present a numerical example to demonstrate the effectiveness of the proposed data-driven method. We consider a discrete-time heterogeneous MAS consisting of seven agents, including one leader and six followers. The dynamics of the leader are described by (2), where
\[S=\begin{bmatrix}0&1\\ -1&0\end{bmatrix},\quad H=\begin{bmatrix}1&0\end{bmatrix}.\]
The true dynamics of the six followers are given by (1) with
\[\bar{A}_{1} =2,\bar{B}_{1}=3,\bar{C}_{1}=1\] \[\bar{A}_{2} =\begin{bmatrix}0&1\\ 1&-1\end{bmatrix},\bar{B}_{2}=\begin{bmatrix}0\\ 1\end{bmatrix},\bar{C}_{2}=\begin{bmatrix}1&1\end{bmatrix}\] \[\bar{A}_{3} =\begin{bmatrix}0&1\\ 1&-2\end{bmatrix},\bar{B}_{3}=\begin{bmatrix}1\\ 0\end{bmatrix},\bar{C}_{3}=\begin{bmatrix}0&1\end{bmatrix}\] \[\bar{A}_{4} =\begin{bmatrix}0&1\\ -1&-3\end{bmatrix},\bar{B}_{4}=\begin{bmatrix}1\\ 1\end{bmatrix},\bar{C}_{4}=\begin{bmatrix}-1&1\end{bmatrix}\] \[\bar{A}_{5} =\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&-4\end{bmatrix},\bar{B}_{5}=\begin{bmatrix}0\\ 0\\ 4\end{bmatrix},\bar{C}_{5}=\begin{bmatrix}1&0&0\end{bmatrix}\] \[\bar{A}_{6} =\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&-5\end{bmatrix},\bar{B}_{6}=\begin{bmatrix}0\\ 0\\ 5\end{bmatrix},\bar{C}_{6}=\begin{bmatrix}2&0&0\end{bmatrix}.\]
The network topology of these agents is shown in Fig. 2, which represents the communication topology \(\bar{\mathcal{G}}\) between agents.
In the data-driven setting, the true system matrices \((\bar{A}_{i},\bar{B}_{i},\bar{C}_{i})\) are assumed unknown. To collect data for each follower, we run the open-loop system (9) offline and gather a set of noisy data \(\mathbb{D}_{i}\) with a length of \(\rho=20\). The inputs are uniformly distributed within a polytope \(\mathcal{P}_{u_{i}}\) defined as
\[\mathcal{P}_{u_{i}}=\Big{\{}u_{i}\Big{|}u_{i}=\sum_{k=1}^{\gamma_{u_{i}}} \beta_{u,i}^{(k)}\hat{u}_{i}^{(k)},\beta_{u,i}^{(k)}\geq 0,\sum_{k=1}^{\gamma_{u_{i}}} \beta_{u,i}^{(k)}=1\Big{\}}\]
where \(\gamma_{u_{i}}=2\) and the two vertices are \(\hat{u}_{i}^{(1)}=1\) and \(\hat{u}_{i}^{(2)}=-1\) for \(i\in\{1,2,\ldots,6\}\). Furthermore, the random process noise \(w_{i}(T)\) is sampled from the polytope \(\mathcal{P}_{w_{i}}\) with four vertices: \(\hat{w}_{i}^{(1)}=\bar{w}_{i}[1\ \ 1]^{\top}\), \(\hat{w}_{i}^{(2)}=\bar{w}_{i}[1\ \ {-1}]^{\top}\), \(\hat{w}_{i}^{(3)}=\bar{w}_{i}[-1\ \ 1]^{\top}\), and \(\hat{w}_{i}^{(4)}=\bar{w}_{i}[-1\ \ {-1}]^{\top}\). Similarly, the measurement noise \(v_{i}(T)\) is bounded by \(\mathcal{P}_{v_{i}}\) with two vertices: \(\hat{v}_{i}^{(1)}=\bar{v}_{i}\) and \(\hat{v}_{i}^{(2)}=-\bar{v}_{i}\). Here, we set \(\bar{w}_{i}=\bar{v}_{i}=0.01\).
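The offline data-collection step can be sketched as follows. The Dirichlet sampling of convex-combination weights is our illustrative choice (the paper does not specify how points inside the polytopes are drawn), and follower 2's true matrices are used only to generate synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 20                                   # data length, as in the example

def sample_polytope(vertices):
    """Random convex combination of polytope vertices (Dirichlet weights;
    an illustrative sampler, not uniform over the polytope)."""
    beta = rng.dirichlet(np.ones(len(vertices)))
    return sum(b * v for b, v in zip(beta, np.asarray(vertices, float)))

# Follower 2 of the example; inputs and noise drawn from the stated polytopes.
A = np.array([[0., 1.], [1., -1.]]); B = np.array([[0.], [1.]])
C = np.array([[1., 1.]])
u_verts = [[1.0], [-1.0]]
w_verts = 0.01 * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
v_verts = [[0.01], [-0.01]]

X = np.zeros((2, rho + 1)); U = np.zeros((1, rho)); Y = np.zeros((1, rho))
for T in range(rho):
    U[:, T] = sample_polytope(u_verts)
    Y[:, T] = C @ X[:, T] + sample_polytope(v_verts)
    X[:, T + 1] = A @ X[:, T] + B @ U[:, T] + sample_polytope(w_verts)

X0, Xp = X[:, :rho], X[:, 1:]              # data matrices (X_i, X_{i+})
assert np.linalg.matrix_rank(np.vstack((U, X0))) == 3   # Assumption-5-type rank check
```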
To solve the output regulation equations (18), we compute the optimization problem (19), obtaining the solutions \((\Pi_{i}^{*},\Gamma_{i}^{*})\) as follows:
\[\Pi_{1}^{*} =\begin{bmatrix}1.0000&-0.0000\end{bmatrix},\quad\Gamma_{1}^{*}=\begin{bmatrix}-0.6672&0.3336\end{bmatrix},\] \[\Pi_{2}^{*} =\begin{bmatrix}0.5013&-0.4995\\ 0.4987&0.4995\end{bmatrix},\quad\Gamma_{2}^{*}=\begin{bmatrix}-0.5014&1.4994\end{bmatrix},\] \[\Pi_{3}^{*} =\begin{bmatrix}1.9914&0.9954\\ 0.9952&-0.0020\end{bmatrix},\quad\Gamma_{3}^{*}=\begin{bmatrix}-1.9907&1.9898\end{bmatrix},\] \[\Pi_{4}^{*} =\begin{bmatrix}-0.7995&-0.1998\\ 0.1996&-0.2002\end{bmatrix},\quad\Gamma_{4}^{*}=\begin{bmatrix}-0.0002&-0.6007\end{bmatrix},\] \[\Pi_{5}^{*} =\begin{bmatrix}0.9999&0.0008\\ -0.0010&1.0010\\ -1.0001&-0.0011\end{bmatrix},\;\Gamma_{5}^{*}=\begin{bmatrix}-1.0002&-0.2515\end{bmatrix}\] \[\Pi_{6}^{*} =\begin{bmatrix}0.5004&0.0001\\ -0.0004&0.5004\\ -0.4996&-0.0004\end{bmatrix},\;\Gamma_{6}^{*}=\begin{bmatrix}-0.4991&-0.1006\end{bmatrix}.\]
Then, by solving the SDP in Theorem 1, we obtained the stabilizing controller gain \(K_{i}\) for each follower \(i\) in (3). The specific values of \(K_{i}\) are as follows:
\[K_{1} =-16.6608,\quad K_{2}=[-7.0686\,\,-1.8060],\] \[K_{3} =[-13.3323\,\,-10.3354],K_{4}=[-13.5107\,\,-15.1270],\] \[K_{5} =[-0.5139\,\,-0.9054\,\,-0.5120],\] \[K_{6} =[-0.7749\,\,-1.3071\,\,-0.5160].\]
#### IV-1 Comparison with the model-based approach
The simulation of the heterogeneous MAS was carried out using the feedback control protocol (3) and (5). The initial states of the leader, followers, and observers were randomly selected. The tracking errors \(e_{i}\) for \(i\in\{1,2,\ldots,N\}\) under the data-driven control (according to Theorem 2) and model-based control (according to Lemma 1) are shown in Fig. 3, respectively.
From this figure, it is evident that output synchronization is achieved under both control paradigms. The results demonstrate that the proposed data-driven polytopic method achieves comparable performance to the model-based approach and exhibits excellent robustness to noisy and limited data, highlighting the effectiveness of the data-based polytopic controller. Additionally, the absence of model information during implementation further emphasizes the superiority of the data-driven method.

Fig. 2: The communication topology \(\bar{\mathcal{G}}\) between agents.
#### IV-2 Comparison of different noise levels
The proposed data-driven approach was tested under different disturbance levels to investigate their effect on system performance and tracking errors. First, let us define the error between the solutions \((\Pi_{i}^{s},\Gamma_{i}^{s})\) and \((\Pi_{i}^{s},\Gamma_{i}^{s})\) of the output regulation equation computed by (4) and (19), respectively, as follows
\[\phi_{1}=\max_{i\in\{1,2,\ldots,6\}}\|\Pi_{i}^{s}-\Pi_{i}^{s}\|_{2},\quad\phi_{ 2}=\max_{i\in\{1,2,\ldots,6\}}\|\Gamma_{i}^{s}-\Gamma_{i}^{s}\|_{2}.\]
Table I tabulates \(\phi_{1}\) and \(\phi_{2}\) for five different noise levels \(\bar{w}_{i}\), \(\bar{v}_{i}\). It can be observed that a larger noise level results in a lower accuracy of the data-driven solution \((\Pi_{i}^{s},\Gamma_{i}^{s})\) relative to the exact solution \((\Pi_{i}^{*},\Gamma_{i}^{*})\).
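For illustration, the two error metrics can be evaluated with a few lines of NumPy; the sketch below uses placeholder matrices (not the values reported above), assuming \(\|\cdot\|_{2}\) denotes the spectral norm:

```python
import numpy as np

# Sketch of the error metrics phi_1, phi_2: the largest spectral-norm deviation
# of the data-driven regulator solutions from the exact ones over all followers.
# The matrices below are placeholders, not the paper's data.
def regulation_errors(Pi_s, Pi_star, Gamma_s, Gamma_star):
    phi1 = max(np.linalg.norm(A - B, 2) for A, B in zip(Pi_s, Pi_star))
    phi2 = max(np.linalg.norm(A - B, 2) for A, B in zip(Gamma_s, Gamma_star))
    return phi1, phi2

Pi_star = [np.array([[1.0, 0.0]]), np.array([[0.5, -0.5], [0.5, 0.5]])]
Pi_s = [np.array([[1.001, -0.002]]), np.array([[0.501, -0.499], [0.499, 0.5]])]
Gamma_star = [np.array([[-0.667, 0.334]]), np.array([[-0.5, 1.5]])]
Gamma_s = [np.array([[-0.667, 0.333]]), np.array([[-0.501, 1.499]])]
phi1, phi2 = regulation_errors(Pi_s, Pi_star, Gamma_s, Gamma_star)
```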
Furthermore, taking follower 6 as an example, Fig. 4 displays the tracking error \(e_{6}(t)\) (the orange solid line) and the bounds of \(\mathcal{P}_{e_{6},t}\) (the blue dashed line) under three different noise levels. As expected, the tracking error \(e_{6}(t)\) remains within \(\mathcal{P}_{e_{6},t}\) at each time step. Moreover, the error bound (41) is on the order of the noise magnitude (specifically, a few times its size). This can be attributed to the fact that the escalating uncertainties caused by noise expand the noise polytopes \(\mathcal{M}_{\Delta_{i1}}\) and \(\mathcal{M}_{\Delta_{i2}}\). As a result, the polytope \(\mathcal{M}_{i}\) of allowable system matrices becomes larger, significantly increasing the conservatism of the obtained data-driven control solutions.
## V Conclusions
In conclusion, this paper has presented a data-driven polytopic approach for output synchronization of unknown heterogeneous MASs, utilizing data instead of explicit knowledge of each follower's dynamics model. The proposed method offers a certified data-driven feedback control protocol that can handle perturbed offline data and uncertainties in the system matrices. By means of a unique data-based polytopic representation of the MASs, an approximate solution of the output regulator equations and a stabilizing control gain are obtained. The stability of the tracking error polytope is ensured, and sufficient data-based conditions for near-optimal output synchronization are provided. Future research directions could explore less conservative approximations of the tracking error polytope and investigate output synchronization under event-triggered control. These advancements would further enhance the performance and applicability of the data-driven approach in real-world scenarios.
Fig. 3: Tracking error of each follower under data-driven control and model-based control. |
2304.09455 | A quantum pseudo-integrable Hamiltonian impact system | Quantization of a toy model of a pseudointegrable Hamiltonian impact system is introduced, including EBK quantization conditions, a verification of Weyl's law, the study of their wavefunctions and a study of their energy levels properties. It is demonstrated that the energy levels statistics are similar to those of pseudointegrable billiards. Yet, here, the density of wavefunctions which concentrate on projections of classical level sets to the configuration space does not disappear at large energies, suggesting that there is no equidistribution in the configuration space in the large energy limit; this is shown analytically for some limit symmetric cases and is demonstrated numerically for some nonsymmetric cases. | Omer Yaniv | 2023-04-19T07:03:58Z | http://arxiv.org/abs/2304.09455v1 | # MSc thesis - A quantum pseudo-integrable Hamiltonian impact system
###### Abstract
A quantization of a toy model of a pseudointegrable Hamiltonian impact system is introduced, including EBK quantization conditions, a verification of Weyl's law, a study of the wavefunctions and a study of the energy-level properties. It is demonstrated that the energy-level statistics are similar to those of pseudointegrable billiards. Yet, here, the density of wavefunctions which concentrate on projections of classical level sets to the configuration space does not disappear at large energies, suggesting that there is no equidistribution in the configuration space in the large-energy limit; this is shown analytically for some limit symmetric cases and is demonstrated numerically for some nonsymmetric cases.
## 1 Introduction
Hamiltonian systems form a branch of the theory of dynamical systems that investigates the time evolution of a system's state, attempting to describe the evolution of all possible initial states of the system. When such a system evolves continuously it is called a continuous dynamical system, and in the smooth case it is described by a set of differential equations.
### Classical Hamiltonian dynamical systems
Hamiltonian systems arise in various fields of Physics: for example, they describe the dynamics of a body moving under conservative forces.
A Hamiltonian system on a symplectic manifold \((M,\omega)\) [2][18], where \(\omega\) is a closed non-degenerate 2-form, e.g. \(\omega=\sum_{i=1}^{n}dq_{i}\wedge dp_{i}\) for a set of canonical coordinates \((q,p)\), is defined by a smooth Hamiltonian function \(H\in C^{\infty}(M,\mathbb{R})\) and the vector field \(X_{H}\) associated with \(H\):
\[dH(Y)=\omega(X_{H},Y) \tag{1}\]
The dynamics of any observable \(f\) obeys the equation of motion:
\[\frac{d}{dt}f=\{f,H\} \tag{2}\]
where \(\{f,H\}\) are called Poisson brackets and are defined as: \(\omega(X_{f},X_{H})\). In canonical coordinates \((q,p)\), we can write \(X_{H}=\sum_{i=1}^{n}(\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q _{i}}-\frac{\partial H}{\partial q_{i}}\frac{\partial}{\partial p_{i}})\).
The Hamiltonian flow \(F^{t}:M\to M\) is generated by \(X_{H}\):

\[\frac{d}{dt}F^{t}\Big|_{t=0}=X_{H} \tag{3}\]
\(F^{t}:M\to M\) is a symplectomorphism:
\[F^{t*}\omega=\omega \tag{4}\]
Thus,
\[\frac{F^{t*}\omega^{n}}{n!}=\frac{\omega^{n}}{n!} \tag{5}\]
where \(\omega^{n}/n!\) is the phase-space volume form; e.g., given a region \(R\subset M\),
\[\int_{R}\frac{\omega^{n}}{n!}=\int_{R}dq_{1}\wedge...\wedge dq_{n}\wedge dp_{1} \wedge...\wedge dp_{n}=Vol(R) \tag{6}\]
Thus, eq. 5 implies that flow \(F^{t}\) is volume preserving.
We concentrate here on Hamiltonian systems of the mechanical form, where the Hamiltonian function is the sum of a diagonal kinetic energy term and a potential energy defined on \(\mathbb{R}^{n}\), and the natural canonical variables are the momenta, \(p_{i}\) and the configuration coordinates \(q_{i},i=1,...n\):
\[H(q,p)=\frac{1}{2}\sum_{i=1}^{n}p_{i}^{2}+V(q),\qquad(q,p)\in\mathbb{R}^{n} \times\mathbb{R}^{n}\]
These are called \(n\) degrees of freedom mechanical Hamiltonian systems. The goal of the analysis is to determine the evolution of all initial conditions in the phase space \((q,p)\). In these coordinates the equations of motion are:
\[\frac{d}{dt}p_{i} =-\frac{\partial H}{\partial q_{i}} \tag{7}\] \[\frac{d}{dt}q_{i} =\frac{\partial H}{\partial p_{i}}\]
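Hamilton's equations (7) are usually integrated numerically with a structure-preserving scheme. The following sketch (an illustrative choice, not tied to any computation in this text) uses the symplectic Euler method on a harmonic potential and checks that the energy drift stays small:

```python
def symplectic_euler(dVdq, q, p, dt, steps):
    """Integrate Hamilton's equations (7) for H = p^2/2 + V(q); the symplectic
    Euler scheme preserves the phase-space structure and nearly conserves H."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * dVdq(q)   # dp/dt = -dH/dq
        q = q + dt * p         # dq/dt =  dH/dp
        traj.append((q, p))
    return traj

# Harmonic toy potential V(q) = q^2/2, so dV/dq = q (an illustrative choice)
traj = symplectic_euler(lambda q: q, q=1.0, p=0.0, dt=1e-3, steps=20_000)
energies = [0.5 * p * p + 0.5 * q * q for q, p in traj]
drift = max(abs(e - energies[0]) for e in energies)
```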
The simplest type of \(n\) degrees of freedom Hamiltonian systems are integrable systems. Such systems exhibit at least \(n\) independent smooth constants of motion which are pairwise in involution (their pairwise Poisson brackets vanish). A full description of the dynamics of such systems is provided by the Arnold-Liouville theorem [2], which tells us that for almost all values of \((q,p)\) there exists a transformation \((q_{i},p_{i})\rightarrow(\theta_{i},I_{i})\) such that \(H(q,p)=H(I)\) and the motion occurs along connected components of level sets of \(I\). Each such component is an \(n\)-dimensional torus:
\[\dot{I}_{i}=-\frac{\partial H}{\partial\theta_{i}}=0,\qquad\dot{\theta}_{i}=\frac{\partial H}{\partial I_{i}}=\omega_{i} \tag{8}\]
Moreover, in open neighborhoods at which such transformation exists, it is smooth.
On the other hand, Hamiltonian systems can also be chaotic, which is an extreme case of disordered motion. The precise definition of a chaotic system varies between communities [11][23][24][21].
In ergodic theory [22], chaos is defined through the global properties of a measure-preserving map on a probability space \((X,\mathcal{B},\mu,T)\), which in our case is the symplectomorphism associated with the Hamiltonian flow.

In order to define chaos in such systems we first need to define **ergodicity**: \(T\) is said to be **ergodic** if every \(T\)-invariant set has measure 0 or measure 1.
An ergodic map T is said to have the strong mixing property if for all \(A,B\in\mathcal{B}\), \(\lim_{k\rightarrow\infty}\mu(A\cap T^{-k}(B))=\mu(A)\mu(B)\).
An ergodic map which is strongly mixing is usually said to be chaotic within the notion of ergodic theory (yet, this definition depends on the measure \(\mu\), and, notably, choosing a "natural invariant measure" is a delicate issue in ergodic theory. For Hamiltonian systems the natural measure is the Liouville measure: Lebesgue measure restricted to the energy surface [22]).
A remarkable property of ergodic systems, which is highly important in physics, relates to the properties of an observable f, defined on the system \((X,\mathcal{B},\mu,T)\) :
if \(T\) is ergodic and \(f\circ T\equiv f\), then f is constant almost everywhere.
From this we conclude the general property that, for any integrable function \(g:X\rightarrow\mathbb{R}\) and almost every initial condition, the time average equals the space average:

\[\int_{X}g\,d\mu=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}g\,dt \tag{9}\]
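This property can be illustrated numerically for a uniquely ergodic toy map, the irrational circle rotation; the rotation number, initial point, and observable below are arbitrary choices:

```python
import numpy as np

# Birkhoff averages for the uniquely ergodic irrational circle rotation
# T(x) = x + alpha (mod 1): the time average of an observable converges to its
# space average, as in Eq. (9).
alpha = np.sqrt(2.0) - 1.0                     # irrational rotation number
x0 = 0.1                                       # arbitrary initial point
N = 100_000
orbit = (x0 + alpha * np.arange(N)) % 1.0
orbit_avg = np.cos(2 * np.pi * orbit).mean()   # space average of cos(2*pi*x) is 0
```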
The mathematical study of ergodicity in Hamiltonian systems focuses mainly on the dynamics of billiards: systems of free particles which reflect elastically from the billiard table \(D\). Formally we take \(V(q)\equiv 0\) in the interior of the billiard table and on its boundary the motion is determined by elastic reflections: \((q,p_{\perp})\rightarrow(q,-p_{\perp}),(q,p_{\parallel})\rightarrow(q,p_{ \parallel})\)[10].
In this work, we will look at Hamiltonian impact systems (HIS), which are an extension of the class of billiards: the dynamics is determined by a smooth Hamiltonian in the table interior and by elastic reflections \((q,p_{\perp})\rightarrow(q,-p_{\perp})\), \((q,p_{\parallel})\rightarrow(q,p_{\parallel})\) on the table's boundary.
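A minimal numerical sketch of such an impact system (the integrator, time step, and penetration-based wall detection are illustrative simplifications, not the methods used later in this text) checks that elastic reflections off a step parallel to the axes preserve the partial energies of a separable Hamiltonian:

```python
import numpy as np

# Toy integrator for H = sum_i (p_i^2/2 + w_i^2 q_i^2/2) with elastic
# reflection off the step S_0 = {q_1 < 0 and q_2 < 0}.  On impact the normal
# momentum flips, so the partial energies H_1, H_2 are conserved separately.
def step_oscillator(q, p, w=(1.0, np.sqrt(2.0)), dt=2e-4, steps=50_000):
    q, p = np.array(q, float), np.array(p, float)
    w2 = np.array(w) ** 2
    for _ in range(steps):
        p -= dt * w2 * q           # symplectic Euler: dp_i/dt = -w_i^2 q_i
        q += dt * p                #                   dq_i/dt =  p_i
        if q[0] < 0 and q[1] < 0:  # entered the step: reflect off the nearer wall
            i = 0 if abs(q[0]) < abs(q[1]) else 1
            q[i], p[i] = -q[i], -p[i]   # mirror map; V_i is even, so H_i is kept
    return q, p

q, p = step_oscillator(q=(1.0, 0.5), p=(0.0, 1.0))
H1 = 0.5 * p[0] ** 2 + 0.5 * q[0] ** 2   # initial value 0.5
H2 = 0.5 * p[1] ** 2 + 1.0 * q[1] ** 2   # initial value 0.75 (w_2^2 = 2)
```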
### Pseudo integrable billiards
Pseudo integrable billiards arise in the study of plane polygonal rational billiards (polygonal tables with all corners being rational fractions of \(\pi\)).
The path of a particle in a billiard is independent of the energy. For polygonal billiards, it can be considered as lying in a 3-dimensional space whose coordinates are the position \(q=(q_{1},q_{2})\) and \(\theta\), the path direction. This path lies in a sequence of replicas of the enclosure situated at different values of \(\theta\). In the case of plane polygonal rational billiards this sequence is finite (the number of possible directions in each trajectory). Since a reflection at an edge is equivalent to continuing the path into a reflected copy of the enclosure, and the sequence of directions is finite, the trajectory lies on a 2-dimensional compact surface. Such surfaces are two-dimensional surfaces of genus \(g\geq 1\)[20, 15]. Pseudointegrable dynamics correspond to systems of intermediate complexity: they are neither ergodic nor quasi-periodic; there can be level sets that include several ergodic components as well as bands of periodic orbits.
### Quantum dynamical systems and correspondence principle
In the early 20th century it became clear that the physical properties of some systems cannot be predicted by classical physics. A new theory, quantum mechanics, was developed to describe them. According to quantum mechanics, a physical property of an n-degrees-of-freedom system is not defined by a smooth function on \(\mathbb{R}^{2n}\), but rather by a Hermitian operator in \(Op(\mathcal{S}(\mathbb{R}^{n}))\), where \(\mathcal{S}(\mathbb{R}^{n})=\{\phi\in C^{\infty}\,|\,\sup_{\mathbb{R}^{n}}|x^{\alpha}\partial^{\beta}\phi|<\infty\ \text{for all multi-indices}\ \alpha,\beta\}\).
Given an operator \(\hat{F}=Q(F(q_{i},p_{i}))\) with \(F\in C^{\infty}(\mathbb{R}^{2n})\), its time evolution, the operator \(\hat{F}\) at time \(t>0\), is \(\hat{F}_{t}=e^{i\frac{t\hat{H}}{\hbar}}\hat{F}e^{-i\frac{t\hat{H}}{\hbar}}\), where \(\hat{H}=Q(H(q_{i},p_{i}))\) is the operator associated with the Hamiltonian. Then, the time-evolution equation, in analogy to the classical equation, becomes:

\[\dot{\hat{F}}_{t}=\frac{i}{\hbar}[\hat{H},\hat{F}_{t}] \tag{10}\]
Another equivalent form to express the time evolution of the system is via the time-dependent Schrodinger equation, that considers evolving in time the wave function \(\Psi(q,t)\) rather than the operator:
\[i\hbar\frac{\partial}{\partial t}\Psi(q,t)=\hat{H}\Psi(q,t) \tag{11}\]
The time-independent Schrodinger equation is an eigenvalue problem, whose solutions are called the spectrum of the quantum system:
\[\hat{H}\Psi(q)=E\Psi(q) \tag{12}\]
It is believed that the classical dynamics can be derived as a limit of the quantum dynamics in the semiclassical limit \(\hbar\to 0\). This assumption is called the correspondence principle.
In order to relate quantum and classical mechanics we are interested in the class of transformations \(Q:C^{\infty}(\mathbb{R}^{2n})\to Op(\mathcal{S}(\mathbb{R}^{n}))\), which are called quantization of classical phase space [1].
We require Q to satisfy to following conditions:
* Q is linear
* Q(1) = I, where 1 is the constant function 1, and I the identity operator
* for any function \(\Phi:\mathbb{R}\to\mathbb{R}\) for which \(Q(\Phi\circ f)\) and \(\Phi(Q(f))\) are well-defined, \(Q(\Phi\circ f)=\Phi(Q(f))\)
* The operators \(Q(p_{i})\) and \(Q(q_{i})\) corresponding to the coordinate functions \(F(p,q)=p_{i}\), \(G(p,q)=q_{i}\) for (i = 1,...,n) are given by \(Q(G)\Psi=q_{i}\Psi\) and \(Q(F)\Psi=-i\hbar\frac{\partial\Psi}{\partial q_{i}}\)
To obtain a correspondence between classical dynamics (as described in eq. 2) and quantum dynamics (as described in eq. 10) we would want to find Q such that:
\(Q(\{f,g\})=\frac{[Q(f),Q(g)]}{i\hbar}\)
However, a quantization Q that satisfies this condition for all \(f,g\in C^{\infty}(\mathbb{R}^{2n})\), together with conditions 1 to 4 mentioned above, does not exist [1].
The common quantization scheme in use is Weyl's quantization [12]: for a function \(a(q,p)\in C^{\infty}(\mathbb{R}^{2n})\) on the classical phase space, the operator \(a^{w}\) obtained by Weyl quantization is defined by:

\[a^{w}\Psi(x)=\frac{1}{\hbar^{n}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}e^{\frac{i\langle x-y,p\rangle}{\hbar}}a\left(\frac{x+y}{2},p\right)\Psi(y)\,dy\,dp \tag{13}\]
Weyl's quantization satisfies the following condition:
\[Q(\{f,g\})=\frac{[Q(f),Q(g)]}{i\hbar}+O(\hbar^{2}) \tag{14}\]
Thus, in the limit \(\hbar\to 0\) (so called semiclassical limit) we get a correspondence between classical and quantum dynamics, described by **Egorov's Theorem**[12]:
\[\|\hat{F}_{t}-Q(F\circ\Phi_{t}(q,p))\|=O(\hbar)\text{ where }Q(F)=\hat{F}_{0} \tag{15}\]
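The building blocks of such quantizations can be probed in a discretized setting; the sketch below (grid parameters and test state are arbitrary choices) represents \(Q(q_{i})\) by multiplication and \(Q(p_{i})\) by a central-difference derivative and verifies the canonical commutator \([Q(q),Q(p)]=i\hbar\) up to the discretization error:

```python
import numpy as np

# Discretized check of [Q(q), Q(p)] = i*hbar*Id from condition 4: position acts
# by multiplication, momentum by -i*hbar*d/dx (central differences).
hbar, n, L = 1.0, 2001, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
psi = np.exp(-x ** 2).astype(complex)      # smooth, rapidly decaying test state

def ddx(f):
    g = np.zeros_like(f)
    g[1:-1] = (f[2:] - f[:-2]) / (2 * dx)  # central difference derivative
    return g

P = lambda f: -1j * hbar * ddx(f)          # Q(p)
Qop = lambda f: x * f                      # Q(q)
comm = Qop(P(psi)) - P(Qop(psi))           # expect i*hbar*psi away from edges
err = np.max(np.abs(comm[1:-1] - 1j * hbar * psi[1:-1]))
```

The residual is \(O(dx^{2})\), mirroring the \(O(\hbar^{2})\) remainder in Eq. (14).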
Another important theorem regarding the classical-quantum correspondence is **Weyl's theorem** [12]: for each \(a<b\), we can define the operator \(\mathbb{1}\):
\[\mathbb{1}(\Psi_{E_{j}})=\begin{cases}\Psi_{E_{j}}&\text{if}:a<E_{j}<b\\ 0&\text{else}\end{cases} \tag{16}\]
Weyl's theorem states that:
\[N[E_{j}:a\leq E_{j}\leq b]=\sum_{j}\langle\mathbb{1}\Psi_{E_{j}},\Psi_{E_{j}}\rangle=\frac{1}{\hbar^{n}}\text{Vol}(a\leq H\leq b)+o(1)\text{ as }\hbar\to 0 \tag{17}\]
where \(E_{j}\) and \(\Psi_{E_{j}}\) are the eigenvalues and eigenvectors (correspondingly) of \(\hat{H}=Q(H(q_{i},p_{i}))\) and \(H(q_{i},p_{i})\) is the classical Hamiltonian.
Semiclassical approximations can also be used within the framework of the Schrodinger equation. An important approximation for one degree of freedom systems is the **WKB approximation** [16], an approximation for the time-independent Schrodinger equation:
In the limit \(\hbar\to 0\), an eigenfunction \(\Psi(q)\) of \(\hat{H}=Q(H(q_{i},p_{i}))\) can be approximated, in the classically allowed region, by:

\[\Psi(x)\approx\frac{C_{0}}{\sqrt[4]{2m\left(E-V(x)\right)}}\,e^{\pm\frac{i}{\hbar}\int\sqrt{2m\left(E-V(x)\right)}\,dx} \tag{18}\]
From this approximation we can derive **EBK quantization conditions**, that are used to find the spectrum of integrable systems:
\[I(E)=\hbar\left(n+\frac{\mu}{4}+\frac{b}{2}\right) \tag{19}\]
where \(I\) is the classical action of a periodic orbit, \(\mu\) is the number of classical turning points along a period and \(b\) is the number of impacts with the table's boundaries along a period [9] (chapter 2).
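As a quick consistency exercise (not taken from the text), consider a one degree of freedom harmonic oscillator with an elastic wall at the origin: per period there is one smooth turning point (\(\mu=1\)) and one impact (\(b=1\)), and \(I(E)=\frac{E}{2\omega}\), so the EBK rule reproduces the exact spectrum, namely the odd levels of the full oscillator:

```python
import numpy as np

# EBK check for a 1-DOF harmonic "half-oscillator": V(q) = w^2 q^2 / 2 with an
# elastic wall at q = 0.  With mu = 1, b = 1 and I(E) = E/(2w), Eq. (19) gives
# E_n = hbar*w*(2n + 3/2), which are exactly the odd eigenstates of the full
# oscillator (the Dirichlet condition at 0 selects the odd parity states).
hbar, w = 1.0, 1.0
n = np.arange(6)
E_ebk = 2.0 * w * hbar * (n + 1.0 / 4 + 1.0 / 2)  # solve I(E) = hbar(n + mu/4 + b/2)
E_exact = hbar * w * (2 * n + 1.5)                # odd levels hbar*w*(k + 1/2), k odd
```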
### Quantum chaos and ergodicity
Quantum chaos studies how classical dynamics (integrable and non-integrable) are reflected in the properties (e.g. eigenvalues and eigenfunctions) of the correspondent quantum system. It is accepted that in integrable systems, the distribution of the level spacing is provided by the Poisson distribution \(e^{-s}\)[6], while that in chaotic systems (hereafter, meaning mixing system on energy surfaces, studied by simulating chaotic billiards) they distribute as eigenvalues of
random matrix ensembles (GOE) [8]. When a system has a mixed phase space, which is the common behavior of smooth Hamiltonian systems, it is found that a Berry-Robnik distribution, a convex hull of the Poisson and the GOE distributions, describes the level spacing [5, 19]. This distribution reflects the existence of eigenfunctions supported on the islands of stability and of eigenfunctions supported on the chaotic components of the classical phase-space [3].
In quantum pseudo-integrable billiards the level spacing appears to have intermediate statistics: the nearest-neighbor distribution displays repulsion at small distances and an exponential decay at large distances [7].
The quantum ergodicity of a system describes the wavefunction properties of classically ergodic systems, namely that almost all of them are equidistributed. An important result in quantum ergodicity is the equidistribution of eigenfunctions:
Let \(\{\Psi_{n}\}_{n=1}^{\infty}\) be an orthonormal basis of eigenfunctions of the Laplacian operator on a compact domain D. Provided the billiard flow is ergodic in phase space, there is a density-one sequence \(n_{j}\in\mathbb{N}\) such that for any \(A\subset D\):
\(\lim_{j\rightarrow\infty}\int_{A}|\Psi_{n_{j}}|^{2}(s)\,ds=\frac{area(A)}{area(D)}\) [26].
This result was generalized to polygonal billiards with rational angles, as it is known that the motion of a particle in such billiards is dense in the configuration space for almost all directions [17].
## 2 Main results
We investigate eigenvalues statistics and eigenfunctions properties of a class of systems that belongs to the recently discovered family of classical pseudointegrable Hamiltonian systems with impacts. Such systems combine motion under a smooth potential field with continuous symmetries and reflections from a corresponding family of billiards that keeps the continuous symmetries only locally and not globally. For example, trajectories of a separable Hamiltonian
\[H=H_{1}+H_{2},\ H_{i}(q_{i},p_{i})=\frac{p_{i}^{2}}{2m}+V_{i}(q_{i}),\ i=1,2 \tag{20}\]
in a right-angled polygonal billiard with at least one concave corner are pseudointegrable [4; 13].
Here, we study the quantum step oscillators: we take \(V_{i}\) to be confining potentials which are even smooth functions with a single minimum at the origin and are monotone elsewhere, and take the right angled polygon to be \(\mathbb{R}^{2}\setminus S_{q^{wall}}\), where
\[S_{q^{wall}}=\{(q_{1},q_{2})|\:q_{1}<q_{1}^{wall}\leq 0\text{ and }q_{2}<q_{2}^{wall}\leq 0\}. \tag{21}\]
The trajectories are confined by the potential and reflect from the step \(S_{q^{wall}}\)[4], see Figure 1a. Since the step boundaries are parallel to the axes, the vertical and horizontal momenta are conserved at reflections, so the motion occurs along the level sets \(H_{i}(q_{i},p_{i})=E_{i},\ i=1,2\). Passing to the action-angle coordinates of the smooth separable system, provided \(E_{i}>V_{i}(q_{i}^{wall}),\ i=1,2\), the motion on each level set is conjugated to the directed motion on the flat cross-shaped surface, see Figure 1b. The direction of motion on this surface is given by \(\omega_{2}(E_{2})/\omega_{1}(E_{1})\) and the cross-shaped concave corners are at \(\{\pm\theta_{1}^{wall}(E_{1}),\pm\theta_{2}^{wall}(E_{2})\}\), where \(\omega_{i}(E_{i})\) denotes the frequency of the smooth periodic motion under \(H_{i}\) and \(\theta_{i}^{wall}(E_{i})\) denotes the angle of an impacting trajectory (with the convention that \(\theta_{i}=0\) at the maximum of \(q_{i}\)). So, the direction of motion and the surface dimensions depend continuously on \((E_{1},E_{2})\). For the case of harmonic oscillators, i.e. when \(V_{i}(q_{i})=\frac{1}{2}\omega_{i}^{2}q_{i}^{2}\), the frequencies are fixed at \(\omega_{i}\) and the values of \(\theta_{i}^{wall}(E_{i})\) can be explicitly computed. Equivalently, by folding the surface, the motion on such level sets is conjugated to the directed billiard motion on an L-shaped billiard, see Figure (1)c. Thus, this system is pseudointegrable [4]. In general, the dynamics on such surfaces has non-trivial ergodic properties. It was proven that if \(q_{i}^{wall}<0\) for \(i=1,2\), the motion is typically uniquely ergodic, and, for the case of resonant harmonic oscillators, there are level sets with co-existing periodic ribbons and dense orbits on some parts of the cross-shaped surface [13].
In this work we described the quantum-classical correspondence in pseudo-integrable HIS. More specifically:
* We describe the classical and quantum dynamics of the pseudo-integrable step system for the case of a step at the origin and resonant (\(\omega_{1}=1,\omega_{2}=\frac{m}{n}\)) harmonic potentials (all regular trajectories are periodic). In particular, we prove the existence of two families of periodic orbits co-existing in the same level set when n is even, and the existence of one family of periodic orbits when n is odd. For m=1 we find the number of impacts and turning points along a period in each family and predict the allowed energies using the EBK quantization conditions.
* We describe the quantum mechanical properties (eigenfunction structure and eigenvalues) of the more general pseudo-integrable systems (i.e. systems which are not completely periodic):
Figure 1: A trajectory of a separable Hamiltonian reflecting from a step. (a) Projection to the configuration space. (b) The corresponding directed motion on the cross-shaped surface in the angles space. (c) Folding the surface to the lower left quadrant leads to the corresponding billiard motion on an L-shaped billiard. Here, Eqs. (20) are integrated with elastic reflections from the step of Eq. (21), with \(V_{i}(q_{i})=\frac{1}{2}\omega_{i}^{2}q_{i}^{2},\omega_{1}=1,\omega_{2}=\sqrt{2},q_{1}^{wall}=q_{2}^{wall}=-1,E_{1}=5.625,E_{2}=5.50\).
1. We find the level-spacing distribution of the pseudo-integrable step system to be similar to the level-spacing distribution of pseudo-integrable billiards (i.e. a semi-Poisson distribution).
2. For a step at the origin and an anharmonic potential (for that case we give analytical results) we find that at least a fraction of \(\frac{1}{3}\) of the eigenfunctions concentrate on the classical level sets.
3. For a step not at the origin and a harmonic potential we present numerical evidence that it is also likely to have a positive fraction of the eigenfunctions which concentrate on the classical level sets.
To achieve this we used both analytic and numerical methods. The numerical method used is the finite-difference method, implemented in MATLAB, for solving the time-independent Schrodinger equation. We used the grid \([x,y]=[-15,15]\times[-15,15]\) with \(\delta_{x}=0.05\) and \(\delta_{y}=0.05\), and we set the potential to \(10^{28}\) at the step.
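A one-dimensional Python miniature of this finite-difference scheme (demo parameters; the actual computations used a two-dimensional MATLAB grid as described above, with a huge potential value imposing the step) recovers the harmonic-oscillator levels \(E_{n}\approx n+\frac{1}{2}\):

```python
import numpy as np

# Discretize -(hbar^2/2) psi'' + V psi = E psi on a uniform grid with Dirichlet
# boundaries and diagonalize the resulting tridiagonal matrix.
hbar, L, nx = 1.0, 15.0, 601
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]                                   # = 0.05, as in the thesis grid
V = 0.5 * x ** 2                                   # harmonic test potential
main = hbar ** 2 / dx ** 2 + V                     # -(hbar^2/2) d^2/dx^2 stencil
off = -hbar ** 2 / (2 * dx ** 2) * np.ones(nx - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.sort(np.linalg.eigvalsh(H))[:4]             # lowest levels, expect ~ n + 1/2
```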
These main results have been submitted for publication and are currently under review [25]
## 3 Results
### Quantization of Periodic orbits
As we are interested in quantization, and, in particular, in studying the role of superscars (concentration of the wave functions along the classical families of periodic orbits) in the system, we look first for families of periodic orbits. Given a family of periodic orbits on a given level set \((E_{1},E_{2}=E-E_{1})\), with \(\mu=(\mu_{1},\mu_{2})\) turning points (\(\mu_{1}\) in the horizontal direction and \(\mu_{2}\) in the vertical one), and \(b=(b_{1},b_{2})\) impacts (\(b_{1}\) with the right side of the step and \(b_{2}\) with the upper part of the step), and an action \(I(E;\mu,b)\), we can quantize it by using the EBK quantization conditions [9, 16]:
\[I(E;\mu,b)=\hbar(n+\frac{\mu_{1}+\mu_{2}}{4}+\frac{b_{1}+b_{2}}{2}). \tag{22}\]
Moreover, denoting by \(I_{i}(E_{i})\) the action of the smooth \(H_{i}\) system and by \(I_{i}^{wall}(E_{i})=\int_{q_{i}\geq q_{i}^{wall}}p_{i}(q_{i};E_{i})dq_{i}=I_{i} \frac{2\theta_{i}^{wall}}{2\pi}\) the action of the impact \(H_{i}\) system, we obtain:
\[I(E_{1},E_{2};\mu,b)=\sum_{i=1}^{2}b_{i}I_{i}^{wall}+(\frac{\mu_{i}-b_{i}}{2})I _{i}, \tag{23}\]
namely, given \(\mu\), \(b,I_{i}(E_{i})\) and \(\theta_{i}^{wall}(E_{i})\), we expect that the EBK quantization rule will predict the energy levels. Yet, in general, it is non-trivial to find \(\mu\) and \(b\) (see e.g. section 7 in [13]) nor to invert \(I(E_{1},E_{2};\mu,b)\) on the given family of periodic orbits.
We consider first some simple limit cases in which periodic motion can be easily identified. When the step is at the origin (\(S_{0}=S_{q_{1}^{wall}=q_{2}^{wall}=0}\)), the corner angles are fixed at \(\theta_{i}^{wall}(E_{i})|_{q_{1}^{wall}=q_{2}^{wall}=0}=\frac{\pi}{2}\), so the dimensions of the cross-shaped surface are independent of the energy. When the potentials are harmonic, the direction of motion, \(\frac{\omega_{2}}{\omega_{1}}\), is independent of the energy as well and \(I_{i}=\frac{E_{i}}{\omega_{i}}\). Thus, by choosing resonant harmonic potentials and a step at the origin, we conclude that for all partial energies the motion is periodic and of the same type and that \(I_{i}^{wall}=\frac{I_{i}}{2}\). In particular, setting \(\omega_{1}=1,\omega_{2}=\frac{m}{n}\) (with \(gcd(n,m)=1\)), it can be shown that there are exactly two possibilities for the dynamics:
**Theorem 3.1**.: _For a harmonic oscillator with a step at the origin, with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\) for odd m and n, there is one family of periodic orbits and the quantization condition is: \(E_{(k)}=\frac{2k}{3n}+\frac{1+\frac{m}{n}}{2}+\frac{n+m}{3n}=\frac{2k}{3n}+\frac{5(m+n)}{6n}\)_
**Theorem 3.2**.: _For a harmonic oscillator with a step at the origin, with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\) where n is even and m is odd, there are two families of periodic orbits, where the first family satisfies \(\mu_{1}^{I}=2n\), \(\mu_{2}^{I}=2m\) and the numbers of impacts \(b_{1}^{I}\) and \(b_{2}^{I}\) can be calculated inductively (see appendix). The second family satisfies \(\mu_{1}^{II}=n\), \(\mu_{2}^{II}=m\) and the numbers of impacts \(b_{1}^{II}\) and \(b_{2}^{II}\) can be calculated inductively (see appendix). For the case of m=1 we can directly calculate that the numbers of impacts are \(b_{1}^{I}=n\), \(b_{2}^{I}=0\) and \(b_{1}^{II}=0\), \(b_{2}^{II}=1\)_
Note that on a given resonant level set \(m\omega_{1}(E_{1})=n\omega_{2}(E_{2})\), by rescaling time and energy: \(\hat{t}=\omega_{1}t,\hat{H}=\frac{H}{\omega_{1}}\), we can set \(\hat{\omega}_{1}=1\)
and \(\hat{\omega}_{2}=\frac{m}{n}\). Clearly \(b_{i}\) and \(\mu_{i}\) are unchanged by the rescaling of time, so we study their dependence on \(\frac{m}{n}\) for the rescaled time case. Equivalently, the direction of the flow on the crossed surface depends only on the ratio of the frequencies.
Finally by replacing the roles of \(\theta_{1}\) and \(\theta_{2}\) Theorem 3.2 can be applied to the case of odd n and even m.
In Figure 2, we validate the above results (Theorems 3.1 and 3.2). Notice that for even \(m\) there are infinitely many energy levels at which \(E_{k_{1}}^{I}=E_{k_{2}}^{II}\) (marked with green lines), and in particular, for \(m=2,n=1\) and \(m=2,n=5\), \(E_{k_{1}}^{I}=E_{2k_{1}}^{II}\) (as shown in Fig. 2). Since the system here is symmetric, all these energy levels are degenerate, and, as shown in Figs. 2b and 2c, the common energy levels for the two families have higher degeneracy.
Next, we use Weyl's law to validate the computed correspondence between the classical families of periodic orbits and the energy levels.
Figure 2: Energy levels for the resonant harmonic oscillator with a step at the origin: numerical and expected (EBK) values. (a) Odd \(m\) (\(\omega_{1}=1,\ \omega_{2}=1\)): the expected values for the single family of periodic orbits, denoted by horizontal red lines, agree with the numerical values (blue dots). (b) Even \(m\) (\(\omega_{1}=1,\ \omega_{2}=2\)): the expected values (family I: red and green horizontal lines; family II: only green horizontal lines) agree with the numerical values, and the common values have larger degeneracy. (c) Even \(m\) and odd \(n\) (\(\omega_{1}=1,\ \omega_{2}=\frac{2}{5}\)): the expected values (family I: red and green horizontal lines; family II: only green horizontal lines) agree with the numerical values, and the common values have larger degeneracy.
Recall that for the two dimensional case, Weyl's law is:
\[N[E_{j}:E_{j}\leq b]=\frac{1}{\hbar^{2}}\text{Vol}(H\leq b)+o(1)\text{ as }\hbar\to 0 \tag{24}\]
and notice that the phase space volume for the step-oscillator is:
\[\text{Vol}(\mathcal{E})=\int_{0}^{I_{2}(\mathcal{E})}dI_{2}\int_{0}^{I_{1}(\mathcal{E}-E_{2}(I_{2}))}\left[-4\theta_{1}^{wall}(I_{1})\theta_{2}^{wall}(I_{2})+4\pi\left(\theta_{1}^{wall}(I_{1})+\theta_{2}^{wall}(I_{2})\right)\right]dI_{1}. \tag{25}\]
For the case of a step at the origin and harmonic oscillators, we obtain
\[\text{Vol}(\mathcal{E})|_{S_{0},\text{Harmonic oscillators}}=\frac{3\pi^{2}}{2 \omega_{1}\omega_{2}}\mathcal{E}^{2}. \tag{26}\]
Fig. 3 shows this expected correspondence. For the even \(m\) case the contribution of the larger degeneracy associated with the energy levels which are common to the 2 different families is evident.
Figure 3: Weyl's law. Smooth curves correspond to the predicted phase-space volume (Eq. 26) for the three resonant cases (\(\omega_{1}=1,\omega_{2}=1,2,3\); yellow, red, blue lines respectively). These predictions fit the corresponding numerical results. The inset shows the non-uniform jump in \(N\) for the even \(m\) case.
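Equations (25) and (26) can be cross-checked directly: with the step at the origin, \(\theta_{i}^{wall}=\frac{\pi}{2}\), the integrand in (25) is the constant \(3\pi^{2}\), and integrating over the action triangle reproduces (26). A short sketch with arbitrary test values:

```python
import numpy as np

# Consistency check of Eqs. (25)-(26): for a step at the origin,
# theta_i^wall = pi/2, so the bracketed integrand equals 3*pi^2, and for
# harmonic oscillators (I_i = E_i / w_i) the volume is 3*pi^2*E^2/(2*w1*w2).
w1, w2, E = 1.0, 2.0, 4.0
th = np.pi / 2
integrand = -4 * th * th + 4 * np.pi * (th + th)           # = 3*pi^2
I2 = np.linspace(0.0, E / w2, 2001)
inner = (E - w2 * I2) / w1                                 # upper limit I_1(E - E_2(I_2))
f = integrand * inner
vol = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(I2))) # trapezoid rule
vol_formula = 3 * np.pi ** 2 * E ** 2 / (2 * w1 * w2)
```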
### Wavefunctions structure and eigenvalues statistics for quantum step oscillators
We first examine non-resonant oscillators (not necessarily harmonic) while keeping the step at the origin. Classically, the motion is ergodic within the level set for almost all partial energies. Hence, we expect wavefunctions to concentrate on the projection of such level sets to the configuration space. We show that this property holds for a sequence of density at least \(\frac{1}{3}\) of the wavefunctions and does not vanish at high energies.
In the corresponding smooth system the potential \(V=V_{1}(q_{1})+V_{2}(q_{2})\) is separable. Thus, its wavefunctions \(\Psi_{n}^{sm}\) can be written as products of the wavefunctions of the \(H_{i}\): \(\{\Psi_{n}^{sm}\}_{n=1}^{\infty}=\Psi_{1,k_{1}}(q_{1})\Psi_{2,k_{2}}(q_{2})\), where \(\{\Psi_{i,k_{i}}\}_{k_{i}=1}^{\infty}\) are the wavefunctions of the smooth one-dimensional Hamiltonian \(H_{i}\) and \(E_{n(k_{1},k_{2})}^{sm}=E_{k_{1}}+E_{k_{2}}\).
Since \(V_{i}\) are even:
\[\Psi_{i,k_{i}}(q_{i})=\begin{cases}\Psi_{i,k_{i}}(-q_{i})&\text{if $k_{i}$ is even}\\ -\Psi_{i,k_{i}}(-q_{i})&\text{if $k_{i}$ is odd}\end{cases} \tag{27}\]
When both \(k_{1}\) and \(k_{2}\) are odd, the wavefunctions \(\{\Psi_{n_{j}(k_{1},k_{2})}^{sm}\}_{n_{j}=1}^{\infty}\) vanish on both axes; hence, the non-smooth Hamiltonian for the case of a step at the origin has a subsequence of wavefunctions of the form:
\[\Psi_{n_{j}(k_{1},k_{2})}^{S_{0}}(q_{1},q_{2})=\begin{cases}\Psi_{\tilde{n}_{j}(k_{1},k_{2})}^{sm}=\Psi_{1,k_{1}}(q_{1})\Psi_{2,k_{2}}(q_{2}),&(q_{1},q_{2})\in\mathbb{R}^{2}\setminus S_{0}\\ 0,&(q_{1},q_{2})\in S_{0}\end{cases} \tag{28}\]

These solutions are smooth in the domain \(\mathbb{R}^{2}\setminus S_{0}\) and satisfy Dirichlet boundary conditions on \(S_{0}\). Moreover, \(\Psi_{n_{j}(k_{1},k_{2})}^{S_{0}}\) concentrates on the projection of classical level sets; as the one-dimensional wavefunctions are well approximated by the WKB approximation [9], they decay exponentially outside of the classically allowed region of motion:
\[\Psi_{i,k_{i}}(q_{i})\approx C_{0}\frac{e^{\theta+i\hbar^{-1}\int\sqrt{2\left( E_{i,k_{i}}-V_{i}(q_{i})\right)}\,dq_{i}}}{\hbar^{-1/2}\sqrt[4]{2\left(E_{i,k_{i}}-V_{ i}(q_{i})\right)}}. \tag{29}\]
Next we show that the fraction of such odd wavefunctions for the case of a step at the origin is \(1/3\). From equation 22 for the smooth case (i.e. \(b=0\)) we deduce that wavefunctions that are odd in both directions (odd \(k_{1},k_{2}\)) constitute one quarter of all wavefunctions:
\[\lim_{E\rightarrow\infty}\frac{\#\{\Psi^{sm}_{\tilde{n}_{j}(k_{1},k_{2})}:E_{ \tilde{n}_{j}}=E^{1}_{k_{1}}+E^{2}_{k_{2}}\leq E\}}{\#\{\Psi^{sm}_{n}:E_{n}\leq E\} }=\frac{1}{4}. \tag{30}\]
Since the step is at the origin:
\[\text{Vol}(E)^{S_{0}}=\frac{3}{4}\text{Vol}(E)^{sm} \tag{31}\]
and thus, by Weyl's law
\[\lim_{E\rightarrow\infty}\frac{\#\{\Psi^{S_{0}}_{n_{j}}:E_{n_{j}}\leq E\}}{ \#\{\Psi^{S_{0}}_{n}:E_{n}\leq E\}}=\lim_{E\rightarrow\infty}\frac{\#\{\Psi^{ sm}_{\tilde{n}_{j}}:E_{\tilde{n}_{j}}\leq E\}}{\frac{3}{4}\#\{\Psi^{sm}_{n}:E_{n} \leq E\}}=\frac{1}{3}. \tag{32}\]
We conclude that for a step at the origin there is no quantum ergodicity in configuration space, and, in fact, there is a positive measure set of eigenfunctions that concentrate on the classical level sets.
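The counting argument of Eqs. (30)-(32) can be checked numerically. The sketch below is an illustrative check, not the paper's code: it uses zero-based quantum numbers (so odd \(k_{i}\) means an odd 1D eigenfunction) and the non-resonant frequency \(\omega_{2}=\sqrt{2}\) used later in the text, and counts smooth-oscillator states with \(E=(k_{1}+\tfrac{1}{2})+\omega_{2}(k_{2}+\tfrac{1}{2})\leq E_{\max}\):

```python
import math

def odd_odd_fraction(emax, w2=math.sqrt(2)):
    """Fraction of smooth 2D-oscillator states with E <= emax whose
    quantum numbers are both odd (both 1D factors are odd functions).
    Energies: E = (k1 + 1/2) + w2*(k2 + 1/2), k1, k2 = 0, 1, 2, ..."""
    total = odd_odd = 0
    k1 = 0
    while (k1 + 0.5) + w2 * 0.5 <= emax:
        k2 = 0
        while (k1 + 0.5) + w2 * (k2 + 0.5) <= emax:
            total += 1
            if k1 % 2 == 1 and k2 % 2 == 1:
                odd_odd += 1
            k2 += 1
        k1 += 1
    return odd_odd / total

# The fraction tends to 1/4 as emax grows, in line with Eq. (30).
print(odd_odd_fraction(200.0))
```

Dividing this \(1/4\) density by the \(3/4\) volume ratio of Eq. (31), via Weyl's law, reproduces the \(1/3\) density of Eq. (32).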
To examine the behavior for non-symmetric pseudointegrable cases, we study numerically the shifted corner in the harmonic case: we compute the level spacings of the eigenvalues and study the projections of the eigenfunctions to configuration space. Both computations suggest that the shift does not break the concentration of a large subset of eigenfunctions on classical level sets.
It is convenient for the study of the non-symmetric system to keep the step at the origin and shift the original harmonic potential to have a minimum at \((\epsilon_{1},\frac{\epsilon_{2}}{\omega_{2}^{2}})=\epsilon\cdot(\cos\alpha,\sin\alpha)\). Then the potential is of the form \(V=U_{0}+U_{1}\), where \(U_{0}=\frac{q_{1}^{2}}{2}+\frac{\omega_{2}^{2}q_{2}^{2}}{2}\) and \(U_{1}=-\epsilon_{1}q_{1}-\epsilon_{2}q_{2}\). Here, \(\epsilon=0\) corresponds to the system with a step at the origin, and we study the behavior for a non-resonant case at finite values of \(\epsilon\), beyond the small perturbation regime. Figure 4a compares the cumulative mean level spacing distribution of this shifted potential for the first 1500 energy levels to the cumulative Poisson distribution (characterizing integrable systems, \(N_{p}(s)=1-e^{-s}\), reflecting their locality in the classical phase space) and to the cumulative random matrix ensemble distribution, GOE (characterizing chaotic systems,
\(N_{W}(s)=1-e^{-\frac{\pi s^{2}}{4}}\), reflecting their non-local nature in the classical phase space). We obtain intermediate statistics as in pseudointegrable billiards, close to the semi-Poisson distribution (\(N_{sp}(s)=1-e^{-2s}(2s+1)\)) [7] (such a behaviour was also observed in a certain range of parameters in a step-like time-dependent one-d.o.f. Hamiltonian [14]).
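For reference, the three benchmark cumulative spacing distributions used in Figure 4 are easy to tabulate. A minimal illustrative sketch (not the paper's code; the mean spacing is normalized to one, as is standard after unfolding the spectrum):

```python
import math

# Benchmark cumulative level-spacing distributions (mean spacing = 1).
def poisson_cdf(s):       # integrable:   N_p(s)  = 1 - e^{-s}
    return 1.0 - math.exp(-s)

def semi_poisson_cdf(s):  # intermediate: N_sp(s) = 1 - e^{-2s}(2s + 1)
    return 1.0 - math.exp(-2.0 * s) * (2.0 * s + 1.0)

def goe_cdf(s):           # chaotic:      N_W(s)  = 1 - e^{-pi s^2 / 4}
    return 1.0 - math.exp(-math.pi * s * s / 4.0)

# All three have unit mean spacing: the integral of (1 - N(s)) over
# [0, inf) equals 1 for each of them.
def mean_spacing(cdf, ds=1e-3, smax=30.0):
    n = int(smax / ds)
    return sum((1.0 - cdf((i + 0.5) * ds)) * ds for i in range(n))

for cdf in (poisson_cdf, semi_poisson_cdf, goe_cdf):
    print(cdf.__name__, round(mean_spacing(cdf), 4))
```

The distributions differ at small \(s\) in their level repulsion: \(N_{p}(s)\approx s\) (no repulsion), while \(N_{sp}(s)\approx 2s^{2}\) and \(N_{W}(s)\approx\frac{\pi}{4}s^{2}\) vanish quadratically, which is the repulsive behaviour visible in Figure 4b.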
Figure 4a shows that the dependence of the level spacing on \(\epsilon\) appears to be mild and similar to the case \(\epsilon=0\). Recall that in the case of a step at the origin, we showed that there is a positive density sequence of eigenfunctions concentrated on classical level sets. Namely, the level spacing distribution at \(\epsilon=0\) reflects this locality in phase space, together with the non-locality associated with pseudointegrability. Fig. 4a suggests that this behaviour persists when the step is shifted from the origin. In fact, Fig. 4b shows that the distribution with the largest repulsion is achieved at \(\epsilon=0\).
Figure 4: PDF and CDF of the level spacing for a non-resonant Hamiltonian for several positions of the step. The semi-Poisson distribution (solid thick green line) provides the best fit for all positions of the step (dashed lines), including a step at the origin (blue dashed line). (a) Cumulative distribution functions of Poisson, semi-Poisson, GOE and the numerically calculated CDFs. (b) Probability density functions of Poisson, semi-Poisson, GOE and the numerically calculated PDFs. The level spacings are found by a finite-differences scheme for the time-independent Schrödinger equation for the Hamiltonian 20 with \(V=U_{0}+U_{1}\) where \(U_{0}=\frac{q_{1}^{2}}{2}+\frac{\omega_{2}^{2}q_{2}^{2}}{2}\) and \(U_{1}=-\epsilon_{1}q_{1}-\epsilon_{2}q_{2}\). The step is located at the origin and is numerically represented as \(V=10^{28}\). Here, \(\omega_{1}=1,\omega_{2}=\sqrt{2}\) and \((\epsilon_{1},\epsilon_{2})=(0,0),(0.5,0.25),(1,0.5),(1.5,0.75),(\sqrt{3},\frac{\sqrt{3}}{2})\).
To substantiate the claim, suggested by the level spacing plots, that at large energies the general step system still has a positive fraction of wavefunctions that concentrate on classical level sets, we calculate the wavefunctions for such systems. Since the wavefunctions depend continuously on \(\epsilon\), for any given maximal energy and small enough \(\epsilon\), such a fraction of concentrated wavefunctions exists. Hence, we first find the natural scaling of \(\epsilon\) with \(E\) and establish that our wavefunction calculations are far from the trivial limit of \(\epsilon\rightarrow 0\), namely, that the perturbed wavefunctions do not correlate well with the unperturbed wavefunctions.
Expanding the wavefunctions in \(\epsilon\), the first order correction to \(|n(\epsilon)\rangle=|n^{(0)}\rangle+\epsilon|n^{(1)}\rangle+O(\epsilon^{2})\), is:
\(\epsilon|n^{(1)}\rangle=\sum_{k\neq n}\frac{\langle k^{(0)}|U_{1}|n^{(0)} \rangle}{E_{n}^{(0)}-E_{k}^{(0)}}|k^{(0)}\rangle\)
where \(U_{1}=-\epsilon_{1}q_{1}-\epsilon_{2}q_{2}\). For large energies, the number and weight of terms that contribute significantly to the sum are expected to stabilize provided we use the scaling \(\epsilon_{1}\propto\frac{E_{n+1}-E_{n}}{q_{1}}\) and \(\epsilon_{2}\propto\frac{E_{n+1}-E_{n}}{q_{2}}\). Since, for harmonic oscillators, \(q_{i}\propto\sqrt{E}\) and \(N(E)\propto\text{Vol}(E)\propto E^{2}\), so that \(E_{n+1}-E_{n}\propto\frac{1}{E}\), we conclude that the stabilization is achieved provided \(\epsilon\propto E^{-3/2}\). As higher orders of the perturbation series give the same result, we actually expect that \(|n(\epsilon)\rangle-|n^{(0)}\rangle=O(\epsilon E_{n}^{3/2})\). To capture the distance between eigenfunctions of the non-perturbed Hamiltonian and the perturbed one around an energy level \(E_{N}\), we calculate \(P\), the mean squared maximal projection on unperturbed wavefunctions, and \(T\), the mean number of above-threshold contributing unperturbed wavefunctions:
\[\begin{split} P(\epsilon,N;\Delta N,J)&=\frac{1}{ \Delta N}\sum_{n=N}^{N+\Delta N}\max_{j^{0}\leq J}|\langle j^{0}|n(\epsilon) \rangle|^{2}\\ T(\epsilon,N;\Delta N,J,\delta)&=\frac{\sum_{n=N}^ {N+\Delta N}\#(\left|\langle j^{0}|n(\epsilon)\rangle\right|^{2}>\delta)}{ \sum_{n=N}^{N+\Delta N}\sum_{j^{0}=0}^{J}\left\langle j^{0}|n(\epsilon)\rangle ^{2}}.\end{split} \tag{33}\]
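The spacing estimate \(E_{n+1}-E_{n}\propto 1/E\) underlying this scaling can be checked directly on the spectrum. The following illustrative sketch (not the paper's code; it uses the smooth, step-free spectrum with \(\omega_{2}=\sqrt{2}\)) compares the local mean spacing in two energy windows:

```python
import math

def local_mean_spacing(e_center, half_width=2.0, w2=math.sqrt(2)):
    """Mean spacing of 2D-oscillator levels E = (k1+1/2) + w2*(k2+1/2)
    inside the window [e_center - half_width, e_center + half_width]."""
    lo, hi = e_center - half_width, e_center + half_width
    count = 0
    k1 = 0
    while (k1 + 0.5) + w2 * 0.5 <= hi:
        k2 = 0
        while True:
            e = (k1 + 0.5) + w2 * (k2 + 0.5)
            if e > hi:
                break
            if e >= lo:
                count += 1
            k2 += 1
        k1 += 1
    return 2.0 * half_width / count

# Doubling the energy should roughly halve the spacing (spacing ~ 1/E).
ratio = local_mean_spacing(40.0) / local_mean_spacing(80.0)
print(ratio)
```

The ratio comes out close to 2, consistent with the density of states \(\rho(E)\propto E\) used in the scaling argument.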
Figure 5 shows that \(P(\epsilon E_{N}^{3/2},N;\Delta N,J)\) and \(T(\epsilon E_{N}^{3/2},N;\Delta N,J,\delta)\) are, to a good approximation, independent of \(N\), supporting the validity of our scaling. Moreover, while for small \(\epsilon(\frac{E_{N}}{E_{301}})^{3/2}\) we see that, as expected, there is a strong correlation between the perturbed and unperturbed wavefunctions, for \(\epsilon E_{N}^{3/2}\geq E_{301}^{3/2}\) the
maximal projection, \(P\), is small while the level of mixing, \(T\), is large, indicating that for such values of \(\epsilon E^{3/2}\) we are indeed far from the small \(\epsilon\) limit. Additional computations show that a further increase in \(\epsilon E_{N}^{3/2}\) leads to further decrease in \(P\).
Finally, we show that even when \(\epsilon(\frac{E_{n}}{E_{301}})^{3/2}\gg 1\), i.e. when the wavefunctions are not well approximated by the unperturbed wavefunctions, a substantial fraction of the wavefunctions concentrate on classical level sets. Figure 6 shows wavefunctions 1481-1500, in logarithmic scale and normalized by the maximal absolute value of the wavefunctions, for the unperturbed (step at the origin) and perturbed (\(\epsilon=(1.5,0.75)\)) cases (so \(\epsilon(\frac{E_{1500}}{E_{301}})^{3/2}=5.25\)). For both the perturbed and unperturbed systems, wavefunctions that are concentrated along the classical level sets, i.e., are essentially restricted to the configuration space region \((q_{1},q_{2})\in[q_{1}^{min}(E_{1},\epsilon_{1}),q_{1}^{max}(E_{1},\epsilon_{1})]\times[q_{2}^{min}(E_{2},\epsilon_{2},\omega_{2}),q_{2}^{max}(E_{2},\epsilon_{2},\omega_{2})]\setminus S_{q^{wall}}\), where \(q_{i}^{max,min}\) correspond to the classical level set boundaries, are clearly seen (e.g. see wavefunction 1 in the unperturbed system and wavefunction 19 in the perturbed system). We call such wavefunctions concentrated wavefunctions.
Figure 5: Scaling of the perturbed wavefunctions with \(\epsilon\) and energy. (a) The mean maximal projection on unperturbed wavefunctions along the energy-scaled \(\epsilon\), \(\epsilon(\frac{E_{N}}{E_{301}})^{3/2}\): \(P(\epsilon(\frac{E_{N}}{E_{301}})^{3/2},N;10,400)\) (b) The mean number of above-threshold contributing unperturbed wavefunctions along the energy-scaled \(\epsilon\), \(\epsilon(\frac{E_{N}}{E_{301}})^{3/2}\): \(T(\epsilon(\frac{E_{N}}{E_{301}})^{3/2},N;10,400,0.01)\). These functions are plotted for \(N=151,201,251,301\) and for several \(\epsilon=(\epsilon_{1},\epsilon_{2}=\frac{\epsilon_{1}}{2})\) values.
To quantify this observation, we need to distinguish concentrated wavefunctions from wavefunctions which are not concentrated. To this aim we define vertical and horizontal means of the wavefunctions:
\[\begin{split} M_{n}^{H}(q_{2})&=\int_{-\infty}^{\infty }|\Psi_{n}(q_{1},q_{2})|^{2}dq_{1}\\ M_{n}^{V}(q_{1})&=\int_{-\infty}^{\infty}|\Psi_{n}(q _{1},q_{2})|^{2}dq_{2}.\end{split} \tag{34}\]
and suggest that
\[\tilde{E}=\frac{V_{1}(\arg\max_{q_{1}}M_{n}^{V}(q_{1}))+V_{2}(\arg\max_{q_{2}}M_{n}^{H}(q_{2}))}{E} \tag{35}\]
provides a good indicator for the wavefunctions concentration: it is close to one for concentrated wavefunctions and has a much lower value for the rest of the wavefunctions.
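To make the indicator concrete, the following sketch evaluates \(\tilde{E}\) for a product state. It is an illustrative reconstruction, not the paper's code: it takes \(\omega_{1}=\omega_{2}=1\) and ignores the step, so the marginals of Eq. (34) factorize into 1D eigenfunction densities, and the marginal maxima sit near the classical turning points:

```python
import math

def oscillator_eigenfunction(k, qs):
    """Normalized 1D harmonic-oscillator eigenfunctions (hbar = m = omega = 1),
    via the stable recurrence phi_{j+1} = sqrt(2/(j+1)) q phi_j - sqrt(j/(j+1)) phi_{j-1}."""
    phi_prev = [0.0] * len(qs)
    phi = [math.pi ** -0.25 * math.exp(-q * q / 2.0) for q in qs]
    for j in range(k):
        a = math.sqrt(2.0 / (j + 1))
        b = math.sqrt(j / (j + 1.0))
        phi, phi_prev = (
            [a * q * p - b * pp for q, p, pp in zip(qs, phi, phi_prev)],
            phi,
        )
    return phi

def e_tilde(k1, k2):
    """Indicator of Eq. (35) for the product state psi_{k1}(q1) psi_{k2}(q2):
    potential energy at the marginal maxima divided by the total energy E."""
    qs = [0.01 * i for i in range(-800, 801)]
    q1 = max(zip(qs, oscillator_eigenfunction(k1, qs)), key=lambda t: t[1] ** 2)[0]
    q2 = max(zip(qs, oscillator_eigenfunction(k2, qs)), key=lambda t: t[1] ** 2)[0]
    energy = (k1 + 0.5) + (k2 + 0.5)
    return (0.5 * q1 * q1 + 0.5 * q2 * q2) / energy

# The global maximum of |psi_k|^2 is the outermost lobe, just inside the
# classical turning point, so E-tilde is close to (but below) one.
print(e_tilde(15, 15))
```

For a non-concentrated superposition, the marginal maxima can fall well inside the Hill region, pushing \(\tilde{E}\) to much lower values, which is the separation exploited in Figure 7.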
Figures 7(a,b) present \(\tilde{E}\) values in the case of a corner at the origin for low (a) and high (b) ranges of energies. Red points represent \(\tilde{E}\) values for the product wavefunctions of Eq. (28) and constitute around \(1/3\) of the \(20\) \(\tilde{E}\) values. We see that some of the blue points align with the red ones, while others, around \(1/5\) for the lower energies and \(1/2\) for the higher energies, have a much lower value. The insets present \((M_{n}^{H},M_{n}^{V})\) in the positive quadrant for the three
Figure 6: High energy wavefunctions for a step at the origin and for a shifted step. (a) The unperturbed Hamiltonian. (b) The perturbed Hamiltonian with \((\epsilon_{1},\epsilon_{2})=(1.5,0.75)\). The wavefunctions for \(n=1481-1500\) are plotted. To better visualize the main mass concentration we plot \(\log(|\Psi_{n}(q_{1},q_{2};\epsilon)|/\max_{q_{1},q_{2}}|\Psi_{n}(q_{1},q_{2};\epsilon)|)\).
different types of wavefunctions: for a product wavefunction (red point, wavefunction 1 in 7(a)), for a concentrated wavefunction with a similar \(\tilde{E}\) value (blue point, wavefunction 13 in 7(a)) and for a non-concentrated wavefunction with a low \(\tilde{E}\) value (blue point, wavefunction 9 in 7(a)). In the first two cases we recognize an oscillatory structure within the classically allowed region, and we observe that the maximal power appears close to the edge. In contrast, the insets corresponding to the low \(\tilde{E}\) value show a non-oscillatory structure with peaks at arbitrary positions within the Hill region.
Figures 7(c,d) present a similar computation for the case of the shifted potential, \(\epsilon=(1.5,0.75)\), for which there are no product wavefunctions, yet concentrated and not concentrated wavefunction do appear, and the indicator \(\tilde{E}\) seems to distinguish between these two types of wavefunctions.
The reasoning for this indicator is as follows. For a step at the origin, the product wavefunctions (Eq. 28) satisfy \(M_{n}^{H}(q_{2})=|\Psi_{n,2}(q_{2})|^{2}\) for \(q_{2}>0\) and \(M_{n}^{H}(q_{2})=|\Psi_{n,2}(q_{2})|^{2}/2\) for \(q_{2}<0\), and similarly for \(M_{n}^{V}(q_{1})\). Hence, by the WKB approximation (Eq. 29), we indeed expect \(\tilde{E}=1-f(E)\) for some function \(f(E)\) which tends to zero as \(E\) goes to infinity (e.g., Figures 7(a,b) suggest that \(f(E_{500})\approx 0.15\), \(E_{500}=39.9\), and \(f(E_{1500})\approx 0.1\), \(E_{1500}=70.5\)). For non-product wavefunctions that are nevertheless concentrated on some classical configuration space region defined by the partial energies \((E_{1},E_{2})\), the \(\arg\max\) of \(M^{V,H}\) cannot exceed the corresponding \(q_{i}^{max}\). Moreover, since classically one of the momentum components vanishes at the edges of the classical region, the projection of the Liouville measure to the configuration space is larger there; hence, by the correspondence principle, we expect maximal densities near the edges. Thus \(\tilde{E}\) provides the approximate ratio between the sum of the potential energies at the classical region corners (belonging to the boundary of the classical Hill region) and the total energy, so we expect such wavefunctions to have \(\tilde{E}\) values similar to those of the corresponding product wavefunctions. In contrast, for a wavefunction which does not concentrate on a single classical level set, the maxima in the horizontal and vertical directions need not lie on the boundary of the Hill region (see insets corresponding to the lower \(\tilde{E}\) values); the sum of the potential energies at such an interior point leads to a lower value of \(\tilde{E}\).
In conclusion, Figures 6 and 7 suggest that the fraction of concentrated wavefunctions does not vanish at high energies even when the step is shifted.
Figure 7: An indicator for the concentration of wavefunctions on classical level sets. The indicator \(\tilde{E}\) of Eq. 35 is plotted for the case of a step at the origin, for wavefunctions (a) 481-500 and (b) 1481-1500 (the wavefunctions of Fig. 6a), and for the case of a shifted step (\(\epsilon=(1.5,0.75)\)), for wavefunctions (c) 481-500 and (d) 1481-1500 (the wavefunctions of Fig. 6b). The insets present \(M^{H},M^{V}\) for specific points.
## 4 Summary and discussion
We studied the correspondence of a quantum step-oscillator (a two-dimensional quantum oscillator in the presence of a step, a step-like region \(S\) in the configuration space at which the potential energy is infinite) to its classical analog, a pseudointegrable Hamiltonian impact system. For the case of harmonic resonant oscillators with a corner at the origin, for which families of periodic orbits can be explicitly constructed, we demonstrated that the EBK quantization condition provides a good predictor of the energy levels (Figure 8), and that Weyl's law provides a good approximation to the growth in the number of wavefunctions (Figure 3). Moreover, we observed that in even-resonance cases two different families of periodic orbits belonging to the same component of the level set co-exist, with distinct corresponding wavefunctions, each contributing a positive portion to the phase space volume (Figure 3). This demonstrates that the non-ergodicity of level sets has a quantum analog. We showed that the intermediate level spacing of the quantum step-oscillator for non-resonant and not necessarily harmonic potentials hardly depends on the position of the step (taken in the negative quadrant) and is approximately semi-Poisson, indicating repulsion of energy levels, similar to the level spacing obtained for pseudointegrable billiards (Figure 4). When the step is at the origin, we showed that there is a positive fraction of wavefunctions that remain concentrated along the classical level sets at arbitrarily high energies, as occurs for integrable systems; namely, they do not tend to equidistribute in the configuration space as is the case for pseudointegrable billiards (Eq. (28)-(32) and Figures 6a and 7a,b).
Finally, when the corner is shifted from the origin, we conjecture, based on numerical evidence for non-resonant harmonic oscillators, that there is a positive density series of wavefunctions which are not equidistributed and concentrated along the classical level sets (Figures 6b and 7c,d).
## 5 Appendix
**Definition 5.1**.: _A family of periodic orbits of the step system is a set of periodic orbits having identical numbers of turning points \(\mu_{i}\) and impact points \(b_{i}\)._
**Proposition 5.2**.: _For a step at the origin, on level sets with \(\omega_{1}=1,\omega_{2}=\frac{1}{n}\) the step system has:_
_1) For an odd \(n\): exactly one family of p.o. with \(b_{1}=n,\mu_{1}=3n,b_{2}=1,\mu_{2}=3\) and action \(I=\frac{3nI_{1}+3I_{2}}{2}\). Its quantization condition is \(E_{(k)}=\frac{2k}{3n}+\frac{5(1+n)}{6n}\)._
_2) For an even \(n\): exactly two families of p.o., one with \(\mu_{1}^{I}=2n\), \(b_{1}^{I}=n\), \(\mu_{2}^{I}=2\), \(b_{2}^{I}=0\) and action \(I^{I}=\frac{2nI_{1}+2I_{2}}{2}\), and another one with \(\mu_{1}^{II}=n,\mu_{2}^{II}=1,b_{1}^{II}=0,b_{2}^{II}=1\) and action \(I^{II}=\frac{nI_{1}+I_{2}}{2}\). Their quantization conditions are \(E_{(k)}^{I}=\frac{k}{n}+\frac{4n+2}{4n}\), \(E_{(k)}^{II}=\frac{2k}{n}+\frac{n+3}{2n}\)._
Proof.: Consider the Poincaré map restricted to a given level set \(I(H_{1}=e_{1},H_{2}=E-e_{1})\), \(P_{1}:\Sigma_{1}^{wall}\rightarrow\Sigma_{1}^{wall}\), so \(P_{1}:\theta_{2}\rightarrow\bar{\theta}_{2},\theta_{2}\in[0,2\pi)\). Divide this section into \(2n\) equal-length intervals starting from \(\frac{\pi}{2}\) (so \(s_{i}=(\frac{\pi}{2}+(i-1)\frac{\pi}{n},\frac{\pi}{2}+i\frac{\pi}{n})\) mod \(2\pi,\ i=1,...,2n\)). Since \(\omega_{1}=1,\omega_{2}=\frac{1}{n}\) and the step is at the origin (so the width of each strip in the cross is \(\pi\)), the transition between these segments and the corresponding increase in the turning points and impacts is (notice that each crossing of the section \(\theta_{i}=0\) or \(\theta_{i}=\pi\) corresponds to an additional turning point in the \(q_{i}\) direction):
\[\begin{cases}s_{i}\to s_{i+1},(b_{1},\mu_{1},b_{2},\mu_{2}) \rightarrow(b_{1}+1,\mu_{1}+1,b_{2},\mu_{2})&\text{if: }i<n-1\,\,i\neq\left\lfloor\frac{n}{2}\right\rfloor\text{ or }i=2n\\ s_{i}\to s_{i+1},(b_{1},\mu_{1},b_{2},\mu_{2})\rightarrow(b_{1}+1,\mu_{1}+1,b_{2},\mu_{2}+1)&\text{if: }i=\left\lfloor\frac{n}{2}\right\rfloor\\ s_{i}\to s_{i+2},(b_{1},\mu_{1},b_{2},\mu_{2})\rightarrow(b_{1},\mu_{1}+2,b_{2},\mu_{2})&\text{if:}n\leq i<2n-1,i\neq\left\lfloor\frac{3n}{2}\right \rfloor,\left\lfloor\frac{3n}{2}\right\rfloor-1\\ s_{i}\to s_{i+2},(b_{1},\mu_{1},b_{2},\mu_{2})\rightarrow(b_{1},\mu_{1}+2,b_{2},\mu_{2}+1)&\text{if: }i=\left\lfloor\frac{3n}{2}\right\rfloor,i=\left\lfloor\frac{3n}{2}\right\rfloor- 1\\ s_{i}\to s_{n+1},(b_{1},\mu_{1},b_{2},\mu_{2})\rightarrow(b_{1},\mu_{1}+2,b_{2}+1,\mu_{2})&\text{if: }i=2n-1\end{cases}\]
Thus, orbits starting at \(s_{1}\) always reach \(s_{n}\) after \(n\) iterations, undergoing up to this point \(n-1\) impacts with the right wall, \(n\) turning points in the \(q_{1}\) direction (\(n\) crossing of the section \(\theta_{1}=0\)) and \(1\) turning point in the \(q_{2}\) direction.
For an even \(n\), \(s_{n}\) is mapped to \(s_{2n}\) after \(\frac{n}{2}\) iterations without passing through \(s_{2n-1}\), and then maps back to \(s_{1}\). Namely, such
orbits complete a period by visiting all segments \(s_{1},..,s_{n}\) and only even segments between \(s_{n+1}\) and \(s_{2n}\). Thus, \(\mu_{1}^{I}=2n\), \(b_{1}^{I}=n\), \(\mu_{2}^{I}=2\), \(b_{2}^{I}=0\).
The other family of orbits starts at \(s_{n+1}\) and visits only odd segments between \(s_{n+1}\) and \(s_{2n}\). Thus, it undergoes a single impact with the upper wall, has \(n\) turning points in the \(q_{1}\) direction and a single turning point in the \(q_{2}\) direction: \(\mu_{1}^{II}=n,\mu_{2}^{II}=1,b_{1}^{II}=0,b_{2}^{II}=1\).
For an odd \(n\), \(s_{n}\) is mapped to \(s_{2n-1}\) after \(\lfloor\frac{n}{2}\rfloor\) iterations, then impacts the upper wall and maps to \(s_{n+1}\). It then visits only even segments between \(s_{n+1}\) and \(s_{2n}\) and maps back to \(s_{1}\). Thus, \(b_{1}=n,\mu_{1}=3n,b_{2}=1,\mu_{2}=3\).
Now consider \(\omega_{1}=1,\omega_{2}=\frac{m}{n}\):
Figure 8: Trajectories of family I (green) and family II (red) in action-angle coordinates for \(\omega_{1}=1\), \(\omega_{2}=\frac{1}{2}\) with the c-series \([0;2]\). Proposition 5.2 states: \((\mu^{I},b^{I})=((4,2),(2,0))\) and \((\mu^{II},b^{II})=((2,1),(0,1))\). The four different background colors stand for the four \(s_{i}\) segments.
**Definition 5.3**.: \(c\)_-series: Given a rational number \(\frac{m}{n}=\frac{m_{0}}{n_{0}}\), the associated even continued fraction series \([c_{0};c_{1},...,c_{k}]\) and its partial expansions \(\frac{m_{i}}{n_{i}},i=0,..,k\), are defined inductively on \(i\); given \(\frac{m_{i}}{n_{i}}\) and \([c_{0};c_{1},...,c_{i-1}]\) (so at \(i=0\) the previous c-series is empty) define:_
* _If_ \(\frac{m_{i}}{n_{i}}\) _is an integer then_ \(c_{i}=\frac{m_{i}}{n_{i}}\) _and the series ends, so_ \(k=i\)_._
* _Otherwise,_ \[c_{i}=\left\lfloor\frac{m_{i}}{n_{i}}\right\rfloor+N\] _where_ \[N=\begin{cases}1,&\text{if: }\left\lfloor\frac{m_{i}}{n_{i}}\right\rfloor \text{is odd}\\ 0,&\text{if: }\left\lfloor\frac{m_{i}}{n_{i}}\right\rfloor\text{is even}\end{cases}\] _and_ \[\frac{m_{i+1}}{n_{i+1}}=(\frac{m_{i}}{n_{i}}-c_{i})^{-1}\] _where_ \(n_{i+1}\) _always gets the positive sign._
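The inductive definition above is easy to implement. A short illustrative sketch (not from the paper) using exact rational arithmetic; Python's `Fraction` keeps denominators positive, matching the sign convention on \(n_{i+1}\):

```python
from fractions import Fraction

def c_series(m, n):
    """Even continued-fraction (c-series) expansion of m/n per Definition 5.3."""
    x = Fraction(m, n)
    cs = []
    while x.denominator != 1:
        c = x.numerator // x.denominator       # floor of m_i / n_i
        if c % 2 != 0:                         # N = 1 when the floor is odd
            c += 1
        cs.append(c)
        x = 1 / (x - c)                        # next partial expansion
    cs.append(x.numerator)                     # final (possibly odd) c_k
    return cs

# Reproduces the worked examples listed further below:
print(c_series(1, 3))    # [0, 3]
print(c_series(3, 5))    # [0, 2, -3]
print(c_series(29, 12))  # [2, 2, 2, 2]
print(c_series(9, 8))    # [2, -2, 2, -2, 2, -2, 2, -2]
```

All intermediate entries come out even, and the parity of the last entry follows Lemma 5.7 (odd exactly when both \(m\) and \(n\) are odd).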
**Lemma 5.4**.: _A rational number \(\frac{m_{i}}{n_{i}}\) associated with a c-series \([c_{i};...,c_{k}]\) is equal to \(c_{i}+\frac{n_{i+1}}{m_{i+1}}\) where \(\frac{m_{i+1}}{n_{i+1}}\) is associated with \([c_{i+1};...,c_{k}]\)_
Proof.: Derives directly from the definition.
**Lemma 5.5**.: _The c-series is finite and unique, \(k=0\) iff \(\frac{m}{n}\) is an integer, and, for all \(k>0\) and \(0\leq i\leq k-1\), \(c_{i}\) is even, whereas \(c_{k}\) can be either even or odd. For all \(i\), \(c_{i}\) can be negative, \(c_{k}\neq 1\), and for all \(i>0,\ c_{i}\neq 0\)._
Proof.: We use induction. If \(k=0\), \(\frac{m}{n}\) is an integer, the c-series \([c_{k=0}=\frac{m}{n}]\) is finite, and \(c_{k}\) can be even or odd.
Otherwise, at the \(i\)-th step (\(i>0\)), given a non-integer rational number \(\frac{m_{i}}{n_{i}}\) and a sequence \([c_{i};c_{i+1},...,c_{k}]\), we compute \(\frac{m_{i+1}}{n_{i+1}}=\begin{cases}-(\frac{n_{i}-(m_{i}\mod n_{i})}{n_{i}})^{-1},&\text{if: }\left\lfloor\frac{m_{i}}{n_{i}}\right\rfloor\text{ is odd}\\ (\frac{m_{i}\mod n_{i}}{n_{i}})^{-1},&\text{if: }\left\lfloor\frac{m_{i}}{n_{i}}\right\rfloor\text{ is even,}\end{cases}\)
where \(n_{i+1}\) always gets the positive sign. Thus \(|m_{i+1}|=n_{i}\) and \(n_{i+1}<n_{i}\). As a result, after a finite number of steps \(t\leq n_{0}\), \(n_{i+t}=1\), \(\frac{m_{i+t}}{n_{i+t}}\) is an integer, and the series is finite. In addition, \(|\frac{m_{i+1}}{n_{i+1}}|>1\), which implies \(c_{i}\neq 0\). It follows that for any given \(\frac{m_{0}}{n_{0}}\) the process defines all \(m_{i},\,n_{i}\) and \(c_{i}\) uniquely.
**Lemma 5.6**.: _Given \(\frac{m_{i}}{n_{i}}\) associated with \([c_{i};...,c_{k}]\) and \(\frac{m_{i+1}}{n_{i+1}}\) associated with \([c_{i+1};...,c_{k}]\), the evenness properties of \((m_{i+1},n_{i+1})\) are identical to those of \((n_{i},m_{i})\); namely, \(m_{i+1}\mod 2=n_{i}\mod 2\) and \(n_{i+1}\mod 2=m_{i}\mod 2\)._
Proof.: By definition,
\[\frac{m_{i+1}}{n_{i+1}}=(\frac{m_{i}}{n_{i}}-c_{i})^{-1}\]
where \(c_{i}\) is even. Thus, \(|n_{i+1}|\mod 2=|m_{i}-c_{i}n_{i}|\mod 2=m_{i}\mod 2\) and \(|m_{i+1}|\mod 2=|n_{i}|\mod 2=n_{i}\mod 2\).
**Lemma 5.7**.: _Given \(\frac{m_{0}}{n_{0}}\) associated with \([c_{0};...,c_{k}]\):_
_1) if \(m_{0}\) is even and \(n_{0}\) is odd, then \(c_{k}\) is even and \(k\) is even;_
_2) if \(m_{0}\) is odd and \(n_{0}\) is even, then \(c_{k}\) is even and \(k\) is odd;_
_3) if \(m_{0}\) is odd and \(n_{0}\) is odd, then \(c_{k}\) is odd._
Proof.: \(c_{k}=\frac{m_{k}}{n_{k}}\).
1) For an even \(m_{0}\) and an odd \(n_{0}\), assume by contradiction that \(k\) is odd: by Lemma 5.6, \(m_{k}\) is odd and \(n_{k}\) is even. However, \(c_{k}=\frac{m_{k}}{n_{k}}\) is an integer, a contradiction. Thus the only possible option is an even \(k\), an even \(m_{k}\) and an odd \(n_{k}(=1)\), which implies an even \(c_{k}\).
2) For an odd \(m_{0}\) and an even \(n_{0}\), assume by contradiction that \(k\) is even: by Lemma 5.6, \(m_{k}\) is odd and \(n_{k}\) is even. However, \(c_{k}=\frac{m_{k}}{n_{k}}\) is an integer, a contradiction. Thus the only possible option is an odd \(k\), an even \(m_{k}\) and an odd \(n_{k}(=1)\), which implies an even \(c_{k}\).
3) By Lemma 5.6, for an odd \(m_{0}\) and an odd \(n_{0}\), \(m_{k}\) is odd and \(n_{k}\) is odd. Thus \(c_{k}\) is odd as well.
Examples: \(\frac{1}{3}=[0;3]_{c}\), \(\frac{3}{5}=[0;2,-3]_{c}\), \(\frac{2}{5}=[0;2,2]_{c}\), \(\frac{29}{12}=[2;2,2,2]_{c}\), \(\frac{9}{8}=[2;-2,2,-2,2,-2,2,-2]_{c}\).
**Lemma 5.8**.: _The c-series is identical to the continued fraction series iff the continued fraction is of the form \([a_{0};a_{1},...,a_{k}]\) where \(a_{i}\) is even for \(0\leq i\leq k-1\) and \(a_{k}\) can be either even or odd._
Proof.: Derives directly from the definition.
**Definition 5.9**.: _The right impact interval, \(J_{r}^{\omega_{2}}\) (for a level set with \(\omega_{1}=1\) and a given \(\omega_{2}\)), is the set of all \(\theta_{2}\) at \(\Sigma_{1}^{wall}\) which are mapped by the flow to the step region, namely, satisfying \(|\theta_{2}+2\theta_{1}^{wall}\frac{\omega_{2}}{\omega_{1}}\mod 2\pi|\geq\theta_{2}^{wall}\)._
**Lemma 5.10**.: _The right impact intervals of step systems with a corner at the origin whose frequencies \(\omega_{2}\) differ by an even integer are identical: if \(\omega_{1}=1\), \(\omega_{2}=\frac{m}{n}\) and \(\omega_{1}^{\prime}=1\), \(\omega_{2}^{\prime}=\frac{m}{n}+2L\) for an integer \(L\), then \(J_{r}^{\frac{m}{n}+2L}=J_{r}^{\frac{m}{n}}\)._
Proof.: \(J_{r}^{\frac{m}{n}}=\{\theta_{2}:|\theta_{2}+\pi\frac{m}{n}\mod 2\pi|>\theta_{2}^{wall}\}\). Since \(|\theta_{2}+\pi\frac{m}{n}+2\pi L\mod 2\pi|=|\theta_{2}+\pi\frac{m}{n}\mod 2\pi|\), we get \(J_{r}^{\frac{m}{n}}=J_{r}^{\frac{m}{n}+2L}\).
**Lemma 5.11**.: _A step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\) and a step system with a corner at the origin with \(\omega_{1}^{\prime}=1\) and \(\omega_{2}^{\prime}=\frac{m}{n}+2L\) for an integer \(L\) have the same number of families of p.o._
Proof.: Given \(\omega_{1}=1\) and the rational frequencies \(\omega_{2}=\frac{m}{n}\) and \(\omega_{2}^{\prime}=\frac{m}{n}+2L\), the Poincaré map \(P_{1}^{\omega_{2}}:\Sigma_{1}^{wall}\rightarrow\Sigma_{1}^{wall}\) satisfies the relation \(P_{1}^{\frac{m}{n}+2L}=P_{1}^{\frac{m}{n}}\). This derives from Lemma 5.10, \(J_{r}^{\frac{m}{n}}=J_{r}^{\frac{m}{n}+2L}\), realizing that for \(\theta_{1}\in[-\frac{\pi}{2},\frac{\pi}{2}]\) (the vertical center of the cross) the \(\theta_{2}\) coordinate is increased by \(2\pi L\), whereas for \(\theta_{1}\in[\frac{\pi}{2},\frac{3\pi}{2}]\) (the horizontal part of the cross) the \(\theta_{2}\) coordinate is increased by \(4\pi L\), so altogether:
\(P_{1}^{\frac{m}{n}+2L}=\begin{cases}P_{1}^{\frac{m}{n}}+2\pi L,&\text{if: }\theta_{2}\in J_{r}^{\frac{m}{n}}\\ P_{1}^{\frac{m}{n}}+6\pi L,&\text{else.}\end{cases}\)
Taken \(\mod 2\pi\), the two maps are identical and have the same dynamics.
**Lemma 5.12**.: _A step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\) and a step system with a corner at the origin with \(\omega_{1}^{\prime}=1\) and \(\omega_{2}^{\prime}=\frac{n}{m}\) have the same number of families of p.o._
Proof.: Given \(\omega_{1}=1\) and the rational frequencies \(\omega_{2}=\frac{m}{n}\) and \(\omega_{2}^{\prime}=\frac{n}{m}\), the Poincaré maps \(P_{i}:\Sigma_{i}^{wall}\rightarrow\Sigma_{i}^{wall}\) satisfy the relation \(P_{1}^{\frac{m}{n}}=P_{2}^{\frac{n}{m}}\). This derives strictly from the definition of \(P_{i}\). The number of families of p.o. is the number of distinct cycles of \(P_{i}\), which must be identical in both cases because of the above relation.
**Lemma 5.13**.: _A step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m_{i}}{n_{i}}\) associated with a c-series \([c_{i};...,c_{k}]\) and a step system with a corner at the origin with \(\omega_{1}^{\prime}=1\) and \(\omega_{2}^{\prime}=\frac{m_{i+1}}{n_{i+1}}\) associated with a c-series \([c_{i+1};...,c_{k}]\) have the same number of families of p.o._
Proof.: By Lemma 5.4, \(\frac{m_{i}}{n_{i}}=c_{i}+\frac{n_{i+1}}{m_{i+1}}\). Thus, the lemma derives directly from Lemmas 5.11 and 5.12.
**Proposition 5.14**.: _A step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m_{0}}{n_{0}}\) associated with a c-series \([c_{0};...,c_{k}]\) has two families of periodic orbits if \(c_{k}\) is even and one if \(c_{k}\) is odd._
Proof.: By Lemma 5.13, a step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m_{0}}{n_{0}}\) associated with a c-series \([c_{0};...,c_{k}]\) and a step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{1}{c_{k}}\) have the same number of families of periodic orbits. By Proposition 5.2, if \(c_{k}\) is even there are two families of periodic orbits, and if \(c_{k}\) is odd there is one.
**Lemma 5.15**.: _For a step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m_{i}}{n_{i}}\) associated with a c-series \([c_{i};...,c_{k}]\) with an odd \(c_{k}\) (so \(m_{i}\) and \(n_{i}\) are odd): \(\mu_{2}=3|m_{i}|\) and \(\mu_{1}=3n_{i}\)._
Proof.: We prove the claim by induction. For \(i=k\): we recall that by Proposition 5.2, \(\mu_{2}=3|c_{k}|,\mu_{1}=3\). Now assume the lemma holds for \(i=k-l\): then \(\omega_{2}=\frac{m_{i}}{n_{i}}\), thus \(\mu_{2}=3|m_{i}|\) and \(\mu_{1}=3n_{i}\). Now, for \(i-1=k-(l+1)\): \(\omega_{2}^{\prime}=\frac{m_{i-1}}{n_{i-1}}\) is associated with \([c_{i-1};...,c_{k}]\). From Lemma 5.4, \(\omega_{2}^{\prime}=c_{i-1}+\frac{n_{i}}{m_{i}}\) and \(c_{i-1}\) is even. Thus, \(P_{1}^{\omega_{2}^{\prime}}=P_{2}^{\omega_{2}}\) (and vice versa), so \(\mu_{2}(\omega_{2}^{\prime})=|\mu_{1}(\omega_{2})+3c_{i-1}|=3|n_{i}+c_{i-1}|=3|m_{i-1}|\) and also \(\mu_{1}(\omega_{2}^{\prime})=\mu_{2}(\omega_{2})=3|m_{i}|=3n_{i-1}\).
**Lemma 5.16**.: _For a step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m_{i}}{n_{i}}\) associated with a c-series \([c_{i};...,c_{k}]\) with an even \(c_{k}\) (so \(m_{i}\mod 2\neq n_{i}\mod 2\)): \(\mu_{2}^{I}=2|m_{i}|\), \(\mu_{1}^{I}=2n_{i}\), \(\mu_{2}^{II}=|m_{i}|\) and \(\mu_{1}^{II}=n_{i}\)._
Proof.: We prove the claim by induction. For \(i=k\): we recall that by Proposition 5.2, \(\mu_{2}^{I}=2|c_{k}|\) and \(\mu_{2}^{II}=|c_{k}|\). Now assume the lemma holds for \(i=k-l\): then \(\omega_{2}=\frac{m_{i}}{n_{i}}\), thus \(\mu_{2}^{I}=2|m_{i}|\), \(\mu_{1}^{I}=2n_{i}\), \(\mu_{2}^{II}=|m_{i}|\) and \(\mu_{1}^{II}=n_{i}\). Now, for \(i-1=k-(l+1)\): \(\omega_{2}^{\prime}=\frac{m_{i-1}}{n_{i-1}}\) is associated with \([c_{i-1};...,c_{k}]\). By Lemma 5.4, \(\omega_{2}^{\prime}=c_{i-1}+\frac{n_{i}}{m_{i}}\) and \(c_{i-1}\) is even. Thus, by Lemmas 5.11 and 5.12, \(P_{1}^{\omega_{2}^{\prime}}=P_{2}^{\omega_{2}}\) (and vice versa) and \(\mu_{2}^{I}(\omega_{2}^{\prime})=|\mu_{1}^{I}(\omega_{2})+2c_{i-1}|=2|n_{i}+c_{i-1}|=2|m_{i-1}|\) and, similarly, \(\mu_{2}^{II}=|m_{i-1}|\). For the same reason, \(\mu_{1}^{I}=2n_{i-1}\) and \(\mu_{1}^{II}=n_{i-1}\).
**Proposition 5.17**.: _For a step system with a corner at the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\) with odd \(m\) and odd \(n\): \(\mu_{1}=3n,\mu_{2}=3m,b_{1}=n,b_{2}=m\)._
Proof.: By Lemma 5.15, \(\mu_{1}=3n\); thus, over one period, \(P_{1}\) goes through a cycle of \(2n\) iterations. For \(n\) of these iterations \(\theta_{2}\in J_{r}\); thus, \(b_{1}=n\).
The other \(n\) iterations are for \(\theta_{2}\notin J_{r}\). We can write \(\omega_{2}=\frac{m}{n}=\left\lfloor\frac{m}{n}\right\rfloor+\frac{m\bmod n}{n}\). Thus we can divide these \(n\) iterations into \(m\bmod n\) iterations with \(\left\lfloor\frac{m}{n}\right\rfloor+1\) impacts with the upper wall and \(n-(m\bmod n)\) iterations with \(\left\lfloor\frac{m}{n}\right\rfloor\) impacts with the upper wall.
Thus, \(b_{2}=n\left\lfloor\frac{m}{n}\right\rfloor+(m\bmod n)=m\).
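This distribution of impacts can be sanity-checked numerically. The following Python snippet (an illustration added here, not part of the original proof; the function name is ours) verifies that splitting the \(n\) iterations as above always yields \(b_{2}=m\):

```python
# Sanity check for Proposition 5.17: distributing the upper-wall impacts over
# the n iterations outside J_r gives b_2 = m for odd m and n.
def upper_wall_impacts(m, n):
    q, r = divmod(m, n)              # m = q*n + r with r = m mod n
    # r iterations carry q + 1 impacts, the remaining n - r carry q impacts
    return r * (q + 1) + (n - r) * q

for m, n in [(3, 1), (5, 3), (9, 5), (15, 7), (21, 11)]:
    assert upper_wall_impacts(m, n) == m
```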
**Proposition 5.18**.: _The quantized energy levels for a harmonic oscillator with corner in the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\), for odd \(m\) and \(n\), are: \(E_{(k)}=\frac{2k}{3n}+\frac{1+\frac{m}{n}}{2}+\frac{n+m}{3n}=\frac{2k}{3n}+\frac{5(m+n)}{6n}\)_
Proof.: The quantization conditions are derived from the equation:
\(\frac{\mu_{1}I_{1}+\mu_{2}I_{2}}{2}=k+\frac{\mu_{1}+\mu_{2}}{4}+\frac{b_{1}+b_{2}}{2}\)
For the harmonic oscillator: \(I_{i}=\frac{E_{i}}{\omega_{i}}\).
1) For odd \(m\) and odd \(n\): \(\frac{3nE_{1}+3m\frac{nE_{2}}{m}}{2}=\frac{3n(E_{1}+E_{2})}{2}=k+\frac{3n+3m}{4}+\frac{n+m}{2}\)
With \(E_{(k)}=E_{1}+E_{2}\), this gives \(E_{(k)}=\frac{2k}{3n}+\frac{n+m}{2n}+\frac{n+m}{3n}=\frac{2k}{3n}+\frac{5(n+m)}{6n}\).
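The algebra above can be verified with exact rational arithmetic. The snippet below (illustrative only, using Python's `fractions` module; the function name is ours) checks that the claimed \(E_{(k)}\) satisfies the quantization condition with \(\mu_{1}=3n\), \(\mu_{2}=3m\), \(b_{1}=n\), \(b_{2}=m\), and \(I_{i}=E_{i}/\omega_{i}\):

```python
from fractions import Fraction as F

# Check that E_(k) = 2k/(3n) + 5(m+n)/(6n) satisfies
#   (mu1*I1 + mu2*I2)/2 = k + (mu1 + mu2)/4 + (b1 + b2)/2,
# where mu1*I1 + mu2*I2 = 3n*E1 + 3m*(n/m)*E2 = 3n*(E1 + E2) = 3n*E_(k).
def satisfies_quantization(k, m, n):
    E = F(2 * k, 3 * n) + F(5 * (m + n), 6 * n)
    lhs = F(3 * n, 2) * E
    rhs = k + F(3 * n + 3 * m, 4) + F(n + m, 2)
    return lhs == rhs

assert all(satisfies_quantization(k, m, n)
           for k in range(6) for m, n in [(3, 1), (5, 3), (7, 5)])
```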
Let \(b_{s,i}\) and \(\mu_{s,i}\), \(s\in\{1,2\}\), denote the numbers of impacts and turning points, respectively, for a step system with corner in the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m_{i}}{n_{i}}\) associated with a c-series \([c_{i};...,c_{k}]\).
**Proposition 5.19**.: _A step system with a corner in the origin with \(\omega_{1}=1\) and \(\omega_{2}=\frac{m}{n}\) associated with a c-series \([c_{0};...,c_{k}]\), with even \(m\) and odd \(n\), has two families of periodic orbits such that: \(\mu_{1}^{I}=2n\), \(\mu_{2}^{I}=2m\), \(\mu_{1}^{II}=n\), \(\mu_{2}^{II}=m\). The number of impacts can be calculated backwards inductively as: \(b_{1,i}^{I}=b_{2,i+1}^{I}\), \(b_{2,i}^{I}=b_{1,i+1}^{I}+c_{i}(\frac{\mu_{1,i}^{I}-b_{1,i}^{I}}{2})\), and \(b_{1,i}^{II}=b_{2,i+1}^{II}\), \(b_{2,i}^{II}=b_{1,i+1}^{II}+c_{i}(\frac{\mu_{1,i}^{II}-b_{1,i}^{II}}{2})\)_
Proof.: Recall that by Lemma 5.4, \(\omega_{2}=\frac{m_{i}}{n_{i}}=\frac{n_{i+1}}{m_{i+1}}+c_{i}\). For \(\omega_{2}^{\prime}=\frac{n_{i+1}}{m_{i+1}}=\frac{1}{\omega_{2,i+1}}\) we know that: \(b_{1}^{\prime I}=b_{2,i}^{I}\), \(b_{2}^{\prime I}=b_{1,i}^{I}\), and \(b_{1}^{\prime II}=b_{2,i}^{II}\), \(b_{2}^{\prime II}=b_{1,i}^{II}\). Now, as \(c_{i}\) is even, the step systems with \(\omega_{2}^{\prime}=\frac{n_{i+1}}{m_{i+1}}\) and \(\omega_{2}=\frac{n_{i+1}}{m_{i+1}}+c_{i}\) have the same Poincaré map \(P_{1}\). By definition, the number of rounds the periodic trajectory makes, for both \(\omega_{2}\) and \(\omega_{2}^{\prime}\), in the horizontal part of the cross (\(\theta_{1}\in[\frac{\pi}{2},\frac{3\pi}{2}]\)) is \(\frac{\mu_{1,i}-b_{1,i}}{2}\), and in each such iteration the \(\omega_{2}\) system has \(c_{i}\) additional impacts with the upper wall. The number of impacts with the right wall (\(b_{1}\)) does not change, as \(P_{1}\) does not change.
# Latent Gaussian dynamic factor modeling and forecasting for multivariate count time series

Younghoon Kim, Marie-Christine Düker, Zachary F. Fisher, Vladas Pipiras

2023-07-19, arXiv: http://arxiv.org/abs/2307.10454v2
###### Abstract
This work considers estimation and forecasting in a multivariate count time series model based on a copula-type transformation of a Gaussian dynamic factor model. The estimation is based on second-order properties of the count and underlying Gaussian models and applies to the case where the model dimension is larger than the sample length. In addition, novel cross-validation schemes are suggested for model selection. The forecasting is carried out through a particle-based sequential Monte Carlo, leveraging Kalman filtering techniques. A simulation study and an application are also considered.
_Keywords--_ Count time series, dynamic factor model, Hermite expansions, Yule-Walker equations, principal components, sequential Monte Carlo.
## 1 Introduction
This work develops theory, estimation, and forecasting methods for dynamic factor modeling of discrete-valued multivariate time series. Count time series are exceedingly common across the natural, health, and social sciences. Examples include monthly counts of earthquakes whose magnitude exceeds a certain threshold, daily counts of individuals infected with a virus, and minute-by-minute recordings of conversational interactions between two communicating partners. Specifically, we are interested in a \(d-\)vector time series \(X_{t}=(X_{i,t})_{i=1,\ldots,d}\), where \(X_{i,t}\in\mathbb{N}_{0}=\{0,1,2,\ldots\}\) and \(t\in\mathbb{Z}\) is
time. The considered model accommodates the range \(\mathbb{Z}\) easily as well but counts and the range \(\mathbb{N}_{0}\) are encountered more commonly in practice. Note that any set of finite discrete values can also be encoded as a subset of \(\mathbb{N}_{0}\). The focus is on stationary models throughout, though the possibilities of covariates and differencing will also be considered.
In general, modeling time series with discrete (count) values is delicate. In the continuous case, the class of autoregressive moving average (ARMA) models parsimoniously spans all non-deterministic stationary series (by the classical Wold decomposition). In the count setting, the landscape is far less determined, with no single class of models dominating in popularity. In fact, researchers have devised many different methods for constructing stationary count time series. The majority of work on count time series has been devoted to the univariate case. Popular approaches include those based on thinning operators (e.g. McKenzie, 1985, Alzaid and Al-Osh, 1993) and the generalized state-space models (e.g. Davis et al., 2016), including Markov chain and Hidden Markov Models (HMMs), and Bayesian dynamic models (e.g. Gamerman et al., 2016). Integer-valued autoregressive conditional heteroskedasticity modeling (e.g. Ferland et al., 2006, Fokianos et al., 2009) is another popular observation-driven approach. Recent reviews of this research area are given in Weiss (2018) and Davis et al. (2021).
Compared to univariate count series, the analysis of multivariate, and potentially high-dimensional, count series has received considerably less attention. For a review of approaches for multivariate counts, see Karlis (2016) and for recent work on generalized linear model (GLM) constructions in high-dimensional settings, see Chen et al. (2017), Hall et al. (2018), Mark et al. (2018, 2019), and Fokianos et al. (2020). Dynamic factor models for multivariate count time series have been considered by Jung et al. (2011), Cui and Dunson (2014), Wang and Wang (2018), and Brauning and Koopman (2020) which posit conditional distributions akin to GLM in their likelihood maximization procedures. In a collection of articles on count time series, Karlis (2016) surveys relatively recent works on multivariate discrete-valued time series models.
In a recent paper, Jia et al. (2023) proposed a general Gaussian copula-based approach to model count series with a flexible and most general correlation structure that can accommodate any count marginal distribution. The marginal distribution can exhibit over- or under-dispersion, or have zero-inflation. The autocovariance function (ACVF) of the model is as general as possible for a given marginal distribution, capable of achieving the most negative pairwise correlations. In particular, the range of attainable correlations for this model can be obtained from the result of Whitt (1976). Kong and Lund (2023) further extended the model to incorporate periodic and seasonal features by
replacing the vanilla ARMA with periodic autoregressive moving average (PARMA) and seasonal autoregressive moving average (SARMA) models.
The goal of this work is to propose a multivariate extension of Jia et al. (2023) based on latent Gaussian dynamic factor models (DFMs) where the factors follow stationary vector autoregressive (VAR) models. The exact definition of our model is given in (1), (3) and (4) below. We propose new methods for estimating the dynamic factor model parameters and for forecasting with the estimated model. On the estimation side, our method can be applied when the dimension \(d\) is large, possibly much larger than the sample length itself. To make that possible, we exploit second-order properties of the count and underlying Gaussian models, in conjunction with principal component and Yule-Walker estimation. We additionally suggest novel cross-validation schemes related to model selection, namely, the number of factor series and the lag order in their VAR model. We expect our proposed estimators to be consistent in theory based on recent results from Duker et al. (2023), but the proof of consistency goes beyond the scope of this work and will appear elsewhere (see Remark 3.2 below). The forecasting is carried out through a particle-based sequential Monte Carlo method, leveraging Kalman filtering techniques. Lastly, the modeling and forecasting approach is illustrated on real psychometrics data. With these data, we find the model interpretation and the understanding of forecasting issues particularly interesting and informative.
In psychometrics, latent factor models for discrete-valued data have been in increasing demand because of the abundance of such categorical data, and an expectation for them to be explained by the several presumed latent factors (e.g. Lebo and Nesselroade, 1978). The most common approach is based on polychoric correlations (e.g. Muthen, 1984) with implementations in various structural equation modeling software, including the popular R package lavaan(Rosseel, 2012). In time series and Bayesian context, Zhang and Nesselroade (2007) and Zhang and Browne (2010) introduced a categorical dynamic factor model. These approaches are based on the discretization of Gaussian variables through thresholding, akin to the modeling approach taken here. However, the estimation methods are not well-suited to higher-dimensional settings, or time-dependent data more generally, where forecasting may be of interest. We refer readers to Robitzsch (2020) for a review of this approach to the factor analysis of ordinal data in psychometrics.
In summary, our main contributions and highlights of the paper are as follows:
* The introduction of a dynamic factor model for multivariate discrete-valued time series.
* The proposal of a relatively simple approach to parameter estimation, for which theoretical
foundations are laid elsewhere.
* The development of forecasting schemes specific to the discrete nature of the model and time series data.
* An instructive application to psychometrics showcasing the model and forecasting appeal in terms of interpretability and flexibility.
The rest of the paper is organized as follows. Section 2 introduces the latent Gaussian dynamic factor model and establishes relationships between the second-order dependence structures of count and underlying Gaussian models. The estimation procedure is provided in Section 3. Section 4 discusses forecasting. Numerical experiments can be found in Section 5, followed by an illustrative application in Section 6. We close our paper with comments for future work in Section 7.
## 2 Latent Gaussian dynamic factor model
### Model formulation
For a \(d-\)vector time series \(\{X_{t}\}_{t\in\mathbb{Z}}=\{(X_{i,t})_{i=1,\ldots,d}\}_{t\in\mathbb{Z}}\), a latent Gaussian dynamic factor model is defined as follows. For \(i=1,\ldots,d\), each component series \(X_{i,t}\) is given by
\[X_{i,t}=F_{i}^{-1}(\Phi(Z_{i,t}))=:G_{i}(Z_{i,t}), \tag{1}\]
where \(F_{i}\) is a CDF, \(F_{i}^{-1}(u)=\inf\{v:F_{i}(v)\geq u\}\) is its (generalized) inverse, \(\Phi(z)\) is the CDF of \(\mathcal{N}(0,1)\) distribution, and \(Z_{i,t}\) is a zero mean, unit variance, Gaussian stationary series defined below. The CDFs \(F_{i}\) are thought to come from parametric families, parameterized by a (possibly vector) parameter \(\theta_{i}\). Note that by construction (1), \(X_{i,t}\) is a stationary series and its marginal distribution is \(F_{i}\). We focus here on discrete distributions \(F_{i}\) taking nonnegative integer values \(\mathbb{N}_{0}\) such as Bernoulli, Poisson, negative binomial, and so on. For example, if \(F_{i}\) is the CDF of a Bernoulli distribution with parameter \(p_{i}\), \(\text{Bern}(p_{i})\) for short, the model (1) becomes \(X_{i,t}=1_{\{\Phi(Z_{i,t})>1-p_{i}\}}=1_{\{Z_{i,t}>\Phi^{-1}(1-p_{i})\}}\). More generally, if \(F_{i}\) is a CDF whose support lies in \(\mathbb{N}_{0}\), for example a Poisson distribution with parameter \(\lambda_{i}\), then \(X_{i,t}\) is represented through \(Z_{i,t}\) by
\[X_{i,t}=\sum_{n=1}^{\infty}n1_{\{\Phi^{-1}(C_{i,n-1})<Z_{i,t}\leq\Phi^{-1}(C_{ i,n})\}},\quad C_{i,n}=\mathbb{P}(X_{i,t}\leq n)=F_{i}(n),\ n=0,1,\ldots. \tag{2}\]
In fact, (2) defines a count random variable \(X_{i,t}\) represented through \(Z_{i,t}\) that follows any marginal distribution \(F_{i}\). However, we shall use the examples of Bernoulli, Poisson, and negative binomial marginal distributions for illustration throughout the paper. Whereas the Poisson distribution with a large parameter is close to Gaussian and the latent \(Z_{i,t}\)'s are effectively observed, note that the Bernoulli case lies at the other extreme and is expected to be most difficult to deal with in our tasks. Additionally, our model (2) can allow parameters \(\theta_{i}\) to vary over time, wherein the bins \((\Phi^{-1}(C_{i,n-1}),\Phi^{-1}(C_{i,n})]\) for \(Z_{i,t}\) can depend on time. This leads to nonstationary models which will be discussed briefly below.
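As a concrete illustration of (1)-(2), the following Python sketch (ours, not the authors' code) maps a latent Gaussian value to a count with a Poisson(\(\lambda\)) marginal by locating the bin \((\Phi^{-1}(C_{i,n-1}),\Phi^{-1}(C_{i,n})]\) that contains \(Z\):

```python
import math
from statistics import NormalDist

def poisson_cdf(n, lam):
    """C_n = F(n) for a Poisson(lam) marginal."""
    return sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(n + 1))

def count_from_gaussian(z, lam):
    """X = F^{-1}(Phi(z)): the smallest n with F(n) >= Phi(z), as in (2)."""
    u = NormalDist().cdf(z)
    n = 0
    while poisson_cdf(n, lam) < u:
        n += 1
    return n

# The map is monotone: larger latent values give larger counts.
counts = [count_from_gaussian(z, 3.0) for z in (-3.0, 0.0, 2.5)]
assert counts == sorted(counts)
```

For example, with \(\lambda=3\), the latent value \(z=0\) lands in the bin of the Poisson median, giving \(X=3\).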
We are interested in the scenario where the underlying Gaussian series \(Z_{i,t}\) obeys a DFM. More specifically, we suppose that the \(d-\)vector time series \(Z_{t}=(Z_{i,t})_{i=1,\ldots,d}\) satisfies
\[Z_{t}=\Lambda Y_{t}+\varepsilon_{t}, \tag{3}\]
where \(\Lambda\) is a \(d\times r\) loadings matrix, and \(\varepsilon_{t}\) are i.i.d. \(\mathcal{N}(0,\Sigma_{\varepsilon})\) random \(d-\)vectors (independent of \(Y_{t}\)'s) and \(r-\)vector factor series \(Y_{t}=(Y_{k,t})_{k=1,\ldots,r}\) follows a stationary VAR model of order \(p\), VAR\((p)\) for short, given by
\[Y_{t}=\Psi_{1}Y_{t-1}+\ldots+\Psi_{p}Y_{t-p}+\eta_{t}, \tag{4}\]
where \(\Psi_{1},\ldots,\Psi_{p}\) are \(r\times r\) matrices and \(\eta_{t}\) are i.i.d. \(\mathcal{N}(0,\Sigma_{\eta})\) random \(r-\)vectors. The VAR model (4) is flexible enough to capture any temporal dependence from a practical standpoint. Note that the DFM (3) is in the so-called static form. The generalized DFMs where (3) includes lags of \(Y_{t}\) (e.g. Forni et al., 2000, 2005) go beyond the scope of this work.
Note that the unit variance of \(Z_{i,t}\) is assumed in (1). For general \(Z_{i,t}\), one can standardize it to have unit variance. More generally, the autocovariance function (ACVF) \(\Sigma_{Z}(h)=\mathbb{E}Z_{t+h}Z_{t}^{\prime}\) of \(Z_{t}\) at lag \(h\) can similarly become autocorrelation function (ACF) \(R_{Z}(h)\) as
\[R_{Z}(h)=\text{diag}(\Sigma_{Z}(0))^{-1/2}\Sigma_{Z}(h)\text{diag}(\Sigma_{Z} (0))^{-1/2}. \tag{5}\]
We use \(R_{Z}(h)\) for the rest of the analysis so the unit variance assumption for \(Z_{i,t}\) is made throughout.
### Relation between count and Gaussian correlations
Our estimation procedure is based on the following property of the model (1). It is known (e.g. Pipiras and Taqqu, 2017) that, for any \(i,j=1,\ldots,d\),
\[R_{X,ij}(h)=L_{ij}(R_{Z,ij}(h)) \tag{6}\]
or, in short, and entry-wise,
\[R_{X}(h)=L(R_{Z}(h)), \tag{7}\]
where \(L_{ij}:[-1,1]\mapsto[-1,1]\) are functions to be referred to as link functions (and \(L\) as a link function). Furthermore, \(L_{ij}\) depends only on the CDFs \(F_{i}\) and \(F_{j}\) and can be expressed as described next.
For \(k=0,1,\ldots\), let \(H_{k}(z)=(-1)^{k}e^{z^{2}/2}(d^{k}e^{-z^{2}/2}/dz^{k})\) be the Hermite polynomial of order \(k\) and
\[g_{i,k}=\frac{1}{k!}\int_{-\infty}^{\infty}G_{i}(z)H_{k}(z)\frac{e^{-z^{2}/2} }{\sqrt{2\pi}}dz=\frac{1}{k!}\mathbb{E}[G_{i}(Z_{i,0})H_{k}(Z_{i,0})] \tag{8}\]
be the corresponding Hermite coefficient of the function \(G_{i}(z)\) in (1), so that \(G_{i}(z)=\sum_{k=0}^{\infty}g_{i,k}H_{k}(z)\). For \(G_{i}(z)\) associated with CDF \(F_{i}\) on nonnegative integers, Jia et al. (2023) showed that
\[g_{i,k}=\frac{1}{k!\sqrt{2\pi}}\sum_{n=0}^{\infty}e^{-\Phi^{-1}(C_{i,n})^{2}/2 }H_{k-1}(\Phi^{-1}(C_{i,n})), \tag{9}\]
where \(C_{i,n}=\mathbb{P}(X_{i,t}\leq n)=F_{i}(n)\). When \(\Phi^{-1}(C_{i,n})=\pm\infty\) for \(C_{i,n}=0\) or \(1\), the summand \(e^{-\Phi^{-1}(C_{i,n})^{2}/2}H_{k-1}(\Phi^{-1}(C_{i,n}))\) is interpreted as zero. For example, for \(F_{i}=\text{Bern}(p_{i})\), \(g_{i,k}=e^{-\Phi^{-1}(p_{i})^{2}/2}H_{k-1}(\Phi^{-1}(p_{i}))/(k!\sqrt{2\pi})\). Similarly, \(g_{i,k}\) can be computed when \(F_{i}=\text{Pois}(\lambda_{i})\) but (9) will have infinitely many terms. However, the number of terms will be finite and small in practice. This is because the Poisson distribution is light-tailed so that \(C_{i,n}\) is indistinguishable from \(1\) numerically even at moderate \(n\).
The link functions \(L_{ij}\) can now be expressed as
\[L_{ij}(u)=\sum_{k=1}^{\infty}\frac{k!g_{i,k}g_{j,k}}{\Sigma_{X,ii}(0)^{1/2} \Sigma_{X,jj}(0)^{1/2}}u^{k}=:\sum_{k=1}^{\infty}\ell_{ij,k}u^{k}. \tag{10}\]
Under mild assumptions, they can be shown to be monotonically increasing on the interval \((-1,1)\) (see Proposition A.1. in the appendix of Jia et al., 2023). Note that \(L_{ij}(0)=0\) regardless of the
marginal distributions. The quantities \(\rho_{+,ij}=L_{ij}(1)\) and \(\rho_{-,ij}=L_{ij}(-1)\) are given by
\[\rho_{+,ij}=L_{ij}(1)=\text{Corr}(G_{i}(Z),G_{j}(Z)),\quad\rho_{-,ij}=L_{ij}(-1) =\text{Corr}(G_{i}(Z),G_{j}(-Z)) \tag{11}\]
for \(Z=\mathcal{N}(0,1)\). When \(i=j\), \(\rho_{+,ij}=1\) but usually \(\rho_{-,ij}>-1\). As noted in Jia et al. (2023), \(\rho_{+,ij}\) and \(\rho_{-,ij}\) are the largest and smallest correlations that two dependent count variables with marginals \(F_{i}\) and \(F_{j}\) can achieve; by (11), they are achieved with construction \(G(Z)\) and hence within our considered model. For example, when \(F_{i}=\text{Bern}(p_{i})\), it can be shown by using (11) that
\[L_{ij}(1)=\left\{\begin{array}{ll}\sqrt{\frac{p_{i}(1-p_{j})}{p_{j}(1-p_{i}) }},&\text{if }p_{i}\leq p_{j},\\ \sqrt{\frac{p_{j}(1-p_{i})}{p_{i}(1-p_{j})}},&\text{if }p_{j}<p_{i},\end{array} \right.\qquad L_{ij}(-1)=\left\{\begin{array}{ll}-\sqrt{\frac{(1-p_{i})(1- p_{j})}{p_{i}p_{j}}},&\text{if }p_{i}+p_{j}\geq 1,\\ -\sqrt{\frac{p_{j}p_{i}}{(1-p_{i})(1-p_{j})}},&\text{if }p_{i}+p_{j}<1.\end{array}\right. \tag{12}\]
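For \(F_{i}=F_{j}=\text{Bern}(1/2)\), the link function also has the classical closed form \(L(u)=\frac{2}{\pi}\arcsin(u)\) (Sheppard's formula; not stated in the text), which provides a convenient numerical check of (9) and (10). The sketch below (ours, not the authors' code) computes the Hermite coefficients by the three-term recurrence \(H_{k+1}(z)=zH_{k}(z)-kH_{k-1}(z)\) and compares the truncated series with the closed form:

```python
import math
from statistics import NormalDist

def hermite_coeffs(p, K):
    """g_k, k = 1, ..., K, for Bern(p) via (9): with a = Phi^{-1}(1 - p),
    g_k = exp(-a^2/2) * H_{k-1}(a) / (k! * sqrt(2*pi))."""
    a = NormalDist().inv_cdf(1 - p)
    H = [1.0, a]                              # probabilists' Hermite values at a
    for k in range(1, K):
        H.append(a * H[k] - k * H[k - 1])     # H_{k+1} = a*H_k - k*H_{k-1}
    return [math.exp(-a * a / 2) * H[k - 1] / (math.factorial(k) * math.sqrt(2 * math.pi))
            for k in range(1, K + 1)]

def link(u, p, K=60):
    """Truncated series (10) for a Bern(p)-Bern(p) pair; Var(X) = p(1 - p)."""
    g = hermite_coeffs(p, K)
    var = p * (1 - p)
    return sum(math.factorial(k) * g[k - 1] ** 2 / var * u ** k for k in range(1, K + 1))

# Truncated series vs. the arcsine closed form for p = 1/2
assert abs(link(0.5, 0.5) - 2 / math.pi * math.asin(0.5)) < 1e-8
```

The truncation at \(K=60\) is more than enough for moderate \(u\) since the series terms decay like \(u^{k}\); only near \(u=\pm 1\) does convergence become slow.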
Since the link functions \(L_{ij}\) are monotonically increasing, the inverse link functions can be defined as \(L_{ij}^{-1}:[\rho_{-,ij},\rho_{+,ij}]\mapsto[-1,1]\). We will discuss the numerical calculation of the inverse \(L_{ij}^{-1}\) in Section 3.2 below. Thus, (6) and (7) imply that
\[R_{Z,ij}(h)=L_{ij}^{-1}(R_{X,ij}(h)) \tag{13}\]
or, in short and entrywise,
\[R_{Z}(h)=L^{-1}(R_{X}(h)). \tag{14}\]
### Nonstationary models with covariates and differencing
As discussed in Jia et al. (2023), covariates can be incorporated easily into the model through its marginal parameters \(\theta_{i}\) as follows. Suppose that \(\theta_{i}(t)\) varies over time and is driven by \(J-\)dimensional deterministic covariate vector \(M_{t}\) as
\[\theta_{i}(t)=f(\beta_{i}^{\prime}M_{t}), \tag{15}\]
where \(\beta_{i}^{\prime}M_{t}\) is a linear combination of a coefficient vector and the covariate, and \(f\) is a suitable function in the spirit of GLM. Then, the model (1) becomes
\[X_{i,t}=F_{i,\theta_{i}(t)}^{-1}(\Phi(Z_{i,t}))=:G_{i,\theta_{i}(t)}(Z_{i,t}). \tag{16}\]
This time-varying parameter leads to the cumulative distribution of \(X_{i,t}\) as \(C_{i,n}(t)=\mathbb{P}(X_{i,t}\leq n)=F_{i,\theta_{i}(t)}(n)\) and the computation of the Hermite coefficients in (9), now denoted by \(g_{\theta_{i}(t),k}\), is still valid. The cross-correlation between \(X_{i,t_{1}}\) and \(X_{j,t_{2}}\) is represented by the cross-correlation between \(Z_{i,t_{1}}\) and \(Z_{j,t_{2}}\) as
\[R_{X,ij}(t_{1}-t_{2})=L_{\theta_{i}(t_{1}),\theta_{j}(t_{2})}(R_{Z,ij}(t_{1}-t_ {2})), \tag{17}\]
where, from (10),
\[L_{\theta_{i}(t_{1}),\theta_{j}(t_{2})}(u)=\sum_{k=1}^{\infty}\frac{k!g_{ \theta_{i}(t_{1}),k}g_{\theta_{j}(t_{2}),k}}{\text{Var}(X_{i,t})^{1/2}\text{ Var}(X_{j,t})^{1/2}}u^{k}. \tag{18}\]
As we will see in Section 3.2, the numerical inverse of the link function is considered in the estimation. Although the computational burden is substantially increased with (18), the estimation is still feasible.
Another possibility to accommodate nonstationarity with our model is to difference the count series \(X_{i,t}\). That is, consider the series \(\Delta X_{i,t}=X_{i,t}-X_{i,t-1}\), whose range now lies in \(\mathbb{Z}\). As noted above, the considered model (1) may as well be defined on \(\mathbb{Z}\), with a suitable choice of marginal distribution \(F_{i}\) (e.g. the Skellam distribution). Differencing of count time series and subsequent modeling of \(\mathbb{Z}\)-valued time series was considered by many researchers, including Kim and Park (2008), Zhang et al. (2010), Kachour and Truquet (2011), and others, but their models are generally non-trivial extensions of the counterpart models for counts.
## 3 Estimation
### Estimation with known link function
We describe here how the parameters \(\Lambda,\Psi_{1},\ldots,\Psi_{p},\Sigma_{\varepsilon}\) and \(\Sigma_{\eta}\) with known \(r\), \(p\) can be estimated assuming that the marginal CDFs \(F_{i}\) and the link functions \(L_{ij}\) are known. (The next sections will discuss estimation and computation of \(L_{ij}\), and selection of \(r,p\).) The basic idea is quite simple. Note that the relation (14) allows one to estimate \(R_{Z}(h)=\Sigma_{Z}(h)\), \(h=0,\ldots,p\), as
\[\hat{R}_{Z}(h)=L^{-1}(\hat{R}_{X}(h)), \tag{19}\]
where \(\hat{R}_{X}(h)\) is the sample matrix ACF of the data \(X_{1},\ldots,X_{T}\). \(\hat{R}_{X}(h)\), \(h\neq 0\), are not necessarily symmetric. Similarly, \(\hat{R}_{Z}(0)\) is symmetric but not necessarily non-negative definite. The estimated covariance \(\hat{R}_{Z}(0)\) of the latent Gaussian process will be used in forecasting described in the next
section. If needed, we employ a small positive shift of the eigenvalues to make \(\hat{R}_{Z}(0)\) non-negative definite. However, we do not shift eigenvalues through the estimation procedure.
Since \(Z_{t}\) follows the dynamic factor model (3) and (4), the loadings matrix \(\Lambda\), \(\Sigma_{Y}(0)\), and \(\Sigma_{\varepsilon}\) can be estimated through principal components. More specifically, since the dynamic factor model (3) implies
\[R_{Z}(0)=\Lambda\Sigma_{Y}(0)\Lambda^{\prime}+\Sigma_{\varepsilon}, \tag{20}\]
it is natural to estimate \(\Lambda\Sigma_{Y}(0)\Lambda^{\prime}\) as an \(r\)-rank approximation of \(R_{Z}(0)\), and take \(\Sigma_{\varepsilon}\) as the approximation error. We thus proceed as follows. Let \(\hat{R}_{Z}(0)=\hat{U}\hat{E}\hat{U}^{\prime}\) be the eigendecomposition with \(\hat{E}=\text{diag}(\hat{e}_{1},\ldots,\hat{e}_{d})\) consisting of ordered eigenvalues \(\hat{e}_{1}\geq\ldots\geq\hat{e}_{d}\), and \(\hat{U}=(\hat{u}_{1},\ldots,\hat{u}_{d})\) being the orthogonal eigenvector matrix. Setting \(\hat{U}_{r}=(\hat{u}_{1},\ldots,\hat{u}_{r})\) and \(\hat{E}_{r}=\text{diag}(\hat{e}_{1},\ldots,\hat{e}_{r})\), a rank-\(r\) approximation of \(\Sigma_{Z}(0)\) can be taken as
\[\hat{U}_{r}\hat{E}_{r}\hat{U}_{r}^{\prime}=(\hat{U}_{r}\hat{E}_{r}^{1/2})( \hat{U}_{r}\hat{E}_{r}^{1/2})^{\prime}. \tag{21}\]
The relations (21) and (20) suggest setting
\[\hat{\Lambda}=\hat{U}_{r},\quad\hat{\Sigma}_{Y}(0)=\hat{E}_{r}. \tag{22}\]
By construction, the choice (22) identifies \(\Lambda,\Sigma_{Y}(0)\), after a possible non-singular \(r\times r\) transformation, assuming that
\[\Lambda^{\prime}\Lambda=I_{r},\quad\Sigma_{Y}(0)=\text{diagonal}. \tag{23}\]
As in Bai and Wang (2015), another way to have identifiability for the DFM is to make the first \(r\times r\) block of the loadings matrix be identity, that is,
\[\Lambda=\begin{pmatrix}I_{r}\\ \Lambda_{2}\end{pmatrix}, \tag{24}\]
where \(\Lambda_{2}\) is \((d-r)\times r\). We adopt the identification (24) for the rest of the paper.
Note that with the convention (24) above,
\[\Lambda\Sigma_{Y}(0)\Lambda^{\prime}=\begin{pmatrix}\Sigma_{Y}(0)&\Sigma_{Y}( 0)\Lambda_{2}^{\prime}\\ \Lambda_{2}\Sigma_{Y}(0)&\Lambda_{2}\Sigma_{Y}(0)\Lambda_{2}^{\prime}\end{pmatrix} =\begin{pmatrix}\Sigma_{Y}(0)^{1/2}\\ \Lambda_{2}\Sigma_{Y}(0)^{1/2}\end{pmatrix}\begin{pmatrix}\Sigma_{Y}(0)^{1/2}& \Sigma_{Y}(0)^{1/2}\Lambda_{2}^{\prime}\end{pmatrix}. \tag{25}\]
The relations (25) and (21) also suggest setting
\[\begin{pmatrix}\Sigma_{Y}(0)^{1/2}\\ \Lambda_{2}\Sigma_{Y}(0)^{1/2}\end{pmatrix}=\hat{U}_{r}\hat{E}_{r}^{1/2} \tag{26}\]
to define both \(\hat{\Sigma}_{Y}(0)^{1/2}\) and \(\hat{\Lambda}_{2}\). Either (22) or (26) lead to estimators \(\hat{\Lambda}\) and \(\hat{\Sigma}_{Y}(0)\). The estimator \(\hat{\Sigma}_{\varepsilon}\) can now be defined as
\[\hat{\Sigma}_{\varepsilon}=\hat{R}_{Z}(0)-\hat{U}_{r}\hat{E}_{r}\hat{U}_{r}^{ \prime}. \tag{27}\]
Note also that DFM (3) yields
\[R_{Z}(h)=\Lambda\Sigma_{Y}(h)\Lambda^{\prime},\quad h=1,\ldots,p. \tag{28}\]
Setting \(\hat{R}_{Z}(h)=L^{-1}(\hat{R}_{X}(h))\) as in (19) naturally suggests the estimators
\[\hat{\Sigma}_{Y}(h)=(\hat{\Lambda}^{\prime}\hat{\Lambda})^{-1}(\hat{\Lambda} ^{\prime}\hat{R}_{Z}(h)\hat{\Lambda})(\hat{\Lambda}^{\prime}\hat{\Lambda})^{ -1},\quad h=1,\ldots,p. \tag{29}\]
Alternatively, the estimators (29) also solve
\[\hat{\Sigma}_{Y}(h)=\operatorname*{argmin}_{\Sigma_{Y}(h)\in\mathbb{R}^{r \times r}}\left\|\hat{R}_{Z}(h)-\hat{\Lambda}\Sigma_{Y}(h)\hat{\Lambda}^{ \prime}\right\|_{F}^{2},\quad h=1,\ldots,p. \tag{30}\]
Having these estimators, the Yule-Walker equations can now be used to obtain the rest of the required estimators \(\hat{\Psi}_{1},\ldots,\hat{\Psi}_{p}\) and \(\hat{\Sigma}_{\eta}\). That is, \(\hat{\Psi}_{1},\ldots,\hat{\Psi}_{p}\) solve the system of matrix linear equations
\[\begin{pmatrix}&\hat{\Sigma}_{Y}(0)&\hat{\Sigma}_{Y}(1)&\ldots&\hat{\Sigma}_{ Y}(p-1)\\ &\hat{\Sigma}_{Y}(1)^{\prime}&\hat{\Sigma}_{Y}(0)&\ldots&\hat{\Sigma}_{Y}(p-2)\\ &\vdots&\vdots&\ddots&\vdots\\ &\hat{\Sigma}_{Y}(p-1)^{\prime}&\hat{\Sigma}_{Y}(p-2)^{\prime}&\ldots&\hat{ \Sigma}_{Y}(0)\end{pmatrix}\begin{pmatrix}\hat{\Psi}_{1}^{\prime}\\ \hat{\Psi}_{2}^{\prime}\\ \vdots\\ \hat{\Psi}_{p}^{\prime}\end{pmatrix}=\begin{pmatrix}\hat{\Sigma}_{Y}(1)^{ \prime}\\ \hat{\Sigma}_{Y}(2)^{\prime}\\ \vdots\\ \hat{\Sigma}_{Y}(p)^{\prime}\end{pmatrix} \tag{31}\]
and \(\hat{\Sigma}_{\eta}\) is
\[\hat{\Sigma}_{\eta}=\hat{\Sigma}_{Y}(0)-\sum_{h=1}^{p}\hat{\Psi}_{h}\hat{ \Sigma}_{Y}(h)^{\prime}. \tag{32}\]
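To make the steps (20)-(32) concrete, here is a toy sketch (ours, not the authors' implementation) with \(d=2\), \(r=1\), \(p=1\), where the rank-1 eigendecomposition of a \(2\times 2\) correlation matrix is available in closed form and the Yule-Walker system is scalar; the latent correlations \(\hat{R}_{Z}(h)\) are taken as given, as if already recovered via (19):

```python
import math

# Hypothetical latent correlation estimates \hat R_Z(0), \hat R_Z(1) (d = 2).
R0 = [[1.0, 0.6], [0.6, 1.0]]
R1 = [[0.5, 0.3], [0.3, 0.5]]

# Rank-1 PCA of R0: for [[1, rho], [rho, 1]] the top eigenpair is
# (1 + rho, (1, 1)/sqrt(2)), giving \hat Lambda and \hat Sigma_Y(0) as in (22).
e1 = 1.0 + R0[0][1]
Lam = [1 / math.sqrt(2), 1 / math.sqrt(2)]
SigY0 = e1

# \hat Sigma_eps = \hat R_Z(0) minus the rank-1 approximation, as in (27).
SigEps = [[R0[i][j] - e1 * Lam[i] * Lam[j] for j in range(2)] for i in range(2)]

# \hat Sigma_Y(1) as in (29); with Lam'Lam = 1 this is just Lam' R1 Lam.
SigY1 = sum(Lam[i] * R1[i][j] * Lam[j] for i in range(2) for j in range(2))

# Scalar Yule-Walker (31)-(32): Psi_1 = Sigma_Y(1)/Sigma_Y(0) and
# Sigma_eta = Sigma_Y(0) - Psi_1 * Sigma_Y(1).
Psi1 = SigY1 / SigY0
SigEta = SigY0 - Psi1 * SigY1
```

With these toy numbers, \(\hat{\Sigma}_{Y}(0)=1.6\), \(\hat{\Sigma}_{Y}(1)=0.8\), \(\hat{\Psi}_{1}=0.5\), and \(\hat{\Sigma}_{\eta}=1.2\); for general \(d\), \(r\), \(p\) the same steps would use a numerical eigendecomposition and a block linear solve.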
**Remark 3.1**: To reiterate the idea of our approach, any estimation of the latent process (PCA and Yule-Walker equations, etc.) that can be carried out on the process in terms of its second-order
properties, will have its counterpart for the considered model in terms of the observable process by using relation (14). We shall exploit this idea again in cross-validation below (Section 3.3) when selecting \(r\) and \(p\).
### Estimating link function and calculating its inverse
We assumed in Section 3.1 that link functions \(L_{ij}\) are known. In practice, they can be estimated as follows. Recall that \(L_{ij}\) is defined through marginal CDFs \(F_{i}=F_{i}(\theta_{i})\) and \(F_{j}=F_{j}(\theta_{j})\). The parameters \(\theta_{i}\) and \(\theta_{j}\) can then be estimated from the marginal distributions of the data \(X_{i,t}\) and \(X_{j,t}\), respectively. For example, an MLE can be used for most parametric CDF models of interest. The estimator \(\hat{L}_{ij}\) can be defined through the CDFs \(F_{i}(\hat{\theta}_{i})\) and \(F_{j}(\hat{\theta}_{j})\). For example, when \(F_{i}=\text{Bern}(p_{i})\), \(\hat{p}_{i}\) is just the sample proportion of \(X_{i,t}=1\). As another example, if the \(i\)th marginal count series follows a Poisson distribution with parameter \(\theta_{i}\), it is estimated by the sample mean of observations, say \(\hat{\theta}_{i}=\sum_{t=1}^{T}X_{i,t}/T\). Then, with the Hermite coefficients \(\hat{g}_{i,k}\) computed as in (9), the link function for each \((i,j)\) pair can be estimated via (10) by \(\hat{L}_{ij}(u)=\sum_{k=1}^{K}\hat{\ell}_{ij,k}u^{k}\) for large enough \(K\), say \(K=100\), on \(u\in[-1,1]\). To simplify the notation, we write \(L_{ij}\) for \(\hat{L}_{ij}\) below, and are interested in calculating \(L_{ij}^{-1}\).
The idea to calculate \(L_{ij}^{-1}\) is as follows. Consider an \(M\)-partition of the interval \([-1,1]\) by points \(u_{0},u_{1},\ldots,u_{M}\) satisfying \(u_{0}=-1\), \(u_{M}=1\), and
\[v_{m}:=L_{ij}(u_{m})\qquad\text{or}\qquad L_{ij}^{-1}(v_{m})=u_{m}, \tag{33}\]
so that one has the points \((v_{m},L_{ij}^{-1}(v_{m}))=(v_{m},u_{m})\) on the curve \(L_{ij}^{-1}(v)=u\). The value of \(L_{ij}^{-1}(v)\) for other points \(v\) can then be obtained through some interpolation, for example, natural cubic splines (e.g. Chapter 8 in Kress, 1998). The natural cubic splines produce piecewise polynomial functions, having smooth derivatives. In addition, we use finer grids for \(u\) near \(\pm 1\) while wider grids are used around \(u=0\).
From a numerical standpoint, the \(M+1\) points \((v_{m},u_{m})\) satisfy \(u_{m}=L_{ij}^{-1}(v_{m})\), and the interpolation of \(L_{ij}^{-1}\) is defined as
\[\tilde{L}_{ij}^{-1}(v):=\sum_{m=1}^{M}\tilde{L}_{ij,m}^{-1}(v)1_{[v_{m-1},v_{m })}(v) \tag{34}\]
for \(v\in[v_{0},v_{M})\), with the \(M\) pieces being the third-order spline polynomials
\[\tilde{L}_{ij,m}^{-1}(v)=a_{m-1}\frac{(v_{m}-v)^{3}}{6h_{m}}+a_{m}\frac{(v-v_{m- 1})^{3}}{6h_{m}}+b_{1,m}(v-v_{m-1})+b_{2,m}(v_{m}-v), \tag{35}\]
where \(h_{m}=v_{m}-v_{m-1}\). Note that the \(v_{m}\) need not be equally spaced. The polynomials in (35) satisfy the following constraints,
\[\tilde{L}_{ij,m}^{-1}(v_{m-1}) =u_{m-1},\quad\tilde{L}_{ij,m}^{-1}(v_{m})=u_{m},\quad m=1, \ldots,M, \tag{36}\] \[(\tilde{L}_{ij,m}^{-1})^{\prime}(v_{m}) =(\tilde{L}_{ij,m+1}^{-1})^{\prime}(v_{m}),\quad m=1,\ldots,M-1,\] (37) \[(\tilde{L}_{ij,m}^{-1})^{\prime\prime}(v_{m}) =(\tilde{L}_{ij,m+1}^{-1})^{\prime\prime}(v_{m}),\quad m=1, \ldots,M-1,\] (38) \[(\tilde{L}_{ij,1}^{-1})^{\prime\prime}(v_{0}) =(\tilde{L}_{ij,M}^{-1})^{\prime\prime}(v_{M})=0. \tag{39}\]
Note that the interpolation conditions in (36) ensure continuity of the spline function, (37) and (38) ensure first- and second-order smoothness, and (39), called the natural boundary condition, is required for determining the polynomials uniquely. We extend the interpolation further to \([-1,1]\) by letting \(\tilde{L}_{ij}^{-1}(v)=-1\) for \(v\in[-1,v_{0})\) and \(\tilde{L}_{ij}^{-1}(v)=1\) for \(v\in(v_{M},1]\). Since the conditions (36)-(39) reduce to a symmetric tridiagonal linear system for the coefficients in (35), they can be easily solved.
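The inversion in (33)-(34) can be sketched numerically. The snippet below (ours; a simplification that uses the closed-form Bern(1/2) link \(L(u)=\frac{2}{\pi}\arcsin(u)\) and plain linear interpolation in place of the natural cubic splines) tabulates \((v_{m},u_{m})\) on a grid and recovers \(u=L^{-1}(v)\), with the constant extension to \([-1,1]\) described above:

```python
import math

def L(u):
    # Closed-form link for a Bern(1/2)-Bern(1/2) pair (assumed for illustration).
    return 2 / math.pi * math.asin(u)

M = 400
grid_u = [-1 + 2 * m / M for m in range(M + 1)]   # u_0 = -1, ..., u_M = 1
grid_v = [L(u) for u in grid_u]                   # v_m = L(u_m), monotone in m

def L_inv(v):
    """Piecewise-linear stand-in for the spline interpolant of L^{-1}."""
    if v <= grid_v[0]:        # constant extension below v_0
        return -1.0
    if v >= grid_v[-1]:       # constant extension above v_M
        return 1.0
    lo = max(m for m in range(M + 1) if grid_v[m] <= v)
    t = (v - grid_v[lo]) / (grid_v[lo + 1] - grid_v[lo])
    return grid_u[lo] + t * (grid_u[lo + 1] - grid_u[lo])

# Round trip: recover the latent correlation from the count correlation.
assert abs(L_inv(L(0.3)) - 0.3) < 1e-3
```

A production version would follow the paper more closely: spline interpolation, a finer grid near \(u=\pm 1\), and a binary search for the bracketing knot.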
Figure 1 depicts \(L_{ij}(u),L_{ij}^{-1}(v)\) and its interpolation \(\tilde{L}_{ij}^{-1}(v)\) for three representative marginal count distributions: Bernoulli, Poisson, and negative binomial. For example, the Bernoulli case considers a pair of \(F_{i}=\text{Bern}(p_{i})\) and \(F_{j}=\text{Bern}(p_{j})\) with four different choices of combinations \((p_{i},p_{j})\). Several combinations of parameters for other types of distributions are also considered. As seen from the plots, the inverse link function \(L_{ij}^{-1}\) obtained by flipping the axes and numerical inverse \(\tilde{L}_{ij}^{-1}\) obtained through the interpolation are nearly indistinguishable.
**Remark 3.2**: As noted in Section 1, we expect the estimation procedure of Sections 3.1 and 3.2 to yield consistent estimators. The recent work of Duker et al. (2023) lays the theoretical foundation for working with the model (1). As an application of their results, Duker et al. (2023) established consistency of sparse estimation of (1) when the latent process follows a sparse VAR model. The latent process here follows a factor model instead, and the underlying model estimation is different. The main result of Duker et al. (2023) can be applied to our context as well, but it relies on a concentration phenomenon for the observed time series \(X_{t}\) having the underlying VAR factor structure. We are working on proving such concentration and applying the result of Duker et al. (2023), with the present work serving as a methodological and more applied study of the model.
### Selecting the number of factors and lag order of factor series
In practice, the number of factor series \(r\) and their lag order \(p\) in (4) are unknown and need to be chosen for the model estimation. We consider here several methods for this task based on available approaches, and also introduce cross-validation schemes tailored to our model. Their performance is assessed in Section 5.1.2 below, with the cross-validation schemes generally outperforming the others.
_Selection of \(r\):_ One practical approach is to examine a scree plot of the eigenvalues of \(\hat{\Sigma}_{Z}(0)=\hat{L}^{-1}(\hat{\Sigma}_{X}(0))\) where the presence of a "knee" suggests the value of \(r\). More formally, one can design an algorithm that determines the "knee." For example, Onatski (2010) suggested the Edge Distribution (ED) estimator, whereby
\[\hat{r}(\delta):=\max\{k\leq r_{\max}:\hat{e}_{k}-\hat{e}_{k+1}\geq\delta\}, \tag{40}\]
where \(\hat{e}_{1}\geq\hat{e}_{2}\geq\ldots\geq\hat{e}_{d}\) are the ordered eigenvalues of \(\hat{R}_{Z}(0)\) and \(\delta\) is calibrated through the algorithm described in Section 4 of that paper.
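A minimal sketch of the ED estimator (40), assuming the eigenvalues of \(\hat{R}_{Z}(0)\) have been computed. The calibration of \(\delta\) (Section 4 of Onatski, 2010) is not implemented here, and \(\hat{r}=0\) is returned by convention when no gap reaches \(\delta\).

```python
import numpy as np

def ed_estimator(eigvals, delta, r_max):
    """Edge Distribution estimator (40): the largest k <= r_max whose
    eigenvalue gap e_k - e_{k+1} is at least delta."""
    e = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending order
    gaps = e[:r_max] - e[1:r_max + 1]
    candidates = np.flatnonzero(gaps >= delta) + 1       # 1-based index k
    return int(candidates.max()) if candidates.size else 0
```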
Alternatively, one could rely on information criteria (IC). That is, for \(q=1,\ldots,r_{\max}\), we choose as the estimate \(\hat{r}\) the value \(q\) that minimizes
\[\mbox{IC}(q)=\ln\left(\frac{1}{dT}\|\hat{\Sigma}_{\varepsilon}(q)\|_{F}^{2} \right)+qg_{i}(d,T), \tag{41}\]
where \(\hat{\Sigma}_{\varepsilon}(q)\) is the estimator in (27) from the rank-\(q\) approximation of \(\Sigma_{Z}(0)\) in (21), and \(g_{i}(d,T)\to 0\) and \(\min(d,T)g_{i}(d,T)\to\infty\) as \(d,T\to\infty\). The recommended choices of the penalty functions \(g_{i}(d,T)\) are
\[g_{1}(d,T) = \frac{d+T}{dT}\ln\left(\frac{dT}{d+T}\right), \tag{42}\] \[g_{2}(d,T) = \frac{d+T}{dT}\ln\left(C_{dT}^{2}\right),\] (43) \[g_{3}(d,T) = \frac{\ln(C_{dT}^{2})}{C_{dT}^{2}},\quad C_{dT}=\min(\sqrt{d}, \sqrt{T}) \tag{44}\]
which correspond to \(\mbox{IC}_{p1}(r)\)-\(\mbox{IC}_{p3}(r)\) studied by Bai and Ng (2002).
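The IC selection (41)-(44) can be sketched as follows. As a simplification for illustration, the squared Frobenius norm of the residual of the rank-\(q\) eigen-approximation of \(\hat{\Sigma}_{Z}(0)\) stands in for \(\|\hat{\Sigma}_{\varepsilon}(q)\|_{F}^{2}\) from (27), and only the penalty \(g_{1}\) of (42) is shown.

```python
import numpy as np

def ic_rank(sigma_z, d, T, r_max, penalty):
    """IC rank selection as in (41): for each q, form the rank-q
    eigen-approximation of Sigma_Z(0) and penalize the residual."""
    w, V = np.linalg.eigh(sigma_z)
    order = np.argsort(w)[::-1]          # eigenvalues in descending order
    w, V = w[order], V[:, order]
    ics = []
    for q in range(1, r_max + 1):
        approx = (V[:, :q] * w[:q]) @ V[:, :q].T
        resid = np.linalg.norm(sigma_z - approx, "fro") ** 2
        ics.append(np.log(resid / (d * T)) + q * penalty(d, T))
    return int(np.argmin(ics)) + 1

def g1(d, T):
    """Penalty (42), corresponding to IC_p1 of Bai and Ng (2002)."""
    return (d + T) / (d * T) * np.log(d * T / (d + T))
```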
We now propose a block cross-validation (BCV)-based rank selection. The idea goes back at least to Browne and Cudeck (1989), and continues being utilized in psychometrics (e.g. Haslbeck
and van Bork, 2022). The differences in our setting are that the observations are serially correlated and that the factor model is latent. To account for temporal dependence, we do not partition observations randomly but rather into equally sized consecutive blocks. The latent nature of the factor model is dealt with by exploiting the idea in Remark 3.1. We thus fold \(\{X_{t}\}\) in time into \(B\) blocks. The superscript \((b)\) refers to the \(b\)th block, used as test data. The superscript \((-b)\) refers to the \(b\)th block being excluded, with the remaining blocks used as training data. Let \(\hat{R}_{Z}^{(b)}(0)=\hat{L}^{-1}(\hat{R}_{X}^{(b)}(0))\) be the sample matrix ACF of the latent Gaussian series at lag 0, computed by substituting the sample matrix ACF of the observations from the \(b\)th block into the inverse link function. Similarly, one can compute \(\hat{R}_{Z}^{(-b)}(0)=\hat{L}^{-1}(\hat{R}_{X}^{(-b)}(0))\), which excludes the \(b\)th block. Then, for each candidate rank \(q\) on a grid \(q=1,\ldots,r_{\max}\), the mean square error (MSE) of BCV is defined as
\[\text{MSE}(q)=\frac{1}{B}\sum_{b=1}^{B}\left\|\hat{R}_{Z}^{(b)}(0)-\hat{R}_{Z }^{(-b,q)}(0)\right\|_{F}^{2}, \tag{45}\]
where \(\hat{R}_{Z}^{(-b,q)}(0)\) is the rank-\(q\) approximation of \(\hat{R}_{Z}^{(-b)}(0)\) plus a diagonal matrix from the estimated covariance matrix of the innovations of the factor model. The PCA-based estimation procedure described in Section 3.1 can be used. The minimizer \(q\) of the MSE is chosen as the estimate of the number of factor series \(r\).
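A sketch of the BCV criterion (45), assuming the matrices \(\hat{R}_{Z}^{(b)}(0)\) and \(\hat{R}_{Z}^{(-b)}(0)\) have already been computed through the inverse link function. The diagonal correction below stands in for the diagonal matrix coming from the estimated covariance of the innovations.

```python
import numpy as np

def rank_q_approx(R, q):
    """Rank-q eigen-approximation of R plus a diagonal correction, so that
    the result matches R on the diagonal (cf. the discussion below (45))."""
    w, V = np.linalg.eigh(R)
    idx = np.argsort(w)[::-1][:q]
    low = (V[:, idx] * w[idx]) @ V[:, idx].T
    return low + np.diag(np.diag(R) - np.diag(low))

def bcv_mse(R_test, R_train, q):
    """MSE(q) in (45): average Frobenius discrepancy between the held-out
    blocks' matrices and the rank-q fits from the training blocks."""
    errs = [np.linalg.norm(Rb - rank_q_approx(Rm, q), "fro") ** 2
            for Rb, Rm in zip(R_test, R_train)]
    return float(np.mean(errs))
```

The candidate \(q\) minimizing `bcv_mse` over the grid is then taken as \(\hat{r}\).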
Instead of our PCA-based estimation, other estimation approaches can and have been used as well. For example, in the minimum residual factor analysis (MINRES, Harman and Jones, 1966), \(\hat{R}_{Z}^{(-b,q)}(0)\) is the rank-\(q\) approximation of \(\hat{R}_{Z}^{(-b)}(0)\) obtained by minimizing the difference between \(\hat{R}_{Z}^{(-b)}(0)\) and the sum of \(\hat{R}_{Z}^{(-b,q)}(0)\) and a diagonal matrix, the latter accounting for the variances of the error terms. This estimation approach is quite popular in factor analysis, with the estimation procedure implemented through R package psych by Revelle (2023). Other estimation approaches for factor analysis can be found in Bertsimas et al. (2017).
_Selection of \(p\):_ One common approach is to use the information criteria, that is,
\[\hat{p}=\underset{l}{\text{argmin}}\left\{\ln(|\hat{\Sigma}_{\eta}(l)|)+g_{i, l}(r,T)\right\}, \tag{46}\]
where \(\hat{\Sigma}_{\eta}(l)\) is the estimate of the covariance matrix of the innovations of the factor series for fixed
lag order \(l\). The possible penalty functions \(g_{i,l}\) include
\[g_{1,l}(r,T) = \frac{2}{T}lr^{2}, \tag{47}\] \[g_{2,l}(r,T) = \frac{2\log(\log(T))}{T}lr^{2},\] (48) \[g_{3,l}(r,T) = \frac{\log(T)}{T}lr^{2},\] (49) \[g_{4,l}(r,T) = \frac{2r(rl+1)}{T}. \tag{50}\]
The resulting criteria can be found in Chapter 4.3 of Lutkepohl (2005).
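The lag selection (46)-(50) reduces to an argmin once the log-determinants \(\ln|\hat{\Sigma}_{\eta}(l)|\) have been computed from VAR fits at each candidate lag; these are assumed available below, and only the SC penalty (49) is shown.

```python
import numpy as np

def select_var_lag(logdets, r, T, penalty):
    """Lag selection (46): argmin over l of ln|Sigma_eta(l)| + g(r, T, l),
    given precomputed log-determinants of the innovation covariances."""
    scores = [ld + penalty(r, T, l) for l, ld in enumerate(logdets, start=1)]
    return int(np.argmin(scores)) + 1

def g_sc(r, T, l):
    """Schwarz (SC/BIC) penalty (49)."""
    return np.log(T) / T * l * r ** 2
```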
We now propose a cross-validation strategy tailored to our model. We shall exploit once again the idea of Remark 3.1. Were the factor series \(Y_{t}\) observed, a natural cross-validation scheme would select \(p\) as \(l\) minimizing
\[\sum_{b=1}^{B}\sum_{t}\left\|Y_{t}^{(b)}-\hat{\Psi}_{1}^{(-b)}Y_{t-1}^{(b)}- \ldots-\hat{\Psi}_{l}^{(-b)}Y_{t-l}^{(b)}\right\|_{F}^{2}, \tag{51}\]
where \(\hat{\Psi}_{h}^{(-b)}\) are the VAR transition matrices estimated on the training data, and \(Y_{t}^{(b)}\) refer to the testing data. Setting
\[\mathcal{Y}^{(b,l)}=(Y_{t}^{(b)^{\prime}})_{t},\quad\hat{\Psi}^{(-b,l)}= \begin{pmatrix}\hat{\Psi}_{1}^{(-b)^{\prime}}\\ \vdots\\ \hat{\Psi}_{l}^{(-b)^{\prime}}\end{pmatrix},\quad\mathcal{X}^{(b,l)}=\begin{pmatrix} Y_{t-1}^{(b)^{\prime}}&\ldots&Y_{t-l}^{(b)^{\prime}}\end{pmatrix}_{t}, \tag{52}\]
the function to minimize can be replaced by
\[\sum_{b=1}^{B}\left\|\mathcal{Y}^{(b,l)}-\mathcal{X}^{(b,l)}\hat{\Psi}^{(-b,l )}\right\|_{F}^{2}=\sum_{b=1}^{B}\mathrm{vec}\left(\mathcal{Y}^{(b,l)}- \mathcal{X}^{(b,l)}\hat{\Psi}^{(-b,l)}\right)^{\prime}\mathrm{vec}\left( \mathcal{Y}^{(b,l)}-\mathcal{X}^{(b,l)}\hat{\Psi}^{(-b,l)}\right) \tag{53}\]
or
\[\sum_{b=1}^{B}\left(-2\hat{\boldsymbol{\psi}}^{(-b,l)^{\prime}}\hat{\gamma}_{ Y}^{(b,l)}+\hat{\boldsymbol{\psi}}^{(-b,l)^{\prime}}\hat{\Gamma}_{Y}^{(b,l)} \hat{\boldsymbol{\psi}}^{(-b,l)}\right), \tag{54}\]
where \(\hat{\boldsymbol{\psi}}^{(-b,l)}=\mathrm{vec}(\hat{\Psi}^{(-b,l)})\) and
\[\hat{\gamma}_{Y}^{(b,l)}=\mathrm{vec}(\mathcal{X}^{(b,l)^{\prime}}\mathcal{Y }^{(b,l)}),\quad\hat{\Gamma}_{Y}^{(b,l)}=I_{r}\otimes\mathcal{X}^{(b,l)^{ \prime}}\mathcal{X}^{(b,l)}. \tag{55}\]
Following the idea of Remark 3.1, note that the quantities in (55) are obtained from the (sample)
second-order properties of \((Y_{t}^{(b)})\). For our model, they can be replaced by those implied by the equation (14) based on the observed data \((X_{t}^{(b)})\). Similarly, we already have the estimators \(\hat{\Psi}^{(-b,l)}\) calculated from the observed data \((X_{t}^{(-b)})\). In summary, as a cross-validation scheme to select \(p\) for our model, we also minimize (54) but where the quantities involved are computed on the training data \((X_{t}^{(-b)})\) and the testing data \((X_{t}^{(b)})\).
**Remark 3.3**: Note that determining the number of factors can be performed regardless of the lag order of the factor series. We thus recommend choosing \(\hat{r}\) first, followed by selecting \(\hat{p}\).
## 4 Forecasting
### Particle-based sampling procedure
A fitted latent Gaussian dynamic factor model in (1)-(4) can naturally be used to forecast the series \(X_{t}\). We will show how this can be carried out for fixed model parameters. For example, the model parameters can be obtained through estimation (in which case our forecast will not reflect any uncertainty from estimation error). More specifically, for a given \(t\) (typically, \(t=T\), the sample length), we are interested in the distribution of
\[\hat{X}_{t+h|t}=(X_{t+h}|X_{1}=x_{1},\ldots,X_{t}=x_{t}), \tag{56}\]
where \(h=1,2,\ldots\), the vertical bar indicates the conditioning and \(x_{1},\ldots,x_{t}\) are the observed values of \(X_{1},\ldots,X_{t}\). This is equivalent to finding
\[\mathbb{E}_{x_{1:t}}\left[V(\hat{X}_{t+h|t})\right] \tag{57}\]
for arbitrary function \(V\), where the subscript \(x_{1:t}\) in \(\mathbb{E}_{x_{1:t}}\) refers to the conditioning on \(\{x_{1},\ldots,x_{t}\}\) as in (56). For example, suppose we have a \(d-\)dimensional vector \(x\). With \(V(x)=V((x_{i})_{i=1,\ldots,d})=1_{\{x_{i}=n_{i},\ i=1,\ldots,d\}}\) and \(d-\)dimensional integer \(n\), the quantity (57) becomes \(\mathbb{P}_{x_{1:t}}(\hat{X}_{t+h|t}=n)\), which is of primary interest in forecasting \(X_{t+h}\), when the components of \(X_{t}\) are integer-valued.
Let \(\hat{Z}_{t+h|t}=\hat{Z}_{t+h}(Z_{1:t})=H_{t1}^{(h)}Z_{t}+\ldots+H_{tt}^{(h)}Z_ {1}\) be the \(h\)-step-ahead linear prediction of \(Z_{t+h}\) from \(Z_{1:t}\). Define \(\hat{R}_{t+h|t}=\mathbb{E}[(Z_{t+h}-\hat{Z}_{t+h|t})(Z_{t+h}-\hat{Z}_{t+h|t})^ {\prime}]\) as the corresponding covariance matrix of prediction error of \(Z_{t+h}\). One can compute the latent prediction \(\hat{Z}_{t+h|t}\) by using Kalman recursions as recalled in Appendix A. This exploits the state-space structure of the model and is
more efficient computationally than a direct application of, e.g., the Durbin-Levinson algorithm. Then, the quantity (57) can be expressed as
\[\mathbb{E}_{x_{1:t}}\left[V(\hat{X}_{t+h|t})\right]=\mathbb{E}_{x_{1:t}}\left[D_{ V,t+h}(\hat{Z}_{t+h|t})\right], \tag{58}\]
where
\[D_{V,t+h}(z)=\int_{\mathbb{R}^{d}}V(G(z_{t+h}))\frac{\exp\left(-\frac{1}{2}(z- z_{t+h})^{\prime}\hat{R}_{t+h|t}^{-1}(z-z_{t+h})\right)}{(2\pi)^{d/2}|\hat{R}_{t+h|t }|^{1/2}}dz_{t+h} \tag{59}\]
(see equations (23)-(24) in Jia et al., 2023). The right-hand side of (58) will be approximated through a Monte Carlo scheme below; direct numerical calculation of underlying integrals is too cumbersome.
Note that conditioning on \(x_{1:t}\) does not determine the exact path of \(z_{1:t}\). Indeed, recall from (2) that for \(j=1,\ldots,d\),
\[\{X_{j,t}=x_{j,t}\}=\left\{\Phi^{-1}(C_{j,x_{j,t}-1})<Z_{j,t}\leq\Phi^{-1}(C_{ j,x_{j,t}})\right\}=\{Z_{j,t}\in A_{j,x_{j,t}}\}, \tag{60}\]
where \(C_{j,m}=\mathbb{P}(X_{j,t}\leq m)\) and \(A_{j,x_{j}}=\left(\Phi^{-1}(C_{j,x_{j}-1}),\Phi^{-1}(C_{j,x_{j}})\right]\). That is, each entry of the realization of \(X_{t}\) is determined by the range of the corresponding entry of \(Z_{t}\) at each time \(t\). For \(d-\)dimensional observations, one has
\[\{X_{t}=x_{t}\}=\bigcap_{j=1}^{d}\left\{Z_{j,t}\in A_{j,x_{j,t}}\right\}=\{Z_{ t}\in A_{x_{t}}\}, \tag{61}\]
where \(A_{x_{t}}=A_{1,x_{1,t}}\times\ldots\times A_{d,x_{d,t}}\). The notation \(A_{j,x_{j}}\) and \(A_{x}\) will be used below. In the Bernoulli case, for example,
\[A_{j,x_{j,t}}=\left(\Phi^{-1}(C_{j,x_{j,t}-1}),\Phi^{-1}(C_{j,x_{j,t}})\right] =\left\{\begin{array}{ll}\left(\Phi^{-1}(1-p_{j}),\infty\right),&\mbox{if $x_{j,t}=1$},\\ \left(-\infty,\Phi^{-1}(1-p_{j})\right],&\mbox{if $x_{j,t}=0$}.\end{array}\right. \tag{62}\]
The Bernoulli marginal distribution thus has the largest ranges for \(Z_{t}\). In a Monte Carlo approximation of (58), one will be generating \(Z_{j,t}\in A_{j,x_{j,t}}\) and producing their forecast \(\hat{Z}_{j,t+h}\).
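The bins in (60)-(62) are straightforward to compute from any marginal CDF; `bin_for_count` below is a hypothetical helper illustrating this with SciPy.

```python
import numpy as np
from scipy.stats import norm, bernoulli

def bin_for_count(x, cdf):
    """Endpoints of A_{j,x} = (Phi^{-1}(C_{j,x-1}), Phi^{-1}(C_{j,x})] in (60),
    for a marginal CDF evaluated at integers."""
    lower = norm.ppf(cdf(x - 1))  # Phi^{-1}(C_{j,x-1}); -inf when the CDF is 0
    upper = norm.ppf(cdf(x))      # Phi^{-1}(C_{j,x});   +inf when the CDF is 1
    return lower, upper
```

For a Bernoulli marginal with \(p_{j}=0.3\), `bin_for_count(1, lambda m: bernoulli.cdf(m, 0.3))` recovers the interval \((\Phi^{-1}(0.7),\infty)\) of (62).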
The following presentation extends that of Jia et al. (2023) to the multivariate setting relevant to the Monte Carlo approximation problem. The quantity (58) is known to be well approximated through Sequential Monte Carlo (SMC) by generating "particles" over time \(t\) (e.g. Doucet et al., 2001, Doucet and Johansen, 2009). The main difference from Jia et al. (2023) is that the model
here has two latent processes \(\{Z_{t}\}\) and \(\{Y_{t}\}\). To deal with the two latent processes, we use Kalman recursions to forecast and update the latent process \(\{Y_{t}\}\) and approximate the distribution of \(\{Z_{t}\}\) conditioning on \(\{X_{t}\}\). This approach is called Rao-Blackwellization and the method is adapted from the partially observed Gaussian state-space models (e.g. Andrieu and Doucet, 2002, Briers et al., 2010).
The following is the SMC algorithm for particle filtering to generate particles \(\{\tilde{Z}_{t}^{(k)}\}_{t=1,\ldots,T}\), \(k=1,\ldots,N\), over time, whose weighted average then approximates (58). The particle \(\{\tilde{Z}_{t}^{(k)}\}\) can be regarded as the realization of the underlying latent process. Additionally, we add a resampling step which is often used in sequential Monte Carlo algorithms. We will explain the necessity of resampling below.
**Sequential Importance Sampling and Resampling (SIS/R)**: Set the initial importance weights \(w_{0}^{(k)}=1\) for all \(k\), initialize \(\tilde{Y}_{0|0}^{(k)}\sim\mathcal{N}(0,\tilde{Q}_{0|0})\), where \(\tilde{Q}_{0|0}=\text{Var}(Y_{0})\). For \(p=1\), \(\tilde{Q}_{0|0}\) is approximated by \(\sum_{m=0}^{M}\Psi^{m}\Sigma_{\eta}(\Psi^{\prime})^{m}\) for large \(M\). Then, recursively over \(t=1,\ldots,T\), do the following steps: For each \(k=1,\ldots,N\):
1. Forecasting step: Compute \(\hat{Y}_{t|t-1}^{(k)}\), \(\hat{Q}_{t|t-1}\), \(\hat{Z}_{t|t-1}^{(k)}\) and \(\hat{R}_{t|t-1}\) via Kalman recursions (see Appendix A).
2. Importance sampling step: Sample residual \(\xi_{t}^{(k)}\) satisfying \[\xi_{t}^{(k)}\stackrel{{ d}}{{=}}\mathcal{N}_{d}\left(0_{d},I_{d }\Big{|}\Phi^{-1}(C_{x_{t}-1})<\hat{Z}_{t|t-1}^{(k)}+\hat{R}_{t|t-1}^{1/2}\xi_ {t}^{(k)}\leq\Phi^{-1}(C_{x_{t}})\right),\] (63) where \(\Phi^{-1}(C_{n})=(\Phi^{-1}(C_{1,n_{1}}),\ldots,\Phi^{-1}(C_{d,n_{d}}))^{\prime}\) and \(\mathcal{N}_{d}(\mu,\Sigma|A)\) indicates a \(d-\)dimensional multivariate normal distribution with mean \(\mu\) and covariance \(\Sigma\) restricted to the set \(A\). Then, update the particle as \[\hat{Z}_{t}^{(k)}=\hat{Z}_{t|t-1}^{(k)}+\hat{R}_{t|t-1}^{1/2}\xi_{t}^{(k)}\] (64) and update the importance weight as \(w_{t}^{(k)}=w_{t-1}^{(k)}w_{t}(\hat{Z}_{t|t-1}^{(k)})\), where \[w_{t}(\hat{Z}_{t|t-1}^{(k)})=\mathbb{P}\left(\mathcal{N}(\hat{Z}_{t|t-1}^{(k)},\hat{R}_{t|t-1})\in A_{x_{t}}\right).\] (65)
3. Resampling step: Set \(\Omega_{t,N}=\sum_{k=1}^{N}w_{t}^{(k)}\) and normalize \(w_{t}^{(k)}\) by \(\tilde{w}_{t}^{(k)}=w_{t}^{(k)}/\Omega_{t,N}\). Take a quartet \((\tilde{w}_{t}^{(k)},\tilde{Y}_{t|t-1}^{(k)},\tilde{Z}_{t|t-1}^{(k)},\tilde{Z }_{t}^{(k)})\) as follows. * If a resampling criterion described around (67) is satisfied, then take \((\frac{1}{N},\hat{Y}_{t|t-1}^{(I_{k})},\hat{Z}_{t|t-1}^{(I_{k})},\hat{Z}_{t}^{( I_{k})})\)
where \(\{I_{k}\}\) are chosen indices after resampling. * If the criterion is not satisfied, then take \((\tilde{w}_{t}^{(k)},\tilde{Y}_{t|t-1}^{(k)},\hat{Z}_{t|t-1}^{(k)},\hat{Z}_{t}^{ (k)})\).
4. Updating step: Use \(\tilde{Y}_{t|t-1}^{(k)},\tilde{Z}_{t|t-1}^{(k)},\tilde{Z}_{t}^{(k)}\) and \(\hat{Q}_{t|t-1}\) to compute \(\tilde{Y}_{t|t}^{(k)}\) and \(\tilde{Q}_{t|t}\) via Kalman recursions (see Appendix A).
Finally, the SMC approximation of (58) becomes
\[\mathbb{E}_{x_{1:t}}[V(\hat{X}_{t+h|t})]\approx\sum_{k=1}^{N}\tilde{w}_{t}^{(k )}D_{V,t+h}\left(\hat{Z}_{t+h|t}^{(k)}\right), \tag{66}\]
where \(\hat{Z}_{t+h|t}\), \(h\geq 1\), are computed through forecasting step in the Kalman recursions. See equation (25) in Jia et al. (2023) for the justification of an analogous approximation.
The SMC is known to suffer from the so-called weight degeneracy of particles (e.g. Snyder et al., 2008), which occurs when the variance of the normalized weights becomes inflated. The degeneracy worsens as the number of time steps increases. To overcome this, it is suggested to remove the particles with small weights. Following Doucet and Johansen (2009), we resample only when the effective sample size (ESS) falls below \(N/2\), as a rule of thumb for the resampling criterion, where the ESS is defined as
\[\text{ESS}_{t}=\left(\sum_{k=1}^{N}\left(\tilde{w}_{t}^{(k)}\right)^{2} \right)^{-1}. \tag{67}\]
More specifically, we resample particles by following systematic resampling. That is, sample \(U_{1}\sim\mathcal{U}(0,1/N)\) and set \(U_{k}=U_{1}+\frac{k-1}{N}\), \(k=2,\ldots,N\). Then, compute
\[I_{k}=\left|\left\{U_{i}:\sum_{j=1}^{k-1}\tilde{w}_{t}^{(j)}\leq U_{i}\leq\sum _{j=1}^{k}\tilde{w}_{t}^{(j)}\right\}\right| \tag{68}\]
with the convention \(\sum_{j=1}^{0}\tilde{w}_{t}^{(j)}=0\). This is used in Step 3 of the SIS/R algorithm above. Alternatively, one can use multinomial resampling, which resamples particles by regarding \(\{\tilde{w}_{t}^{(k)}\}\) as a multinomial probability distribution. Many other resampling methods exist (see Douc and Cappe (2005) for more information).
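The ESS rule (67) and systematic resampling (68) can be sketched as follows; note that (68) defines the multiplicities \(I_{k}\), whereas the sketch returns the equivalent list of resampled particle indices.

```python
import numpy as np

def ess(weights):
    """Effective sample size (67) from normalized weights."""
    w = np.asarray(weights, dtype=float)
    return 1.0 / np.sum(w ** 2)

def systematic_resample(weights, rng):
    """Systematic resampling (68): a single uniform draw stratified over N
    strata; returns the indices of the resampled particles."""
    w = np.asarray(weights, dtype=float)
    N = len(w)
    u = rng.uniform(0.0, 1.0 / N) + np.arange(N) / N
    return np.searchsorted(np.cumsum(w), u)
```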
**Remark 4.1**: The forecasting distribution (57) is characterized by the function \(V(x)=V((x_{i})_{i=1,\ldots,d})=1_{\{x_{i}=n_{i},\ i=1,\ldots,d\}}\) for fixed \(n=(n_{i})_{i=1,\ldots,d}\). For a single \(d\)-dimensional forecast value, one could take \(n=(n_{i})_{i=1,\ldots,d}\) for which (57) is largest with the corresponding functions \(V(x)\). Note, however, that this is a daunting task computationally. For example, even with the Bernoulli marginals where
\(n_{i}=0\) or \(1\), the number of functions \(V\) to consider is \(2^{d}\), which grows exponentially in \(d\). To sidestep this issue, we only consider \(V(x)=1_{\{x_{i}=n_{i}\}}\) in practice and take \(n_{i}\) as the forecast value in the \(i\)th coordinate for which (57) is largest. E.g. in the Bernoulli case, this task is computationally of order \(d\).
### Speed-up in forecasting computation
The computational burden of sequential Monte Carlo sampling is substantial. The majority of the cost is due to sampling (doubly) truncated multivariate Gaussian random variables \(\{\xi_{t}^{(k)}\}\) in (63), and to the fact that the algorithm runs for \(t=1,\ldots,T\). Currently, we implement this sampling through the R package tmvtnorm developed by Wilhelm and Manjunath (2010), and little improvement in the speed of generating truncated multivariate normal random variables itself is to be expected.
To reduce the computational cost, we note that the covariance matrices \(\hat{R}_{t|t-1}\) of prediction error typically converge within a few steps. This is due to a similarly quick convergence of the covariance matrix \(\hat{Q}_{t|t-1}\) of prediction error of the factor series in (83), and the covariance matrix \(\tilde{Q}_{t|t}\) and Kalman gain \(K_{t}\) described in (86).
From the pair of covariance matrices in (83) and (86), one has the recursive equation
\[\hat{Q}_{t+1|t}=\Psi\left(\hat{Q}_{t|t-1}-\hat{Q}_{t|t-1}\Lambda^{\prime}( \Lambda\hat{Q}_{t|t-1}\Lambda^{\prime}+\Sigma_{\varepsilon})^{-1}\Lambda\hat{ Q}_{t|t-1}\right)\Psi^{\prime}+\Sigma_{\eta}, \tag{69}\]
with the given initial condition \(\hat{Q}_{1|0}=\Psi\tilde{Q}_{0|0}\). The covariance matrices of the prediction error converge to a positive definite matrix \(Q\) satisfying the discrete algebraic Riccati equation (DARE),
\[Q=\Psi\left(Q-Q\Lambda^{\prime}(\Lambda Q\Lambda^{\prime}+\Sigma_{\varepsilon} )^{-1}\Lambda Q\right)\Psi^{\prime}+\Sigma_{\eta}. \tag{70}\]
One has a similar equation for the covariance matrices \(\tilde{Q}_{t|t}\). Convergence to the solutions of these equations happens within a few time steps, substantially fewer than the length of observations \(T\). Since the purpose of the SIS/R algorithm is to obtain the importance weights \(\{\tilde{w}_{T}^{(k)}\}\) along the particles \(\{Z_{1:T}^{(k)}\}\), and since we presume a stable factor series, it is reasonable to run the forecasting algorithm with only a few of the last observations. In the simulations and application below, we employ the forecasting algorithm with the last 10 observations.
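A sketch of obtaining the limit \(Q\) by simply iterating the recursion (69) until it stabilizes at the DARE solution (70); in practice, a dedicated DARE solver could be used instead.

```python
import numpy as np

def dare_by_iteration(Psi, Lam, Sig_eps, Sig_eta, Q0, tol=1e-10, max_iter=1000):
    """Iterate the Riccati recursion (69) for the prediction-error covariance
    until it stabilizes at the DARE solution Q of (70)."""
    Q = Q0
    for _ in range(max_iter):
        S = Lam @ Q @ Lam.T + Sig_eps
        Q_new = (Psi @ (Q - Q @ Lam.T @ np.linalg.solve(S, Lam @ Q)) @ Psi.T
                 + Sig_eta)
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new
        Q = Q_new
    return Q
```

In the scalar case \(\Psi=0.7\), \(\Lambda=\Sigma_{\varepsilon}=\Sigma_{\eta}=1\), the iteration settles at the positive root of \(Q^{2}-0.49Q-1=0\).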
### On forecasting for longer horizons
In this section, we briefly discuss what to expect from the forecasting method when the forecasting horizon \(h\) becomes longer. Recall that the latent factor series is stationary and follows a stable VAR model. The latent process \(\{Z_{t}\}\) is also stationary and its long-term prediction converges to its mean, which is a zero vector. From (94) and (96), the predicted particles are therefore expected to converge eventually to zero vectors as well. Thus, for longer horizon \(h\), we expect
\[\mathbb{E}_{x_{1:t}}[V(\hat{X}_{t+h|t})]\approx D_{V,t+h}(0). \tag{71}\]
As in Remark 4.1, consider \(V(x)=V(x_{i})=1_{\{x_{i}=n_{i}\}}\), \(i=1,\ldots,d\). For \(\hat{Z}_{t+h|t}=z=0\), (59) becomes
\[D_{V,t+h}(0) = \int_{\mathbb{R}^{d}}1_{\{(G(z_{t+h}))_{i}=n_{i}\}}\frac{\exp \left(-\tfrac{1}{2}(z_{t+h})^{\prime}\hat{R}_{t+h|t}^{-1}(z_{t+h})\right)}{(2 \pi)^{d/2}|\hat{R}_{t+h|t}|^{1/2}}dz_{t+h} \tag{72}\] \[= \int_{\left\{G_{i}(z_{i,t+h})=n_{i}\right\}}\frac{\exp\left(- \tfrac{1}{2}z_{i,t+h}^{2}/(\hat{R}_{t+h|t})_{ii}\right)}{\sqrt{2\pi}|(\hat{R}_ {t+h|t})_{ii}|^{1/2}}dz_{i,t+h},\]
where \((\hat{R}_{t+h|t})_{ii}\) is the \(i\)th diagonal entry of the covariance matrix \(\hat{R}_{t+h|t}\). For longer horizon \(h\), \((\hat{R}_{t+h|t})_{ii}\approx\text{Var}(Z_{i,t})=1\). We thus expect from (71) and (72) that for longer horizon \(h\),
\[\mathbb{E}_{x_{1:t}}[1_{\{(\hat{X}_{t+h|t})_{i}=n_{i}\}}]\approx D_{V,t+h}(0) \approx\mathbb{P}(Z\in A_{i,n_{i}})=\mathbb{P}(X_{i}=n_{i}), \tag{73}\]
where \(Z\sim\mathcal{N}(0,1)\) and \(X_{i}\) has the CDF \(F_{i}\) with the parameter \(\theta_{i}\). Hence, for longer horizon \(h\), our forecasting method can be thought of as choosing \(n_{i}\) to be the most likely count value according to the distribution \(F_{i}\) with the parameter \(\theta_{i}\). Two observations are worth making in this regard.
First, in some instances, e.g. the Bernoulli and multinomial distributions, \(\theta_{i}\) represents proportions of count values and is estimated by the corresponding sample proportions. In these cases, for long horizon \(h\), we therefore expect our forecast to yield the most frequently observed count (modulo the issue of ties). This is the case with the application considered in Section 6. On the other hand, for many other distributions, this observation need not hold. For example, for the Poisson distribution, the parameter is taken as the sample mean, and the most likely count according to the fitted Poisson distribution need not be the most frequently observed value, let alone appear in the sample.
Second, the relation (73) might be confusing from the following point of view. As argued above,
for longer horizon \(h\), we expect \(\tilde{Z}_{t+h|t}\approx 0\). In fact, we see this clearly in the application of Section 6. The relation (73) might be read as saying that \(0\) belongs to the bin \(A_{i,n_{i}}\) with the highest standard normal probability. This is, however, not necessarily the case. It is the case when \(n_{i}\) is the median of the distribution \(F_{i}\) (i.e. \(F_{i}(n_{i}-1)<1/2\) and \(F_{i}(n_{i})\geq 1/2\)), a quite likely scenario in practice, especially for a "bell-shaped" distribution with the most likely value \(n_{i}\) at the center, but the statement does not hold in general.
## 5 Simulation study
In this section, we assess the performance of estimation and forecasting procedures introduced in Sections 3 and 4. Here, we focus on Bernoulli, Poisson, and negative binomial marginal distributions with several parameter values.
### Estimation
#### 5.1.1 Model parameter estimation for known number of factors and lag order
The model setting in the simulation is defined as follows. For the order \(p\) of the factor series \(\{Y_{t}\}\) in (4), we take \(p=1\) and assume it to be given. The dimension is either \(r=2\) or \(r=5\), which is assumed to be known as well. The corresponding transition matrices \(\Psi=(\psi_{ij})\) are diagonal for simplicity, with all diagonal entries equal to either \(0.7\) (positive correlation) or \(-0.7\) (negative correlation); the corresponding matrices are denoted \(\Psi_{1}^{(1)}\) and \(\Psi_{1}^{(2)}\), respectively. We consider \(\Sigma_{\eta}=I_{r}\) and \(\Sigma_{\varepsilon}=I_{d}\). Following (24), the component \(\Lambda_{2}\) of the loadings matrix \(\Lambda\) in (3) is generated as follows. The entries \(\lambda_{i,j}\), \(i=1,\ldots,d-r\), \(j=1,\ldots,r\), in \(\Lambda_{2}\) are independently drawn from the uniform distribution \(U(0,1)\). We consider the same parameters \(\{\theta_{i}\}\) and marginal count distributions as for the illustrative examples of link functions in Figure 1. For the Bernoulli case, the \(d\) component series are divided into \(3\) equal groups having \(\theta_{i}=p_{i}=0.2,0.4,0.7\), respectively. Similarly, the Poisson marginal counts have \(\theta_{i}=\lambda_{i}=0.1,1,10\), and the negative binomial case has \(\theta_{i}=p_{i}=0.2,0.4,0.7\) with the number of successes equal to \(3\). Monte Carlo simulations are based on \(100\) replications for each setting.
The performance of our estimators is assessed through relative \(\ell_{2}\) losses, which are defined as
\[L(\hat{a})=\frac{\mathbb{E}\|\hat{a}-a\|_{F}}{\|a\|_{F}}, \tag{74}\]
where \(\hat{a}\) is an estimator of the parameter \(a\). The reported losses are computed as averages over 100 replications. The numerical biases of our estimators are defined as
\[B(\hat{a})=\frac{\|\mathbb{E}[\hat{a}]-a\|_{1}}{\|a\|_{1}}, \tag{75}\]
where, again, \(\hat{a}\) is an estimator of the parameter \(a\). As with the relative \(\ell_{2}\) losses, the reported biases are computed as averages over 100 replications.
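A minimal sketch of the performance measures (74)-(75) computed over Monte Carlo replications.

```python
import numpy as np

def relative_l2_loss(estimates, truth):
    """Relative l2 loss (74), averaged over Monte Carlo replications
    (np.linalg.norm defaults to the Frobenius norm for matrices)."""
    losses = [np.linalg.norm(e - truth) / np.linalg.norm(truth)
              for e in estimates]
    return float(np.mean(losses))

def relative_bias(estimates, truth):
    """Numerical bias (75): relative l1 distance of the Monte Carlo mean."""
    mean_est = np.mean(estimates, axis=0)
    return float(np.abs(mean_est - truth).sum() / np.abs(truth).sum())
```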
Table 1 summarizes the estimation results for several \(d\) and \(T\) values. Generally, the results are as expected. For example, both the losses and the biases decrease with increasing sample size \(T\). As the dimension \(d\) increases, both the losses and the biases decrease as well. When the number of factors gets larger, the corresponding losses and biases increase. Additionally, when the VAR transition matrix \(\Psi_{1}^{(2)}\) has negative entries, the losses and the biases are smaller than those for \(\Psi_{1}^{(1)}\).
Another interesting point is that the estimation losses for the underlying factor series \((\hat{\Psi},\hat{\Sigma}_{\eta})\) are larger than the losses for the factor model \((\hat{\Lambda},\hat{\Sigma}_{\varepsilon})\). In terms of biases, the values for the covariance matrix of the factor series are not larger than those for the covariance matrix of the factor model. However, relative to the number of entries in the parameters, the contribution of each entry to the error is larger for the factor series than for the factor model. The factor series parameters are estimated through factor model constructs, so their estimation is naturally more difficult. Furthermore, the differences in estimation losses between the different marginal count distributions are not substantial.
#### 5.1.2 Selection of the number of factors and lag order
We illustrate the performance of the selection of the number of factors \(r\) and the lag order \(p\), suggested in Section 3.3. The exact same model parameters as in Section 5.1.1 are used in the context of finding the number of factor series. We denote the scree plot method of finding the "knee" described in (40) by ED. The IC methods combining (41) with the three different penalty functions (42)-(44) are denoted IC1-IC3, respectively. For the BCV-based approach, we employ two different estimation procedures. Our principal component-based estimation of Section 3.1 is denoted by BCV(PC), while BCV(FAC) refers to the MINRES estimation of Harman and Jones (1966), briefly described in Section 3.3. For each combination of model parameters, 100 replications are performed.
For selecting the lag order of factor series, we additionally introduce two sets of VAR transition matrices \(\Psi^{(3)}=\{\Psi_{1}^{(3)},\Psi_{2}^{(3)}\}\) and \(\Psi^{(4)}=\{\Psi_{1}^{(4)},\Psi_{2}^{(4)},\Psi_{3}^{(4)},\Psi_{4}^{(4)}\}\). Those two sets of parameters are used to produce factor series \(Y_{t}\) governed by VAR(2) and VAR(4), respectively. The diagonal entries of the transition matrices are
\[(\psi_{ii,1}^{(3)},\psi_{ii,2}^{(3)}) = (0.7,-0.4), \tag{76}\] \[(\psi_{ii,1}^{(4)},\psi_{ii,2}^{(4)},\psi_{ii,3}^{(4)},\psi_{ii,4}^ {(4)}) = (0.7,-0.2,0.3,-0.4), \tag{77}\]
and the off-diagonal entries are zero. These choices of the true lag orders will be denoted by \(p=2\) and \(p=4\) in the presented results, respectively. The rest of the parameters and the size of the problems remain the same as in the other simulations. As baselines, the IC methods in (46) are considered, combined with the four penalties (47)-(50), namely the Akaike information criterion (AIC), the Hannan-Quinn criterion (HQ), the Schwarz criterion (SC), and the final prediction error criterion (FPE), respectively. 100 replications are performed for each combination of parameters.
The selection results are depicted in Figures 2 and 3 as the frequencies of the estimated \(r\) and \(p\) in 100 replications. In Figure 2, the BCV(PC) method outperforms all baselines (ED, IC1-3) in selecting the true \(r\), followed by BCV(FAC). The quality of estimation by the BCV-based approaches follows the pattern in the estimation of model parameters. That is, as the dimension \(d\) and sample length \(T\) increase, so do the percentages of correctly estimated \(r\). In terms of marginal distributions, BCV(PC) works best for the negative binomial, but the other two distributions are not far behind. Interestingly, BCV(FAC) tends to overestimate in some cases, especially when the true number of factor series is large, \(r=5\) in our simulation.
Turning to Figure 3, determining the lag order \(p\) seems more difficult. Whereas the non-BCV baselines for \(r\) in the previous experiments tended to become more accurate as either \(d\) or \(T\) increased, the IC methods now perform poorly under all conditions. Even for the BCV approach, the results are mixed. When the number of factor series is small, \(r=2\), the correct lag order tends to be estimated in the majority of replications. The estimation performance degrades considerably for a larger number of factor series, \(r=5\). The results improve for the larger sample length (\(T=400\)) and are best for the negative binomial distribution. A more accurate lag order selection procedure, however, would still be desirable.
### Forecasting
In this section, we assess forecasting performance in the simulation setting considered in Section 5.1.1. We generate the \(h\)-step-ahead prediction \(V(\hat{X}_{T+h|T})\) through (66). The importance weights \(\{\tilde{w}_{T}^{(k)}\}\) are calculated through the SIS/R algorithm. The generation of the \(h\)-step-ahead prediction \(\hat{Z}_{T+h|T}\) is described in Appendix A.
We hold out the last 5 observations and use \(T-5\) observations as given. Within the latter sample, the model is fitted and the last 10 observations are used for estimating covariances \(\hat{Q}_{t|t-1}\), \(\hat{Q}_{t|t}\), and Kalman gain \(K_{t}\) as discussed in Section 4.2. The rest of prediction follows the Kalman recursions as discussed in Appendix A.
The performance is evaluated by the following four measures. The first three are related to the Gaussian latent processes and their transformation through the marginal distributions. For the Gaussian process, it is worth measuring the total error due to the difference between \(\{\hat{Z}_{T+h|T}^{(k)}\}_{k=1,\ldots,N}\) and \(Z_{T+h}\) caused by the particles, since a correctly produced \(\hat{Z}_{T+h|T}^{(k)}\) makes it more likely that the latent process lies in the bin producing an observation identical to \(X_{T+h}\). Moreover, each particle contributes to forecasting \(X_{T+h}\), so the more the particles deviate from the truth, the more variable the forecasted values become. Likewise, we compute the forecasts of the latent factor series \(\hat{Y}_{T+h|T}^{(k)}\) and consider the difference from \(Y_{T+h}\). We also compute the root mean forecasting error of \(X_{T+h|T}\) over the average of 100 realized particles in the forecasting window. Specifically, the root mean \(h\)-step forecasting errors of the latent processes and the observation process for one replication (realization) are computed as
\[RMFE_{Y}(h)=\sqrt{\frac{\frac{1}{N}\sum_{k=1}^{N}\sum_{i=1}^{r}( \hat{Y}_{i,T+h|T}^{(k)}-Y_{i,T+h})^{2}}{\frac{1}{T}\sum_{i=1}^{r}\sum_{t=1}^{T }Y_{i,t}^{2}}},\quad h=1,2,\ldots,H, \tag{78}\] \[RMFE_{Z}(h)=\sqrt{\frac{\frac{1}{N}\sum_{k=1}^{N}\sum_{i=1}^{d}( \hat{Z}_{i,T+h|T}^{(k)}-Z_{i,T+h})^{2}}{\frac{1}{T}\sum_{i=1}^{d}\sum_{t=1}^{T }Z_{i,t}^{2}}},\quad h=1,2,\ldots,H,\] (79) \[RMFE_{X}(h)=\sqrt{\frac{\sum_{i=1}^{d}(\hat{X}_{i,T+h|T}-X_{i,T+ h})^{2}}{\frac{1}{T}\sum_{i=1}^{d}\sum_{t=1}^{T}X_{i,t}^{2}}},\quad h=1,2,\ldots,H. \tag{80}\]
Finally, we report the sensitivity defined as
\[Sens(h)=\frac{1}{d}\sum_{i=1}^{d}1_{\{\hat{X}_{i,T+h|T}=X_{i,T+h}\}},\quad h=1,2,\ldots,H. \tag{81}\]
Note that this measure is stringent, in that it reports the matching rate entrywise. Each Monte Carlo simulation is based on 100 replications and the averages are reported. In addition, we consider two baselines. First, as the most naive forecasting method, we use the last observation. We call this Last observation (denoted Last or L). Second, we consider each dimension \(i=1,\ldots,d\) separately and define the predictions as the most frequent previously observed value for that dimension. For example, if the most frequent value of \(X_{1,t}\) for the first dimension \(i=1\) within the observation window is 3 for the Poisson case, we take 3 as the prediction for all 5 steps ahead. We call this approach Marginal likelihood (denoted Marginal or M); see also Section 4.3. The forecasting accuracy of these two baselines is measured by sensitivity.
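As a concrete illustration, the observation-level measures (80) and (81) can be computed as in the following sketch; the array shapes and names are our assumptions, with the forecast taken as a single \(d\)-vector at horizon \(h\).

```python
import numpy as np

def rmfe_x(X_hat, X_true, X_obs):
    """Root mean h-step forecasting error of the observations, as in eq. (80).

    X_hat, X_true: (d,) forecast and truth at horizon h.
    X_obs: (d, T) observations used for the normalizing denominator.
    """
    num = np.sum((X_hat - X_true) ** 2)
    den = np.mean(np.sum(X_obs ** 2, axis=0))  # (1/T) * sum_i sum_t X_{i,t}^2
    return np.sqrt(num / den)

def sensitivity(X_hat, X_true):
    """Entrywise matching rate of the forecasts, as in eq. (81)."""
    return np.mean(X_hat == X_true)
```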
Table 2 reports the results. In general, the root mean forecasting errors of both the observations \(X_{t}\) and the two latent processes \(Z_{t}\) and \(Y_{t}\) gradually increase with a longer forecasting horizon, with the increasing trend along the horizon \(h\) more pronounced for \(RMFE_{Z}\) and \(RMFE_{Y}\) than for the observations. The forecasting error of the factor series is also larger than that of the observation series. Furthermore, it is interesting that the forecasting error of the factor series depends on the dimension \(d\) rather than the number of factors \(r\), with the error decreasing as \(d\) increases. The sensitivity, on the other hand, exhibits similar performance along the horizon, with the longer horizons even performing better in some cases. Perhaps as expected, the sensitivity of our method scores better than both the Last and Marginal approaches for all combinations of simulations. Among the different marginal distributions, the Bernoulli cases have the best performance in terms of sensitivity, followed by the Poisson. The negative binomial cases have the lowest sensitivity scores, but their difference in \(RMFE_{X}\) from the Poisson cases is relatively small. Interestingly, the \(RMFE_{Z}\) and \(RMFE_{Y}\) values among the three distributions are similar, and forecasting performances for the different numbers of factors are also similar. However, a significant difference in performance is found across dimensions \(d\), as well as across the different transition matrix cases, with better forecasting in the case of negative dependence (\(\Psi_{1}^{(2)}\)). This is intuitive in that fluctuations of the latent process around the bin thresholds then happen more frequently across the temporal series.
## 6 Application
To demonstrate the utility of the proposed model, we consider individual-level time series consisting of daily self-report measures of personality collected by Borkenau and Ostendorf (1998). That study was designed to explore items describing personal emotions representing the "Big Five" factors of personality. The structure of the data is as follows. 30 items of emotions where groups of 6 items are known to be related to one of the five factors in personality have been collected for 22 students over 90 days.
All 30 items are thought to correspond to at least one of the "Big Five" factors. These categories are known to follow a factor structure and are denoted as categories 1 through 5 below. Each evening, the participants of the study were instructed to appraise their daily behavior for each item on a scale ranging from 0 to 6, with 6 indicating greater endorsement of the emotion that day. For illustrative purposes, we work with the data for one student out of the 22 available. The choice of the student was largely motivated by the following consideration: for many students, responses on some items showed very little variability (i.e., being mostly constant), and such cases were not particularly interesting from the time series perspective. We reduce the effect of the two extreme observations 0 and 6 by merging them with 1 and 5, respectively, so that the new scale ranges from 1 to 5. Practitioners wishing to use our (or any other) model should first carry out some basic exploratory analysis of the data, as we touch upon below.
The data set and its time series plots are illustrated in Figure 4. Among 90 consecutive observations, we use the first 85 observations for estimation and the last 5 observations as a hold-out sample to evaluate our forecasts. The corresponding 6 items are grouped by the identified categories C1 through C5. The time series of individual items show substantial variability. We also see from the time plots that the dynamics of the 6 items in each category share common features. Hence, it is plausible to postulate the existence of a latent factor structure that drives the dynamics.
The following remarks provide further evidence for the latent factor structure. Several estimated correlation matrices are depicted in Figure 5. The top panel presents the sample autocorrelation matrices of the observed series \(X_{t}\). The bottom panel presents the estimated autocorrelation matrices of the latent series \(Z_{t}\). For both panels, the left plot is for lag 0 and the right plot is for lag 1. One can see that both autocorrelation matrices for the same lag order are nearly indistinguishable. At lag 0, the plots in both panels show clear block patterns characteristic of the factor structure. Furthermore, the factor structure seems to be preserved through temporal
dependence as suggested by the plots of the sample ACFs at lag 1, though it is less discernible.
The BCV method for the number of factors in Section 3.3 selects \(r=5\), and the BCV method for the lag order in Section 3.3 suggests \(p=1\). We thus work with the model assuming \(r=5\), \(p=1\). For these selections and in the estimation results below, we assume multinomial distributions on \(\{1,2,3,4,5\}\) as marginal distributions. Figure 6 presents the estimated loading matrix and Figure 7 depicts the transition matrix of the factor series estimated through the method described in Section 3. This pattern is consistent with what we observe in Figure 5. Note that the transition matrix has relatively large values not only on the diagonal but also for some off-diagonal entries. The large off-diagonal values indicate that there are cross-correlations across the factor series.
With the estimated model, we forecast the next 5 steps as discussed in Remark 4.1 and compare the values with the true ones held out of the sample. For comparison, we consider two simple forecasting approaches, Last and Marginal, as described in Section 5.2.
Figures 8 and 9 present the generated particles \(\{\hat{Z}_{t}^{(k)}\}\) for the 30 items within the last 10 observations and the predicted values of the latent process \(\{\hat{Z}_{T+h|T}^{(k)}\}\) for the next 5 forecasting steps, respectively. The items have been grouped using the 5 underlying categories. The three or four parallel lines in Figures 8 and 9 represent the thresholds, and each pair of lines delimits a bin. To distinguish the values, we use different line types for each threshold. As explained above, the particles are generated at each time point to belong to the bin that matches the discrete observation for that dimension. This is why the particles stay within bins at all time points in the left panel. For some of the items, a thresholding line is placed outside the given vertical scale. This is due to the way the bins are defined: since each bin is estimated through the observations, some values do not appear if they are not realized in the observation period. Those values are also excluded from the candidate forecasting values. On the other hand, since no further observations are assumed to be given after the observation period, the particles are generated by forecasting the latent process. For this reason, the particles in the right panel need not stay in the same bin. Note that all of the particles seem to converge to zero for the longer forecasting horizons. As explained in Section 4.3, this is natural when a stable VAR is used for forecasting. Note also that the particles are rather close to zero even for the first few horizons. This is a consequence of the interplay between the levels of signal (factors) and noise (errors) in the estimated model: since the noise is forecast as zero and the variance of the latent process is 1 in each dimension, a larger estimated noise magnitude downweights the factor forecast and pulls the forecasted value toward 0.
Finally, Figure 10 shows the plots of the absolute differences between forecasts and the true
values for each item. The items are ordered to match the categories, expecting that each factor mainly affects the corresponding category. Overall, the proposed forecasting approach slightly outperforms the reference methods, showing smaller absolute differences in general. In particular, as explained in Section 4.3, the forecasting performance becomes identical to Marginal for the longer horizons. For the smaller horizons, our approach does better on 4 items, compared to 2 items where Marginal does better. We naturally cannot draw overarching conclusions based on this evidence.
## 7 Conclusion
In this work, we considered a multivariate discrete-valued time series model, wherein component count series are obtained by binning the continuous values of latent Gaussian dynamic factor processes. We introduced an estimation method based on the second-order properties of the count and latent processes, and PCA. We also suggested additional model selection approaches for determining the number of factor series and their lag orders through cross-validation and information criteria. Facilitated by the state-space formulation of our model, we employed a sequential Monte Carlo method with resampling for forecasting. Our estimation and forecasting methods were examined on simulated data and an empirical example from psychology.
While our study advances a framework for latent Gaussian time series modeling of categorical observations collected over time, important questions remain. As we pointed out in Remark 3.2, theory for our model in high dimensions is currently being investigated. Regarding estimation, the lag order selection could be improved. There are also potential improvements to make in terms of accuracy and computing time of our forecasting methods. Instead of employing standard particle filtering strategies, one could try other variants of sequential Monte Carlo sampling, for example, ensemble Kalman filtering (e.g. Frei and Kunsch, 2013).
## Acknowledgement
Vladas Pipiras's research was partially supported by the grants NSF DMS 1712966, DMS 2113662, and DMS 2134107.
## A Kalman recursions and forecasting for SIS/R algorithm
This section describes how the Kalman recursions are used to obtain the \(h\)-step-ahead linear prediction of \(Z_{t}\), \(\hat{Z}_{t+h|t}=H_{t1}^{(h)}Z_{t}+\ldots+H_{tt}^{(h)}Z_{1}\), and how they enter into the SIS/R algorithm. Given the autocovariance function of \(\{Z_{t}\}\), the predictor \(\hat{Z}_{t+h|t}\) can naturally be computed through e.g. the Durbin-Levinson algorithm, but the Kalman recursions route provides computational benefits for higher dimensions \(d\). Related technical details can be found in Durbin and Koopman (2012) and Douc et al. (2014), among others.
We consider below the case \(p=1\) only, for simplicity; for higher \(p\), one can convert the VAR(\(p\)) structure of the factor series into an augmented VAR(1) model by using the companion form of the VAR transition matrix. We define the one-step-ahead prediction of \(Y_{t}\) by \(\hat{Y}_{t|t-1}\) when \(Z_{1},\ldots,Z_{t-1}\) are given, and denote the corresponding covariance matrix of the prediction error by \(\hat{Q}_{t|t-1}:=\mathbb{E}[(Y_{t}-\hat{Y}_{t|t-1})(Y_{t}-\hat{Y}_{t|t-1})^{\prime}]\). Similarly, let \(\tilde{Y}_{t|t}\) be the filtered estimate and denote the corresponding error covariance by \(\tilde{Q}_{t|t}\). By the convention of Kalman recursions, we let \(\tilde{Y}_{0|0}\sim\mathcal{N}(0,\tilde{Q}_{0|0})\), where \(\tilde{Q}_{0|0}=\text{Var}(Y_{0})\).
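The VAR(\(p\))-to-VAR(1) conversion mentioned above uses the standard companion form, which can be sketched as follows (an illustration of the textbook construction, not the authors' code).

```python
import numpy as np

def companion(Psi_list):
    """Stack VAR(p) transition matrices Psi_1, ..., Psi_p into the
    (p*r, p*r) companion matrix [[Psi_1 ... Psi_p], [I, 0, ...], ...]."""
    p = len(Psi_list)
    r = Psi_list[0].shape[0]
    top = np.hstack(Psi_list)
    # eye(M, N) places identity blocks on the shifted sub-diagonal rows.
    bottom = np.eye((p - 1) * r, p * r)
    return np.vstack([top, bottom])
```

The stacked state \((Y_{t}^{\prime},\ldots,Y_{t-p+1}^{\prime})^{\prime}\) then follows a VAR(1) with this transition matrix, so the \(p=1\) recursions below apply verbatim.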
The forecast step, which generates the filtering distribution conditioned on the previous information up to \(t-1\), is
\[\hat{Y}_{t|t-1}=\Psi\tilde{Y}_{t-1|t-1} \tag{82}\]
and the corresponding covariance matrix of prediction error is
\[\hat{Q}_{t|t-1}=\Psi\tilde{Q}_{t-1|t-1}\Psi^{\prime}+\Sigma_{\eta}. \tag{83}\]
As a consequence, \(\hat{Z}_{t|t-1}=\Lambda\hat{Y}_{t|t-1}\) and \(\hat{R}_{t|t-1}=\Lambda\hat{Q}_{t|t-1}\Lambda^{\prime}+\Sigma_{\varepsilon}\). The joint distribution of the forecast \(Y_{t}\) and \(Z_{t}\) conditioning on \(Z_{1:t-1}\) is
\[\begin{pmatrix}Y_{t}\\ Z_{t}\end{pmatrix}\Bigg{|}Z_{1:t-1}\sim\mathcal{N}_{r+d}\left(\begin{pmatrix} \hat{Y}_{t|t-1}\\ \Lambda\hat{Y}_{t|t-1}\end{pmatrix},\begin{pmatrix}\hat{Q}_{t|t-1}&\hat{Q}_{t|t -1}\Lambda^{\prime}\\ \Lambda\hat{Q}_{t|t-1}&\Lambda\hat{Q}_{t|t-1}\Lambda^{\prime}+\Sigma_{ \varepsilon}\end{pmatrix}\right). \tag{84}\]
From this perspective, one can interpret the update step as sampling \(Y_{t}\) conditioned on \(Z_{1:t}\). That is, \(Y_{t}|Z_{1:t}\sim\mathcal{N}_{r}(\tilde{Y}_{t|t},\tilde{Q}_{t|t})\) where
\[\tilde{Y}_{t|t} = \hat{Y}_{t|t-1}+K_{t}(Z_{t}-\hat{Z}_{t|t-1}), \tag{85}\] \[\tilde{Q}_{t|t} = (I_{r}-K_{t}\Lambda)\hat{Q}_{t|t-1}, \tag{86}\]
where \(K_{t}=\hat{Q}_{t|t-1}\Lambda^{\prime}(\Lambda\hat{Q}_{t|t-1}\Lambda^{\prime}+ \Sigma_{\varepsilon})^{-1}=\hat{Q}_{t|t-1}\Lambda^{\prime}\hat{R}_{t|t-1}^{-1}\) is called the Kalman gain. The relations (85) and (86) form the update equations for Kalman recursions. One can apply the Sherman-Morisson-Woodbury formula for the matrix inversion of \(\hat{R}_{t|t-1}^{-1}\) when the dimension \(d\) is high. These two equations are used to update filtered estimators given \(Z_{1:t-1}\) when new information about \(Z_{t}\) is added.
Hence, the Kalman recursions for the SIS/R algorithm suggested in Section 4 to obtain one-step-ahead linear prediction \(\hat{Z}_{t|t-1}\) are as follows: At each time \(t=1,\ldots,T\), carry out the following two steps for \(k=1,\ldots,N\),
1. Forecasting step: \[\hat{Y}_{t|t-1}^{(k)} = \Psi\tilde{Y}_{t-1|t-1}^{(k)},\] (87) \[\hat{Q}_{t|t-1} = \Psi\tilde{Q}_{t-1|t-1}\Psi^{\prime}+\Sigma_{\eta},\] (88) \[\hat{Z}_{t|t-1}^{(k)} = \Lambda\hat{Y}_{t|t-1}^{(k)},\] (89) \[\hat{R}_{t|t-1} = \Lambda\hat{Q}_{t|t-1}\Lambda^{\prime}+\Sigma_{\varepsilon}.\] (90)
2. Updating step: \[K_{t} = \hat{Q}_{t|t-1}\Lambda^{\prime}(\Lambda\hat{Q}_{t|t-1}\Lambda^{\prime}+\Sigma_{\varepsilon})^{-1},\] (91) \[\tilde{Y}_{t|t}^{(k)} = \hat{Y}_{t|t-1}^{(k)}+K_{t}(\tilde{Z}_{t}^{(k)}-\hat{Z}_{t|t-1}^{(k)}),\] (92) \[\tilde{Q}_{t|t} = (I_{r}-K_{t}\Lambda)\hat{Q}_{t|t-1}.\] (93)
Note that \(Z_{t}\) in (85) is replaced by \(\tilde{Z}_{t}\) in (92) because this is the notation used in the SIS/R algorithm.
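One forecast-update cycle of (87)-(93) for a single particle can be sketched in NumPy as follows (the function and variable names are our own illustration).

```python
import numpy as np

def kalman_step(Y_filt, Q_filt, Z_t, Psi, Lam, Sig_eta, Sig_eps):
    """One Kalman forecast + update step for a single particle.

    Y_filt, Q_filt: filtered mean (r,) and covariance (r, r) at time t-1.
    Z_t: the particle's latent-process value at time t, shape (d,).
    Returns the new filtered mean/covariance and the one-step-ahead
    prediction mean/covariance of Z_t.
    """
    # Forecast step, eqs. (87)-(90)
    Y_pred = Psi @ Y_filt
    Q_pred = Psi @ Q_filt @ Psi.T + Sig_eta
    Z_pred = Lam @ Y_pred
    R_pred = Lam @ Q_pred @ Lam.T + Sig_eps
    # Update step with the Kalman gain, eqs. (91)-(93)
    K = Q_pred @ Lam.T @ np.linalg.inv(R_pred)
    Y_new = Y_pred + K @ (Z_t - Z_pred)
    Q_new = (np.eye(len(Y_filt)) - K @ Lam) @ Q_pred
    return Y_new, Q_new, Z_pred, R_pred
```

For high \(d\), the explicit inverse of \(R_{t|t-1}\) would be replaced by the Sherman-Morrison-Woodbury formula, as noted above.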
Forecasting \(h\)-step-ahead linear prediction after \(T\) observations in the algorithm is straightforward. Since the latent factor series \(\{Y_{t}\}\) follows a VAR model, the prediction of \(Y_{T+h}\) with the information only up to \(T\) is
\[\hat{Y}_{T+h|T}=\Psi\hat{Y}_{T+h-1|T}=\ldots=\Psi^{h}\tilde{Y}_{T|T}, \tag{94}\]
and the corresponding covariance matrix of prediction error is
\[\hat{Q}_{T+h|T}=\Psi\hat{Q}_{T+h-1|T}\Psi^{\prime}+\Sigma_{\eta}=\ldots=\Psi^{h} \tilde{Q}_{T|T}\Psi^{h^{\prime}}+\sum_{s=1}^{h}\Psi^{s-1}\Sigma_{\eta}\Psi^{s-1 ^{\prime}} \tag{95}\]
from (83).
3. Prediction step:
\[\hat{Z}^{(k)}_{T+h|T} = \Lambda\hat{Y}^{(k)}_{T+h|T}, \tag{96}\] \[\hat{R}_{T+h|T} = \Lambda\hat{Q}_{T+h|T}\Lambda^{\prime}+\Sigma_{\varepsilon}. \tag{97}\]
Hence, for each \(k=1,\ldots,N\), we compute \(\{\hat{Y}^{(k)}_{T+h|T}\}\) as above and then \(\hat{Z}^{(k)}_{T+h|T}\) as (96). Likewise, the covariance of the prediction error is also computed by (97). Note that unlike within the observation period, computing (63) is impossible beyond the period. So rather than following Forecasting and Updating steps, we directly compute (96) and (97).
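The \(h\)-step recursion (94)-(97) can be sketched as follows (again an illustration with our own naming); it returns the \(Z\)-scale forecast mean and covariance for each horizon.

```python
import numpy as np

def predict_h_steps(Y_filt, Q_filt, Psi, Lam, Sig_eta, Sig_eps, H):
    """Iterate the forecast recursion and map to the Z-scale.

    Starting from the filtered mean/covariance at time T, returns a list of
    (Z-forecast mean, Z-forecast covariance) pairs for horizons h = 1..H.
    """
    Y, Q = Y_filt, Q_filt
    out = []
    for _ in range(H):
        Y = Psi @ Y                    # eq. (94)
        Q = Psi @ Q @ Psi.T + Sig_eta  # eq. (95)
        out.append((Lam @ Y, Lam @ Q @ Lam.T + Sig_eps))  # eqs. (96)-(97)
    return out
```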
Table 2: Root mean forecasting errors (of \(X_{t}\), \(Z_{t}\), and \(Y_{t}\)) and sensitivity of the \(h\)-step-ahead forecasts for the Bernoulli, Poisson, and negative binomial marginal distributions, transition matrices \(\Psi^{(1)}\) and \(\Psi^{(2)}\), numbers of factors \(r=2,5\), and varying \(d\) and \(T\).
## References
Figure 1: (a) The link function \(L_{ij}(u)\) for several combinations of CDFs \(F_{i}=\text{Bern}(p_{i})\), \(F_{i}=\text{Pois}(\lambda_{i})\), and \(F_{i}=\text{NB}(r,p_{i})\) with \(r=3\) (same type of distribution for \(j\)). (b) The inverse link function \(L_{ij}^{-1}(v)\) and its interpolation \(\tilde{L}_{ij}^{-1}(v)\) for chosen combinations of CDFs.
Figure 2: Estimated number of factors from simulated data for various combinations of model parameters including transition matrices \(\Psi\), number of factors \(r\), number of time points \(T\), dimension \(d\), and several marginal distributions.
Figure 3: Estimated lag order from simulated data for various combinations of model parameters including transition matrices with different lag order \(p\), number of factors \(r\), number of time points \(T\), dimension \(d\), and several marginal distributions.
Figure 4: Time plots of 90-day observations for 30 items.
Figure 5: (Top) The sample autocorrelation matrices of the observations at lag 0 (left) and lag 1 (right), \(\hat{\Sigma}_{X}(h)\), \(h=0,1\). (Bottom) The estimated autocorrelation matrices of the latent Gaussian series at lag 0 (left) and lag 1 (right), \(\hat{\Sigma}_{Z}(h)\), \(h=0,1\).
Figure 6: Estimate of the loadings matrix \(\hat{\Lambda}\).
Figure 7: Estimate of the VAR(1) transition matrix \(\hat{\Psi}_{1}\).
Figure 8: The simulated particles for the first 18 items in the observation period (left) and forecasting period (right). Two consecutive horizontal lines with different line types form the bin for discrete observations of each item.
Figure 9: The simulated particles for the next 12 items in the observation period (left) and forecasting period (right). Two consecutive lines with different line types form the bin for discrete observations of each item.
Figure 10: Result of 5-step-ahead prediction of each item with the two baseline approaches. Each consecutive group of 6 items is known to be affected by the same category.
arXiv:2308.09958v1, http://arxiv.org/abs/2308.09958v1, 2023-08-19

Pavla Louthánová, Matouš Kozák, Martin Jureček, Mark Stamp

# A Comparison of Adversarial Learning Techniques for Malware Detection
###### Abstract
Machine learning has proven to be a useful tool for automated malware detection, but machine learning models have also been shown to be vulnerable to adversarial attacks. This article addresses the problem of generating adversarial malware samples, specifically malicious Windows Portable Executable files. We summarize and compare work that has focused on adversarial machine learning for malware detection. We use gradient-based, evolutionary algorithm-based, and reinforcement-based methods to generate adversarial samples, and then test the generated samples against selected antivirus products. We compare the selected methods in terms of accuracy and practical applicability. The results show that applying optimized modifications to previously detected malware can lead to incorrect classification of the file as benign. It is also known that generated malware samples can be successfully used against detection models other than those used to generate them and that using combinations of generators can create new samples that evade detection. Experiments show that the Gym-malware generator, which uses a reinforcement learning approach, has the greatest practical potential. This generator achieved an average sample generation time of 5.73 seconds and the highest average evasion rate of 44.11%. Using the Gym-malware generator in combination with itself improved the evasion rate to 58.35%.
**Keywords:** Adversarial Examples, Malware Detection, Machine Learning, PE Files
## 1 Introduction
With the rapid development of information technology, computer systems have become increasingly important in the daily lives of people. Unfortunately, the rapid development of these technologies is accompanied by a similarly rapid increase in cyberattacks.
Malicious software (malware) is one of the most significant security threats today, comprising several different categories of malicious code, such as viruses, trojans, worms, spyware, and ransomware. To protect computers and the Internet from malware, early detection is necessary. However, this is problematic, as a large amount of new malicious code is generated every day [1]. Since it
is not possible to analyze each sample individually, automatic mechanisms are required to detect malware.
Antivirus companies often rely mainly on signature-based detection techniques [2] for malware detection. Signatures are specific patterns that allow for the recognition of malicious files. For example, they can be a byte sequence, a file hash, or a string. When inspecting a file, the antivirus system compares its content with the signatures of already-known malware stored in the database. If a match is found, the file is reported as malware. Signature-based detection methods are fast and effective in detecting known malware. However, malware authors can modify their code to change the signature of the program, thereby avoiding detection. Some malware can hide in the system using various obfuscation techniques [3], such as encryption, oligomorphic, polymorphic, metamorphic, stealth, and packing methods, to make the detection process more difficult.
Machine learning (ML) models are commonly used today in various fields. Their application can be found, for example, in technologies such as self-driving cars, weather forecasting, face recognition, or language translation systems. Machine learning has also proved to be a useful tool for automatic malware detection [4]. Unlike the signature-based method, it is capable of detecting previously unknown or obfuscated malware. However, it can be difficult to explain why the model classifies a certain file as malicious or benign [5], which can cause hidden vulnerabilities that attackers can exploit.
Machine learning models are vulnerable to adversarial attacks [6]. Attackers purposely design adversarial examples, which are deliberately designed inputs to a machine learning model, to cause the model to make a mistake in its predictions. Adversarial machine learning is a field that deals with attacks on machine learning algorithms and defenses against such attacks.
Malware detection is thus a battle between defenders and malware authors, in which each side attempts to devise new and effective ways to outwit the other. Each detection method has its own advantages and disadvantages. In various scenarios, one method may be more successful than another. Thus, the creation of an effective malware detection method is a very challenging task, and new research and methods are necessary.
The main contributions of this paper are to compare works that focus on adversarial machine learning in the area of malware detection. Specifically,
* we applied some existing methods in the field of adversarial learning to selected malware detection systems.
* we combined these methods to create more sophisticated adversarial generators capable of bypassing top-tier AV products.
* we evaluated the single and combined generators in terms of accuracy and usability in practice.
The rest of the paper is organized as follows: Section 2 describes state-of-the-art techniques used to generate adversarial examples. Section 3 provides an overview of publications focused on creating adversarial Portable Executable malware samples. Section 4 describes the experiments performed and the metrics used for evaluation. Section 5 presents and discusses the experimental results, and Section 6 concludes this work.
## 2 Background
In this section, we describe the different methods used to create adversarial examples. We also introduce and describe selected attacks for experimentation.
### 2.1 Methods for Creating Adversarial Examples
In this section, we describe various methods to create adversarial examples.
#### 2.1.1 Gradient-based Approaches
Gradient-based methods are a popular approach to generate adversarial examples. These methods work by computing the gradient of a loss function with respect to the input data. This gradient is then used to iteratively modify the input to minimize the loss. The Fast Gradient Sign Method [7] and the Jacobian-based Saliency Map Approach [8] are two popular gradient-based methods used for malware generation.
Given a trained model \(f\) and an input example \(x\), gradient-based methods generate an adversarial
example \(x^{\prime}\) by adding a small perturbation \(\delta\) to the input that maximizes the loss function \(L(f(x^{\prime}),y)\), where \(y\) is the true input label.
The perturbation \(\delta\) is calculated as follows:
\[\delta=\epsilon\cdot\text{sign}(\nabla_{x}L(f(x),y))\]
where \(\epsilon\) is a small constant that controls the size of the perturbation, and the sign function is used to ensure that the perturbation has the same sign as the gradient, allowing efficient computation and ensuring that the perturbation always increases the loss.
The gradient of the loss function with respect to the input \((\nabla_{x}L(f(x),y))\) is computed by backpropagation through the model \(f\). This gradient gives the direction in which the loss function increases the most for a small change in the input and is used to determine the direction of the perturbation.
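As a concrete illustration (a minimal sketch, not any paper's implementation), the snippet below applies this rule to a toy logistic-regression "detector", for which the input gradient of the cross-entropy loss has the closed form \((f(x)-y)\,w\), so no backpropagation machinery is needed; all weights and inputs are made-up values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bce(p, y):
    # Binary cross-entropy loss for a single example.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm_delta(w, b, x, y, eps):
    # For L = bce(sigmoid(w.x + b), y), the input gradient is (p - y) * w,
    # so delta = eps * sign(grad_x L), exactly as in the formula above.
    p = sigmoid(dot(w, x) + b)
    grad = [(p - y) * wi for wi in w]
    return [eps * ((g > 0) - (g < 0)) for g in grad]

# Toy model and input (illustrative values only).
w, b = [0.5, -1.2, 0.8], 0.1
x, y = [1.0, 0.3, -0.7], 1.0  # y = 1: "malicious"

delta = fgsm_delta(w, b, x, y, eps=0.1)
x_adv = [xi + di for xi, di in zip(x, delta)]

loss_before = bce(sigmoid(dot(w, x) + b), y)
loss_after = bce(sigmoid(dot(w, x_adv) + b), y)
# Moving in the sign of the input gradient increases the loss.
```

For this linear model the loss increase is guaranteed, since stepping along the sign of the gradient strictly moves the logit away from the true class.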
Gradient-based attacks apply perturbations generated from the gradient of the cost function using either the append or the insertion method. With the append method, the data (payload) is appended at the end of the file. With the insertion method, the payload is inserted into slack regions, i.e., unused space in sections whose physical size is greater than their virtual size.
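The two placement strategies can be sketched at the byte level as follows (a hypothetical toy "file" stands in for a real PE image; real attacks must locate slack regions by parsing section headers):

```python
def append_payload(pe_bytes: bytes, payload: bytes) -> bytes:
    # Append method: the payload goes at the end of the file (overlay).
    return pe_bytes + payload

def insert_into_slack(pe_bytes: bytes, slack_offset: int,
                      slack_size: int, payload: bytes) -> bytes:
    # Insertion method: overwrite unused slack bytes between the end of a
    # section's content and its aligned physical size. The payload must fit.
    if len(payload) > slack_size:
        raise ValueError("payload larger than slack region")
    return (pe_bytes[:slack_offset] + payload
            + pe_bytes[slack_offset + len(payload):])

# Illustrative file image: 16 content bytes followed by 8 zeroed slack bytes.
fake_pe = b"\x4d\x5a" + b"\xAA" * 14 + b"\x00" * 8
out_append = append_payload(fake_pe, b"PAY")
out_slack = insert_into_slack(fake_pe, slack_offset=16,
                              slack_size=8, payload=b"PAY")
```

Note that appending grows the file while slack insertion keeps the file size unchanged, which matches the trade-off discussed later in the sample-size experiment.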
#### 2.1.2 Generative Adversarial Network-based Approaches
Generative adversarial networks (GANs) were developed and presented by Goodfellow et al. [9] in 2014.
GAN is a system consisting of two neural networks, a generator and a discriminator, which compete against each other. The goal of the generator is to create examples that are indistinguishable from the real examples in the training set, thus fooling the discriminator. In contrast, the objective of the discriminator is to distinguish the false examples produced by the generator from the real examples that come from the training data set, thus preventing it from being fooled by the generator. The generator learns from the feedback it receives from the discriminator's classification [10].
These two neural networks are trained simultaneously. The generator is constantly improving its ability to generate realistic samples, so the discriminator must continually improve its ability to distinguish between real and generated samples. This mutual competition forces both networks to continuously improve through the training process. Once this training process is completed, the generator can be used to generate new samples that are indistinguishable from the real samples [11].
Denote the generator as \(G\) and the discriminator as \(D\). As described in [9], networks \(G\) and \(D\) play the following two-player minimax game with value function \(V(G,D)\):
\[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))]\]
that \(G\) tries to minimize, while \(D\) tries to maximize. \(D(x)\) is the discriminator's estimate of the probability that the original data \(x\) is real, \(G(z)\) is the generator's output when it receives noise \(z\) as input, \(D(G(z))\) is the discriminator's estimate of the probability that a synthetic sample \(G(z)\) of data is real, \(\mathbb{E}_{x}\) is the expected value over all real data instances, and \(\mathbb{E}_{z}\) is the expected value over all generated fake instances.
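The two expectations can be estimated empirically by Monte-Carlo averaging over samples; the sketch below does this for a toy one-dimensional discriminator and generator (both functions are illustrative stand-ins, not trained networks):

```python
import math
import random

random.seed(0)

def D(x):
    # Toy discriminator: logistic score, higher for values near 1 ("real").
    return 1.0 / (1.0 + math.exp(-(4.0 * x - 2.0)))

def G(z):
    # Toy generator: maps noise to the data domain.
    return 0.5 + 0.1 * z

def value(real_samples, noise_samples):
    # Monte-Carlo estimate of V(D, G): mean log D(x) over real data plus
    # mean log(1 - D(G(z))) over generated data.
    term_real = sum(math.log(D(x)) for x in real_samples) / len(real_samples)
    term_fake = (sum(math.log(1.0 - D(G(z))) for z in noise_samples)
                 / len(noise_samples))
    return term_real + term_fake

real = [random.gauss(1.0, 0.1) for _ in range(100)]   # "real" data near 1
noise = [random.gauss(0.0, 1.0) for _ in range(100)]  # generator input noise
v = value(real, noise)  # D would maximize this; G would minimize it
```

Both log terms are negative (they are logs of probabilities), so the estimate is always below zero; training alternates gradient steps on \(D\) and \(G\) against this quantity.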
#### 2.1.3 Reinforcement Learning-based Approaches
Reinforcement learning (RL) is one of the three main machine learning paradigms, alongside supervised and unsupervised learning. In supervised machine learning, the model is trained on a set of labeled examples: from the given inputs and expected outputs, it learns a mapping that it can later use to predict the labels of unseen inputs. In unsupervised machine learning, the model is trained only on inputs without labels; it divides the input data into classes with similar properties, and during prediction, inputs are labeled based on the similarity of their properties to one of these classes. Unlike supervised and unsupervised machine learning, reinforcement learning algorithms learn by interacting with an environment and receiving feedback in the form of rewards or penalties rather than relying on pre-labeled instances.
A reinforcement learning model consists of two main parts: an agent and an environment. The
agent learns to perform a task through repeated interactions with the environment through trial and error. In addition to the agent and the environment, the reinforcement learning model has four main subelements: a policy, a reward signal, a value function and, optionally, an environment model. The policy defines the behavior of the agent at a particular time. In other words, it is a strategy that the agent uses to determine the next action based on the current state to achieve the highest reward. The reward signal is the feedback from the environment to the agent, indicating the success or failure of the agent's action in a given state. At each time step, the agent is in some state and sends the selected action as its output to the environment, which then returns a new state to the agent along with a reward signal. While the reward signal shows what is beneficial in the present, the value function describes what is beneficial in the long term. The value function provides an estimate of the expected cumulative reward from the current state of the environment in the future. The agent's objective is to maximize the total reward. The environment model mimics the behavior of the environment, making it possible to predict future states and rewards. This is an optional part of the system [12].
Reinforcement learning is defined as repeated interactions between an agent and an environment. Individual interactions (signals exchange) are performed in time steps. At time step \(t\), the environment is in state \(s_{t}\in S\), where \(S\) is the set of all possible states in the environment. The agent receives state \(s_{t}\) and then chooses action \(a_{t}\in A\) based on policy \(\pi\), where \(A\) is the set of all possible actions defined by the environment. After the environment receives information about the chosen action \(a_{t}\) from the agent, it calculates the reward \(r_{t}=R(s_{t},a_{t},s_{t+1})\) and sends it to the agent in the form of feedback. At the same time, the environment transitions to a new state \(s_{t+1}\). When this cycle is complete, we say that one time step has passed. This cycle can repeat forever or end when it reaches a terminal state or a maximum time step \(t=T\). We call the triplet of signals \((s_{t},a_{t},r_{t})\) an experience. The time elapsed between \(t=0\) and the end of the environment is called an episode. A trajectory is a sequence of experiences during an episode, \(\tau=(s_{0},a_{0},r_{0}),(s_{1},a_{1},r_{1}),\dots\)[12].
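The interaction loop above can be sketched directly; the environment, policy, and reward below are toy stand-ins chosen only to make the \((s_t, a_t, r_t)\) bookkeeping concrete:

```python
import random

random.seed(42)

class ToyEnv:
    """Minimal environment: the state is a counter; reaching 3 is terminal."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # Action 1 advances the counter and is rewarded; action 0 does nothing.
        next_state = self.state + (1 if action == 1 else 0)
        reward = 1.0 if next_state > self.state else 0.0
        self.state = next_state
        done = self.state >= 3  # terminal state
        return next_state, reward, done

def policy(state):
    # Random policy over the action set A = {0, 1}.
    return random.choice([0, 1])

env = ToyEnv()
trajectory = []  # sequence of experiences (s_t, a_t, r_t)
s, done, t, T = env.state, False, 0, 20  # T: maximum time step
while not done and t < T:
    a = policy(s)
    s_next, r, done = env.step(a)
    trajectory.append((s, a, r))
    s = s_next
    t += 1
total_reward = sum(r for _, _, r in trajectory)
```

One pass of this loop is an episode, and `trajectory` is exactly the sequence \(\tau=(s_0,a_0,r_0),(s_1,a_1,r_1),\dots\) from the definition above.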
In the field of creating adversarial malware samples, interactions between the agent and the environment occur in discrete time steps. The agent has a set of operations available for modifying PE files while maintaining the functionality of the malware. The goal of the agent is to perform a sequence of operations on the malware to prevent its detection.
#### 2.1.4 Evolutionary Algorithm-based Approaches
Evolutionary algorithms (EA) are a useful tool for solving optimization problems. They are based on the Darwinian principle of evolution and attempt to mimic these processes. The search for the best or at least satisfactory solution to a problem takes the form of competition between gradually developing solutions within the population. Variants of EA include, for example, evolutionary strategies, evolutionary programming, genetic algorithms, and genetic programming. All of these variants share the same principle of operation but differ in their implementation.
When solving a problem, it is necessary first to define the representation of candidate solutions. These candidate solutions are called individuals or phenotypes. Since the phenotype can have a complex structure, an encoding is used to represent the individuals in an appropriate way, which is called a chromosome or genotype. Next, an initial population of individuals is created, with each individual representing a coded solution. Then, each member of the population is evaluated using a fitness function that numerically expresses the quality of the solution. The individuals with the best score are then selected and used to create a new generation. This is done using the crossover operator, which usually takes pairs of chromosomes and exchanges information between them to create new offspring. This is followed by the mutation operator, which changes a small portion of the offspring so that it is no longer just a mixture of parental genes. This introduces new genetic material into the new generation. This entire cycle (fitness evaluation, selection, crossover, mutation) is repeated until the termination condition is reached.
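This cycle (fitness evaluation, selection, crossover, mutation) can be sketched with a deliberately simple bit-string problem; the fitness function, operators, and parameters are illustrative, not those of any attack discussed later:

```python
import random

random.seed(1)

TARGET = [1, 1, 1, 1, 1, 1, 1, 1]  # fitness: number of genes matching this

def fitness(chrom):
    return sum(g == t for g, t in zip(chrom, TARGET))

def select(pop, k):
    # Elitist selection: keep the k fittest individuals.
    return sorted(pop, key=fitness, reverse=True)[:k]

def crossover(a, b):
    # One-point crossover: exchange tails of two parent chromosomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chrom, rate=0.1):
    # Flip each gene with a small probability to introduce new material.
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
best_initial = max(fitness(c) for c in pop)
for _ in range(30):  # termination condition: fixed number of generations
    parents = select(pop, 10)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]
best_final = max(fitness(c) for c in pop)
```

Because the parents are carried over unchanged (elitism), the best fitness in the population can never decrease across generations.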
### Selected Attacks
To generate samples of malicious software, we utilized three distinct techniques: gradient-based
techniques, evolutionary algorithm-based techniques, and reinforcement learning-based techniques. Specifically, we selected Partial DOS [13] and Full DOS [14] attacks from gradient-based techniques. From the evolutionary algorithms-based techniques, we chose GAMMA padding [15] and GAMMA section-injection [15] attacks. Finally, for reinforcement learning techniques, we selected the Gym-malware [16] attack. In this section, we describe each of these attacks in detail.
Partial DOS attack and Full DOS attack focus on modifying bytes in the DOS header of a portable executable (PE) file. The DOS header contains only two important pieces of information: the initial 2 bytes hold the magic number, and the 4-byte field at offset 0x3C stores the location of the PE signature in the NT headers. Partial DOS attacks modify only the bytes in the range 0x02 to 0x3B, inclusive. Full DOS attacks expand this range to include all bytes up to the PE signature; although the position of this signature varies between files, it can always be looked up via the field at offset 0x3C.
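Since only these two DOS-header fields matter, locating the byte range that a Partial or Full DOS attack may overwrite takes a few lines; the sketch below parses a synthetic header (the helper and byte values are illustrative):

```python
import struct

def parse_dos_header(data: bytes):
    """Read the two meaningful DOS-header fields of a PE file: the magic
    number (bytes 0-1) and the little-endian uint32 at offset 0x3C that
    points to the PE signature."""
    magic = data[0:2]
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return magic, e_lfanew

# Synthetic 64-byte DOS header followed by a PE signature at offset 0x40.
header = bytearray(0x40)
header[0:2] = b"MZ"
struct.pack_into("<I", header, 0x3C, 0x40)
blob = bytes(header) + b"PE\x00\x00"

magic, e_lfanew = parse_dos_header(blob)
# Bytes 0x02-0x3B are the region a Partial DOS attack may freely overwrite;
# a Full DOS attack may additionally touch everything up to offset e_lfanew,
# except the magic number and the pointer field itself.
```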
GAMMA padding attack and GAMMA section-injection attack are based on inserting parts extracted from benign files into malware files. These are black-box attacks. GAMMA attacks are formalized as a constrained optimization problem: the goal is to minimize the probability of detection while also minimizing the size of the injected content. This optimization problem is solved using a genetic optimizer. First, a random matrix is created that represents the initial population of manipulation vectors. The algorithm then iterates in three steps: selection, crossover, and mutation. During selection, the objective function is evaluated and the \(N\) best candidate manipulation vectors from the current population and the population created in the previous iteration are selected. This is followed by the crossover function, where the candidates from the previous step are modified by mixing the values of pairs of randomly selected candidate vectors, returning a new set of \(N\) candidates. The last operation is mutation, which randomly changes the elements of each vector with a low probability. At each iteration, \(N\) queries are made to the target model to evaluate the objective function on the new candidates and keep the best candidate population. When the maximum number of queries is reached or no further improvement in the objective function value is observed, the best manipulation vector from the current population is returned. The resulting adversarial malware sample is obtained by applying the optimal manipulation vector to the input malware sample through the manipulation operator.
The Gym-malware attack is based on reinforcement learning. The environment consists of a sample of malware and the attack target, which is an anti-malware engine. At each step, the agent receives feedback that is composed of a reward value and a vector of features that summarize the state of the environment. Based on the feedback, the agent selects mutations from a set of actions, such as adding a function to the import address table that will never be used; manipulating existing section names; creating new sections that will not be used; adding bytes to extra space at the end of sections; creating a new entry point that immediately jumps to the original entry point; manipulating debug information; packing or unpacking the file; modifying the header checksum; adding bytes to the end of the PE file. This process is repeated in several rounds, and rounds can be prematurely terminated if the agent bypasses the anti-malware engine before 10 mutations are completed.
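A minimal sketch of this round structure is shown below; the action names, the stand-in detector, and the mutation bookkeeping are all hypothetical simplifications of the actual Gym-malware action space:

```python
import random

random.seed(7)

# Illustrative subset of functionality-preserving mutation actions.
ACTIONS = ["rename_section", "add_unused_section", "append_overlay_bytes",
           "new_entry_point_trampoline", "modify_checksum"]

def detector(sample):
    # Stand-in for an anti-malware engine: returns True ("detected") until
    # one particular mutation has been applied. A real engine would score
    # the actual file bytes.
    return "append_overlay_bytes" not in sample["applied"]

def mutate(sample, action):
    # Record the applied action; a real implementation would rewrite bytes.
    return {"applied": sample["applied"] + [action]}

def attack_round(sample, max_mutations=10):
    """Apply mutations until the detector is bypassed or the round's
    10-mutation budget is exhausted."""
    for step in range(max_mutations):
        if not detector(sample):
            return sample, step  # evaded early: the round ends prematurely
        sample = mutate(sample, random.choice(ACTIONS))
    return sample, max_mutations

result, used = attack_round({"applied": []})
```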
## 3 Related Work
In this section, the publications on modern methods to create adversarial examples are summarized. The section is divided into several parts, depending on the area to which the method belongs, with all publications compared based on the attacker's knowledge and strategy in Table 1.
### Evolutionary Algorithm-based Attacks
In [17], Castro et al. designed and implemented the AIMED system, which generates adversarial examples using genetic programming. The system automatically finds optimized modifications that, applied to previously detected malware, lead to its incorrect evaluation by the malware classifier. It is ensured that all generated adversarial examples are valid PE files. The system implements genetic operations such as selection, crossover, and mutation.
If the modified PE malware cannot bypass the malware detector, the genetic operations are repeated until the generated adversarial PE malware succeeds. Experiments have shown that the time to generate successful adversarial examples is reduced by up to 50% compared to random approaches. Furthermore, adversarial examples generated against a given malware classifier were shown to be successful against other malware detectors in 82% of cases.
Wang and Miikkulainen proposed an adversarial malware detection model named MDEA [18]. This model combines the convolutional neural network to classify raw data from malicious binary and evolutionary optimization to modify detected malware. The action space consists of 10 different methods to modify binary programs. The genetic algorithm evolves different action sequences by selecting actions from the action space until the generated adversarial malware can bypass the target malware detectors. After the successful discovery of action sequences, these sequences are applied to the corresponding malware samples and create a new training set for the detection model. Unlike AIMED, malware samples generated by MDEA are not tested for functionality.
Demetrio et al. introduced a black-box attack framework called GAMMA [15]. The black-box attack is the most challenging case, since the attacker knows nothing about the target classifier besides the final prediction label. GAMMA attacks are based on the principle of injecting harmless content extracted from goodware into malicious files. Harmless content is inserted into a newly created section or at the end of the file, while the functionality of the file is preserved. The attack is formalized as a constrained optimization problem that minimizes the probability of detection and also penalizes the size of the injected content through a specific penalty term.
### Reinforcement Learning-based Attacks
Anderson et al. focused on automating the manipulation of malicious PE files in [16]. The goal is to modify the original malicious PE file so that it is no longer detected as malicious while its format and functionality remain intact. They proposed an attack known as Gym-malware. This is a black-box attack based on reinforcement learning. The authors defined the RL agent's action space as a set of binary manipulation actions. Over time, the agent learns which combinations of actions make malware more likely to bypass antivirus systems.
Song et al. proposed a MAB-Malware framework [20] based on reinforcement learning to generate adversarial PE malware examples. The action selection problem is modeled as a multi-armed bandit problem. The results showed that
| Paper | Year | Attack framework | Knowledge | Attack Strategy |
|-------|------|------------------|-----------|-----------------|
| [17] | 2019 | AIMED | black-box | evolutionary algorithm |
| [18] | 2020 | MDEA | black-box | evolutionary algorithm |
| [15] | 2021 | GAMMA | black-box | evolutionary algorithm |
| [16] | 2018 | Gym-malware | black-box | reinforcement learning |
| [19] | 2019 | DQEAF | black-box | reinforcement learning |
| [20] | 2020 | MAB-malware | black-box | reinforcement learning |
| [21] | 2021 | AIMED-RL | black-box | reinforcement learning |
| [22] | 2018 | AMB | white-box | gradient |
| [23] | 2018 | – | white-box | gradient |
| [13] | 2019 | – | white-box | gradient |
| [24] | 2018 | – | white-box | gradient |
| [14] | 2020 | RAMEN | white-box | gradient |
| [25] | 2017 | MalGAN | black-box | GAN |
| [26] | 2019 | Improved-MalGAN | black-box | GAN |
| [27] | 2020 | GAPGAN | black-box | GAN |

Table 1: Summary of State-of-the-Art Adversarial Attacks against PE Malware Detection.
the MAB-Malware framework achieves an evasion rate of 74% to 97% against machine learning detectors (EMBER [28] and MalConv [29]) and an evasion rate of 32% to 48% against commercial antivirus (AV). Furthermore, they also showed that the transferability of adversarial attacks between ML-based classifiers (i.e., adversarial examples generated against one classifier can be used successfully against another) is greater than 80%, and the transferability of attacks between pure ML and commercial AV is only up to 7%.
Fang et al. proposed a framework named DQEAF [19] that uses reinforcement learning to evade antimalware engines. DQEAF is similar in methodology to Gym-malware but has many benefits and a higher evasion success rate of adversarial examples compared to it. DQEAF uses a subset of modifications used in Gym-malware and ensures that all modifications will not cause corruption in the modified malware files. DQEAF is able to reduce instability caused by higher dimensions by representing executable files with only 513 features, which is much lower than that in Gym-malware. DQEAF takes priority into account when replaying past transitions. This helps to replay higher-value transitions more frequently, and thus optimize RL networks.
Another reinforcement learning approach is presented in [21], in which Labaca-Castro et al. presented the AIMED-RL adversarial attack framework. This attack can generate adversarial examples that lead machine learning models to misclassify malicious files without compromising their functionality. The authors demonstrated the importance of a penalty technique and introduced a new penalization for the reward function with the aim of increasing the diversity of the generated sequences of modifications while minimizing the number of modifications. The results showed that the agents with penalty outperform the agents without penalty in terms of both the best and the average evasion rates.
### Gradient-based Attacks
Kolosnjaji et al. in [22] introduced a gradient-based white-box attack to generate adversarial malware binaries against MalConv. A white-box scenario occurs when the attacker has access to the system and may examine its internal configuration or training datasets. The basic idea of the attack is to manipulate some bytes of each malicious sample to maximize the likelihood that the input is classified as benign. To ensure that the malicious binary's functionality is maintained, only padding bytes at the end of the file are considered. The results show that the evasion rate is linearly correlated with the length of the injected sequence and that, although less than 1% of the bytes are modified, the modified binary evades the target network with high probability; the detection accuracy of MalConv can be reduced by more than 50%.
In [23] Kreuk et al. presented an improved gradient-based white-box attack method against MalConv. The authors proposed two methods for inserting a sequence of bytes into a file; the payload is inserted either at the end of the file or into slack regions. Unlike [22], the evasion rate of [23] is invariant to the length of the injected sequence.
Demetrio et al. in [13] presented a gradient-based variant white-box attack that is similar to [22]. The main difference is that [22] injects adversarial bytes to the end of the PE file, while this attack is limited to changing bytes within a specific disk operating system (DOS) header in the PE header. The results show that a change of a few bytes is sufficient to evade MalConv with high probability.
Suciu et al. in [24] describe the FGM append and the FGM slack attacks and compare their effectiveness against MalConv. Experimental results show that the FGM slack attack achieves better results than the FGM append attack with fewer modified bytes.
Demetrio et al. in [14] propose RAMEN, a general adversarial attack framework against PE malware detectors. This framework generalizes and includes previous attacks against machine learning models, as well as three new attacks based on manipulations of the PE file format that preserve its functionality. The first attack is a full DOS attack, which edits all the available bytes in the DOS header. The second attack is called Extend, which enlarges the DOS header, thus enabling manipulation of the extra DOS bytes. The third is the Shift attack, which shifts the content of the first section, creating additional space for the adversarial payload.
### Generative Adversarial Network-based Attacks
Hu and Tan proposed in [25] a model called MalGAN, which is based on a generative adversarial network (GAN). This model enables the generation of adversarial malware examples that are capable of bypassing black-box ML-based detection models. MalGAN includes two neural networks: a generator and a substitute detector. The generator produces adversarial examples, which are dynamically generated according to the feedback of the malware detector and are able to fool the substitute detector. The results showed that MalGAN can reduce the accuracy of the detector to almost zero.
Later, Kawai et al. presented Improved MalGAN in [26]. The authors discuss the problems of MalGAN and attempt to address them. For example, the original MalGAN uses multiple malware samples for training, whereas the Improved MalGAN uses only one malware sample. Furthermore, the original MalGAN trains the generator and the malware detector with the same application programming interface (API) call list, while the Improved MalGAN trains them with different API call lists.
Yuan et al. in [27] introduced GAPGAN, a GAN-based black-box adversarial attack framework. GAPGAN allows end-to-end black-box attacks at the byte level against deep learning-based malware binaries detection. In this approach, a generator and a discriminator are trained concurrently. The generator is used to generate adversarial payloads that are appended to the end of the original data to craft a malware adversarial sample while ensuring the preservation of their functionality. The discriminator tries to imitate the black-box malware detector to recognize both the original benign samples and the adversarial samples generated. When the training process is completed, the trained generator is able to generate an adversarial sample in less than 20 milliseconds. Experiments show that GAPGAN is capable of achieving a 100% attack success rate against the MalConv malware detector by only inserting adversarial payloads with the size of 2.5% of the total length of the input malware samples.
## 4 Experiments
This section describes the setup and procedure for each experiment. First, the hardware configuration is introduced, followed by a description of the datasets used. Next, the setup of the different algorithms used to generate adversarial samples is described. Finally, the experiments performed are described.
### Experimental Setup
The experiments are carried out on the NVIDIA DGX Station A100 server. This server is equipped with an AMD EPYC 7742 processor with a base frequency of 2.25 GHz, 64 cores and 512 GiB of RAM. We also use a virtual machine with Windows 11 operating system and another virtual machine with Kali Linux operating system for testing and analysis.
We use two datasets for our experiments. The first dataset contains a total of 3,625 harmless executable files, which were collected from a newly installed Windows 11 system. The second dataset contains 3,625 malicious executable files obtained from the VirusShare repository [30]. All the files in both datasets are PE files.
### Attack settings
In total, we compare five adversarial attack strategies. Partial DOS and Full DOS attacks are performed in a white-box setting against the MalConv detector with the maximum number of iterations set to 50. GAMMA padding and GAMMA section-injection attacks are performed in a black-box setting against the MalConv detector with the maximum number of queries set to 500, the regularization parameter set to \(10^{-5}\), and a total of 100 .data sections extracted from benign programs used as injection content. The last attack we use is the Gym-malware attack with its default settings, performed in a black-box setting against the GBDT (gradient-boosted decision tree) detector. The Gym-malware model was trained on a dataset of 3,000 malicious samples and a validation set of 1,000 files.
### Evaluation Metrics
We use several metrics to evaluate the experiments. A key metric in the area of adversarial machine learning is the evasion rate (\(ER\)): the proportion of malware files misclassified by the target malware classifier. It can be calculated as follows:
\[ER=\frac{misclassified}{total} \tag{1}\]
where \(misclassified\) is the number of malware samples misclassified as benign and \(total\) is the total number of files submitted to the target classifier after discarding files that were already incorrectly predicted before modification.
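Formula (1), including the rule of discarding files that were misclassified before modification, can be computed as follows (labels and sample lists are illustrative):

```python
def evasion_rate(predictions_before, predictions_after):
    """ER = misclassified / total, where only samples correctly detected as
    malicious ("mal") before modification are counted in the total."""
    considered = [after for before, after in zip(predictions_before,
                                                 predictions_after)
                  if before == "mal"]  # discard already-misclassified files
    if not considered:
        return 0.0
    misclassified = sum(p == "benign" for p in considered)
    return misclassified / len(considered)

# Five submitted files; the third was misclassified even before modification,
# so it is excluded from the denominator.
before = ["mal", "mal", "benign", "mal", "mal"]
after = ["benign", "mal", "benign", "benign", "mal"]
er = evasion_rate(before, after)  # 2 of the 4 valid samples evade -> 0.5
```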
The evasion rate mentioned above is a universal metric that can be used to evaluate both single attacks and combinations of attacks. In both cases, we are interested in the percentage of malware that escaped detection by the antivirus program. Additionally, we use the following metrics to evaluate the combination of attacks.
The first two metrics that we chose to evaluate the success of the combination are the absolute improvement and the relative improvement in the evasion rate when using the second attack in the combination compared to the first attack. Absolute improvement (\(AI\)) can be described by the following formula:
\[AI=ER_{C}-ER_{1} \tag{2}\]
where \(ER_{C}\) is the total evasion rate when using a combination of methods, and \(ER_{1}\) is the evasion rate after using the first attack in the combination alone. The result is the percentage increase in evasion rate between the first and second attack in the combination. For example, if the evasion rate after the first attack in the combination is 0.01 and after the second attack is 0.1, then the absolute improvement is 0.09, which means that the second attack improved the overall evasion rate by 9%.
Similarly, the relative improvement (\(RI\)) can be expressed using the formula:
\[RI=\frac{ER_{C}-ER_{1}}{ER_{1}} \tag{3}\]
where the meaning of the variables is the same as in the previous formula (2). However, in the previous case, the result expressed a percentage increase over all samples tested. In the case of relative improvement, we limit ourselves to the set of samples that escaped the antivirus program after the first attack. For example, if the evasion rate after the first attack of the combination is 0.01 and after the second attack it is 0.1, then the relative improvement is 9, i.e. the second attack improved the evasion rate of the first attack by 900%.
Next, we need to compare the combination of attacks with performing the attacks separately to see if the combination of attacks adds any value. To do this, we use a simple comparison of the evasion rate of the combination of attacks with the evasion rate of the attacks performed separately. We call it evasion rate comparison (\(ERC\)), and it has the following calculation:
\[ERC=ER_{C}-\max(ER_{1},ER_{2}) \tag{4}\]
where \(ER_{C}\) is the evasion rate of the combination attack, while \(ER_{1}\) and \(ER_{2}\) are the evasion rates of the first and second attacks, respectively. If the result is positive, the combination of attacks performed better than either of its two constituent attacks performed alone. If the result is negative or zero, executing the combination was pointless, because one of its constituent attacks alone performed at least as well in terms of evasion rate.
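Formulas (2)-(4) are straightforward to compute; the sketch below reproduces the worked example from the text (0.01 after the first attack, 0.1 after the combination; the value 0.05 for the second attack alone is assumed only for illustration):

```python
def absolute_improvement(er_c, er_1):
    return er_c - er_1             # formula (2)

def relative_improvement(er_c, er_1):
    return (er_c - er_1) / er_1    # formula (3)

def evasion_rate_comparison(er_c, er_1, er_2):
    return er_c - max(er_1, er_2)  # formula (4)

ai = absolute_improvement(0.1, 0.01)            # +9 percentage points
ri = relative_improvement(0.1, 0.01)            # 9, i.e. a 900% improvement
erc = evasion_rate_comparison(0.1, 0.01, 0.05)  # positive: combining helped
```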
### Experiments Description
We present four experiments that explore different characteristics of the adversarial attack methods mentioned above.
#### 4.4.1 Sample Generation Time
In the first experiment, we measure the time it took to generate individual samples using all the aforementioned selected algorithms. These results, along with other data collected during the experiments, can help compare the effectiveness of various generators and determine the most effective method to generate adversarial malware samples.
#### 4.4.2 Sample Size
In the second experiment, we investigate how the size of the original malware samples changes by applying various adversarial malware generators. Generally, the attacker's goal is to minimize the increase in the size of the generated adversarial files to make it harder to distinguish them from the original malware samples.
#### 4.4.3 Bypassing commercial AV products
In the third experiment, we analyze the effectiveness of created adversarial malware samples against real-world AV detectors. Based on a comparative study [31], 10 AV programs were selected for experimentation, and their names were intentionally anonymized in the following results. Note that in the subsequent results, only nine AVs are listed as two of the selected AVs reported the same results.
The modified malware files from different adversarial algorithms are submitted to the VirusTotal server [32] to obtain the evasion rate for each adversarial malware generator. To avoid bias in the results, we only analyzed malware samples that were correctly classified by all selected AV products before modification. We also discarded samples from which we were unable to obtain file analysis from VirusTotal, e.g., due to broken behavior of modified examples. In total, we use a set of 530 genuine malware samples along with a modified version for each malware generator.
#### 4.4.4 Combination of Multiple Techniques
In the last experiment, we test the effectiveness of using a combination of methods to generate malware samples [33]. The goal of this experiment is to test whether using multiple adversarial example generators per malware sample would significantly increase the malware evasion rate.
An overview of the experiment is shown in Figure 1. First, the original malware samples are processed by the first generator. These modified samples are then tested against a real AV detector that is not part of the generator. The result is a set of samples divided into two sets. The first set consists of _evasive examples_ that successfully evaded the given malware detector; this set is not processed further. In contrast to _adversarial examples_, which are generated against the target classifier, _evasive examples_ are samples that have evaded detection by an AV program that was not used to generate them. The second set consists of _failed examples_ that failed to evade the detector and are used as input to the second generator. The second generator processes the _failed examples_, and the resulting modified samples are again tested against the real AV detector, once more yielding an _evasive_ set and a _failed_ set. The _evasive examples_ produced by the first generator and those produced by the second generator together form the set of successful adversarial examples produced by combining the two generators. The _failed examples_ remaining after the second generator are the samples that did not evade detection.
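The pipeline in Figure 1 can be sketched as a two-stage filter; the generators and detector below are stubs that merely tag samples, standing in for real adversarial generators and a real AV engine:

```python
def combine_generators(samples, gen1, gen2, detector):
    """Two-stage pipeline: samples evading the AV after gen1 are kept as
    evasive examples; failed ones are re-processed by gen2 and re-tested."""
    stage1 = [gen1(s) for s in samples]
    evasive = [s for s in stage1 if not detector(s)]
    failed = [s for s in stage1 if detector(s)]
    stage2 = [gen2(s) for s in failed]
    evasive += [s for s in stage2 if not detector(s)]
    failed = [s for s in stage2 if detector(s)]
    return evasive, failed

# Stubs: each "generator" tags the sample (a set of markers); the "detector"
# misses anything tagged by gen2 or carrying three or more markers.
gen1 = lambda s: s | {"g1"}
gen2 = lambda s: s | {"g2"}
detector = lambda s: "g2" not in s and len(s) < 3  # True = detected

samples = [{"a"}, {"a", "b"}, {"c"}]
evasive, failed = combine_generators(samples, gen1, gen2, detector)
# One sample evades after gen1 alone; the other two evade after gen2.
```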
## 5 Results
This section presents the recorded results of the individual experiments described in Section 4.
### Sample generation time
Firstly, we look at the results for the sample generation time. The average times in seconds and the standard deviation for each attack are listed in Table 2 and shown in the box plot in Figure 2.
From Figure 2, we can see that Gym-malware took the least time to generate adversarial examples, on average less than 6 seconds, with some outliers below 100 seconds. It should be noted that Gym-malware requires a preceding training phase that is not taken into account in this experiment. In contrast, the Full DOS attack took the longest to create adversarial examples, with an average duration of more than 160 seconds. The remaining three methods achieved similar sample generation times of around 100 seconds, although the results of the GAMMA section-injection attack contain several outliers with very long durations of more than 300 seconds. However, the measured times may be affected by the settings of the individual algorithms.

Figure 1: Method for generating adversarial examples by combining two generators.
### Sample Size
Secondly, we present the results of the sample size experiment. Partial DOS and Full DOS attacks are based on changing bytes in the DOS header. Thus, modifying the malware samples by these methods does not alter the size of the resulting adversarial examples. On the other hand, the remaining tested methods can change the initial file size. The general results of this experiment are shown in Table 3.
The GAMMA padding and section-injection attacks are based on inserting parts extracted from benign files into the malware file. In the case of the GAMMA padding attack, the file size increased on average by 223,605 bytes and in the case of the GAMMA section-injection attack, by 1,940,352 bytes. As we can see from Table 3, the adversarial examples generated by the GAMMA section-injection attack exhibit significant differences in the final file sizes.
The Gym-malware attack uses various types of file manipulations. The file size can be reduced, increased, or unchanged based on the chosen modification. On average, the file size was reduced by 149,273 bytes. The observed file size reduction was probably due to the authors' implementation of file manipulations, as they used the LIEF library [34], which significantly alters the structure of the initial malware file.
### Bypassing commercial AV products
Thirdly, we list the effectiveness of generated adversarial samples on commercially available AV programs. The results are shown in Table 4. Each column represents one of the selected AV programs and each row represents one of the algorithms used to generate adversarial samples. The values in the table represent the achieved evasion rates, expressed as a percentage, for the corresponding algorithm and AV products.
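For clarity, the evasion rate reported in the tables is simply the share of modified samples that a given AV fails to detect; the function and numbers below are illustrative:

```python
def evasion_rate(n_evasive, n_total):
    """Percentage of modified malware samples that a given AV product
    fails to detect (the quantity reported in Table 4)."""
    return 100.0 * n_evasive / n_total

rate = evasion_rate(53, 530)   # 53 of 530 samples evading gives 10.0 %
```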
The Gym-malware attack achieved the highest evasion rates among all selected AV programs, successfully bypassing the top AVs 19.02% to 67.23% of the time. The second-best results were recorded
\begin{table}
\begin{tabular}{|l|r|r|} \hline Attack & Average Duration [s] & Standard Deviation [s] \\ \hline \hline Partial DOS & 99.27 & 31.13 \\ \hline Full DOS & 169.08 & 104.53 \\ \hline GAMMA Padding & 87.61 & 39.28 \\ \hline GAMMA Section-injection & 118.47 & 69.44 \\ \hline Gym-malware & 5.73 & 7.52 \\ \hline \end{tabular}
\end{table}
Table 2: Average sample generation time for each sample generator.
Figure 2: Time required to generate a sample for each sample generator.
by the GAMMA section-injection attack, which recorded evasion rates between \(1.23\%\) and \(43.62\%\).
In contrast, the GAMMA padding attack achieved the worst results, failing to mislead any detector tested in more than \(1.5\%\) of cases. Full and Partial DOS attacks scored slightly better than the GAMMA padding attack, with the Full DOS marginally outperforming the Partial DOS attack.
### Combination of Multiple Techniques
Based on the results from the previous experiment, we chose the three most successful adversarial example generators and tested all nine possible combinations. Namely, we used Gym-malware, GAMMA section-injection, and Full DOS adversarial malware generators.
This section contains two types of tables. The first type of table lists the measured minimum, average, and maximum values for particular AVs across the nine combinations of the three selected generators. The second type of table lists the measured minimum, average, and maximum values for a particular combination of generators across the nine AVs. The First Generator column contains the first generator used, and the Second Generator column contains the second generator used as described in Section 4.4.4.
### Evasion Rate
First, we present the results of the evasion rate metric. For all AVs, we examine the minimum, average, and maximum of the results of all combinations of generators tested in the experiment. These results can be found in Table 5. For the minimum values of the evasion rate, we can see that none of the antivirus programs reached a detection rate of \(100\%\). On the other hand, all these values are relatively low compared to the average and maximum values, which tells us that some of the nine generator combinations were not very successful. The average evasion rates range from about \(18\%\) to \(49\%\), and the maximum values range from \(30\%\) to \(78\%\). The best result was achieved by combinations of generators against AV7, where
\begin{table}
\begin{tabular}{|l||c|c|c|c|c|c|c|c||c|} \hline Attack & AV1 & AV2 & AV3 & AV4 & AV5 & AV6 & AV7 & AV8 & AV9 & Average \\ \hline \hline GAMMA padding & 0.00 & 1.79 & 0.45 & 0.22 & 0.45 & 0.90 & 1.34 & 0.56 & 0.45 & 0.68 \\ \hline Partial DOS & 0.78 & 2.57 & 0.78 & 1.01 & 0.78 & 0.78 & 1.90 & 1.45 & 0.78 & 1.21 \\ \hline Full DOS & 0.67 & 1.34 & 0.78 & 0.90 & 0.78 & 0.78 & 4.14 & 1.23 & 0.78 & 1.27 \\ \hline GAMMA section-injection & 18.46 & 5.37 & 6.38 & 4.36 & 4.47 & 9.06 & 43.62 & 1.23 & 5.37 & 10.92 \\ \hline Gym-malware & 45.53 & 19.02 & 44.86 & 67.23 & 41.61 & 53.58 & 53.80 & 26.51 & 44.86 & 44.11 \\ \hline \end{tabular}
\end{table}
Table 4: Evasion rate (in \(\%\)) of generated adversarial samples achieved against AV products on the VirusTotal server.
\begin{table}
\begin{tabular}{|l|r|r|} \hline Attack & Average Size [B] & Standard Deviation [B] \\ \hline \hline Partial DOS & 0 & 0 \\ \hline Full DOS & 0 & 0 \\ \hline GAMMA Padding & \(223,604.57\) & \(48,403.25\) \\ \hline GAMMA Section-injection & \(1,940,351.63\) & \(78,088,981.28\) \\ \hline Gym-malware & \(-149,272.75\) & \(754,660.5\) \\ \hline \end{tabular}
\end{table}
Table 3: Changes to the sample size of generated adversarial samples for each generator.
\begin{table}
\begin{tabular}{|l||r|r|r|} \hline AV & Minimum & Average & Maximum \\ \hline \hline AV1 & 0.78 & 32.39 & 55.26 \\ \hline AV2 & 1.45 & 17.79 & 29.53 \\ \hline AV3 & 0.90 & 31.15 & 63.09 \\ \hline AV4 & 1.45 & 38.59 & 78.19 \\ \hline AV5 & 0.78 & 26.76 & 57.61 \\ \hline AV6 & 0.90 & 35.91 & 73.60 \\ \hline AV7 & 5.26 & 49.32 & 74.50 \\ \hline AV8 & 1.57 & 17.80 & 41.39 \\ \hline AV9 & 0.78 & 30.79 & 62.75 \\ \hline \end{tabular}
\end{table}
Table 5: Evasion rate (in \(\%\)) for each AV using all combinations of generators.
the average evasion rate is around 49%, while the least successful was against AV2, where the average evasion rate is around 18%.
Table 6 shows the results of the evasion rate achieved by each combination of generators. Here, we can see that the most successful combination was the one in which the Gym-malware generator was used twice in a row. This achieved a minimum evasion rate of around 30% for all AVs. The average value for this combination is around 58%, and for at least one AV we achieved an evasion rate of around 78% with this combination. On the other hand, the worst combination in terms of evasion rate is the one in which the Full DOS generator was used twice. In this case, we have an average evasion rate of about 2%, while for all other combinations, this value exceeds 10%, and for some even more significantly. The maximum value of the evasion rate for this combination is about 5%, for the others we have at least about 44%.
To determine whether using generator combinations is more effective than using individual generators alone, we can compare these results with the values from the previous experiment. For a more precise comparison, however, we use the additional metrics described in Section 4.3; we analyze their results in the following parts of this section.
### Absolute Improvement
Next, we evaluate the metric that we identified as an absolute improvement in Section 4.3. In short, it is the percentage difference in the evasion rate between the evasion rate achieved by combining both generators and the evasion rate achieved by using only the first generator.
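Reading the description above, the metric can be sketched as a plain difference in percentage points; this is our interpretation of the Section 4.3 definition, with illustrative rates:

```python
def absolute_improvement(combined_rate, first_rate):
    """Evasion-rate gain, in percentage points, of the two-generator
    pipeline over the first generator used alone."""
    return combined_rate - first_rate

gain = absolute_improvement(38.7, 1.3)   # illustrative rates in %
```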
First, we focus on Table 7, which shows the absolute improvement values for each AV across the nine combinations of generators. Here we can see that for some AVs we were unable to improve the evasion rate by applying the second generator; specifically, this concerns the minimum values for AV4, AV5, and AV9. Table 8 helps us identify the responsible generators. We can see that the only non-improving combinations are those in which the same generator is used twice, namely the Full DOS generator or the GAMMA section-injection generator, which may indicate that these combinations are not entirely effective. The average values in Table 7 do not differ significantly, lying between about 8% and 15% in all cases. The maximum values span a slightly wider range, approximately 22% to 61%. This means that applying a second generator improves the evasion rate by about 10 percentage points on average over the first generator alone.
However, significantly more intriguing is Table 8, which shows the absolute improvement for each generator combination against all AVs. Here, we can see that the use of Full DOS as the second generator does not result in a significant absolute improvement in the evasion rate for the minimum,
\begin{table}
\begin{tabular}{|l|c||r|r|r|} \hline First Generator & Second Generator & Minimum & Average & Maximum \\ \hline \hline Full DOS & Full DOS & 0.78 & 1.54 & 5.26 \\ \hline Full DOS & GAMMA section-injection & 1.57 & 10.48 & 43.96 \\ \hline Full DOS & Gym-malware & 23.15 & 38.69 & 61.63 \\ \hline GAMMA section-injection & Full DOS & 6.26 & 15.05 & 45.30 \\ \hline GAMMA section-injection & GAMMA section-injection & 1.90 & 14.03 & 44.52 \\ \hline GAMMA section-injection & Gym-malware & 25.39 & 46.97 & 74.50 \\ \hline Gym-malware & Full DOS & 26.51 & 46.16 & 67.34 \\ \hline Gym-malware & GAMMA section-injection & 27.18 & 49.22 & 67.79 \\ \hline Gym-malware & Gym-malware & 29.53 & 58.34 & 78.19 \\ \hline \end{tabular}
\end{table}
Table 6: Evasion rate (in %) for each generator combination against all AVs.
\begin{table}
\begin{tabular}{|l||r|r|r|} \hline AV & Minimum & Average & Maximum \\ \hline \hline AV1 & 0.11 & 10.84 & 29.98 \\ \hline AV2 & 0.11 & 9.21 & 22.48 \\ \hline AV3 & 0.11 & 13.81 & 41.16 \\ \hline AV4 & 0.00 & 14.43 & 60.74 \\ \hline AV5 & 0.00 & 11.14 & 35.01 \\ \hline AV6 & 0.11 & 14.77 & 50.22 \\ \hline AV7 & 0.90 & 15.46 & 40.72 \\ \hline AV8 & 0.34 & 8.14 & 26.51 \\ \hline AV9 & 0.00 & 13.78 & 41.39 \\ \hline \end{tabular}
\end{table}
Table 7: Absolute improvement (in %) for each AV using all combinations of generators.
average, and maximum values. This means that using Full DOS as the second generator in a combination is the least effective. On the other hand, we can see that we get the best absolute improvement by using the Gym-malware method as the second generator in the combination.
In Table 8, we can observe another interesting fact. As we have already mentioned, using two identical generators in combination is generally not very effective. However, this statement does not apply to the Gym-malware generator. On the contrary, if we use the Gym-malware generator as the first generator in the combination, the best choice for the second generator seems to be the Gym-malware generator again. The Full DOS and GAMMA section-injection generators yield minimal absolute improvements in the role of the second generator. These results show that the Gym-malware generator is successful in both cases: used alone and in combination with itself.
The best absolute improvement values are achieved when we choose Full DOS as the first generator and Gym-malware as the second. This results in an absolute improvement in the evasion rate in all AVs of at least around 22%, on average 37% and up to a maximum of 61%. On the other hand, the worst results in terms of absolute evasion rate improvement are obtained when we choose Full DOS as both generators. In this case, we obtain a minimum absolute improvement of 0%, an average of 0.3% and a maximum of 1.1% across all antivirus programs. We can conclude that the Full DOS generator is likely to be the least successful generator, both when used alone and in combination.
### Relative Improvement
We follow the absolute improvement results with a relative improvement in the evasion rate. Analogously to the improvement in the absolute evasion rate, we can see the results of relative improvement in Table 9. We can see that AV7 performed the best on average in terms of relative improvement of the evasion rate. Conversely, AV4 performed the worst, where in some cases the second generator managed to increase the number of successfully modified samples (which evaded detection) by more than 67 times.
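A sketch of the relative-improvement metric as we read the Section 4.3 description, together with the conversion that links a value like 6787.5% to the "more than 67 times" statement above:

```python
def relative_improvement(combined_rate, first_rate):
    """Relative improvement in %: the growth in evasive samples gained
    by adding the second generator, relative to what the first
    generator achieved alone."""
    return 100.0 * (combined_rate - first_rate) / first_rate

# a relative improvement of 6787.5 % means the number of evasive
# samples grew by ~67.9 times the original count
factor = relative_improvement(68.875, 1.0) / 100.0
```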
Table 10 confirms that the Full DOS generator, which was used as the second generator in combination, does not increase the evasion rate significantly. On the other hand, if this generator is chosen as the first generator in the combination, the GAMMA section-injection or Gym-malware generator can increase the number of samples that evade detection by up to 67 times. This shows that the Full DOS generator is not a very strong generator on its own. Regarding the Gym-malware generator, we can see that when it is used as the
\begin{table}
\begin{tabular}{|l|l||r|r|r|} \hline First Generator & Second Generator & Minimum & Average & Maximum \\ \hline \hline Full DOS & Full DOS & 0.00 & 0.27 & 1.12 \\ \hline Full DOS & GAMMA section-injection & 0.67 & 9.21 & 39.82 \\ \hline Full DOS & Gym-malware & 21.92 & 37.42 & 60.74 \\ \hline GAMMA section-injection & Full DOS & 0.78 & 4.13 & 5.93 \\ \hline GAMMA section-injection & GAMMA section-injection & 0.00 & 3.11 & 10.29 \\ \hline GAMMA section-injection & Gym-malware & 20.02 & 36.04 & 51.23 \\ \hline Gym-malware & Full DOS & 0.11 & 2.05 & 7.49 \\ \hline Gym-malware & GAMMA section-injection & 0.56 & 5.11 & 10.63 \\ \hline Gym-malware & Gym-malware & 7.72 & 14.23 & 20.02 \\ \hline \end{tabular}
\end{table}
Table 8: Absolute improvement (in %) for each generator combination against all AVs.
\begin{table}
\begin{tabular}{|l||r|r|r|} \hline AV & Minimum & Average & Maximum \\ \hline \hline AV1 & 0.61 & 788.88 & 4466.67 \\ \hline AV2 & 8.33 & 307.66 & 1675.00 \\ \hline AV3 & 3.74 & 732.66 & 5014.29 \\ \hline AV4 & 0.00 & 914.67 & 6787.50 \\ \hline AV5 & 0.00 & 633.29 & 4257.14 \\ \hline AV6 & 1.46 & 846.29 & 6000.00 \\ \hline AV7 & 2.05 & 232.70 & 983.78 \\ \hline AV8 & 3.80 & 510.39 & 2154.55 \\ \hline AV9 & 0.00 & 758.01 & 5285.71 \\ \hline \end{tabular}
\end{table}
Table 9: Relative improvement (in %) for each AV using all combinations of generators.
first generator in a combination, the second generator does not significantly increase the result, even when Gym-malware is used again as the second generator. However, the Gym-malware generator is very successful on its own as it achieves a significantly higher evasion rate than other generators. Therefore, a relative improvement in the evasion rate from 39% to 56% can cause a drastic increase in the total evasion rate of the combination (up to 20%). We can also see that when the Gym-malware generator is used as a second generator in other combinations, there are huge relative improvements over the first generator. Again, we can see that Gym-malware is very effective when used in any combination of generators.
### Evasion Rate Comparison
The last metric examined, evasion rate comparison, helps us find the answer to the question of whether it is better to use individual generators separately or to use them in combination. A negative value of this metric indicates that we would achieve a better evasion rate by using the better of the two generators individually (better in terms of evasion rate). On the contrary, a positive value indicates that we recorded a better result using the combination of both generators.
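Read this way, the metric can be sketched as the combined evasion rate minus the better of the two individual rates; this is our interpretation of the Section 4.3 definition, using average rates from Tables 4 and 6 as an example:

```python
def evasion_rate_comparison(combined_rate, first_rate, second_rate):
    """Positive: the combination outperforms the better of the two
    generators used individually; negative: it does not."""
    return combined_rate - max(first_rate, second_rate)

# average rates: Gym-malware alone 44.11 %, Full DOS alone 1.27 %,
# Gym-malware -> Full DOS combination 46.16 %
delta = evasion_rate_comparison(46.16, 44.11, 1.27)   # about 2.05
```

The resulting 2.05 matches the average reported for this combination in Table 12.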
Looking at the minimum values in Table 11, we can see that for almost all AVs there was a combination of generators that was not effective, as indicated by the negative values in this column. The exception is AV2, where we achieved a better evasion rate for all combinations of generators used. However, the average and maximum values show that in most cases the combination is more effective than the better of the two generators used separately. Only AV4 has a negative average value, which means that the separate use of a single generator is a better option against this AV. On the other hand, when attacking AV2 it is better to use a combination of generators, as this achieved the highest average evasion rate comparison value.
Table 12 shows that combining generators using Full DOS as the first generator and either GAMMA section-injection or Gym-malware as the second generator yields negative results, as evidenced by both the minimum and average values being negative. Based on the results from the previous parts of this section, we can say that it is more advantageous to use GAMMA section-injection and Gym-malware generators separately in such cases. We can also see other negative values in the minimum value column for the combination of GAMMA section-injection and Gym-malware generators used in this order. Nonetheless, the average value of the combination is positive, indicating that it is beneficial to use this combination on average. For the remaining combinations, we can conclude that using them results in a better evasion rate than using the better of the two generators separately.
\begin{table}
\begin{tabular}{|l|c||r|r|r|} \hline First Generator & Second Generator & Minimum & Average & Maximum \\ \hline \hline Full DOS & Full DOS & 0.00 & 18.93 & 62.50 \\ \hline Full DOS & GAMMA section-injection & 75.00 & 715.51 & 2400.00 \\ \hline Full DOS & Gym-malware & 983.78 & 4027.99 & 6787.50 \\ \hline GAMMA section-injection & Full DOS & 3.85 & 104.90 & 409.09 \\ \hline GAMMA section-injection & GAMMA section-injection & 0.00 & 58.50 & 181.25 \\ \hline GAMMA section-injection & Gym-malware & 70.77 & 741.93 & 2154.55 \\ \hline Gym-malware & Full DOS & 0.17 & 7.31 & 39.41 \\ \hline Gym-malware & GAMMA section-injection & 0.83 & 13.60 & 42.94 \\ \hline Gym-malware & Gym-malware & 16.31 & 35.90 & 56.12 \\ \hline \end{tabular}
\end{table}
Table 10: Relative improvement (in %) for each generator combination against all AVs.
\begin{table}
\begin{tabular}{|l||r|r|r|} \hline AV & Minimum & Average & Maximum \\ \hline \hline AV1 & -14.88 & 0.87 & 9.73 \\ \hline AV2 & 0.11 & 5.28 & 10.52 \\ \hline AV3 & -4.81 & 4.02 & 18.23 \\ \hline AV4 & -11.63 & -0.31 & 10.96 \\ \hline AV5 & -7.49 & 2.06 & 16.00 \\ \hline AV6 & -5.82 & 3.03 & 20.02 \\ \hline AV7 & -8.95 & 4.43 & 20.69 \\ \hline AV8 & -3.36 & 2.52 & 14.88 \\ \hline AV9 & -2.69 & 3.99 & 17.90 \\ \hline \end{tabular}
\end{table}
Table 11: Evasion rate comparison (in %) for each AV using all combinations of generators.
## 6 Conclusion
In this paper, we explored the use of adversarial learning techniques in malware detection. Our goal was to apply existing methods for generating adversarial malware samples, test their effectiveness against selected malware detectors, and compare the evasion rates achieved and the practical applicability of these methods.
For our experiments, we chose five adversarial malware sample generators: Partial DOS, Full DOS, GAMMA padding, GAMMA section-injection, and Gym-malware. This selection represents a spectrum of adversarial techniques based on gradient, evolutionary algorithms, and reinforcement learning. These adversarial malware generators were evaluated on nine commercially available antivirus products.
To validate and compare the different characteristics and properties of the methods used, we performed four experiments. These included tracking the time taken to generate samples, changes in sample size after applying adversarial modifications, testing effectiveness against antivirus programs, and evaluating combinations of generators.
The results indicate that making optimized modifications to previously detected malware can cause the classifier to misclassify the file and label it as benign. Furthermore, the study confirmed that generated malware samples could be used successfully against detection models other than those used to generate them. Using combination attacks, a significant percentage of new samples were created that could evade detection by antivirus programs.
Experiments showed that the Gym-malware generator, which uses a reinforcement learning approach, has the greatest practical potential. This generator produced malware samples in the shortest time, with an average sample generation time of 5.73 seconds. The Gym-malware generator also achieved the highest evasion rate among all selected antivirus products, with the highest average evasion rate of 44.11% against nine AVs. Furthermore, the Gym-malware generator was effective when combined with another generator, especially with itself, where it achieved the highest average evasion rate of 58.34%. Additionally, this generator could significantly improve the performance of other generators, with absolute and relative improvements ranging between 36.04%-37.42% and 741.93%-4027.99%, respectively.
For future experiments, we propose to study in more detail how the sample generation time is affected by the size of the input malware and to investigate the correlation between generation time and the evasion rate of the resulting adversarial examples. Further experiments could be done in the area of combining generators, where more than two generators could be combined to achieve even higher evasion rates. Our work highlights the importance of developing new techniques to detect malware and identify adversarial attacks. More research is needed in this area to successfully combat these novel threats and attacks.
## Acknowledgements
This work was supported by the Student Summer Research Program 2022 of FIT CTU in Prague and by the OP VVV MEYS funded project CZ.02.1.01/0.0/0.0/16 019/0000765 "Research
\begin{table}
\begin{tabular}{|l|c||r|r|} \hline First Generator & Second Generator & Minimum & Average & Maximum \\ \hline \hline Full DOS & Full DOS & 0.00 & 0.27 & 1.12 \\ \hline Full DOS & GAMMA section-injection & -2.80 & -0.45 & 1.57 \\ \hline Full DOS & Gym-malware & -14.88 & -5.42 & 4.81 \\ \hline GAMMA section-injection & Full DOS & 0.78 & 4.13 & 5.93 \\ \hline GAMMA section-injection & GAMMA section-injection & 0.00 & 3.11 & 10.29 \\ \hline GAMMA section-injection & Gym-malware & -11.63 & 2.86 & 20.69 \\ \hline Gym-malware & Full DOS & 0.11 & 2.05 & 7.49 \\ \hline Gym-malware & GAMMA section-injection & 0.56 & 5.11 & 10.63 \\ \hline Gym-malware & Gym-malware & 7.72 & 14.23 & 20.02 \\ \hline \end{tabular}
\end{table}
Table 12: Evasion rate comparison (in %) for each generator combination against all AVs.
Center for Informatics" and by the Grant Agency of the CTU in Prague, grant No. SGS23/211/OHK3/3T/18 funded by the MEYS of the Czech Republic.
## Declarations
The authors have no relevant financial or non-financial interests to disclose.
---

# Statistical complexity and connectivity relationship in cultured neural networks

A. Tlaie, L. M. Ballesteros-Esteban, I. Leyva, I. Sendina-Nadal

arXiv:2307.06236v1, 2023-07-12, http://arxiv.org/abs/2307.06236v1
###### Abstract
We explore the interplay between the topological relevance of a neuron and its dynamical traces in experimental cultured neuronal networks. We monitor the growth and development of these networks to characterise the evolution of their connectivity. Then, we explore the structure-dynamics relationship by simulating a biophysically plausible dynamical model on top of each network's nodes. In the weak coupling regime, the statistical complexity of each single node's dynamics is found to be anti-correlated with its degree centrality, with nodes of higher degree displaying lower complexity levels. Our results imply that it is possible to infer the degree distribution of the network connectivity only from individual dynamical measurements.
keywords: Complex Networks, Neuron Models, Network Inference, Cultured Networks
## 1 Introduction
One of the main research lines in the study of the dynamics of complex networks has been the deep relationship between the connectivity and the dynamics of the nodes, and how this interaction shapes the emergence of a collective state such as synchronisation [1; 2; 3; 4]. An enormous effort has been devoted to the understanding of this phenomenon, and the knowledge gathered so far has driven the advance in crucial applications in brain dynamics [5], power grids [6], and in many other fields where synchronisation is essential [7; 8] for the system's proper functioning.
Commonly, studies have focused on states of full synchronisation [4]. Nevertheless, there are very relevant cases in which only a partial or weak synchronisation level is achieved [9; 10; 11], and this state is often optimal for balancing functional integration and segregation in the system [12; 13; 14], whereas complete coordination is evidence of a pathological condition.
Several investigations [15; 16; 17] have shown that nodes play different roles in the ensemble dynamics depending on their topological position and intrinsic dynamics [18]. One of the most explored situations is that of hubs acting as coordinators of the dynamics of the whole system [19; 20; 21; 22], being the first nodes to synchronise among themselves [23] and to the mean field [24], while the rest of the nodes progressively lock to the hubs' dynamics.
This effect of the topology on the dynamics in the weakly synchronised regime raises the question of whether it is possible to infer the network architecture from statistical correlations among the coupled units [15; 25]. A great deal of current research is being conducted along these lines. In particular, the computational neuroscience field is rooted in the hypothesis that dynamical correlations (which can be recorded in non-invasive ways) are greatly constrained and induced by the anatomical structure of the brain [5]. From these site-to-site correlation maps, the _functional brain networks_, it is often possible to obtain information about the underlying topological networks [26].
However, the fact that this structural-dynamical interaction also works in the opposite direction has been explored far less: just as the dynamics of each node influences the ensemble, the ensemble imprints its structural marks onto the dynamics of each individual node [23; 24]. We make the assumption that, long before the coupling strength is high enough to induce synchronisation, the dynamical changes at the node level encode the imprint of the node's structural role. This relevant feature could be used to extract information about the network without making any reference to pairwise correlations, particularly in those cases where the structure is unknown or unreliable, as we showed in a previous work [27].
Here, we extend our study of the influence that the ensemble has on the node dynamics to an experimental case. We culture networks of neurons from _Schistocerca gregaria_ and study the potential relationship between a simulated dynamical model (the Morris-Lecar neuron) and the anatomical network structure of a neuronal culture. The main motivation for this study is that in cultured neuronal networks it is not possible to obtain structural and dynamical information simultaneously, either because one recording technique influences the other measurement or, mainly, because the culture is not able to survive both measurements.
## 2 Experimental setup: culturing the network
To analyse the spatial structure of a real network, we focus on the study of cultured neuronal networks (CNNs), considered a simplified version of the more complex networks of the central nervous system [28; 29; 30]. For this purpose, we analyse the network structure in this CNN model by means of optical microscopy techniques, extracting the detailed connectivity and its statistical topological properties.
Our CNNs were obtained from _Schistocerca gregaria_ specimens, also known as desert locusts. As they share basic neuronal features with vertebrates, this invertebrate model has recently been used in neuroscience as a simpler approach to understanding more complex neural systems [31]. The large size of its neurons makes it ideal for observing the structure of the network, as an alternative to mammalian models.
In our experiments we follow the protocol described in [32]. Each locust is dissected to extract its frontal ganglion, formed by approximately 100 neurons [33]. To obtain an intermediate neuron density that allows us to study a complex network morphology, we extracted 12 ganglia per culture. After the dissection, the frontal ganglia undergo a chemical and mechanical procedure to remove all the connections and dissociate the neurons. The neuronal somata are cultured in a Petri dish, in an enriched environment that allows the neurites to regrow and form a new connectivity network. The cultures are monitored _in vitro_ from day 0 (DIV0, DIV = days in vitro) to day 14 (DIV14). The data used in this work correspond to 6 cultures grown under the same conditions. We inspected the morphological features of the cultured networks using a phase contrast inverted microscope (Eclipse Ti-S, Nikon) with a 10x air objective (Achromat, ADL, NA 0.25) and an automated motorised \(XYZ\) stage controller. High-resolution images were obtained on a daily basis.
In order to analyse the spatial network, we need to extract the corresponding mathematical graph. To do so, we process the culture images by means of an image segmentation algorithm [34, 35]. In Fig. 1 we portray the whole process, starting from a typical microscope image of the culture (Fig. 1(a) shows just a small area) and ending with the output of the segmentation, which detects neurons (and aggregates of neurons) and the neuronal processes connecting them (Fig. 1(b)). The algorithm is summarised in the following steps:
1. The red layer of an RGB high-resolution image of the recorded cultured network is processed (Fig. 1(a)).
2. The image is segmented and thresholded to separate background from foreground areas. Then neurons and aggregates of neurons (red areas in Fig. 1(b)) and neurites (green paths in Fig. 1(b)) are identified separately.
3. Neurons and neurites are coded into the adjacency matrix, where single and clustered neurons are the nodes and neurites are the links between them. Branching and end points of the neurites are also registered as junction nodes in the graph, even when there is no neuron on them. This provides a complete version of the graph that we call the _full graph_ (Fig. 1(c)), with two types of nodes: those corresponding to neurons and those denoting a branching point in the neuronal-process path connecting two neurons.
4. The previous data is used to build a reduced version of the graph, where only neurons (or neuronal clusters) are the nodes, and the links represent the existence of a path between neuronal clusters, eventually through branching points. In this graph, junction nodes have been removed, so we observe a more direct path between neuronal clusters, obtaining a simpler version of the matrix, _the cluster graph_ (Fig. 1(d)).
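The projection of step 4 can be sketched in a few lines. The adjacency-list representation and the `project_to_clusters` helper below are illustrative assumptions, not the authors' actual pipeline: two cluster nodes become linked whenever the full graph joins them by a path whose interior nodes are all junctions.

```python
from collections import deque

def project_to_clusters(adj, is_cluster):
    """Collapse junction nodes: two cluster nodes become linked whenever the
    full graph joins them by a path whose interior nodes are all junctions."""
    cluster_adj = {v: set() for v in adj if is_cluster[v]}
    for start in cluster_adj:
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w in seen:
                    continue
                seen.add(w)
                if is_cluster[w]:
                    cluster_adj[start].add(w)  # reached another cluster: stop
                else:
                    queue.append(w)            # keep walking through junctions
    return cluster_adj

# Toy full graph: clusters A, B joined through junctions j1, j2 (A-j1-j2-B).
adj = {"A": {"j1"}, "j1": {"A", "j2"}, "j2": {"j1", "B"}, "B": {"j2"}}
is_cluster = {"A": True, "B": True, "j1": False, "j2": False}
print(project_to_clusters(adj, is_cluster))  # A and B become directly linked
```

Note the breadth-first search stops at cluster nodes without enqueueing them, so paths passing *through* a cluster do not create spurious long-range links.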
We analyse the morphological and topological properties of the cultured network using both the full and the cluster graphs, where the links are unweighted. With the purpose of characterising the segregation and integration of the cultured neuronal network along the experiment, in Fig. 2 we measure the longitudinal progression of the averaged clustering coefficient (\(C\)) and of the shortest path length \(L\), normalised by the size of the largest connected component \(S_{1}\)[3], resulting in \(L/S_{1}\). The relationship between \(C\) and \(L\) is often used as an indicator of the balance between local and long-distance connectivity in the network. These parameters were measured both in the full graph (with neuronal clusters and junction nodes) and in the cluster graph (with only neuronal-cluster nodes).
In the full graph (Fig. 2(a)), the normalised shortest path \(L/S_{1}\) shows a high mean value in the early days of culturing, when the connectivity is still not fully developed. Between DIV3 and DIV6 there is a significant decrease, with no significant change thereafter, meaning a high integration degree in the mature culture network. The clustering coefficient is characterised by a very low mean value, showing a slight increase between DIV3 and DIV6, when the network development occurs. The low mean values of \(C\) are due to the fact that both neurons and branching points are considered nodes, meaning that the probability of forming triangles is reduced (see Fig. 1(b)).
In the case of the cluster graph (Fig. 2(b)) we observe a similar trend in \(L/S_{1}\) as the one described for the full graph. On the contrary, \(C\) exhibits higher mean values, with a more acute increase between DIV 3
Figure 1: Image segmentation processing steps and extraction of the network graph. (a) Red layer of an RGB cut of a 6-DIV-old culture. (b) Output of the segmentation algorithm for the region of interest. Single neurons and aggregates of neurons are highlighted in red while the neurites are marked in green. (c) Mapping of the segmentation objects in (b) into a full graph where circles (neurons and neuronal clusters) and diamonds (branching and end points of neurites, with no neurons) are the nodes, connected with green lines representing the neurites. (d) Projection of the full graph into the cluster graph with only neuronal clusters (red) and neurites (green), where the links represent the existence of a path between cluster nodes through junction nodes of the full graph in (c).
and DIV 6, which coincides with the most intense developmental phase. As neuronal aggregates are the only nodes in this cluster graph and junctions are not represented, the mean values of \(C\) better reflect the connected structure. Beyond that point, in the mature neuronal network, these two parameters keep constant values [34; 35].
The analysis of these parameters in both types of graphs confirms the emergence of a mature cultured neuronal network from an initial random stage. This evolved structure is characterised by high clustering coefficients and low mean path values, indicating the presence of a mature network with high segregation (favoured by high clustering values) and integration (facilitated by the existence of short paths) levels. These are the characteristics of a small-world structure, where the high tendency to form clusters of nodes in highly interconnected subgroups and the short distance between them contribute to an optimal functionality of the network [34].
We also analysed the degree distribution \(P(k)\) of these networks, where \(k\) is the number of links of each node. In Fig. 3 we plot an example of the cumulative degree distribution \(P_{c}(k)\) of an in-vitro clustered network at DIV7 (black squares), compared to equivalent simulated networks (same number of nodes and links) obtained from usual generative models: random Erdős–Rényi (ER, blue diamonds), scale-free obtained by the Barabási–Albert algorithm (SF, red dots) and a spatial network with a distance-dependent linkage pattern (spatial-ER, green diamonds). As described in [34], the cultured networks belong to the single-scale type, as they show a well-defined \(k\). The study of \(P_{c}(k)\) reveals a fast decay, with a large number of nodes sharing a similar number of connections and a few nodes with markedly larger degrees. As can be seen, the experimental connectivity largely differs from the purely random ER network, showing instead shared features between the SF and spatial networks.
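The cumulative distribution \(P_c(k)\) used in this comparison can be computed directly from the degree sequence. A minimal sketch (the toy degree sequence is invented for illustration, mimicking a single-scale distribution with one hub):

```python
import numpy as np

def cumulative_degree_distribution(degrees):
    """P_c(k): fraction of nodes whose degree is >= k, for k = 1..k_max."""
    degrees = np.asarray(degrees)
    ks = np.arange(1, degrees.max() + 1)
    pc = np.array([(degrees >= k).mean() for k in ks])
    return ks, pc

# Toy single-scale-like degree sequence: most nodes share a similar degree,
# plus one hub with a much larger degree.
ks, pc = cumulative_degree_distribution([2, 3, 3, 3, 4, 4, 12])
for k, p in zip(ks, pc):
    print(int(k), round(float(p), 3))
```

Plotting `pc` against `ks` on log axes reproduces the kind of fast decay described in the text: a plateau around the typical degree and a short tail from the hub.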
Figure 2: Results of the cultured network analysis. Mean value of the clustering coefficient \(C\) and mean path length \(L\) normalised by the size of the largest connected component \(S_{1}\) (a) in a full graph, where clusters and junctions are the nodes, and (b) in a cluster graph, where the only nodes are clusters of neurons. Each point is the average over 6 experiments.
## 3 Dynamical model
Once we have extracted the connectivity of real neuronal cultures, we can endow their nodes with a dynamical behavior, in order to explore the potential interplay between structure and dynamics. For this study we implemented the bio-inspired Morris-Lecar (ML) model [36; 27], whose equations describing the membrane-potential behavior of each unit read [37; 17]:
\[C\dot{V}_{i} =-\overbrace{g_{\rm X}M_{\infty}(V_{i}-V_{\rm X})}^{\text{Ionic channels}}+q\xi_{i}+\underbrace{\frac{\sigma}{K}\sum_{j}a_{ij}\overbrace{e^{-2(t-t_{j})}(V_{0}-V_{i})}^{\text{Synaptic function}}}_{I_{i}}+I_{i}^{ext}, \tag{1}\] \[\dot{W}_{i} =\phi\,\tau_{W}(W_{\infty}-W_{i})\]
where \(V_{i}\) and \(W_{i}\) are, respectively, the membrane potential and the fraction of open \(\rm K^{+}\) channels of the \(i\)th neuron; \(M_{\infty}\), \(W_{\infty}\) and \(\tau_{W}\) are hyperbolic functions of \(V_{i}\); and \(\phi\) is a reference frequency. The parameters \(g_{\rm X}\) and \(V_{\rm X}\) account for the electric conductance and equilibrium potentials of the \(\rm X=\{K,Ca,leaky\}\) channels. The external current \(I_{i}^{ext}=50.0\) mA is the same for all the neurons and is chosen such that neurons are sub-threshold, so that firing is induced by the white Gaussian noise \(q\xi_{i}\) of zero mean and intensity \(q\). The coupling of the \(i\)th neuron to the ensemble is described by the injected synaptic current \(I_{i}\), given by the superposition of all the post-synaptic potentials emitted by the neighbours of node \(i\) in the past, where \(t_{j}\) is the time of the last spike of node \(j\); the corresponding element of the adjacency matrix is \(a_{ij}=1\) if there is a link between nodes \(i,j\) and \(a_{ij}=0\)
Figure 3: Comparative plot of the cumulative degree distribution \(P_{\rm c}(k)\) of an experimental clustered neuronal network at DIV7 (black squares) and equivalent synthetic networks: random Erdős–Rényi (ER, blue diamonds), scale-free obtained with the Barabási–Albert algorithm (SF, red dots) and a spatial network with a distance-dependent linkage pattern (spatial-ER, green diamonds). We see that the cultured network’s distribution shows a similar behavior, for low degrees, as the SF one, and there is a region in which it is similar to the spatial network (namely, the decay for large values of \(k\) is almost identical).
otherwise. The synaptic conductance \(\sigma\), normalised by the largest node degree present in the network \(K\), plays the role of coupling intensity.
Additionally, the channel voltage-dependent saturation values are given by the following functions:
\[M_{\infty}(V_{i}) = \frac{1}{2}\left[1+\tanh\left(\frac{V_{i}-V_{1}}{V_{2}}\right) \right],\] \[W_{\infty}(V_{i}) = \frac{1}{2}\left[1+\tanh\left(\frac{V_{i}-V_{3}}{V_{4}}\right) \right], \tag{2}\] \[\tau_{W}(V_{i}) = \cosh\left(\frac{V_{i}-V_{3}}{2V_{4}}\right).\]
We chose the parameters such that the simulated neurons correspond to type II excitability, meaning that the spiking frequency depends discontinuously on the external current. The values of all the parameters can be found in Refs. [27; 38].
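As a sketch of how Eqs. (1)-(2) can be integrated for a single, uncoupled unit, the following Euler-Maruyama loop uses the standard type-II (Hopf) Morris-Lecar parameter set from the literature; these values, the sub-threshold current, the time step and the way the noise enters are assumptions for illustration, not necessarily the choices of Refs. [27; 38].

```python
import numpy as np

def ml_step(V, W, I_ext, dt, q, rng):
    # Standard type-II (Hopf) Morris-Lecar parameter set from the literature;
    # assumed here, not necessarily the values used in Refs. [27; 38].
    C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
    VCa, VK, VL = 120.0, -84.0, -60.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    Minf = 0.5 * (1.0 + np.tanh((V - V1) / V2))
    Winf = 0.5 * (1.0 + np.tanh((V - V3) / V4))
    tauW = np.cosh((V - V3) / (2.0 * V4))
    dV = (-gCa * Minf * (V - VCa) - gK * W * (V - VK)
          - gL * (V - VL) + I_ext) / C
    dW = phi * tauW * (Winf - W)
    # Euler-Maruyama step; the white-noise term of intensity q is added
    # directly to the voltage equation.
    return V + dt * dV + q * np.sqrt(dt) * rng.standard_normal(), W + dt * dW

rng = np.random.default_rng(0)
V, W, trace = -60.0, 0.0, []
for _ in range(20000):                 # 200 ms at dt = 0.01 ms
    V, W = ml_step(V, W, I_ext=85.0, dt=0.01, q=1.0, rng=rng)
    trace.append(V)
print(round(min(trace), 1), round(max(trace), 1))
```

With the current just below the Hopf bifurcation, the deterministic part relaxes to a fixed point and the noise produces sub-threshold fluctuations, occasionally triggering spikes, as the text describes for the coupled network.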
We must remark, however, that this model was originally conceived for a single neuron, whereas in this work our individual nodes are aggregates of 20 of them. Although this quantity is not enough to justify a _neural mass model_, strictly speaking a node is not a single neuron either.
## 4 Statistical characterisation of the dynamics
With the purpose of providing a solid description of the system, we use two different quantities: a global and a local one. The global measure is the _synchronisation level_ of the network, i.e. how similar the dynamical outputs of our units are, while the local one is an individual measure of the _complexity_ of a node's time series.
### Synchronisation measure
In order to quantify the level of synchronisation we estimate how many neurons fire within the same time window. The total simulation time \(T\) is divided into \(n=1,\ldots,N_{b}\) bins of a convenient size \(\tau\), such that \(T=N_{b}\tau\), and the binary quantity \(B_{i}(n)\) is defined such that \(B_{i}(n)=1\) if the \(i\)th neuron spiked within the \(n\)th interval and 0 otherwise. The synchronisation between the spiking sequences of neurons \(i\) and \(j\) is then characterised by the pairwise correlation matrix \(s_{ij}\in[0,1]\),
\[s_{ij}=\frac{\sum_{n=1}^{N_{b}}B_{i}(n)B_{j}(n)}{\sqrt{\sum_{n=1}^{N_{b}}B_{i}(n)\sum_{n=1}^{N_{b}}B_{j}(n)}}, \tag{3}\]
where the term in the denominator is a normalisation factor and \(s_{ij}=1\) means full coincidence between the two spiking series. The ensemble average of \(s_{ij}\),
\(\langle s_{ij}\rangle=\frac{2}{N(N-1)}\sum_{i<j}s_{ij}\) is a measure of the global synchronisation in the network. In Fig. 5(a) we plot an example of \(\langle s_{ij}\rangle\) as a function of \(\sigma\) for an experimental CNN clustered network with \(N=246\) nodes, obtained at DIV7 as explained in Sec. 2. A transition from an asynchronous to an almost synchronous firing is observed as the synaptic conductance \(\sigma\) is increased, which confirms that the structure is suitable for inducing synchronisation in the system.
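A minimal implementation of the binned synchronisation measure follows. Note one assumption about the intended normalisation: the denominator uses the square root of \(\sum_n B_i \sum_n B_j\) (the Wang-Buzsáki coherence convention), which is what makes \(s_{ij}=1\) for fully coincident spike trains.

```python
import numpy as np

def pairwise_sync(spike_times, T, tau):
    """Binned coincidence matrix s_ij and its ensemble average.
    Denominator: sqrt(sum_n B_i * sum_n B_j), so two identical spike
    trains give s_ij = 1 (Wang-Buzsaki coherence convention)."""
    n_bins = int(round(T / tau))
    B = np.zeros((len(spike_times), n_bins))
    for i, times in enumerate(spike_times):
        idx = np.minimum((np.asarray(times) / tau).astype(int), n_bins - 1)
        B[i, idx] = 1.0
    counts = B.sum(axis=1)
    s = (B @ B.T) / np.sqrt(np.outer(counts, counts))
    N = len(spike_times)
    mean_s = (s.sum() - np.trace(s)) / (N * (N - 1))  # average over i != j
    return s, mean_s

# Two coincident trains plus one uncorrelated train; T = 10, tau = 1.
s, mean_s = pairwise_sync([[0.5, 3.2, 7.1], [0.6, 3.4, 7.3], [1.5, 5.5]],
                          T=10.0, tau=1.0)
print(np.round(s, 2))
print(round(float(mean_s), 3))  # 0.333
```

Sweeping the conductance \(\sigma\) and recomputing `mean_s` from the simulated spike trains reproduces the kind of synchronisation curve shown in Fig. 5(a).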
### Statistical complexity
Having characterised the effect of the interplay between network topology and dynamics on the global state, we now explore the effect that the presence of the ensemble has on the single-node dynamics by measuring the _statistical complexity_ of the single nodes along the synchronisation process. As the typical neuronal dynamics exhibited by Eq. (1) consists of a sequence of \(L\) spikes whose amplitude variability is negligible, we focus on the complexity \(C_{i}\) of the sequence of inter-spike intervals \((t_{l}-t_{l-1})\) (ISI) of each neuron.
The ordinal patterns formalism [39] associates a symbolic sequence to a series [40], transforming the actual values of the data series into a set of natural numbers. To do so, the ISI series of each neuron is divided into sequences of length \(D\). In each sequence, the data values are ordered in terms of their relative magnitudes [13], which provides the corresponding symbolic sequence. The information content of these sequences is then evaluated by means of the complexity measure. The complete process is illustrated in Fig. 4.
This is a well-established method: statistically reliable, robust to noise, extremely fast to compute, and with a clear definition and interpretation in physical terms. It is derived from two equally well-established measures (divergence and entropy), which are easily interpretable when analysing nonlinear dynamical systems. In addition, it only requires soft criteria, namely that the time series be weakly stationary, i.e., for \(k\leq D\), the probability that \(\text{ISI}_{t}<\text{ISI}_{t+k}\) should not depend on \(t\)[39], and that \(M\gg D!\) (where \(M\) is the number of points of the entire time series of ISIs), both of which are easily checkable. We proceed in the following way, as shown in Fig. 4:
* From each single-node time series of the simulated (Morris-Lecar neuron) signal, we detect the spikes (a) and extract their timestamps (b).
* We compute the series of inter-spike time intervals (ISI) (c) and save them in an array (d), which will be our object of study.
* The ISI series is divided in sequences of length \(D\) (\(D=3\) in this illustration, (e)). We compare consecutive points in each sequence and associate a natural number to each of them (f), ranking them based on their relative size.
* We count how many times a certain symbolic sequence (or _pattern_) \(\pi\) of length \(D\) appears (\(N_{\pi}\)).
Figure 4: Illustration of the Ordinal Patterns formalism. (a) Detection of the maxima of the simulated (Morris-Lecar neuron) signal. (b) Extraction of the timestamps at which these maxima have been attained. Consecutive timestamps are subtracted (c) and their differences (ISIs) are stored in an array (d). The time series of ISIs is divided in sequences of length \(D\) (\(D\) = 3 in this example), and consecutive points in each sequence are compared and classified based on their relative values (e). Each sequence of length \(D\) is given a natural number and the probability of the realization of each possible \(D\)-symbols sequence is computed (f). The probabilities are then used to construct the complexity measure.
* Then, we define a probability of occurrence for each pattern: \(P_{\pi}=\frac{N_{\pi}}{N_{T}}\), where \(N_{T}\) is the total number of sequences of length \(D\) into which the time series is divided, i.e. \(N_{T}=(L-1)/D\), with \(L\) the total number of spikes (hence \(L-1\) ISIs).
* We construct a _probability distribution_, which we call \(P\) from now on, over all possible symbolic sequences of length \(D\), with probabilities \(P_{\pi}\).
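The pattern-counting steps above can be sketched as follows, using non-overlapping windows as in the text (the toy ISI series is invented for illustration):

```python
from itertools import permutations

def ordinal_pattern_distribution(series, D):
    """Probability P_pi of each length-D ordinal pattern, counted over
    non-overlapping windows (N_T = len(series) // D)."""
    counts = {p: 0 for p in permutations(range(D))}
    n_windows = len(series) // D
    for w in range(n_windows):
        window = series[w * D:(w + 1) * D]
        # rank the D values by relative magnitude -> a permutation symbol
        symbol = tuple(sorted(range(D), key=lambda i: window[i]))
        counts[symbol] += 1
    return {p: c / n_windows for p, c in counts.items()}

# Strictly increasing ISIs: every window realises the 'ascending' pattern.
P = ordinal_pattern_distribution([1, 2, 3, 4, 5, 6, 7, 8, 9], D=3)
print(P[(0, 1, 2)])  # 1.0
```

The returned dictionary is the distribution \(P\) over the \(D!\) possible patterns used in the next section.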
Once the probability distribution \(P\) is obtained, the statistical complexity can be defined. It is a measure that should be minimal both for pure noise and for absolute regularity, and provide a bounded value for other regimes. This being so, we need to characterise the disorder and a correcting term (i.e., a way of comparing known probability distributions with the actual one). The statistical complexity (\(C\)), as defined in Ref. [41], is the product of the Permutation Entropy (\(H\)) and the Disequilibrium (\(Q\)).
To define the permutation entropy \(H\), the first step is the evaluation of the Shannon entropy, that gives an idea of the _predictability_ of the series:
\[S[P]=-\sum_{j=1}^{D!}p_{j}\cdot\log(p_{j}) \tag{4}\]
The permutation entropy corresponds to the normalisation of \(S\) with respect to the entropy of the uniform probability distribution, \(S_{max}\):
\[H=\frac{S}{S_{max}},\quad S_{max}=S[P_{e}], \tag{5}\] \[P_{e}\equiv\{p_{i}=1/D!\}_{i=1,...,D!}\implies 0\leq H\leq 1\]
Regarding the disequilibrium \(Q\), it measures the distance between the actual probability distribution \(P\) and the equilibrium (uniform) probability distribution \(P_{e}\). This notion of _distance_ can be defined in several ways; in this work, we adopt the statistical distance given by the Kullback-Leibler [42] relative entropy (\(K\)):
\[K[P|P_{e}] =-\sum_{j=1}^{D!}p_{j}\cdot\log(p_{e})+\sum_{j=1}^{D!}p_{j}\cdot \log(p_{j})=\] \[=S[P|P_{e}]-S[P] \tag{6}\]
where \(S[P|P_{e}]\) is the Shannon cross entropy. If we now symmetrise Eq. (6), we get the Jensen-Shannon divergence (\(J\)):
\[J[P|P_{e}]=\frac{1}{2}\left(K\Big{[}P\Big{|}\tfrac{P+P_{e}}{2}\Big{]}+K\Big{[}P_{e}\Big{|}\tfrac{P+P_{e}}{2}\Big{]}\right)\overset{(*)}{=}S[(P+P_{e})/2]-S[P]/2-S[P_{e}]/2 \tag{7}\]
where (\(*\)) is simply the rewritten version in terms of \(S\). Finally, we can write the disequilibrium \(Q\) as the normalised version of \(J\) as:
\[Q=Q_{0}J[P|P_{e}] \tag{8}\]
with \(Q_{0}=-2\left\{\frac{N+1}{N}\log(N+1)-2\log(2N)+\log N\right\}^{-1}\), where here \(N=D!\) is the number of possible patterns, implying again \(0\leq Q\leq 1\). We then just have to multiply \(H\) and \(Q\) to obtain the _complexity measure_:
\[C=H\cdot Q \tag{9}\]
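Putting Eqs. (4)-(9) together, the following sketch computes \(C=H\cdot Q\) for a distribution \(P\) over the \(N=D!\) ordinal patterns; the normalisation constant \(Q_0=1/J_{\max}\) used here is the standard one for this complexity measure and is an assumption about the authors' exact convention.

```python
from math import log

def statistical_complexity(P):
    """C = H * Q following Eqs. (4)-(9), for a distribution P over the
    N = D! ordinal patterns."""
    N = len(P)
    Pe = [1.0 / N] * N

    def shannon(dist):
        return -sum(p * log(p) for p in dist if p > 0)

    H = shannon(P) / log(N)                      # permutation entropy, Eq. (5)
    mid = [(p + q) / 2.0 for p, q in zip(P, Pe)]
    J = shannon(mid) - shannon(P) / 2.0 - shannon(Pe) / 2.0   # Eq. (7)
    Q0 = -2.0 / ((N + 1) / N * log(N + 1) - 2.0 * log(2 * N) + log(N))
    return H * Q0 * J                            # C = H * Q, Eq. (9)

# Both extremes give zero complexity: the uniform distribution (pure noise)
# has Q = 0, while a delta distribution (perfect regularity) has H = 0.
print(statistical_complexity([1 / 6] * 6))
print(statistical_complexity([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
```

Intermediate distributions give \(C>0\), which is the regime probed in Fig. 5 when comparing hubs and peripheral nodes.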
## 5 Results
We summarise our results for the statistical complexity in a cultured neuronal network in Fig. 5. As commented above, in panel (a) we show as a reference the synchronisation level vs the synaptic conductance \(\sigma\) for the dynamics simulated on top of a DIV7 experimental network. In Fig. 5(b) we plot the value of \(\langle C\rangle_{k}\) as a function of the conductance \(\sigma\) for two nodes with high (\(k=30\)) and poor (\(k=3\)) connectivity, where \(\langle C\rangle_{k}=\sum_{[i|k_{i}=k]}C_{i}/N_{k}\), with \(N_{k}\) the number of nodes with degree \(k\). The results evidence that, along that same route to synchronisation, there exist differences between how hubs and peripheral nodes behave due to the presence of the ensemble, even when the global synchronisation level is still very low.
The main detail that catches our attention is that the peripheral nodes show a greater complexity than the hubs (\(\sigma=950\)). To further explore this finding, in the third panel we depict the statistical complexity vs the degree, for this value of \(\sigma\). We can extract an interesting result here: for \(\sigma=950\), _there exists an anti-correlation between \(\langle C\rangle_{k}\) and \(k\)_.
This anti-correlation observed in neuronal cultures is not as evident as the one reported in Ref. [27] for SF networks. However, taking into account that the structure, grown in a limited spatial domain, does not belong to the class of power-law networks (as already discussed), and that the Morris-Lecar model was not originally designed for this kind of neuronal aggregate, one can conclude that the anti-correlation between the statistical complexity \(C\) and the degree \(k\) is a rather robust feature.
## 6 Conclusions
In Ref. [27] we investigated the relationship between statistical complexity and topology in synthetically generated networks. Here, we focused on the study of real-world topologies, such as those exhibited by self-organised neuronal cultures. The longitudinal study of the morphology of these networks shows an evolution from isolated neurons to a percolated heterogeneous topology with small-world properties.
In order to study the structure-dynamics interaction in these networks, we simulated a dynamical model (the Morris-Lecar neuron) on top of experimental neuronal networks at the mature developmental stage. We showed that, in the weakly coupled regime, the statistical complexity of a node's series of inter-spike intervals anti-correlates with the node's degree. Therefore, it would be possible to infer the degree distribution of the network from node dynamical measurements, which confirms the result
Figure 5: Dependence of the statistical complexity at the node level on its topological role in an experimental neuronal network 7 days in vitro old with \(N=246\) Morris-Lecar neurons. (a) Synchronisation curve for the explored values of the synaptic conductance. (b) Complexity values \(\langle C\rangle_{k}\) vs. \(\sigma\) for low (\(k=3\)) and high (\(k=20\)) degree values. (c) \(\langle C\rangle_{k}\) vs. \(k\) for the conductance value (\(\sigma=950\)) marked in (b) with a vertical dashed line. Each point is the average of 6 network realisations.
obtained for synthetic networks [27]. This approach, based on the computation of complexity values retrieved from single-node dynamics, provides a different perspective from the usual methods of network inference, since it does not imply node-to-node calculations. Additionally, our method does not require measuring the dynamics of every node: even an incomplete measurement still provides the nodes' relative roles. We hope this approach will be useful in applications where the knowledge of the degree distribution, instead of the detailed connectome, provides sufficient insight into an unknown topology and the functioning of the underlying system.
## 7 Acknowledgements
Financial support from the Ministerio de Economía y Competitividad of Spain under project FIS2017-84151-P and from the Group of Research Excellence URJC-Banco de Santander is acknowledged. A.T. and L.B.-E. acknowledge support from the European Youth Employment Initiative.
|
2305.16884 | Solving the cohomological equation for locally hamiltonian flows, part I
-- local obstructions | We study the cohomological equation $Xu=f$ for smooth locally Hamiltonian
flows on compact surfaces. The main novelty of the proposed approach is that it
is used to study the regularity of the solution $u$ when the flow has saddle
loops, which has not been systematically studied before. Then we need to limit
the flow to its minimum components. We show the existence and (optimal)
regularity of solutions regarding the relations with the associated
cohomological equations for interval exchange transformations (IETs). Our main
theorems state that the regularity of solutions depends not only on the
vanishing of the so-called Forni's distributions (cf.\ \cite{Fo1,Fo3}), but
also on the vanishing of families of new invariant distributions (local
obstructions) reflecting the behavior of $f$ around the saddles. Our main
results provide some key ingredient for the complete solution to the regularity
problem of solutions (in cohomological equations) for a.a.\ locally Hamiltonian
flows (with or without saddle loops) to be shown in \cite{Fr-Ki3}.
The main contribution of this article is to define the aforementioned new
families of invariant distributions $\mathfrak{d}^k_{\sigma,j}$,
$\mathfrak{C}^k_{\sigma,l}$ and analyze their effect on the regularity of $u$
and on the regularity of the associated cohomological equations for IETs. To
prove this new phenomenon, we further develop local analysis of $f$ near
degenerate singularities inspired by tools from \cite{Fr-Ki} and \cite{Fr-Ul2}.
We develop new tools of handling functions whose higher derivatives have
polynomial singularities over IETs. | Krzysztof Frączek, Minsung Kim | 2023-05-26T12:41:11Z | http://arxiv.org/abs/2305.16884v3 | # Solving the cohomological equation for locally Hamiltonian flows, part I - local obstructions
###### Abstract.
We study the cohomological equation \(Xu=f\) for smooth locally Hamiltonian flows on compact surfaces. The main novelty of the proposed approach is that it is used to study the regularity of the solution \(u\) when the flow has saddle loops, which has not been systematically studied before; this requires restricting the flow to its minimal components. We show the existence and (optimal) regularity of solutions in relation to the associated cohomological equations for interval exchange transformations (IETs). Our main theorems state that the regularity of solutions depends not only on the vanishing of the so-called Forni's distributions (cf. [2, 3]), but also on the vanishing of families of new invariant distributions (local obstructions) reflecting the behavior of \(f\) around the saddles. Our main results provide a key ingredient for the complete solution to the regularity problem of solutions (in cohomological equations) for a.a. locally Hamiltonian flows (with or without saddle loops) to be given in [5].
The main contribution of this article is to define the aforementioned new families of invariant distributions \(\mathfrak{d}_{\sigma,j}^{k}\), \(\mathfrak{C}_{\sigma,l}^{k}\) and analyze their effect on the regularity of \(u\) and on the regularity of the associated cohomological equations for IETs. To prove this new phenomenon, we further develop local analysis of \(f\) near degenerate singularities inspired by tools from [4] and [7]. We develop new tools of handling functions whose higher derivatives have polynomial singularities over IETs.
Key words and phrases:locally Hamiltonian flows, cohomological equation, invariant distributions 2000 Mathematics Subject Classification: 37E35, 37A10, 37C40, 37C83, 37J12
## 1. Introduction
Let \(M\) be a smooth compact connected orientable surface of genus \(g\geq 1\). We deal with smooth flows \(\psi_{\mathbb{R}}=(\psi_{t})_{t\in\mathbb{R}}\) on \(M\) (associated to a vector field \(X:M\to TM\)) preserving a smooth positive measure \(\mu\), i.e. such that for any (orientable) choice of local coordinates \((x,y)\) we have \(d\mu=V(x,y)dx\wedge dy\) with \(V\) positive and smooth. These flows are called _locally Hamiltonian flows_. Indeed, for any (orientable) choice of local coordinates \((x,y)\) such that \(d\mu=V(x,y)dx\wedge dy\), the flow \(\psi_{\mathbb{R}}\) is a local solution to the Hamiltonian equation
\[\frac{dx}{dt}=\frac{\frac{\partial H}{\partial y}(x,y)}{V(x,y)},\quad\frac{dy }{dt}=-\frac{\frac{\partial H}{\partial x}(x,y)}{V(x,y)}\]
for a smooth real-valued function \(H\), or equivalently \(\frac{dz}{dt}=-2i\frac{\frac{\partial H}{\partial\bar{z}}(z,\overline{z})}{V(z,\overline{z})}\). For a general introduction to locally Hamiltonian flows, we refer the reader to [7, 4, 10, 12].
For any smooth observable \(f:M\to\mathbb{C}\) we are interested in understanding the smoothness of the solution \(u:M\to\mathbb{C}\) of the cohomological equation
\[u(\psi_{t}x)-u(x)=\int_{0}^{t}f(\psi_{s}x)\,ds\text{ for all }x\in M,\ t\in \mathbb{R}, \tag{1.1}\]
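The integral identity (1.1) is the flow (integrated) version of the infinitesimal equation \(Xu=f\); differentiating both sides at \(t=0\) makes the equivalence explicit:

```latex
% Differentiating (1.1) with respect to t at t = 0:
\frac{d}{dt}\Big|_{t=0} u(\psi_t x) = (Xu)(x),
\qquad
\frac{d}{dt}\Big|_{t=0}\int_0^t f(\psi_s x)\,ds = f(x),
```

so, wherever \(u\) is differentiable along the flow, (1.1) is equivalent to \(Xu=f\).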
The main goal of this article (and the subsequent one [5]) is to go beyond the case of a minimal flow on the whole surface \(M\) and beyond the case of functions \(f\) belonging to a weighted Sobolev space. We deal with locally Hamiltonian flows restricted to any minimal component, and \(f:M\to\mathbb{C}\) is any smooth function. The study of locally Hamiltonian flows in such a context gives rise to new invariant distributions, which, unlike Forni's distributions, are local in nature. The first two new families of such invariant distributions, defined in Section 1.4, read the local behaviour of functions around saddle points. The last family, which is a counterpart of Forni's distributions, is defined in [5] using renormalization techniques inspired by the approach developed in [8, 9, 7, 4].
All three families of invariant distributions affect the degree of smoothness of the solution of the cohomological equation. However, in the present article we focus only on the first two families, and the main results of the paper are contained in Theorems 1.1, 1.2 and 1.3. The methods for studying their effect on the degree of smoothness are purely analytical, in contrast to the dynamical arguments left to [5], where the last family plays a central role.
### Special representation and IETs
Locally Hamiltonian flows restricted to their minimal components are represented as special flows over interval exchange transformations. Let us consider a restriction of a locally Hamiltonian flow \(\psi_{\mathbb{R}}\) on \(M\) to its minimal component \(M^{\prime}\subset M\). Let \(I\subset M^{\prime}\) be any transversal smooth curve with its standard parametrization \(\gamma:[0,|I|]\to I\), i.e. \(\int_{0}^{\gamma(s)}\eta=s\) for \(s\in[0,|I|]\), where \(\eta\) is the closed \(1\)-form given by \(\eta=\frac{\partial H}{\partial x}dx+\frac{\partial H}{\partial y}dy\) in local coordinates. By minimality, \(I\) is a global transversal and the first return map \(T:I\to I\) is an interval exchange transformation (IET) in standard coordinates on \(I\). We will denote by \(I_{\alpha}\), \(\alpha\in\mathcal{A}\) the subintervals translated by \(T\). In order to minimize the number of exchanged intervals, we will always assume that each end of \(I\) is the first meeting point with \(I\) of a separatrix (incoming or outgoing, and not a saddle connection) emanating from a fixed point.
Let \(\tau:I\to\mathbb{R}_{>0}\cup\{+\infty\}\) be the first return time map. Then each point in \(M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup\operatorname{SL}(\psi_{\mathbb{R}}))\) is uniquely represented as \(\psi_{t}x\) for some \(x\in I\) and \(0\leq t<\tau(x)\). The function \(\tau:I\to\mathbb{R}_{>0}\cup\{+\infty\}\) is smooth on the interior of any exchanged interval and has _singularities_ at discontinuities of \(T\). Each such discontinuity is the first hitting point (forward or backward) with the curve (interval) \(I\) of a separatrix emanating from a saddle. Moreover, degenerate saddles (\(m_{\sigma}>2\)) of \(\psi_{\mathbb{R}}\) are responsible for the appearance of singularities of _polynomial type_ and simple saddles (\(m_{\sigma}=2\)) are responsible for the appearance of _logarithmic type_ singularities.
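To illustrate the logarithmic case, consider the local model of a simple saddle: in a suitable local chart one has \(H(x,y)=\Im(x+\imath y)^{2}=2xy\), so the Hamiltonian flow reads \(\dot{x}=\frac{\partial H}{\partial y}=2x\), \(\dot{y}=-\frac{\partial H}{\partial x}=-2y\). A trajectory entering the unit box at \((s,1)\) with \(0<s<1\) exits at \((1,s)\) after time

\[t(s)=\int_{s}^{1}\frac{dx}{2x}=\frac{1}{2}\log\frac{1}{s},\]

so the transit time near the saddle blows up logarithmically as the entry point approaches the separatrix \(\{x=0\}\), and \(\tau\) inherits singularities of exactly this type.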
### Two crucial operators and two cohomological equations
For any smooth observable \(f:M\to\mathbb{C}\) we deal with the corresponding map \(\varphi_{f}:I\to\mathbb{C}\cup\{\infty\}\) given by
\[\varphi_{f}(x)=\int_{0}^{\tau(x)}f(\psi_{t}x)dt.\]
The function \(\varphi_{f}\) is smooth on the interior of any interval \(I_{\alpha}\) and can have polynomial or logarithmic type singularities at discontinuities of \(T\), depending on the vanishing of certain invariant distributions (evaluated at \(f\)) defined in [4] and based on partial derivatives of \(f\) at saddles in \(M^{\prime}\). One of the aims of this paper is a deeper understanding of the operator \(f\mapsto\varphi_{f}\) on the kernel of all invariant distributions coming from [4]. Then \(\varphi_{f}\) itself has no singularities, but its derivatives may. In this paper we define an infinite sequence of new (slightly more sophisticated) invariant distributions (based on partial derivatives at saddles) which are responsible for understanding the regularity of \(\varphi_{f}\).
For solving the cohomological equation (1.1) we also need to study another operator \(g\mapsto u_{g,f}\). Suppose that \(g:I\to\mathbb{C}\) is a solution (at least continuous) of another cohomological equation
\[g(Tx)-g(x)=\varphi_{f}(x)\text{ on }I. \tag{1.2}\]
This is an obvious necessary condition for the existence of a smooth solution of the equation (1.1). Indeed, if \(u\) is smooth and satisfies (1.1), then the map \(g:I\to\mathbb{C}\) defined as the restriction of \(u\) to \(I\) is smooth and satisfies (1.2). A natural problem is: when is this also a sufficient condition?
Suppose that \(g:I\to\mathbb{C}\) is a smooth solution of (1.2). Then the corresponding solution \(u_{g,f}:M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup \operatorname{SL}(\psi_{\mathbb{R}}))\to\mathbb{C}\) is defined as follows. If \(\psi_{t}x\in I\) for some \(t\in\mathbb{R}\) then
\[u_{g,f}(x):=g(\psi_{t}x)-\int_{0}^{t}f(\psi_{s}x)\,ds.\]
By the proof of Lemma 6.3 in [6], the function \(u_{g,f}\) is well defined on \(M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup\operatorname{ SL}(\psi_{\mathbb{R}}))\). Moreover, if \(M\) is a \(C^{\infty}\)-surface, \(\psi_{\mathbb{R}}\) is a \(C^{\infty}\)-flow and \(f\) is a \(C^{\infty}\)-observable, then \(u_{g,f}\) is as regular as \(g\). Indeed, by the absence of saddle connections joining different saddles, for every \(x_{0}\in M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup \operatorname{SL}(\psi_{\mathbb{R}}))\) there exists \(t_{0}\in\mathbb{R}\) such that \(\psi_{t_{0}}x_{0}\in\operatorname{Int}I\). For simplicity, assume that \(t_{0}\leq 0\). Then choose \(\varepsilon>0\) such that \([\psi_{t_{0}}x_{0}-\varepsilon,\psi_{t_{0}}x_{0}+\varepsilon]\subset \operatorname{Int}I\) and let
\[R(x_{0},t_{0},\varepsilon):=\bigcup_{-\varepsilon\leq t\leq-t_{0}+\varepsilon }\psi_{t}[\psi_{t_{0}}x_{0}-\varepsilon,\psi_{t_{0}}x_{0}+\varepsilon]. \tag{1.3}\]
If \(\varepsilon>0\) is small enough then \(\nu:[-\varepsilon,-t_{0}+\varepsilon]\times[\psi_{t_{0}}x_{0}-\varepsilon, \psi_{t_{0}}x_{0}+\varepsilon]\to R(x_{0},t_{0},\varepsilon)\) given by \(\nu(t,x)=\psi_{t}x\) is a \(C^{\infty}\)-diffeomorphism. Moreover,
\[u_{g,f}\circ\nu(t,x)=g(x)-\int_{0}^{-t}f\circ\nu(s+t,x)\,ds=g(x)+\int_{0}^{t}f\circ\nu(s,x)\,ds.\]
In particular, \(\frac{\partial}{\partial t}\big(u_{g,f}\circ\nu\big)(t,x)=f\circ\nu(t,x)\), i.e. \(Xu_{g,f}=f\) on \(R(x_{0},t_{0},\varepsilon)\).
It follows that the regularity of \(u_{g,f}\) restricted to \(R(x_{0},t_{0},\varepsilon)\) coincides with the regularity of \(g\) on \([\psi_{t_{0}}x_{0}-\varepsilon,\psi_{t_{0}}x_{0}+\varepsilon]\). Since \(x_{0}\in\operatorname{Int}R(x_{0},t_{0},\varepsilon)\), we obtain our claim.
However, the solution \(u_{g,f}\) of the cohomological equation is not fully satisfactory because it is defined only on an open (dense) subset of the minimal component, without fixed points and saddle loops. Our main goal is to find necessary and sufficient conditions for the existence of a smooth solution (of the cohomological equation) defined over all of \(M^{\prime}\). More precisely, instead of \(M^{\prime}\) we will study smooth solutions defined on the end compactification \(M^{\prime}_{e}\) of \(M^{\prime}\setminus\operatorname{Sd}(\psi_{\mathbb{R}})\). Roughly speaking, if \(l\geq 2\) loops emanate from a saddle \(\sigma\), then \(\sigma\) is the \(l\)-fold end of the set \(M^{\prime}\setminus\operatorname{Sd}(\psi_{\mathbb{R}})\). For this reason, \(\sigma\) splits in \(M^{\prime}_{e}\) into \(l\) different end points \(\sigma_{1},\dots,\sigma_{l}\), see Figure 1. We will look for smooth solutions \(u:M^{\prime}_{e}\to\mathbb{C}\) of (1.1). If a smooth solution \(u:M^{\prime}_{e}\to\mathbb{C}\) exists then it is smooth in a neighborhood (in \(M^{\prime}_{e}\)) of any version \(\sigma_{i}\) of the saddle point \(\sigma\), but it need not even be continuous at \(\sigma\) whenever the limits of \(u\) at \(\sigma\) along different neighborhood sectors (connected components) differ. Of course, if at most one saddle loop emanates from each saddle then \(M^{\prime}_{e}\) coincides with \(M^{\prime}\) and the problems of regularity of \(u:M^{\prime}_{e}\to\mathbb{C}\) and of \(u:M^{\prime}\to\mathbb{C}\) are equivalent.
### Grading of smoothness
Let \(M\) be a \(C^{\infty}\)-manifold with a boundary. For any \(n\in\mathbb{Z}_{\geq 0}\) and \(0<a<1\) denote by \(C^{n+a}(M)\) the space of \(C^{n}\)-functions on \(M\) such that their \(n\)-th derivative is \(a\)-Hölder. Let \(\eta:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be given by \(\eta(x)=-x\log x\) for \(x\in[0,e^{-1}]\) and \(\eta(x)=e^{-1}\) for \(x\geq e^{-1}\). For any \(n\in\mathbb{Z}_{\geq 0}\) denote by \(C^{n+\eta}(M)\) the space of \(C^{n}\)-functions on \(M\) whose \(n\)-th derivative admits a positive multiple of \(\eta\) as a modulus of continuity. For every non-natural real \(r>0\) we will write \(C^{r}\) for \(C^{\lfloor r\rfloor+\{r\}}\).
Let \(\mathbb{R}_{\eta}:=(\mathbb{R}_{>-1}\setminus\mathbb{Z})\cup(\mathbb{Z}_{\geq-1}+\{\eta\})\) and let \(v:\mathbb{R}_{\eta}\to\mathbb{R}\) be given by \(v(r)=r\) if \(r\in\mathbb{R}_{>-1}\setminus\mathbb{Z}\) and \(v(n+\eta)=n+1\). Then \(0\leq v(r)\leq v(r^{\prime})\) iff \(C^{r^{\prime}}\subset C^{r}\).
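For example, \(v(\tfrac{1}{2})=\tfrac{1}{2}\), while \(v(0+\eta)=1\): the space \(C^{0+\eta}\) of functions admitting the modulus of continuity \(\eta\) satisfies

\[C^{0+\eta}\subset C^{a}\quad\text{for every }0<a<1,\]

since \(-x\log x\leq C_{a}x^{a}\) near \(0\); in the scale \(C^{r}\), \(r\in\mathbb{R}_{\eta}\), the classes \(C^{n+\eta}\) thus play the role of the (otherwise excluded) integer level \(n+1\).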
### Invariant distributions
To solve our main problem, in the present paper we introduce a family of invariant distributions \(f\mapsto\mathfrak{d}_{\sigma,j}^{k}(f)\) for all \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\), \(k\geq 0\) and \(0\leq j\leq k\wedge(m_{\sigma}-2)\). Throughout the article we use the notation \(x\lor y=\max\{x,y\}\) and \(x\wedge y=\min\{x,y\}\) for any pair of real numbers \(x,y\). Recall that a bounded linear functional \(f\mapsto\mathfrak{D}(f)\) is an _invariant distribution_ if \(\mathfrak{D}(Xu)=0\) for any \(u\in C^{\infty}(M)\). The distributions are defined locally around saddles and are obstructions to the existence of smooth solutions to the cohomological equation. The invariant distributions \(\mathfrak{d}_{\sigma,j}^{k}\) are defined based on the higher-order partial derivatives of the function \(f\) at saddles, or are linear combinations of such partial derivatives (if \(k>m_{\sigma}-2\)). We also introduce alternative versions of such invariant distributions, i.e. \(f\mapsto\mathfrak{C}_{\sigma,l}^{k}(f)\) for \(0\leq l<2m_{\sigma}\), which have a more geometric interpretation, and generate the same space of invariant distributions as \(\mathfrak{d}_{\sigma,j}^{k}\).
Suppose that \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\) is a saddle of multiplicity \(m_{\sigma}\geq 2\). Fix a singular chart \((x,y)\) in a neighborhood \(U_{\sigma}\) of \(\sigma\). Then the local Hamiltonian is of the form \(H(x,y)=\Im(x+\imath y)^{m_{\sigma}}\) and the \(\psi_{\mathbb{R}}\)-invariant area-measure is \(d\mu=V(x,y)dx\wedge dy\), where \(V\) is positive and smooth. Then for every \(k\geq 0\) and \(0\leq j\leq k\wedge(m_{\sigma}-2)\)
Figure 1. The minimal component \(M^{\prime}\) before and after the separation procedure.
with \(j\neq k-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\) we define the functional \(\mathfrak{d}^{k}_{\sigma,j}:C^{k}(M)\to\mathbb{C}\) as follows:
\[\mathfrak{d}^{k}_{\sigma,j}(f)=\sum_{0\leq n\leq\frac{k-j}{m_{\sigma}}}\frac{\binom{k}{j+nm_{\sigma}}\binom{\frac{(m_{\sigma}-1)-j}{m_{\sigma}}-1}{n}}{\binom{\frac{(k-j)-(m_{\sigma}-1)}{m_{\sigma}}}{n}}\frac{\partial^{k}(f\cdot V)}{\partial z^{j+nm_{\sigma}}\partial\overline{z}^{k-j-nm_{\sigma}}}(0,0). \tag{1.4}\]
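For instance, at a simple saddle (\(m_{\sigma}=2\)) with \(k=0\) and \(j=0\), the sum in (1.4) reduces to the single term \(n=0\), giving the weighted point evaluation

\[\mathfrak{d}^{0}_{\sigma,0}(f)=(f\cdot V)(0,0)=f(\sigma)V(\sigma);\]

since \(V>0\), its vanishing simply means that \(f\) vanishes at the saddle \(\sigma\).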
Note that for \(k\leq m_{\sigma}-2\) we have \(\mathfrak{d}^{k}_{\sigma,j}(f)=\binom{k}{j}\frac{\partial^{k}(f\cdot V)}{\partial z^{j}\partial\overline{z}^{k-j}}(0,0)\), so \(\mathfrak{d}^{k}_{\sigma,j}\) are essentially the distributions defined already in [4] to study the deviation spectrum of Birkhoff integrals of \(f\). Let us mention that the non-vanishing of any of these distributions is an obstruction to the existence of any (even measurable) solution of the cohomological equation. The distributions \(\mathfrak{d}^{k}_{\sigma,j}\) for \(k\geq m_{\sigma}-1\) are responsible for determining the regularity of the solution if we already know that equation (1.1) has a smooth solution. To explain this relation more precisely, we need to introduce another family of distributions \(\mathfrak{C}^{k}_{\sigma,l}:C^{k}(M)\to\mathbb{C}\) for \(0\leq l<2m_{\sigma}\),
\[\mathfrak{C}^{k}_{\sigma,l}(f):=\sum_{\begin{subarray}{c}0\leq i\leq k\\ i\neq m_{\sigma}-1\operatorname{mod}m_{\sigma}\\ i\neq k-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\end{subarray}}\theta^{l(2i- k)}_{\sigma}\binom{k}{i}\mathfrak{B}(\tfrac{(m_{\sigma}-1)-i}{m_{\sigma}},\tfrac{(m_{ \sigma}-1)-k+i}{m_{\sigma}})\frac{\partial^{k}(f\cdot V)}{\partial z^{i} \partial\overline{z}^{k-i}}(0,0), \tag{1.5}\]
where \(\theta_{\sigma}\) is the principal \(2m_{\sigma}\)-th root of unity and the (beta-like) function \(\mathfrak{B}(x,y)\) is defined for any pair \(x,y\) of real numbers such that \(x,y\notin\mathbb{Z}\) as follows
\[\mathfrak{B}(x,y)=\frac{\pi e^{\imath\frac{\pi}{2}(y-x)}}{2^{x+y-2}}\frac{\Gamma(x+y-1)}{\Gamma(x)\Gamma(y)},\]
where we adopt the convention \(\Gamma(0)=1\) and \(\Gamma(-n)=\frac{(-1)^{n}}{n!}\) for \(n\geq 1\). The functionals \(\mathfrak{C}^{k}_{\sigma,l}\) for \(0\leq l<2m_{\sigma}\) are not linearly independent, in contrast to the family of functionals \(\mathfrak{d}^{k}_{\sigma,j}\). Indeed, \(\mathfrak{C}^{k}_{\sigma,l^{\prime}}=(-1)^{k}\mathfrak{C}^{k}_{\sigma,l}\) if \(l^{\prime}=l\pm m_{\sigma}\) and
\[\sum_{0\leq l<2m_{\sigma}}\theta^{(k-2j)l}_{\sigma}\mathfrak{C}^{k}_{\sigma,l }=0\text{ if }j=m_{\sigma}-1\text{ or }j=k-(m_{\sigma}-1).\]
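For instance, with the above convention \(\Gamma(0)=1\) and using \(\Gamma(\tfrac{1}{2})=\sqrt{\pi}\), one computes

\[\mathfrak{B}(\tfrac{1}{2},\tfrac{1}{2})=\frac{\pi}{2^{-1}}\cdot\frac{\Gamma(0)}{\Gamma(\tfrac{1}{2})^{2}}=2\pi\cdot\frac{1}{\pi}=2,\]

where the phase factor is trivial since \(y-x=0\).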
The element of \(\mathbb{R}_{\eta}\) given by
\[\mathfrak{e}(\mathfrak{d}^{k}_{\sigma,j})=\mathfrak{e}(\mathfrak{C}^{k}_{ \sigma,l})=\mathfrak{e}(\sigma,k)=\left\{\begin{array}{cl}\frac{k-(m_{ \sigma}-2)}{m_{\sigma}}&\text{ if }\frac{k-(m_{\sigma}-2)}{m_{\sigma}}\notin \mathbb{Z}\\ \frac{k-2(m_{\sigma}-1)}{m_{\sigma}}+\eta&\text{ if }\frac{k-(m_{\sigma}-2)}{m_{ \sigma}}\in\mathbb{Z}.\end{array}\right.\]
is called the _exponent_ of \(\mathfrak{d}^{k}_{\sigma,j}\) or \(\mathfrak{C}^{k}_{\sigma,l}\). Then
\[\mathfrak{o}(\mathfrak{d}^{k}_{\sigma,j})=\mathfrak{o}(\mathfrak{C}^{k}_{ \sigma,l})=\mathfrak{o}(\sigma,k):=v(\mathfrak{e}(\sigma,k))=\frac{k-(m_{\sigma }-2)}{m_{\sigma}}\]
is called the _order_ of \(\mathfrak{d}^{k}_{\sigma,j}\) or \(\mathfrak{C}^{k}_{\sigma,l}\). Finally, let \(\widehat{\mathfrak{e}}(\mathfrak{d}^{k}_{\sigma,j})=\widehat{\mathfrak{e}}( \sigma,k)=k-(m_{\sigma}-1)+\eta\) and \(\widehat{\mathfrak{o}}(\mathfrak{d}^{k}_{\sigma,j})=\widehat{\mathfrak{o}}( \sigma,k)=v(\widehat{\mathfrak{e}}(\mathfrak{d}^{k}_{\sigma,j}))=k-(m_{ \sigma}-2)\).
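For example, at a simple saddle (\(m_{\sigma}=2\)) the first exponents read

\[\mathfrak{e}(\sigma,0)=-1+\eta,\quad\mathfrak{e}(\sigma,1)=\tfrac{1}{2},\quad\mathfrak{e}(\sigma,2)=0+\eta,\quad\mathfrak{e}(\sigma,3)=\tfrac{3}{2},\]

and in general \(\mathfrak{o}(\sigma,k)=\tfrac{k}{2}\) and \(\widehat{\mathfrak{o}}(\sigma,k)=k\); the logarithmic correction \(\eta\) appears exactly when \(k\) is even.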
For any saddle \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\) its (singular) neighbourhood \(U_{\sigma}\) splits into \(2m_{\sigma}\) (angular) sectors bounded by separatrices emanated from \(\sigma\). In singular coordinates \(z=(x,y)\) they are of the form
\[U_{\sigma,l}:=\{z\in U_{\sigma}:\operatorname{Arg}z\in(\tfrac{\pi l}{m_{\sigma }},\tfrac{\pi(l+1)}{m_{\sigma}})\}\text{ for }0\leq l<2m_{\sigma}.\]
Each such sector is either included in a minimal component \(M^{\prime}\) of \(\psi_{\mathbb{R}}\) or is disjoint from \(M^{\prime}\). In studying the regularity of the solutions of the cohomological equation, only the invariant distributions \(\mathfrak{C}^{k}_{\sigma,l}(f)\) with \(U_{\sigma,l}\cap M^{\prime}\neq\emptyset\) turn out to be relevant.
### Main results
The first main theorem describes the smoothness of the function \(\varphi_{f}\) depending on the values of the functionals described in Section 1.4. To precisely describe the regularity of \(\varphi_{f}\), in Section 2, for any \(n\in\mathbb{Z}_{\geq 0}\) and \(0\leq a<1\) we introduce the space \(C^{n+\mathrm{P}_{a}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) (and its geometric version \(C^{n+\mathrm{P}_{a}\mathrm{G}}\)) of functions whose \(n\)-th derivative has polynomial singularities of order at most \(-a\) at the ends of the intervals translated by the IET \(T\). We should mention that for any \(n\in\mathbb{N}\) we have \(C^{n+\mathrm{P}_{a}}\subset C^{(n-1)+(1-a)}\) if \(0<a<1\) and \(C^{n+\mathrm{P}_{0}}\subset C^{(n-1)+\eta}\).
Recall that we always assume that \(M\) is a compact connected orientable \(C^{\infty}\)-surface and \(\psi_{\mathbb{R}}\) is a locally Hamiltonian \(C^{\infty}\)-flow on \(M\) with isolated fixed points and such that all its saddles are perfect and all saddle connections are loops. Let \(M^{\prime}\subset M\) be a minimal component of the flow and let \(I\subset M^{\prime}\) be a transversal curve. The corresponding IET \(T:I\to I\) exchanges the intervals \(I_{\alpha}\), \(\alpha\in\mathcal{A}\).
For any \(r\geq-\frac{m-2}{m}\), where \(m\) is the maximal multiplicity of saddles in \(\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\), let
\[k_{r}=\left\{\begin{array}{ll}\lceil mr+(m-1)\rceil&\text{if }-\frac{m-2}{m} \leq r\leq-\frac{m-3}{m}\\ \lceil mr+(m-2)\rceil&\text{if }-\frac{m-3}{m}<r.\end{array}\right.\]
Note that
\[\max\{k\geq 0:\exists_{\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}} \mathfrak{o}(\sigma,k)<r\}+1=\lceil mr+(m-2)\rceil. \tag{1.6}\]
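For example, if all saddles in \(\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\) are simple (\(m=2\)), then \(k_{r}=\lceil 2r+1\rceil\) for \(0\leq r\leq\tfrac{1}{2}\) and \(k_{r}=\lceil 2r\rceil\) for \(r>\tfrac{1}{2}\). In particular \(k_{3/4}=2\) and, in accordance with (1.6),

\[\max\{k\geq 0:\mathfrak{o}(\sigma,k)=\tfrac{k}{2}<\tfrac{3}{4}\}+1=1+1=2.\]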
Denote by \(\mathscr{T}\mathscr{D}\) the set of triples \((\sigma,k,j)\in(\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime})\times\mathbb{Z} _{\geq 0}\times\mathbb{Z}_{\geq 0}\) such that \(0\leq j\leq k\wedge(m_{\sigma}-2)\) and \(j\neq k-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\) and by \(\mathscr{T}\mathscr{C}\) the set of triples \((\sigma,k,l)\in(\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime})\times\mathbb{Z }_{\geq 0}\times\mathbb{Z}_{\geq 0}\) such that \(0\leq l<2m_{\sigma}\) and \(U_{\sigma,l}\cap M^{\prime}\neq\emptyset\).
**Theorem 1.1**.: _Fix \(r\geq-\frac{m-2}{m}\). Suppose that \(f\in C^{k_{r}}(M)\) is such that \(\mathfrak{C}^{k}_{\sigma,l}(f)=0\) for all \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) such that \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<r\). Then \(\varphi_{f}\in C^{n+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_ {\alpha})\) with \(n=\lceil r\rceil\) and \(a=\lceil r\rceil-r\). Moreover, the operator_
\[C^{k_{r}}(M)\cap\bigcap_{\begin{subarray}{c}(\sigma,k,l)\in\mathscr{T} \mathscr{C}\\ \mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<r\end{subarray}}\ker(\mathfrak{C}^{ k}_{\sigma,l})\ni f\mapsto\varphi_{f}\in C^{n+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{ \alpha\in\mathcal{A}}I_{\alpha})\]
_is bounded._
This result provides a descending filtration of the space \(\Phi^{k}:=\{\varphi_{f}:f\in C^{k}(M)\}\), \(k\in\mathbb{N}\cup\{\infty\}\), which is the basis for proving a spectral theorem (in [5]) for the so-called Kontsevich-Zorich cocycle on \(\Phi^{k}\). Using renormalization techniques, the aforementioned spectral result allows one to understand the regularity of the solution of the cohomological equation (1.2) (see also [5]) for a.e. IET \(T\).
The second main theorem solves the problem of the regularity of the solutions of (1.1), provided we know the degree of smoothness of the solution of (1.2). This result is another ingredient in the proof of the final theorem on the regularity of the solution of the cohomological equation (1.1), presented in [5].
**Theorem 1.2**.: _Fix \(r\in\mathbb{R}_{\eta}\) so that \(v(r)>0\). Assume that \(f\in C^{k_{v(r)}}(M)\) is such that_
* \(\mathfrak{d}^{k}_{\sigma,j}(f)=0\) _for all_ \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) _with_ \(\widehat{\mathfrak{o}}(\mathfrak{d}^{k}_{\sigma,j})<v(r)\)_;_
* \(\mathfrak{C}^{k}_{\sigma,l}(f)=0\) _for all_ \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) _with_ \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<v(r)\)_._
_Suppose that \(g\in C^{r}(I)\) is a solution of the cohomological equation \(\varphi_{f}=g\circ T-g\). Then there exists \(u_{g,f}\in C^{r}(M^{\prime}_{e})\) satisfying \(Xu_{g,f}=f\) on \(M^{\prime}_{e}\). Moreover, there exists a constant \(C_{r}>0\) such that_
\[\|u_{g,f}\|_{C^{r}(M^{\prime}_{e})}\leq C_{r}(\|g\|_{C^{r}(I)}+\|f\|_{C^{k_{v(r) }}(M)}).\]
**Theorem 1.3** (optimal regularity).: _Let \(r\in\mathbb{R}_{\eta}\) with \(v(r)>0\) and let \(f\in C^{k_{v(r)}}(M)\). If there exists \(u\in C^{r}(M_{e}^{\prime})\) such that \(Xu=f\) on \(M_{e}^{\prime}\) then_
* \(\mathfrak{d}_{\sigma,j}^{k}(f)=0\) _for all_ \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) _with_ \(\widehat{\mathfrak{o}}(\mathfrak{d}_{\sigma,j}^{k})<v(r)\)_;_
* \(\mathfrak{C}_{\sigma,l}^{k}(f)=0\) _for all_ \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) _with_ \(\mathfrak{o}(\mathfrak{C}_{\sigma,l}^{k})<v(r)\)_._
In summary, all three main results provide the analytical background necessary to fully solve the regularity problem of solving the cohomological equation for locally Hamiltonian flows. The dynamical component, which relies mainly on renormalization techniques, is left to [5].
If the locally Hamiltonian flow has no saddle loops then for any \(k\geq 0\) the functionals \(\mathfrak{C}_{\sigma,l}^{k}\) and \(\mathfrak{d}_{\sigma,j}^{k}\) generate the same space of invariant distributions. In general, the former space is a subspace of the latter. Since \(\mathfrak{o}(\sigma,k)<\widehat{\mathfrak{o}}(\sigma,k)\), the conditions involving the functionals \(\mathfrak{d}_{\sigma,j}^{k}\) are then implied by those involving \(\mathfrak{C}_{\sigma,l}^{k}\) and can be removed. Then our main result takes the following form.
**Corollary 1.4**.: _Fix \(r\in\mathbb{R}_{\eta}\) so that \(v(r)>0\). Assume that \(f\in C^{k_{v(r)}}(M)\) and \(g\in C^{r}(I)\) is a solution of the cohomological equation \(\varphi_{f}=g\circ T-g\). Then the existence of \(u_{g,f}\in C^{r}(M)\) satisfying \(Xu_{g,f}=f\) is equivalent to \(\mathfrak{C}_{\sigma,l}^{k}(f)=0\) for all \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\), \(0\leq l<m_{\sigma}\) and \(k<m_{\sigma}v(r)+(m_{\sigma}-2)\)._
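For instance, if all saddles are simple (\(m_{\sigma}=2\)) and \(0<v(r)\leq\tfrac{1}{2}\), the conditions in Corollary 1.4 reduce to \(k=0\), and by (1.5)

\[\mathfrak{C}^{0}_{\sigma,l}(f)=\mathfrak{B}(\tfrac{1}{2},\tfrac{1}{2})(f\cdot V)(0,0),\]

so they amount to the vanishing of \(f\) at every saddle.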
Let us mention that local \(C^{\infty}\)-solutions of cohomological equations around saddles, for flows without saddle loops, were studied by Roussarie in [11]. We should emphasize that our results are new (even for flows without saddle loops) because they involve solutions with finite differentiability, which causes significant technical complications. In this case, Forni has suggested to us an alternative strategy potentially simplifying the complex techniques used in this article.
However, the main advantage and novelty of the local tools introduced in this article is the ability to study solutions in closed angular sectors (so-called semi-solutions), which makes it possible to apply them to flows that have saddle loops. These types of problems have not been systematically studied before. Under the assumption that some saddles have (many) loops, for every \(k\) large enough the functionals \(\mathfrak{C}_{\sigma,l}^{k}\) generate a smaller space than the one generated by \(\mathfrak{d}_{\sigma,j}^{k}\). Then some functionals \(\mathfrak{d}_{\sigma,j}^{k}\) begin to have an independent effect on the regularity of solutions, but their influence is weaker than that of the functionals \(\mathfrak{C}_{\sigma,l}^{k}\), even though both types of functionals (for fixed \(k\)) have the same order of regularity. This seems to be a completely new phenomenon, not previously observed in the study of the regularity of solutions to cohomological equations in parabolic dynamics.
### Structure of the paper
The paper is organized as follows. In Section 2, we define a one-parameter family of Banach spaces of functions whose (higher-order) derivatives have polynomial singularities at the ends of intervals exchanged by an IET. We establish their basic properties needed in the subsequent sections of the article. In Section 3, for any continuous function \(f\) defined around a saddle, we define three types of functions: \(\varphi_{f,l}\), \(\mathscr{F}_{f,l}\) and \(F_{f}\). The map \(\varphi_{f,l}\) is a local version of the function \(\varphi_{f}\) defined in Section 1.2 and is necessary to study the local behavior of \(\varphi_{f}\) near the ends of intervals exchanged by an IET. The map \(F_{f}\) is (in a sense) a local solution to the cohomological equation \(Xu=f\) in open angular sectors \(U_{\sigma,l}\) around the saddle. The map \(\mathscr{F}_{f,l}\) is a covering of \(F_{f}\) and is a technical tool for showing basic properties of the other two. There we also prove basic properties of \(\mathscr{F}_{f,l}\), which are used to understand the behavior of \(F_{f}\) on open angular sectors \(U_{\sigma,l}\). In Section 4, using
the tools introduced in Section 3, we determine precisely the form of \(\varphi_{f,l}\) and \(\mathscr{F}_{f,l}\) on some angular sectors. Both of these results are then used to prove that \(F_{f}\) has a smooth extension to closed angular sectors \(\overline{U_{\sigma,l}}\) and to establish necessary and sufficient conditions (expressed in the language of local invariant distributions) for such an extension. Finally, in Section 5, we use the contents of all previous sections to prove Theorems 1.1, 1.2 and 1.3.
## 2. Functions whose (higher order) derivatives have polynomial singularities
In this section we introduce a one-parameter family of Banach spaces of functions whose (higher-order) derivatives have polynomial singularities at the ends of intervals exchanged by an IET. The new spaces generalize the Banach spaces \(\mathrm{P_{a}}\) studied in [4].
### Space \(C^{n+\mathrm{P_{a}}}\)
Fix \(0\leq a<1\) and an IET \(T:I\to I\) satisfying the so-called Keane condition. Denote by \(I_{\alpha}=[l_{\alpha},r_{\alpha})\), \(\alpha\in\mathcal{A}\) all subintervals exchanged by \(T\). The IET is determined by a pair \((\pi,\lambda)\), where \(\lambda=(\lambda_{\alpha})_{\alpha\in\mathcal{A}}\in\mathbb{R}_{>0}^{\mathcal{A}}\) is the vector of lengths of exchanged intervals, i.e. \(\lambda_{\alpha}=r_{\alpha}-l_{\alpha}\), and \(\pi=(\pi_{0},\pi_{1})\) is the pair of bijections \(\pi_{\varepsilon}:\mathcal{A}\to\{1,\ldots,d\}\) for \(\varepsilon=0,1\) (\(d=|\mathcal{A}|\) is the number of exchanged intervals) such that \(\pi_{0}(\alpha)\) is the position of \(I_{\alpha}\) before the translation and \(\pi_{1}(\alpha)\) its position after the translation.
For every \(\alpha\in\mathcal{A}\), denote by \(m_{\alpha}\) the middle point of \(I_{\alpha}\), i.e. \(m_{\alpha}=(l_{\alpha}+r_{\alpha})/2\). For every \(\varphi\in C^{1}(\sqcup_{\alpha\in\mathcal{A}}\operatorname{Int}I_{\alpha}, \mathbb{C})\) let us consider
\[p_{a}(\varphi):=\max_{\alpha\in\mathcal{A}}\Big{\{}\sup_{x\in(l_{\alpha},m_{ \alpha}]}|D\varphi(x)(x-l_{\alpha})^{1+a}|,\sup_{x\in[m_{\alpha},r_{\alpha})}|D \varphi(x)(r_{\alpha}-x)^{1+a}|\Big{\}}.\]
_Definition 1_.: For every integer \(n\geq 0\), we denote by \(C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the space of functions \(\varphi\in C^{n+1}(\sqcup_{\alpha\in\mathcal{A}}\operatorname{Int}I_{\alpha}, \mathbb{C})\) such that \(p_{a}(D^{n}\varphi)<+\infty\) and for every \(\alpha\in\mathcal{A}\) the limits
\[C^{a,+}_{\alpha,n}(\varphi) =(-1)^{n}C^{+}_{\alpha}(D^{n}\varphi):=(-1)^{n+1}\lim_{x\searrow l_{\alpha}}D^{n+1}\varphi(x)(x-l_{\alpha})^{1+a},\] \[C^{a,-}_{\alpha,n}(\varphi) =C^{-}_{\alpha}(D^{n}\varphi):=\lim_{x\nearrow r_{\alpha}}D^{n+1}\varphi(x)(r_{\alpha}-x)^{1+a}\]
exist. We denote by \(C^{n+\mathrm{P_{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\subset C^{n+ \mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the subspace of functions \(\varphi\in C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) of _geometric type_, i.e. such that
\[C^{a,-}_{\pi_{0}^{-1}(d),n}(\varphi)\cdot C^{a,-}_{\pi_{1}^{-1}(d),n}(\varphi )=0\quad\text{and}\quad C^{a,+}_{\pi_{0}^{-1}(1),n}(\varphi)\cdot C^{a,+}_{ \pi_{1}^{-1}(1),n}(\varphi)=0.\]
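_Example_.: For \(0<a<1\), the function \(\varphi(x)=(x-l_{\alpha})^{-a}\) on \(\operatorname{Int}I_{\alpha}\) (extended by \(0\) to the remaining intervals) belongs to \(C^{0+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\). Indeed, \(D\varphi(x)=-a(x-l_{\alpha})^{-1-a}\), so \(|D\varphi(x)(x-l_{\alpha})^{1+a}|=a\), whence \(p_{a}(\varphi)<+\infty\), and

\[C^{a,+}_{\alpha,0}(\varphi)=-\lim_{x\searrow l_{\alpha}}D\varphi(x)(x-l_{\alpha})^{1+a}=a,\qquad C^{a,-}_{\alpha,0}(\varphi)=\lim_{x\nearrow r_{\alpha}}D\varphi(x)(r_{\alpha}-x)^{1+a}=0.\]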
For every \(0\leq a<1\) and every integer \(n\geq 0\), by Lemma 4.3 in [4], if \(\varphi\in C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) then \(D^{n}\varphi\in L^{1}(I)\). Let us consider the norm on \(C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) given by
\[\|\varphi\|_{C^{n+\mathrm{P_{a}}}}:=\sum_{k=0}^{n}\|D^{k}\varphi\|_{L^{1}(I)}+ p_{a}(D^{n}\varphi). \tag{2.1}\]
Recall that, by Lemma 4.2 in [4], for \(n=0\) the space \(C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) equipped with the norm \(\|\cdot\|_{C^{n+\mathrm{P_{a}}}}\) is Banach. The same argument yields completeness for all \(n\geq 1\). Moreover, \(C^{n+\mathrm{P_{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is a closed subspace of \(C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) for any \(n\geq 0\).
Let \(\eta:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be given by \(\eta(x)=-x\log x\) for \(x\in[0,e^{-1}]\) and \(\eta(x)=e^{-1}\) for \(x\geq e^{-1}\). Denote by \(C^{\eta}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the space of functions \(f:I\to\mathbb{C}\) such that
\[|f|_{C^{\eta}}:=\max_{\alpha\in\mathcal{A}}\sup\left\{\frac{|f(x)-f(y)|}{\eta( |x-y|)}:x,y\in\operatorname{Int}I_{\alpha},x\neq y\right\}<+\infty.\]
Then \(C^{\eta}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) equipped with the norm \(\|f\|_{C^{\eta}}=\|f\|_{L^{1}}+|f|_{C^{\eta}}\) is a Banach space. For every \(0<a<1\) denote by \(C^{a}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the space of piecewise \(a\)-Hölder continuous functions, i.e. such that
\[|f|_{C^{a}}:=\max_{\alpha\in\mathcal{A}}\sup\left\{\frac{|f(x)-f(y)|}{|x-y|^{a} }:x,y\in\operatorname{Int}I_{\alpha},x\neq y\right\}<+\infty,\]
equipped with the Banach norm \(\|f\|_{C^{a}}=\|f\|_{L^{1}}+|f|_{C^{a}}\).
For every \(n\geq 0\) we also deal with the Banach spaces \(C^{n+\eta}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), \(C^{n+a}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) equipped with the norms
\[\|\varphi\|_{C^{n+\eta}}=\sum_{k=0}^{n}\|D^{k}\varphi\|_{L^{1}}+|D^{n}\varphi |_{C^{\eta}},\quad\|\varphi\|_{C^{n+a}}=\sum_{k=0}^{n}\|D^{k}\varphi\|_{L^{1} }+|D^{n}\varphi|_{C^{a}},\text{ resp.}\]
For every non-natural real number \(r>0\) we will write \(C^{r}\) for \(C^{\lfloor r\rfloor+\{r\}}\).
_Remark 2.1_.: In view of Lemma 4.5 in [4], for every \(\varphi\in C^{0+\mathrm{P}_{a}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) and \(x\in\operatorname{Int}I_{\alpha}\),
\[|\varphi(x)| \leq\frac{\|\varphi\|_{L^{1}}}{|I|}+p_{a}(\varphi)\Big{(}\frac{1 }{a\min\{x-l_{\alpha},r_{\alpha}-x\}^{a}}+\frac{2^{a+2}}{a(1-a)|I_{\alpha}|^{a }}\Big{)}\text{ if }0<a<1\] \[|\varphi(x)| \leq\frac{\|\varphi\|_{L^{1}}}{|I|}+p_{a}(\varphi)\Big{(}\log \frac{|I_{\alpha}|}{2\min\{x-l_{\alpha},r_{\alpha}-x\}}+2\Big{)}\text{ if }a=0.\]
It follows that if \(\varphi\in C^{n+\mathrm{P}_{a}}\) for some \(n\geq 1\), then
\[\varphi\in C^{(n-1)+(1-a)}\text{ with }\|\varphi\|_{C^{(n-1)+(1-a)}}\leq \frac{2^{2+a}\max_{\alpha\in\mathcal{A}}|I_{\alpha}|^{1-2a}}{a(1-a)}\|\varphi \|_{C^{n+\mathrm{P}_{a}}}\text{ if }0<a<1\]
\[\varphi\in C^{(n-1)+\eta}\text{ with }\|\varphi\|_{C^{(n-1)+\eta}}\leq(|I|^{-1} +3)\|\varphi\|_{C^{n+\mathrm{P}_{a}}}\text{ if }a=0.\]
_Remark 2.2_.: For any \(0\leq a<1\) and any interval \(J\subset I_{\alpha}\) let
\[p_{a}(\varphi,J):=\sup\{(\min\{x-l_{\alpha},r_{\alpha}-x\})^{1+a}|\varphi^{ \prime}(x)|:x\in J\}.\]
Moreover, for any \(n\geq 0\) let
\[\|\varphi\|_{C^{n+\mathrm{P}_{a}}(J)}:=\sum_{k=0}^{n}\|D^{k}\varphi\|_{L^{1}(J )}+p_{a}(D^{n}\varphi,J).\]
In view of Lemma 4.3 in [4], if \(J=(l_{\alpha},l_{\alpha}+\varepsilon]\) or \(J=[r_{\alpha}-\varepsilon,r_{\alpha})\) with \(\varepsilon\leq|I_{\alpha}|/2\), then for every \(0\leq b<1\) we have
\[p_{b}(\varphi,J)\leq\|\varphi^{\prime}\|_{L^{1}(J)}+\frac{p_{a}(\varphi^{ \prime},J)}{1-a}. \tag{2.2}\]
Let \(n,n^{\prime}\geq 0\) and \(0\leq a,a^{\prime}<1\) such that \(n-a\leq n^{\prime}-a^{\prime}\). In view of (2.2), \(C^{n^{\prime}+\mathrm{P}_{a^{\prime}}}\subset C^{n+\mathrm{P}_{a}}\) and \(\|\varphi\|_{C^{n+\mathrm{P}_{a}}(J)}\leq\frac{1}{1-a^{\prime}}\|\varphi\|_{C^ {n^{\prime}+\mathrm{P}_{a^{\prime}}}(J)}\).
If \(J\subset[l_{\alpha}+\varepsilon,r_{\alpha}-\varepsilon]\) for some \(\varepsilon>0\) then for any \(n\geq 0\) and \(0\leq a<1\), \(\|\varphi\|_{C^{n+\mathrm{P}_{a}}(J)}\leq\|\varphi\|_{C^{n+1}(J)}\).
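As a quick illustration, the comparison (2.2) between the weighted seminorms can be tested numerically. The sketch below (Python; the interval \((l_{\alpha},r_{\alpha})=(0,1)\), the test function \(\varphi(x)=x^{1/2}\), the weights \(a,b\) and the length \(\varepsilon\) are arbitrary choices made only for this check) approximates \(p_{b}(\varphi,J)\), \(\|\varphi^{\prime}\|_{L^{1}(J)}\) and \(p_{a}(\varphi^{\prime},J)\) on \(J=(0,\varepsilon]\) by sampling on a fine grid:

```python
# Model data (illustrative assumptions, not from the paper):
# I_alpha = (l, r) = (0, 1), J = (0, eps], phi(x) = x^c with c = 1/2.
l, r, eps, c = 0.0, 1.0, 0.3, 0.5
a, b = 0.6, 0.2  # weights with 0 <= a, b < 1

phi1 = lambda x: c * x ** (c - 1)            # phi'
phi2 = lambda x: c * (c - 1) * x ** (c - 2)  # phi''
w = lambda x, t: min(x - l, r - x) ** (1 + t)

N = 10 ** 5
xs = [l + eps * (i + 0.5) / N for i in range(N)]
p_b_phi = max(w(x, b) * abs(phi1(x)) for x in xs)    # approximates p_b(phi, J)
p_a_phi1 = max(w(x, a) * abs(phi2(x)) for x in xs)   # approximates p_a(phi', J)
l1_phi1 = sum(abs(phi1(x)) for x in xs) * (eps / N)  # approximates ||phi'||_{L^1(J)}

# inequality (2.2) with (phi, phi') in place of (phi', phi'')
assert p_b_phi <= l1_phi1 + p_a_phi1 / (1 - a)
```

Here the suprema and the \(L^{1}\)-norm are grid approximations; for this concrete \(\varphi\) all three quantities also have closed forms that agree with the sampled values.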
## 3. Local analysis around saddles
In this section we present a local representation of the flow near a singularity. This analysis is the main ingredient for proving relations between the regularity of the function \(f:M\to\mathbb{C}\) and higher derivatives of the associated cocycle \(\varphi_{f}:I\to\mathbb{C}\). Unlike the previous approach developed for polynomial singularities in [4, §8], our
new method generalizes the way of computing the \(C^{n+\mathrm{P}_{a}}\)-norm of \(\varphi_{f}\) in an angular sector.
Let \(m\geq 2\) be the multiplicity of a saddle point. Let \(G_{0}:\mathbb{C}\to\mathbb{C}\) be the principal branch of the \(m\)-th root \(G_{0}(re^{it})=r^{1/m}e^{it/m}\) if \(t\in[0,2\pi)\) and let \(\theta\) and \(\theta_{0}\) be the principal \(m\)-th and \(2m\)-th root of unity respectively. Then \(G=G_{l}:\mathbb{C}\to\mathbb{C}\) given by \(G_{l}=\theta^{l}G_{0}\) for \(0\leq l<m\) is the \(l\)-th branch of the \(m\)-th root.
Let \(f:\mathcal{D}\to\mathbb{C}\) be a bounded Borel map where \(\mathcal{D}=\mathcal{D}_{m}\) is the pre-image of the square \([-1,1]\times[-1,1]\) by the map \(\mathbb{C}\ni\omega\mapsto\omega^{m}\in\mathbb{C}\). We will usually treat \(f\) as a function depending on a pair of complex variables \((\omega,\bar{\omega})\). The purpose of this and the next section is to understand the properties of two types of functions \(\varphi_{f,l}:[-1,0)\cup(0,1]\to\mathbb{C}\) and \(\mathscr{F}_{f,l}:[-1,1]^{2}\setminus([0,1]\times\{0\})\to\mathbb{C}\) for \(0\leq l<m\) associated with \(f\), which are crucial in proving the main results of this article. They are given by
\[\varphi_{f,l}(s)=\int_{-1}^{1}\frac{f(G_{l}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{ m}}}\,du,\quad\mathscr{F}_{f,l}(u,s)=\int_{-1}^{u}\frac{f(G_{l}(v,s))}{(v^{2}+s^{ 2})^{\frac{m-1}{m}}}\,dv. \tag{3.1}\]
Then \(\varphi_{f,l}(s)=\mathscr{F}_{f,l}(1,s)\) for \(s\neq 0\). We will usually treat \(\mathscr{F}_{f,l}\) as a function depending on a pair of complex variables \((z,\bar{z})\), where \(z=u+\iota s\).
For any \(0\leq\alpha<\beta\leq 1\) let \(\mathcal{D}(\alpha,\beta):=\{\omega\in\mathcal{D}\setminus\{0\}:\mathrm{Arg}( \omega)\in(2\pi\alpha,2\pi\beta)\}\). We denote its closure by \(\overline{\mathcal{D}}(\alpha,\beta)\). For any \(A\subset\mathbb{C}\) denote by \(A^{1/m}\) the pre-image of \(A\) for the map \(\omega\mapsto\omega^{m}\). We will also need a third type of associated function \(F_{f}:\mathcal{D}\setminus([0,1]\times\{0\})^{1/m}\to\mathbb{C}\). As \(\mathcal{D}\setminus([0,1]\times\{0\})^{1/m}=\bigcup_{0\leq l<m}\mathcal{D}( \frac{l}{m},\frac{l+1}{m})\), the map \(F_{f}\) is defined by
\[F_{f}(\omega,\overline{\omega}):=\mathscr{F}_{f,l}(\omega^{m},\overline{ \omega}^{m})\text{ on }\mathcal{D}(\frac{l}{m},\frac{l+1}{m}).\]
Note that \(F_{f\cdot V}\) is (in a sense) a local solution to the cohomological equation \(Xu=f\) in any angular sector \(\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\). Indeed, since \(d\omega/dt=m\bar{\omega}^{m-1}/V\), we have
\[Xu=m\Big{(}\overline{\omega}^{m-1}\frac{\partial u}{\partial\omega}+\omega^{m -1}\frac{\partial u}{\partial\overline{\omega}}\Big{)}/V.\]
By definition,
\[\frac{\partial\mathscr{F}_{f\cdot V,l}(z,\bar{z})}{\partial z}+\frac{\partial \mathscr{F}_{f\cdot V,l}(z,\bar{z})}{\partial\bar{z}}=\frac{\partial \mathscr{F}_{f\cdot V,l}(u,s)}{\partial u}=\frac{(f\cdot V)(G_{l}(u,s))}{(u^{ 2}+s^{2})^{\frac{m-1}{m}}}=\frac{(f\cdot V)(G_{l}(z))}{|z|^{2\frac{m-1}{m}}}.\]
Then for any \(\omega\in\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\),
\[XF_{f\cdot V}(\omega,\overline{\omega}) =\frac{m(\overline{\omega}^{m-1}\frac{\partial\mathscr{F}_{f\cdot V,l}(\omega^{m},\overline{\omega}^{m})}{\partial\omega}+\omega^{m-1}\frac{ \partial\mathscr{F}_{f\cdot V,l}(\omega^{m},\overline{\omega}^{m})}{ \partial\overline{\omega}})}{V(\omega,\overline{\omega})}\] \[=\frac{m^{2}|\omega|^{2(m-1)}(\frac{\partial\mathscr{F}_{f\cdot V,l}}{\partial z}(\omega^{m},\overline{\omega}^{m})+\frac{\partial \mathscr{F}_{f\cdot V,l}}{\partial\overline{z}}(\omega^{m},\overline{\omega}^{m}))}{V(\omega,\overline{\omega})}=m^{2}f(\omega,\overline{\omega}).\]
The map \(F_{f}\) is well defined and smooth on every open angular sector \(\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\). One of the most important technical challenges of this article is to answer the question of when and how the map \(F_{f}\) extends smoothly into the closure \(\overline{\mathcal{D}}(\frac{l}{m},\frac{l+1}{m})\).
Some key properties of the three functions are stated in Theorems 3.11, 4.7 and 4.10. Since their proofs are very technical, long and intertwined, we precede them with a long list of auxiliary results, which should be regarded as intermediate steps in the proof of the main theorems.
### Preliminary calculations
For every \(r>0\) let \(\mathscr{S}(r)\) be the circular sector
\[\mathscr{S}(r)=\{(u,s)\neq(0,0):u\leq r|s|\}.\]
For any \(0<s\leq 1\) and any \(a\in\mathbb{R}\) let
\[\left\langle s\right\rangle^{a}=\left\{\begin{array}{ccc}\frac{s^{a}-1}{-a}+1 &\text{if}&a<0\\ 1-\log s&\text{if}&a=0\\ 1&\text{if}&a>0.\end{array}\right.\]
_Remark 3.1_.: Note that
* for any \(0<s\leq 1\) and any pair of real numbers \(a\leq b\) we have \(\langle s\rangle^{a}\geq\langle s\rangle^{b}\);
* for any \(a\geq 1\) we have \(s^{-a}/a\leq\langle s\rangle^{-a}\leq s^{-a}\);
* for any \(0<a\leq 1\) we have \(s^{-a}\leq\langle s\rangle^{-a}\leq s^{-a}/a\);
* for any \(m\geq 1\) and \(a\in\mathbb{R}\) we have \(\langle s^{m}\rangle^{a}\leq m\langle s\rangle^{am}\).
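The four properties listed in Remark 3.1 are elementary but easy to misread; the following sketch (Python, with arbitrary sample values of \(s\), \(a\) and \(m\) chosen only for this check) verifies all of them directly from the definition of \(\langle s\rangle^{a}\):

```python
import math

def bra(s, a):
    # <s>^a as defined above, for 0 < s <= 1
    if a < 0:
        return (s ** a - 1) / (-a) + 1
    if a == 0:
        return 1 - math.log(s)
    return 1.0

for s in (0.05, 0.3, 0.9):
    # (1) <s>^a is non-increasing in the exponent a
    assert bra(s, -1.5) >= bra(s, -0.2) >= bra(s, 0) >= bra(s, 1.0)
    for a in (1.5, 2.5):      # (2) a >= 1: s^{-a}/a <= <s>^{-a} <= s^{-a}
        assert s ** (-a) / a <= bra(s, -a) <= s ** (-a)
    for a in (0.3, 0.8):      # (3) 0 < a <= 1: s^{-a} <= <s>^{-a} <= s^{-a}/a
        assert s ** (-a) <= bra(s, -a) <= s ** (-a) / a
    for m in (2, 3):          # (4) <s^m>^a <= m <s>^{am}
        for a in (-1.0, 0.0, 0.7):
            assert bra(s ** m, a) <= m * bra(s, a * m) + 1e-12
```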
**Lemma 3.2**.: _For every \(a\in\mathbb{R}\) and \(r>0\) there exist \(C_{a},C_{a,r}>0\) such that_
\[\int_{-1}^{u}\frac{1}{(v^{2}+s^{2})^{a}}\,dv\leq C_{a}\langle|s|\rangle^{1-2a} \text{ for all }s\in[-1,1]\setminus\{0\} \tag{3.2}\]
_and_
\[\int_{-1}^{u}\frac{1}{(v^{2}+s^{2})^{a}}\,dv\leq C_{a,r}\langle\sqrt{\tfrac{u^ {2}+s^{2}}{2}}\rangle^{1-2a}\text{ for all }(u,s)\in[-1,1]^{2}\cap\mathscr{S}(r). \tag{3.3}\]
_If \(f:[-1,1]^{2}\to\mathbb{C}\) is continuous at \((0,0)\), \(f(0,0)=0\) and \(a\geq 1/2\) then_
\[\int_{-1}^{u}\frac{f(v,s)}{(v^{2}+s^{2})^{a}}\,dv=o(\langle|s|\rangle^{1-2a}). \tag{3.4}\]
Proof.: **Case 1.** Suppose that \(a>1/2\). If \(s\neq 0\) then
\[\int_{-1}^{u}\frac{1}{(v^{2}+s^{2})^{a}}\,dv=|s|^{1-2a}\int_{-1/|s|}^{u/|s|} \frac{1}{(t^{2}+1)^{a}}\,dt\leq|s|^{1-2a}\int_{-\infty}^{+\infty}\frac{1}{(t^{ 2}+1)^{a}}\,dt,\]
which gives (3.2).
If \(s=0\) and \(u<0\) then
\[\int_{-1}^{u}\frac{1}{(v^{2}+s^{2})^{a}}\,dv=\int_{|u|}^{1}v^{-2a}\,dv=\frac{1 }{2a-1}(|u|^{1-2a}-1)\leq\frac{1}{2a-1}|u|^{1-2a}. \tag{3.5}\]
Let us consider the function \(\nu:(-\infty,+\infty)\to\mathbb{R}_{+}\) given by \(\nu(x):=\int_{-\infty}^{x}\frac{1}{(t^{2}+1)^{a}}\,dt\). If \(s\neq 0\) then \(\int_{-1}^{u}(v^{2}+s^{2})^{-a}\,dv\leq|s|^{1-2a}\nu(u/|s|)\). As
\[\lim_{x\to-\infty}\frac{\nu^{\prime}(x)}{\frac{d}{dx}(x^{2}+1)^{1/2-a}}=\lim_ {x\to-\infty}\frac{(x^{2}+1)^{-a}}{(1-2a)x(x^{2}+1)^{-a-1/2}}=\frac{1}{2a-1},\]
we have \(\nu(x)/(x^{2}+1)^{1/2-a}\to 1/(2a-1)\) as \(x\to-\infty\). Therefore there exists \(C_{a,r}>0\) such that \(\nu(x)\leq C_{a,r}(\frac{x^{2}+1}{2})^{1/2-a}\) for \(x\leq r\). It follows that for every \((u,s)\in\mathscr{S}(r)\) with \(s\neq 0\),
\[\int_{-1}^{u}\frac{dv}{(v^{2}+s^{2})^{a}}\leq|s|^{1-2a}\nu(u/|s|)\leq C_{a,r}| s|^{1-2a}\big{(}\tfrac{(u/s)^{2}+1}{2}\big{)}^{1/2-a}=C_{a,r}\big{(}\tfrac{u^{2} +s^{2}}{2}\big{)}^{1/2-a},\]
which (together with (3.5)) gives (3.3).
**Case 2.** Suppose that \(a=1/2\). If \(s\neq 0\) then
\[\int_{-1}^{u}(v^{2}+s^{2})^{-1/2}\,dv=\log\frac{u+\sqrt{u^{2}+s^{2}}}{-1+\sqrt {1+s^{2}}}\leq-2\log\frac{|s|}{3},\]
which gives (3.2). If \(s=0\) and \(u<0\) then
\[\int_{-1}^{u}(v^{2}+s^{2})^{-1/2}\,dv=-\log|u|. \tag{3.6}\]
Moreover, for any \((u,s)\in\mathscr{S}(r)\) with \(s\neq 0\),
\[\int_{-1}^{u}(v^{2}+s^{2})^{-1/2}\,dv=\log\frac{1+\sqrt{1+s^{2}}}{-u+\sqrt{u^{ 2}+s^{2}}}\leq\log\frac{6(r^{2}+1)}{\sqrt{u^{2}+s^{2}}},\]
which (together with (3.6)) gives (3.3).
**Case 3.** Suppose that \(a<1/2\). If \(0<a<1/2\) then
\[\int_{-1}^{u}(v^{2}+s^{2})^{-a}\,dv\leq 2\int_{0}^{1}v^{-2a}\,dv=\frac{2}{1-2a}.\]
If \(a\leq 0\) then
\[\int_{-1}^{u}(v^{2}+s^{2})^{-a}\,dv\leq 2^{1-a},\]
which gives (3.2) and (3.3).
**Last claim.** Suppose that \(f:[-1,1]^{2}\to\mathbb{C}\) is continuous at \((0,0)\), \(f(0,0)=0\) and \(a\geq 1/2\). For any \(\varepsilon>0\) choose \(\delta>0\) such that \(|f(v,s)|\leq\varepsilon\) if \(|v|,|s|<\delta\). It follows that if \(|s|<\delta\) then
\[\Big{|}\int_{-1}^{u}\frac{f(v,s)}{(v^{2}+s^{2})^{a}}\,dv\Big{|} \leq\int_{-\delta}^{\delta}\frac{|f(v,s)|}{(v^{2}+s^{2})^{a}}\,dv +2\int_{\delta}^{1}\frac{\|f\|_{\sup}}{(v^{2}+s^{2})^{a}}\,dv\] \[\leq\varepsilon\int_{-1}^{1}\frac{1}{(v^{2}+s^{2})^{a}}\,dv+2 \int_{\delta}^{1}\frac{\|f\|_{\sup}}{v^{2a}}\,dv\] \[\leq\varepsilon C_{a}\langle|s|\rangle^{1-2a}+2\|f\|_{\sup} \langle|\delta|\rangle^{1-2a}.\]
This gives (3.4).
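For the two critical cases \(a=1\) and \(a=1/2\) the integral in Lemma 3.2 has a closed form, so the bound (3.2) can be sanity-checked numerically. In the sketch below (Python), the constants \(C_{1}=\pi\) and \(C_{1/2}=2\log 3\) and the sample values of \(s\) are our illustrative choices:

```python
import math

def bra(s, a):
    # <s>^a for 0 < s <= 1, as defined before Remark 3.1
    if a < 0:
        return (s ** a - 1) / (-a) + 1
    if a == 0:
        return 1 - math.log(s)
    return 1.0

for s in (0.02, 0.1, 0.5):
    # a = 1: int_{-1}^{1} dv/(v^2+s^2) = (2/s) arctan(1/s); since
    # int_R dt/(t^2+1) = pi and <s>^{-1} = 1/s, the constant C_1 = pi works.
    assert (2 / s) * math.atan(1 / s) <= math.pi * bra(s, -1)
    # a = 1/2: int_{-1}^{1} dv/sqrt(v^2+s^2) = 2 asinh(1/s), which agrees
    # with the logarithmic closed form used in Case 2 of the proof...
    val = 2 * math.asinh(1 / s)
    root = math.sqrt(1 + s * s)
    assert abs(val - math.log((1 + root) / (-1 + root))) < 1e-9
    # ...and is dominated by -2 log(s/3) <= C <s>^0 with C = 2 log 3
    assert val <= -2 * math.log(s / 3) <= 2 * math.log(3) * bra(s, 0)
```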
_Remark 3.3_.: For \(z=u+\iota s\), the following identities hold:
\[\frac{\partial}{\partial u}=\left(\frac{\partial}{\partial z}+\frac{\partial}{\partial\overline{z}}\right),\quad\frac{\partial}{\partial s}=\iota\left(\frac{\partial}{\partial z}-\frac{\partial}{\partial\overline{z}}\right), \tag{3.7}\]
\[\frac{\partial}{\partial z}=\frac{1}{2}\left(\frac{\partial}{\partial u}-\iota\frac{\partial}{\partial s}\right),\quad\frac{\partial}{\partial\overline{z}}=\frac{1}{2}\left(\frac{\partial}{\partial u}+\iota\frac{\partial}{\partial s}\right). \tag{3.8}\]
For any \(n_{1},n_{2},a_{1},a_{2}\in\mathbb{Z}_{\geq 0}\) and any \(f\in C^{n}(\mathcal{D})\) (\(n=n_{1}+n_{2}\)), we will deal with some auxiliary functions \(F_{n_{1},n_{2},a_{1},a_{2}},G_{n_{1},n_{2},a_{1},a_{2}}:[-1,1]^{2}\setminus([0,1]\times\{0\})\to\mathbb{C}\) given by
\[F_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})=F_{n_{1},n_{2},a_{1},a_{2}}(u,s)=\frac{\partial^{n}f}{\partial\omega^{n_{1}}\partial\overline{\omega}^{n_{2}}}(G(u,s))\cdot G(u,s)^{-a_{1}}\overline{G(u,s)}^{-a_{2}},\] \[G_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})=G_{n_{1},n_{2},a_{1},a_{2}}(u,s)=\int_{-1}^{u}F_{n_{1},n_{2},a_{1},a_{2}}(v,s)\,dv.\]
The functions \(F_{n_{1},n_{2},a_{1},a_{2}}\) and \(G_{n_{1},n_{2},a_{1},a_{2}}\) will be called \(F\)_-type_ and \(G\)_-type_ functions.
**Lemma 3.4**.: _For any \(f\in C^{n+1}(\mathcal{D})\) we have_
\[\frac{\partial F_{n_{1},n_{2},a_{1},a_{2}}}{\partial z}=\frac{1}{m}F_{n_{1}+1,n_{2},a_{1}+m-1,a_{2}}-\frac{a_{1}}{m}F_{n_{1},n_{2},a_{1}+m,a_{2}}, \tag{3.9}\]
\[\frac{\partial F_{n_{1},n_{2},a_{1},a_{2}}}{\partial\overline{z}}=\frac{1}{m}F_{n_{1},n_{2}+1,a_{1},a_{2}+m-1}-\frac{a_{2}}{m}F_{n_{1},n_{2},a_{1},a_{2}+m}, \tag{3.10}\]
\[\begin{split}\frac{\partial G_{n_{1},n_{2},a_{1},a_{2}}}{\partial z}&=\frac{1}{2}F_{n_{1},n_{2},a_{1},a_{2}}+\frac{1}{2m}(G_{n_{1}+1,n_{2},a_{1}+m-1,a_{2}}-a_{1}G_{n_{1},n_{2},a_{1}+m,a_{2}})\\ &\quad-\frac{1}{2m}(G_{n_{1},n_{2}+1,a_{1},a_{2}+m-1}-a_{2}G_{n_{1},n_{2},a_{1},a_{2}+m}),\end{split} \tag{3.11}\]
\[\begin{split}\frac{\partial G_{n_{1},n_{2},a_{1},a_{2}}}{\partial\overline{z}}&=\frac{1}{2}F_{n_{1},n_{2},a_{1},a_{2}}-\frac{1}{2m}(G_{n_{1}+1,n_{2},a_{1}+m-1,a_{2}}-a_{1}G_{n_{1},n_{2},a_{1}+m,a_{2}})\\ &\quad+\frac{1}{2m}(G_{n_{1},n_{2}+1,a_{1},a_{2}+m-1}-a_{2}G_{n_{1},n_{2},a_{1},a_{2}+m}).\end{split} \tag{3.12}\]
Proof.: Since \(\frac{\partial G}{\partial z}=\frac{1}{m}G^{1-m}\), \(\frac{\partial G}{\partial\overline{z}}=0\), \(\frac{\partial\overline{G}}{\partial\overline{z}}=\frac{1}{m}\overline{G}^{1-m}\) and \(\frac{\partial\overline{G}}{\partial z}=0\), we obtain
\[\frac{\partial F_{n_{1},n_{2},a_{1},a_{2}}}{\partial z} =\frac{1}{m}\frac{\partial^{n_{1}+n_{2}+1}}{\partial\omega^{n_{1} +1}\partial\overline{\omega}^{n_{2}}}f(G,\overline{G})\cdot G^{-a_{1}+1-m} \overline{G}^{-a_{2}}\] \[\quad-\frac{a_{1}}{m}\frac{\partial^{n_{1}+n_{2}}}{\partial \omega^{n_{1}}\partial\overline{\omega}^{n_{2}}}f(G,\overline{G})\cdot G^{-a_{ 1}-m}\overline{G}^{-a_{2}}\] \[=\frac{1}{m}F_{n_{1}+1,n_{2},a_{1}+m-1,a_{2}}-\frac{a_{1}}{m}F_{n _{1},n_{2},a_{1}+m,a_{2}}.\]
We also verify (3.10) in the same manner.
To obtain (3.11), in view of (3.7) and (3.8), we get
\[\frac{\partial G_{n_{1},n_{2},a_{1},a_{2}}}{\partial z} =\frac{1}{2}\left(\frac{\partial}{\partial u}-\iota\frac{ \partial}{\partial s}\right)\int_{-1}^{u}F_{n_{1},n_{2},a_{1},a_{2}}(v,s)\,dv\] \[=\frac{1}{2}F_{n_{1},n_{2},a_{1},a_{2}}-\frac{\iota}{2}\frac{ \partial}{\partial s}\int_{-1}^{u}F_{n_{1},n_{2},a_{1},a_{2}}(v,s)\,dv\] \[=\frac{1}{2}F_{n_{1},n_{2},a_{1},a_{2}}+\frac{1}{2}\int_{-1}^{u} \left(\frac{\partial}{\partial z}-\frac{\partial}{\partial\overline{z}} \right)F_{n_{1},n_{2},a_{1},a_{2}}(v,s)\,dv.\]
Therefore, in view of (3.9) and (3.10), this gives (3.11). Likewise, we repeat the same for (3.12).
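The two \(F\)-type identities of Lemma 3.4 can be verified numerically by computing the Wirtinger derivatives \(\frac{\partial}{\partial z}=\frac{1}{2}(\partial_{u}-\iota\partial_{s})\) and \(\frac{\partial}{\partial\overline{z}}=\frac{1}{2}(\partial_{u}+\iota\partial_{s})\) by central finite differences. In the sketch below (Python), the multiplicity \(m=3\), the monomial \(f(\omega,\overline{\omega})=\omega^{4}\overline{\omega}^{2}\), the indices \((n_{1},n_{2},a_{1},a_{2})\) and the base point are all arbitrary choices made only for this check:

```python
import cmath

m = 3  # sample multiplicity; the identities hold for any m >= 2

def G(u, s):
    # principal branch of the m-th root, G_0(r e^{it}) = r^{1/m} e^{it/m}, t in [0, 2pi)
    z = complex(u, s)
    t = cmath.phase(z) % (2 * cmath.pi)
    return abs(z) ** (1.0 / m) * cmath.exp(1j * t / m)

p, q = 4, 2  # test function f(w, wbar) = w^p wbar^q (assumption for this check)

def df(n1, n2, w):
    # partial^{n1+n2} f / (dw^{n1} dwbar^{n2}) of the monomial above
    if n1 > p or n2 > q:
        return 0j
    c = 1.0
    for i in range(n1):
        c *= p - i
    for i in range(n2):
        c *= q - i
    return c * w ** (p - n1) * w.conjugate() ** (q - n2)

def F(n1, n2, a1, a2, u, s):
    g = G(u, s)
    return df(n1, n2, g) * g ** (-a1) * g.conjugate() ** (-a2)

def wirtinger(fun, u, s, conj=False, h=1e-6):
    # d/dz = (d/du - i d/ds)/2 ; d/dzbar = (d/du + i d/ds)/2
    du = (fun(u + h, s) - fun(u - h, s)) / (2 * h)
    ds = (fun(u, s + h) - fun(u, s - h)) / (2 * h)
    return (du + 1j * ds) / 2 if conj else (du - 1j * ds) / 2

n1, n2, a1, a2 = 1, 0, 2, 1
u, s = 0.4, 0.7  # a point away from the slit [0,1] x {0}
Ffun = lambda uu, ss: F(n1, n2, a1, a2, uu, ss)

lhs_z = wirtinger(Ffun, u, s)
rhs_z = (F(n1 + 1, n2, a1 + m - 1, a2, u, s) - a1 * F(n1, n2, a1 + m, a2, u, s)) / m
assert abs(lhs_z - rhs_z) < 1e-6 * (1 + abs(rhs_z))      # first F-type identity

lhs_zb = wirtinger(Ffun, u, s, conj=True)
rhs_zb = (F(n1, n2 + 1, a1, a2 + m - 1, u, s) - a2 * F(n1, n2, a1, a2 + m, u, s)) / m
assert abs(lhs_zb - rhs_zb) < 1e-6 * (1 + abs(rhs_zb))   # second F-type identity
```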
The quantities
\[d(F_{n_{1},n_{2},a_{1},a_{2}})=\frac{n_{1}+n_{2}+a_{1}+a_{2}}{m},\ d(G_{n_{1},n_{2},a_{1},a_{2}})= \frac{n_{1}+n_{2}+a_{1}+a_{2}}{m}-1\]
we call the _degrees_ of the functions \(F_{n_{1},n_{2},a_{1},a_{2}}\) and \(G_{n_{1},n_{2},a_{1},a_{2}}\). In view of (3.9)-(3.12), we have the following conclusion.
**Corollary 3.5**.: _For any \((l_{1},l_{2})\in\mathbb{Z}_{\geq 0}^{2}\) the partial derivative \(\frac{\partial^{l}G_{n_{1},n_{2},a_{1},a_{2}}}{\partial z^{l_{1}}\partial \overline{z}^{l_{2}}}\) is a linear combination of \(F\)-type and \(G\)-type functions of degree \(d(G_{n_{1},n_{2},a_{1},a_{2}})+l\), where \(l=l_{1}+l_{2}\). Moreover, each component of the linear combination is of the form \(F_{n_{1}^{\prime},n_{2}^{\prime},a_{1}^{\prime},a_{2}^{\prime}}\) and \(G_{n_{1}^{\prime},n_{2}^{\prime},a_{1}^{\prime},a_{2}^{\prime}}\) such that \(n\leq n_{1}^{\prime}+n_{2}^{\prime}\leq n+l\)._
**Lemma 3.6**.: _Let \(n,k\in\mathbb{Z}_{\geq 0}\). Assume that \(f\in C^{k\lor n}(\mathcal{D})\) and \(D^{j}f(0,0)=0\) for \(0\leq j<k\). Then_
\[|F_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|\leq\|f\|_{C^{k\lor n}}|z|^{-d(F_{n _{1},n_{2},a_{1},a_{2}})+\frac{k\lor n}{m}}.\]
_Moreover, there exists \(C=C_{a_{1},a_{2},n,k}>0\) such that_
\[|G_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|\leq C\|f\|_{C^{k\lor n}}\langle| \Im z|\rangle^{-d(G_{n_{1},n_{2},a_{1},a_{2}})+\frac{k\lor n}{m}} \tag{3.13}\]
_for any \(z\in[-1,1]^{2}\setminus([-1,1]\times\{0\})\). For every \(r>0\) there exists \(C_{r}=C_{a_{1},a_{2},n,k,r}>0\) such that_
\[|G_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|\leq C_{r}\|f\|_{C^{k\lor n}} \langle|z|/\sqrt{2}\rangle^{-d(G_{n_{1},n_{2},a_{1},a_{2}})+\frac{k\lor n}{m}} \tag{3.14}\]
_on \([-1,1]^{2}\cap\mathscr{S}(r)\)._
_If additionally \(n\leq k\) and \(D^{k}f(0,0)=0\) then_
\[|F_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|=o(|\Im z|^{-d(F_{n _{1},n_{2},a_{1},a_{2}})+\frac{k}{m}})\text{ if }\frac{k}{m}\leq d(F_{n_{1},n_{2},a_{1},a_{2}})\text{ and}\] \[|G_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|=o(\langle|\Im z| \rangle^{-d(G_{n_{1},n_{2},a_{1},a_{2}})+\frac{k}{m}})\text{ if }\frac{k}{m}\leq d(G_{n_{1},n_{2},a_{1},a_{2}}). \tag{3.15}\]
Proof.: As \(D^{j}f(0,0)=0\) for \(0\leq j<k\),
\[\left|\frac{\partial^{n}f}{\partial\omega^{n_{1}}\partial\bar{\omega}^{n_{2}} }(\omega,\bar{\omega})\right|\leq\|D^{k\lor n}f\|_{C^{0}}|\omega|^{(k\lor n)-n}.\]
Hence
\[|F_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|\leq\|f\|_{C^{k\lor n}}|z|^{- \frac{a_{1}+a_{2}+n}{m}+\frac{k\lor n}{m}}=\|f\|_{C^{k\lor n}}|z|^{-d(F_{n_{1}, n_{2},a_{1},a_{2}})+\frac{k\lor n}{m}}.\]
If \(d_{F}=d(F_{n_{1},n_{2},a_{1},a_{2}})=\frac{n+a_{1}+a_{2}}{m}\) then
\[|G_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|\leq\int_{-1}^{u}|F_{n_{1},n_{2},a_{1},a_{2}}(v,s)|dv=\|f\|_{C^{k\lor n}}\int_{-1}^{u}(v^{2}+s^{2})^{\frac{-d_ {F}+\frac{k\lor n}{m}}{2}}dv.\]
As \(d_{G}=d(G_{n_{1},n_{2},a_{1},a_{2}})=d_{F}-1\), the inequalities (3.13) and (3.14) follow directly from Lemma 3.2.
Suppose that \(0\leq n\leq k\), \(f\in C^{k}(\mathcal{D})\) and \(D^{j}f(0,0)=0\) for \(0\leq j\leq k\). Then \(\left|\frac{\partial^{n}f}{\partial\omega^{n_{1}}\partial\bar{\omega}^{n_{2}} }(\omega,\bar{\omega})\right|=o(|\omega|^{k-n})\). Hence
\[|F_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|=o(|z|^{-\frac{a_{1}+a_{2}+n}{m} +\frac{k}{m}})=o(|\Im z|^{-d_{F}+\frac{k}{m}})\text{ if }\frac{k}{m}\leq d_{F}.\]
Moreover,
\[|G_{n_{1},n_{2},a_{1},a_{2}}(z,\overline{z})|\leq\int_{-1}^{u}|F_{n_{1},n_{2},a_{1},a_{2}}(v,s)|dv=\int_{-1}^{u}\frac{\xi(v,s)}{(v^{2}+s^{2})^{\frac{d_{G}+ 1-\frac{k}{m}}{2}}}dv,\]
where \(\lim_{(v,s)\to(0,0)}\xi(v,s)=0\). If \(d_{G}\geq\frac{k}{m}\) then the second line of (3.15) follows directly from (3.4).
### Higher derivatives of functions \(\mathscr{F}\) and \(F\)
In this section, using the results proved in Section 3.1, we study the behaviour around zero of the higher order partial derivatives for the functions \(\mathscr{F}_{f,l}\) and \(F_{f}\). For any \(a\in\mathbb{Z}_{\geq 0}\) and any bounded Borel map \(f:\mathcal{D}\to\mathbb{C}\) let us consider \(\mathscr{F}=\mathscr{F}_{l}=\mathscr{F}_{f,l}:[-1,1]^{2}\setminus([0,1]\times\{ 0\})\to\mathbb{C}\) given by
\[\mathscr{F}_{f,l}(u,s)=\int_{-1}^{u}\frac{f(G_{l}(v,s))}{(v^{2}+s^{2})^{\frac{ a}{m}}}\,dv.\]
**Lemma 3.7**.: _Assume that \(f\in C^{k\lor n}(\mathcal{D})\) and \(D^{j}f(0,0)=0\) for \(0\leq j<k\). Then there exists \(C=C_{a,n,k}>0\) such that for every \((n_{1},n_{2})\in\mathbb{Z}_{\geq 0}^{2}\) with \(n_{1}+n_{2}=n\) we have_
\[\left|\frac{\partial^{n}\mathscr{F}(z,\overline{z})}{\partial z^{n_{1}} \partial\overline{z}^{n_{2}}}\right|\leq C\|f\|_{C^{k\lor n}}\langle|\Im z| \rangle^{-(\frac{2a}{m}+(n-1)-\frac{k}{m})}\text{ if }\Im z\neq 0. \tag{3.16}\]
_For every \(r>0\) there exists \(C_{r}=C_{a,n,k,r}>0\) such that_
\[\left|\frac{\partial^{n}\mathscr{F}(z,\overline{z})}{\partial z^{n_{1}} \partial\overline{z}^{n_{2}}}\right|\leq C_{r}\|f\|_{C^{k\lor n}}\langle|z|/ \sqrt{2}\rangle^{-(\frac{2a}{m}+(n-1)-\frac{k}{m})}\text{ on }\mathscr{S}(r). \tag{3.17}\]
_If additionally \(0\leq n\leq k\) and \(D^{k}f(0,0)=0\) then_
\[\left|\frac{\partial^{n}\mathscr{F}(z,\overline{z})}{\partial z^{n_{1}} \partial\overline{z}^{n_{2}}}\right|=o(\langle|\Im z|\rangle^{-(\frac{2a}{m}+( n-1)-\frac{k}{m})})\text{ if }\frac{2a}{m}+(n-1)\geq\frac{k}{m}. \tag{3.18}\]
Proof.: By definition, \(\mathscr{F}=G_{0,0,a,a}\). In view of Corollary 3.5, the partial derivative \(\frac{\partial^{n}G_{0,0,a,a}}{\partial z^{n_{1}}\partial\overline{z}^{n_{2}}}\) is a linear combination of \(F\)-type and \(G\)-type functions of the form \(F_{n_{1}^{\prime},n_{2}^{\prime},a_{1}^{\prime},a_{2}^{\prime}}\) and \(G_{n_{1}^{\prime},n_{2}^{\prime},a_{1}^{\prime},a_{2}^{\prime}}\) such that their degree is \(2a/m+n-1\) and \(0\leq n^{\prime}:=n_{1}^{\prime}+n_{2}^{\prime}\leq n\).
Suppose that \(n\leq k\). Then (3.16) and (3.17) follow directly from Lemma 3.6. The same arguments combined with (3.15) yield (3.18).
Suppose that \(n>k\). Then \(k\leq n^{\prime}\lor k<n\). Therefore, \(\|f\|_{C^{n^{\prime}\lor k}}\leq\|f\|_{C^{n}}\) and for any \(0\leq s\leq 1\) and \(d\in\mathbb{R}\) we have \(\langle s\rangle^{-d+\frac{n^{\prime}\lor k}{m}}\leq\langle s\rangle^{-d+\frac {k}{m}}\). In view of Lemma 3.6 this gives (3.16) and (3.17).
By change of coordinates, we obtain the bound of higher derivatives of the map \(F=F_{f}:\mathcal{D}\setminus([0,1]\times\{0\})^{1/m}\to\mathbb{C}\) given by \(F_{f}(\omega,\overline{\omega})=\mathscr{F}_{f,l}(\omega^{m},\overline{\omega} ^{m})\) on \(\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\).
**Lemma 3.8**.: _Assume that \(f\in C^{k\lor n}(\mathcal{D})\) and \(D^{j}f(0,0)=0\) for \(0\leq j<k\). Then for any \(r>0\) there exists \(C_{r,n}>0\) such that for every \((n_{1},n_{2})\in\mathbb{Z}_{\geq 0}^{2}\) with \(n_{1}+n_{2}=n\),_
\[\left|\frac{\partial^{n}F(\omega,\overline{\omega})}{\partial\omega^{n_{1}} \partial\overline{\omega}^{n_{2}}}\right|\leq C_{r,n}\|f\|_{C^{k\lor n}}(1+| \log|\omega||)|\omega|^{(-2a+m+k-n)\wedge 0} \tag{3.19}\]
_for \(\omega\in\mathcal{D}\cap\mathscr{S}(r)^{1/m}\)._
Proof.: Recall that \(F(\omega,\overline{\omega})=\mathscr{F}(\omega^{m},\overline{\omega}^{m})\). By Faà di Bruno's formula,
\[\frac{\partial^{n}F(\omega,\overline{\omega})}{\partial\omega^{n_ {1}}\partial\overline{\omega}^{n_{2}}} =\frac{d^{n}\mathscr{F}(\omega^{m},\overline{\omega}^{m})}{d \omega^{n_{1}}d\overline{\omega}^{n_{2}}}\] \[=\sum_{\bar{p},\bar{q}}C_{\bar{p},\bar{q}}\frac{\partial^{|\bar{p }|+|\bar{q}|}\mathscr{F}}{\partial z^{|\bar{p}|}\partial\overline{z}^{|\bar{q}| }}(\omega^{m},\overline{\omega}^{m})\cdot\prod_{j=1}^{n_{1}\wedge m}(\omega^{m- j})^{p_{j}}\prod_{j=1}^{n_{2}\wedge m}(\overline{\omega}^{m-j})^{q_{j}},\]
where the sum is over all \(n_{1}\)-tuples \(\bar{p}=(p_{1},\ldots,p_{n_{1}})\) and \(n_{2}\)-tuples \(\bar{q}=(q_{1},\ldots,q_{n_{2}})\) of non-negative integers satisfying the constraints \(\sum_{j=1}^{n_{1}}jp_{j}=n_{1}\), \(p_{j}=0\) for \(j>n_{1}\wedge m\) and \(\sum_{j=1}^{n_{2}}jq_{j}=n_{2}\), \(q_{j}=0\) for \(j>n_{2}\wedge m\), and we use the notation \(|\bar{p}|=\sum_{j=1}^{n_{1}}p_{j}\) and \(|\bar{q}|=\sum_{j=1}^{n_{2}}q_{j}\). Let
\[P:=\{|\bar{p}|:\sum_{j=1}^{n_{1}}jp_{j}=n_{1},p_{j}=0\text{ for }j>n_{1} \wedge m\}\] \[Q:=\{|\bar{q}|:\sum_{j=1}^{n_{2}}jq_{j}=n_{2},q_{j}=0\text{ for }j>n_{2} \wedge m\}.\]
Then
\[\frac{\partial^{n}F(\omega,\overline{\omega})}{\partial\omega^{n_{1}} \partial\overline{\omega}^{n_{2}}} =\sum_{\bar{p},\bar{q}}C_{\bar{p},\bar{q}}\frac{\partial^{|\bar{p} |+|\bar{q}|}\mathscr{F}}{\partial z^{|\bar{p}|}\partial\overline{z}^{|\bar{q}|} }(\omega^{m},\overline{\omega}^{m})\cdot\omega^{m|\bar{p}|-n_{1}}\overline{ \omega}^{m|\bar{q}|-n_{2}}\] \[=\sum_{p\in P,q\in Q}C^{\prime}_{p,q}\frac{\partial^{p+q}\mathscr{ F}}{\partial z^{p}\partial\overline{z}^{q}}(\omega^{m},\overline{\omega}^{m}) \cdot\omega^{mp-n_{1}}\overline{\omega}^{mq-n_{2}}.\]
In view of Lemma 3.7 and Remark 3.1, for every \(\omega\in\mathcal{D}\cap\mathscr{S}(r)^{1/m}\),
\[\left|\frac{\partial^{p+q}\mathscr{F}}{\partial z^{p}\partial\overline{z}^{q} }(\omega^{m},\overline{\omega}^{m})\right|\leq mC_{r}\|f\|_{C^{k\lor n}} \langle|\omega|/\sqrt[2m]{2}\rangle^{-(2a+(p+q-1)m-k)}.\]
Therefore, taking \(l=p+q\in P+Q\), for every \(\omega\in\mathcal{D}\cap\mathscr{S}(r)^{1/m}\),
\[\left|\frac{\partial^{n}F(\omega,\overline{\omega})}{\partial\omega^{n_{1}} \partial\overline{\omega}^{n_{2}}}\right|\leq\sum_{l\in P+Q}mC_{r}\|f\|_{C^{k \lor n}}\langle|\omega|/\sqrt[2m]{2}\rangle^{-(2a+(l-1)m-k)}|\omega|^{ml-n}. \tag{3.20}\]
Moreover, for \(l=p+q\in P+Q\) we have
\[ml-n=\sum_{j=1}^{n_{1}\wedge m}(m-j)p_{j}+\sum_{j=1}^{n_{2}\wedge m}(m-j)q_{j} \geq 0.\]
If \(2a+(l-1)m-k>0\) then
\[\langle|\omega/\sqrt[2m]{2}|\rangle^{-(2a+(l-1)m-k)}|\omega|^{ml-n}=O(|\omega| ^{-(2a+n-m-k)}).\]
If \(2a+(l-1)m-k=0\) then
\[\langle|\omega/\sqrt[2m]{2}|\rangle^{-(2a+(l-1)m-k)}|\omega|^{ml-n}=|\omega|^{ ml-n}=O(1).\]
In view of (3.20), this gives (3.19).
### Preliminary results necessary to define invariant distributions
For any pair of integers \((a_{1},a_{2})\), let \(\mathfrak{F}_{a_{1},a_{2}}=\mathfrak{F}_{a_{1},a_{2}}^{l}:[-1,1]^{2}\setminus( [0,1]\times\{0\})\to\mathbb{C}\) and \(\mathfrak{G}_{a_{1},a_{2}}=\mathfrak{G}_{a_{1},a_{2}}^{l}:[-1,1]^{2}\setminus( [0,1]\times\{0\})\to\mathbb{C}\) be given by
\[\mathfrak{F}_{a_{1},a_{2}}(z,\overline{z}) =\mathfrak{F}_{a_{1},a_{2}}(u,s)=G_{l}(u,s)^{-a_{1}}\overline{G_{ l}(u,s)}^{-a_{2}},\] \[\mathfrak{G}_{a_{1},a_{2}}(z,\overline{z}) =\mathfrak{G}_{a_{1},a_{2}}(u,s)=\int_{-1}^{u}\mathfrak{F}_{a_{1},a_{2}}(v,s)dv.\]
Then \(\mathfrak{G}_{a_{1},a_{2}}^{l}=\theta^{l(a_{2}-a_{1})}\mathfrak{G}_{a_{1},a_{2}} ^{0}\) for every \(0\leq l<m\). As
\[G_{0}(-u-\iota s)=\theta_{0}G_{0}(u+\iota s)\text{ and }G_{0}(u-\iota s)=\theta_{0}^{2} \overline{G_{0}(u+\iota s)}\text{ for }s>0,\]
it follows that
\[\mathfrak{G}_{a_{1},a_{2}}^{l}(1,s)=\left\{\begin{array}{cc}\theta_{0}^{2l(a _{2}-a_{1})}\mathfrak{G}_{a_{1},a_{2}}^{0}(1,|s|)&\text{ if }&s\in(0,1]\\ \theta_{0}^{(2l+1)(a_{2}-a_{1})}\mathfrak{G}_{a_{1},a_{2}}^{0}(1,|s|)&\text{ if }&s\in[-1,0)\end{array}\right. \tag{3.21}\]
and
\[\mathfrak{G}_{a_{1},a_{2}}^{l}(u,s)=\left\{\begin{array}{cc}\theta_{0}^{2l(a _{2}-a_{1})}\mathfrak{G}_{a_{1},a_{2}}^{0}(u,|s|)&\text{ if }&s\in(0,1]\\ \theta_{0}^{(2l+2)(a_{2}-a_{1})}\overline{\mathfrak{G}_{a_{1},a_{2}}^{0}(u,|s| )}&\text{ if }&s\in[-1,0).\end{array}\right. \tag{3.22}\]
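The symmetry (3.22) can be confirmed numerically by approximating the defining integral of \(\mathfrak{G}_{a_{1},a_{2}}^{l}\) with a midpoint rule. In the sketch below (Python), the values \(m=3\), \(l=1\), \((a_{1},a_{2})=(-1,-2)\) and the evaluation point are illustrative choices; taking \(a_{1},a_{2}\) non-positive keeps the integrand bounded:

```python
import cmath

m = 3
theta0 = cmath.exp(1j * cmath.pi / m)  # principal 2m-th root of unity

def G(l, u, s):
    # l-th branch of the m-th root, G_l = theta^l G_0 with theta = theta0^2
    z = complex(u, s)
    t = cmath.phase(z) % (2 * cmath.pi)
    return theta0 ** (2 * l) * abs(z) ** (1.0 / m) * cmath.exp(1j * t / m)

def frakG(l, a1, a2, u, s, N=4000):
    # midpoint-rule approximation of the integral defining frak{G}_{a1,a2}^l(u, s)
    h = (u + 1) / N
    total = 0j
    for i in range(N):
        g = G(l, -1 + (i + 0.5) * h, s)
        total += g ** (-a1) * g.conjugate() ** (-a2)
    return total * h

l, a1, a2, u, s = 1, -1, -2, 0.5, -0.4  # s < 0: second line of (3.22)
lhs = frakG(l, a1, a2, u, s)
rhs = theta0 ** ((2 * l + 2) * (a2 - a1)) * frakG(0, a1, a2, u, abs(s)).conjugate()
assert abs(lhs - rhs) < 1e-6 * (1 + abs(rhs))
```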
For any set \(A\subset\mathbb{R}^{d}\) denote by \(C^{\omega}(A)\) the space of complex-valued real-analytic maps on \(A\) which have an analytic extension to the closure \(\bar{A}\). If \(f,g:A\to\mathbb{C}\) are such that \(f-g\in C^{\omega}(A)\) then we write \(f=g+C^{\omega}(A)\). We denote by \(C^{\omega,m}_{l}\) the space of functions \(f:[-1,1]^{2}\setminus([0,1]\times\{0\})\to\mathbb{C}\) such that \(\omega\mapsto f(\omega^{m},\overline{\omega}^{m})\) has a real analytic extension on \(\overline{\mathcal{D}}(\frac{l}{m},\frac{l+1}{m})\). For example, \(\mathfrak{F}_{a_{1},a_{2}}\in C^{\omega,m}_{l}\) if \(a_{1},a_{2}\) are non-positive.
**Lemma 3.9**.: _For any pair of integers \((a_{1},a_{2})\),_
\[a_{1}\mathfrak{G}^{l}_{a_{1}+m,a_{2}}(1,s)+a_{2}\mathfrak{G}^{l}_{a_{1},a_{2}+ m}(1,s)\in C^{\omega}((0,1])\cap C^{\omega}([-1,0)). \tag{3.23}\]
_If \(a_{1},a_{2}\) are additionally both non-positive then_
\[a_{1}\mathfrak{G}^{l}_{a_{1}+m,a_{2}}+a_{2}\mathfrak{G}^{l}_{a_{1},a_{2}+m}\in C ^{\omega,m}_{l}. \tag{3.24}\]
Proof.: Note that
\[\mathfrak{F}_{a_{1},a_{2}}(z,\overline{z})-\mathfrak{F}_{a_{1},a_ {2}}(-1+\iota\Im z,-1-\iota\Im z)=\mathfrak{F}_{a_{1},a_{2}}(u,s)-\mathfrak{F} _{a_{1},a_{2}}(-1,s)\] \[=\int_{-1}^{u}\frac{\partial}{\partial v}\mathfrak{F}_{a_{1},a_{ 2}}(v,s)\,dv=\int_{-1}^{u}\left(\frac{\partial}{\partial z}+\frac{\partial}{ \partial\overline{z}}\right)\mathfrak{F}_{a_{1},a_{2}}(v,s)\,dv\] \[=-\int_{-1}^{u}\left(\frac{a_{1}}{m}\mathfrak{F}_{a_{1}+m,a_{2}}( v,s)+\frac{a_{2}}{m}\mathfrak{F}_{a_{1},a_{2}+m}(v,s)\right)\,dv\] \[=-\frac{1}{m}(a_{1}\mathfrak{G}_{a_{1}+m,a_{2}}(z,\overline{z})+ a_{2}\mathfrak{G}_{a_{1},a_{2}+m}(z,\overline{z})).\]
It follows that
\[a_{1}\mathfrak{G}_{a_{1}+m,a_{2}}(1,s)+a_{2}\mathfrak{G}_{a_{1},a_{2}+m}(1,s)\]
\[=m(G_{l}(-1,s)^{-a_{1}}\overline{G_{l}(-1,s)}^{-a_{2}}-G_{l}(1,s)^{-a_{1}} \overline{G_{l}(1,s)}^{-a_{2}}).\]
Since the maps \([-1,1]\ni s\mapsto G_{l}(-1,s)\in\mathbb{C}\), \([0,1]\ni s\mapsto G_{l}(1,s)\in\mathbb{C}\) and \([-1,0)\ni s\mapsto G_{l}(1,s)\in\mathbb{C}\) are analytic and the latter has an analytic extension to \([-1,0]\), this gives (3.23). Moreover, for any \(\omega\in\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\),
\[a_{1}\mathfrak{G}_{a_{1}+m,a_{2}}(\omega^{m},\overline{\omega}^{m})+a_{2} \mathfrak{G}_{a_{1},a_{2}+m}(\omega^{m},\overline{\omega}^{m})\]
\[=m(G_{l}(-1+\iota\Im\omega^{m})^{-a_{1}}\overline{G_{l}(-1+\iota\Im\omega^{m} )}^{-a_{2}}-G_{l}(\omega^{m})^{-a_{1}}\overline{G_{l}(\omega^{m})}^{-a_{2}})\]
\[=m(G_{l}(-1+\iota\Im\omega^{m})^{-a_{1}}\overline{G_{l}(-1+\iota\Im\omega^{m} )}^{-a_{2}}-\omega^{-a_{1}}\overline{\omega}^{-a_{2}}).\]
Since \(-a_{1},-a_{2}\) are non-negative integers, all functions on the RHS are analytic which completes the proof.
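The identity displayed in the proof, namely \(a_{1}\mathfrak{G}_{a_{1}+m,a_{2}}(1,s)+a_{2}\mathfrak{G}_{a_{1},a_{2}+m}(1,s)=m(G_{l}(-1,s)^{-a_{1}}\overline{G_{l}(-1,s)}^{-a_{2}}-G_{l}(1,s)^{-a_{1}}\overline{G_{l}(1,s)}^{-a_{2}})\), can be checked numerically. In the sketch below (Python), the branch \(l=0\), the multiplicity \(m=3\), the pairs \((a_{1},a_{2})\) and the value of \(s\) are sample choices:

```python
import cmath

m = 3

def G(u, s):
    # principal branch G_0 of the m-th root (branch l = 0)
    z = complex(u, s)
    t = cmath.phase(z) % (2 * cmath.pi)
    return abs(z) ** (1.0 / m) * cmath.exp(1j * t / m)

def frakG(a1, a2, s, N=20000):
    # midpoint approximation of frak{G}_{a1,a2}(1,s) = int_{-1}^{1} G^{-a1} Gbar^{-a2} dv
    h = 2.0 / N
    total = 0j
    for i in range(N):
        g = G(-1 + (i + 0.5) * h, s)
        total += g ** (-a1) * g.conjugate() ** (-a2)
    return total * h

def rhs(a1, a2, s):
    gm, gp = G(-1, s), G(1, s)
    return m * (gm ** (-a1) * gm.conjugate() ** (-a2)
                - gp ** (-a1) * gp.conjugate() ** (-a2))

for (a1, a2) in [(1, 0), (2, 1)]:
    s = 0.3
    lhs = a1 * frakG(a1 + m, a2, s) + a2 * frakG(a1, a2 + m, s)
    assert abs(lhs - rhs(a1, a2, s)) < 1e-4 * (1 + abs(rhs(a1, a2, s)))
```

For \((a_{1},a_{2})=(1,0)\) the identity reduces to \(\int_{-1}^{1}G(v,s)^{-4}\,dv=3(G(-1,s)^{-1}-G(1,s)^{-1})\), which can also be seen directly since \(\frac{\partial}{\partial v}G^{-1}=-\frac{1}{3}G^{-4}\).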
As a conclusion we obtain that for any integer \(k\neq m\),
\[\mathfrak{G}_{0,k}(1,s),\mathfrak{G}_{k,0}(1,s)\in C^{\omega}((0,1])\cap C^{ \omega}([-1,0))\]
and for any integer \(k<m\),
\[\mathfrak{G}_{0,k},\mathfrak{G}_{k,0}\in C^{\omega,m}_{l}. \tag{3.25}\]
Moreover,
\[\mathfrak{G}_{0,m}(1,s) =\int_{-1}^{1}\frac{1}{v-\iota s}dv=\int_{-1}^{1}\frac{v+\iota s }{v^{2}+s^{2}}dv=\iota\operatorname{sgn}(s)\int_{-1/|s|}^{1/|s|}\frac{1}{x^{2} +1}dx\] \[=\iota\operatorname{sgn}(s)(\arctan(1/|s|)-\arctan(-1/|s|))\] \[=\iota(\operatorname{arccot}(s)-\operatorname{arccot}(-s))+\iota \operatorname{sgn}(s)\pi.\]
Hence, \(\mathfrak{G}_{0,m}(1,s),\mathfrak{G}_{m,0}(1,s)\in C^{\omega}((0,1])\cap C^{\omega}([- 1,0))\). Using (3.23) again, we have
\[\mathfrak{G}_{k,-lm}(1,s),\mathfrak{G}_{-lm,k}(1,s)\in C^{\omega}((0,1])\cap C^ {\omega}([-1,0))\text{ for all }k\in\mathbb{Z},l\in\mathbb{Z}_{\geq 0}. \tag{3.26}\]
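The explicit computation of \(\mathfrak{G}_{0,m}(1,s)\) above can be double-checked numerically: since \(\overline{G_{l}(v,s)}^{m}=v-\iota s\) for every branch \(l\), the defining integral reduces to \(\int_{-1}^{1}\frac{dv}{v-\iota s}\). In the sketch below (Python), the sample values of \(s\) are arbitrary:

```python
import math

def frakG_0m(s, N=20000):
    # midpoint approximation of frak{G}_{0,m}(1,s) = int_{-1}^{1} dv/(v - i*s);
    # the integrand does not depend on m since conj(G_l(v,s))^m = v - i*s
    h = 2.0 / N
    return sum(1.0 / complex(-1 + (i + 0.5) * h, -s) for i in range(N)) * h

acot = lambda x: math.pi / 2 - math.atan(x)  # arccot on the real line

for s in (0.25, -0.6):
    closed = 1j * math.copysign(1, s) * 2 * math.atan(1 / abs(s))
    arccots = 1j * (acot(s) - acot(-s)) + 1j * math.copysign(1, s) * math.pi
    assert abs(frakG_0m(s) - closed) < 1e-4   # quadrature vs. arctan form
    assert abs(closed - arccots) < 1e-12      # arctan form vs. arccot form
```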
### Invariant distributions \(\partial_{j}^{k}\) and their effect on the regularity of \(\mathscr{F}\) and \(F\)
For every \(m\geq 2\), \(0\leq l<m\), \(k\geq 0\) and \(f\in C^{k}(\mathcal{D})\) we deal with three associated functions \(\mathscr{F}_{f,l}\), \(\varphi_{f,l}\) and \(F_{f}\). Recall that \(\mathscr{F}_{f,l}:[-1,1]^{2}\setminus([0,1]\times\{0\})\to\mathbb{C}\) is given by
\[\mathscr{F}_{f,l}(z,\overline{z})=\mathscr{F}_{f,l}(u,s)=\int_{-1}^{u}\frac{f (G_{l}(v,s))}{(v^{2}+s^{2})^{\frac{m-1}{m}}}dv,\]
\(\varphi_{f,l}:[-1,0)\cup(0,1]\to\mathbb{C}\) is given by \(\varphi_{f,l}(s)=\mathscr{F}_{f,l}(1,s)\) and \(F_{f}:\mathcal{D}\setminus([0,1]\times\{0\})^{1/m}\to\mathbb{C}\) is given by \(F_{f}(\omega,\overline{\omega})=\mathscr{F}_{f,l}(\omega^{m},\overline{ \omega}^{m})\) on \(\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\).
For every \(k\geq 0\) let us consider the functionals \(\partial_{j}^{k}:C^{k}(\mathcal{D})\to\mathbb{C}\) for \(0\leq j\leq k\wedge(m-1)\) given by
\[\partial_{j}^{k}(f)=\sum_{0\leq n\leq\frac{k-j}{m}}\frac{\binom{k}{j+nm}\binom{\frac{m-1}{m}-j-1}{n}}{\binom{\frac{(k-j)-(m-1)}{m}}{n}}\frac{\partial^{k}f}{\partial\omega^{j+nm}\partial\overline{\omega}^{k-j-nm}}(0,0). \tag{3.27}\]
Comparing with (1.4), the functionals \(\partial_{j}^{k}\) will play a key role in understanding the meaning of the distributions \(\mathfrak{d}_{\sigma,j}^{k}\). If \(0\leq k\leq m-2\) then \(\partial_{j}^{k}(f)=\binom{k}{j}\frac{\partial^{k}f}{\partial\omega^{j} \partial\overline{\omega}^{k-j}}(0,0)\). If \(k\geq m-1\) then, as we will see in the following lemma, only \(m-2\) functionals matter. More precisely, \(\partial_{j}^{k}\) is irrelevant if \(j=m-1\) or \(j=k-(m-1)\operatorname{mod}m\). Note that if \(k=m-2\operatorname{mod}m\) then, in this exceptional case, we have \(m-1\) relevant functionals.
Recall that for any \(0\leq\alpha<\beta\leq 1\), \(\mathcal{D}(\alpha,\beta):=\{\omega\in\mathcal{D}\setminus\{0\}:\operatorname {Arg}(\omega)\in(2\pi\alpha,2\pi\beta)\}\), and that \(\overline{\mathcal{D}}(\alpha,\beta)\) denotes its closure.
**Lemma 3.10**.: _Suppose that \(f\in\mathbb{C}[\omega,\overline{\omega}]\) is a polynomial of degree at most \(k\) such that \(\partial_{i}^{j}(f)=0\) for all \(0\leq j\leq k\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\). Then_
\[F_{f}\in C^{\omega}(\mathcal{D}(\tfrac{l}{m},\tfrac{l+1}{m}))\text{ \ and \ }\varphi_{f,l}\in C^{\omega}([-1,0))\cap C^{\omega}((0,1])\text{ for }0\leq l<m.\]
_Moreover, for any \(n\geq 0\) there exists a constant \(C_{k}^{n}>0\) such that_
\[\|F_{f}\|_{C^{n}(\mathcal{D}(\tfrac{l}{m},\tfrac{l+1}{m}))}\leq C_{k}^{n}\|f\| _{C^{k}(\mathcal{D})}\text{ and }\|\varphi_{f,l}\|_{C^{n}([-1,0)\cup(0,1])}\leq C_{k}^{n}\|f\|_{C^{k}( \mathcal{D})}. \tag{3.28}\]
Proof.: First note that (3.28) follows directly from the first part of the lemma. Indeed, \(f\mapsto F_{f}\in C^{\omega}(\mathcal{D}(\tfrac{l}{m},\tfrac{l+1}{m}))\) and \(f\mapsto\varphi_{f,l}\in C^{\omega}([-1,0))\cap C^{\omega}((0,1])\) are linear operators on a finite-dimensional space, so they are bounded. This gives (3.28).
By assumption, \(f=\sum_{0\leq j\leq k}f_{j}\) with
\[f_{j}(\omega,\overline{\omega})=\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i} \frac{\partial^{j}f}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0) \omega^{i}\overline{\omega}^{j-i}.\]
We will show that if \(\partial_{i}^{j}(f)=0\) for all \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\), then
\[F_{f_{j}}\in C^{\omega}(\mathcal{D}(\tfrac{l}{m},\tfrac{l+1}{m}))\text{ \ and \ }\varphi_{f_{j},l}\in C^{\omega}([-1,0))\cap C^{\omega}((0,1])\text{ for }0\leq l<m.\]
This gives our claim.
Note that
\[\mathscr{F}_{f_{j},l}=\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i}\frac{\partial^{ j}f}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0)\mathfrak{G}_{(m-1)-i,(m-1)-(j-i) }=\frac{1}{j!}\sum_{0\leq i<m}\xi_{i},\]
where
\[\xi_{i}=\sum_{0\leq n\leq\frac{j-i}{m}}\binom{j}{i+nm}\frac{\partial^{j}f}{\partial\omega^{i+nm}\partial\overline{\omega}^{j-i-nm}}(0,0)\,\mathfrak{G}_{(m-1)-(i+nm),(m-1)-(j-(i+nm))}\]
collects the terms of the sum whose index is congruent to \(i\) modulo \(m\). Combining the terms in each group and summing their coefficients, this yields
\[\mathscr{F}_{f_{j},l}(1,s)=\frac{1}{j!}\sum_{\begin{subarray}{c}0\leq i\leq j\wedge(m-2)\\ i\neq j-(m-1)\,\mathrm{mod}\,m\end{subarray}}\partial_{i}^{j}(f)\mathfrak{G}_{(m-1)-i,(m-1)-(j-i)}^{l}(1,s)+C^{\omega}((0,1])\cap C^{\omega}([-1,0)). \tag{3.30}\]
By assumption, every functional \(\partial_{i}^{j}(f)\) appearing in (3.30) vanishes. Hence \(\varphi_{f_{j},l}\in C^{\omega}([-1,0))\cap C^{\omega}((0,1])\) and, arguing in the same way for \(\mathscr{F}_{f_{j},l}(u,s)\), also \(F_{f_{j}}\in C^{\omega}(\mathcal{D}(\tfrac{l}{m},\tfrac{l+1}{m}))\) for \(0\leq l<m\).

**Theorem 3.11**.: _Let \(k\geq m-1\) and \(0<r<1\). Suppose that \(f\in C^{k}(\mathcal{D})\) satisfies \(\partial_{i}^{j}(f)=0\) for all \(0\leq j<k\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\). Then, for every angular sector \(\mathscr{A}\) of \(\mathcal{D}\cap\mathscr{S}(r)^{1/m}\), the function \(F_{f}\) extends to a \(C^{k-(m-1)+\eta}\)-function on \(\overline{\mathscr{A}}\) and there exists \(C>0\) such that \(\|F_{f}\|_{C^{k-(m-1)+\eta}(\overline{\mathscr{A}})}\leq C\|f\|_{C^{k}(\mathcal{D})}\)._
Proof.: Let \(\mathscr{A}\) be an angular sector of \(\mathcal{D}\cap\mathscr{S}(r)^{1/m}\). Let us decompose \(f=f_{<k}+e_{f}\) with
\[f_{<k}(\omega,\overline{\omega})=\sum_{0\leq j<k}\frac{1}{j!}\sum_{0\leq i\leq j }\binom{j}{i}\frac{\partial^{j}f}{\partial\omega^{i}\partial\overline{\omega} ^{j-i}}(0,0)\omega^{i}\overline{\omega}^{j-i}.\]
Note that the operator \(C^{k}(\mathcal{D})\ni f\mapsto f_{<k}\in C^{k}(\mathcal{D})\) is bounded. By Lemma 3.10, \(F_{f_{<k}}\) is analytic and for every \(n\geq 0\) there exists \(C_{k}^{n}>0\) such that \(\|F_{f_{<k}}\|_{C^{n}(\mathcal{D}(\frac{l}{m},\frac{l+1}{m}))}\leq C_{k}^{n}\|f\|_{C^{k}(\mathcal{D})}\) for every \(0\leq l<m\). On the other hand, \(D^{j}(e_{f})=0\) for every \(0\leq j<k\). In view of Lemma 3.8, if \((n_{1},n_{2})\in\mathbb{Z}_{\geq 0}^{2}\) is such that \(n_{1}+n_{2}=n=k-(m-2)\) then for every \(\omega\in\mathscr{A}\),
\[\left|\frac{\partial^{n}F_{e_{f}}(\omega,\overline{\omega})}{\partial\omega^{ n_{1}}\partial\overline{\omega}^{n_{2}}}\right|\leq C_{r,n}\|e_{f}\|_{C^{k}( \mathcal{D})}(1+|\log|\omega||)|\omega|^{(k-n-(m-2))\wedge 0}.\]
As \(\|D^{k-(m-2)}F_{e_{f}}(\omega,\overline{\omega})\|\leq C_{r,n}\|e_{f}\|_{C^{k}(\mathcal{D})}(1+|\log|\omega||)\), \(D^{k-(m-1)}F_{e_{f}}\) on \(\mathscr{A}\) has a continuous extension on \(\overline{\mathscr{A}}=\mathscr{A}\cup\{(0,0)\}\) with modulus of continuity bounded by a multiple of \(\eta\). Therefore, \(F_{e_{f}}\) can be extended to a \(C^{k-(m-1)+\eta}\)-function on \(\overline{\mathscr{A}}\) and \(\|F_{e_{f}}\|_{C^{k-(m-1)+\eta}(\overline{\mathscr{A}})}\leq C\|e_{f}\|_{C^{k}(\mathcal{D})}\leq C^{\prime}\|f\|_{C^{k}(\mathcal{D})}\). As \(F_{f}=F_{f_{<k}}+F_{e_{f}}\), this gives our claim.
## 4. Local analysis of \(\varphi_{f}\)
This section is devoted to computing the limiting behaviour of higher derivatives of \(\varphi_{f}\) related to singularities on angular sectors of \(\mathcal{D}\). We introduce a family of functionals \(\mathscr{C}_{l}^{k}\) which are responsible for the asymptotic behaviour of \(\varphi_{f,l}\) around zero. The result is inspired by the approach to multi-saddles (related to polynomial singularities) in [4]. The main result of this section (Theorem 4.7) plays a central role in proving Theorem 1.1 in §5 and is also applied to extend the regularity of \(F_{f}\) (obtained in Theorem 3.11) to the closure of any sector \(\mathcal{D}(\frac{l}{m},\frac{l+1}{m})\).
### Preliminary properties of \(\mathfrak{G}_{a_{1},a_{2}}(1,s)\)
First we present the limiting behaviour of \(\frac{d^{n}}{ds^{n}}\mathfrak{G}_{a_{1},a_{2}}(1,s)\) around zero. We show that, for sufficiently high derivatives, the asymptotics are polynomial with a weight factor given by the Beta-like function \(\mathfrak{B}\). This is further used in evaluating the asymptotics of \(D^{n+1}\varphi_{f,l}\) in §4.2.
Note that for any pair of integers \(a_{1},a_{2}\),
\[\begin{split}\frac{d}{ds}\mathfrak{G}_{a_{1},a_{2}}(1,s)& =-\frac{2\iota a_{1}}{m}\mathfrak{G}_{a_{1}+m,a_{2}}(1,s)+C^{ \omega}((0,1])\cap C^{\omega}([-1,0))\\ &=\frac{2\iota a_{2}}{m}\mathfrak{G}_{a_{1},a_{2}+m}(1,s)+C^{ \omega}((0,1])\cap C^{\omega}([-1,0)).\end{split} \tag{4.1}\]
Indeed,
\[\begin{split}\frac{d}{ds}\mathfrak{G}_{a_{1},a_{2}}(u,s)& =\int_{-1}^{u}\frac{d}{ds}\mathfrak{F}_{a_{1},a_{2}}(v,s)dv=\int_ {-1}^{u}\iota\left(\frac{\partial}{\partial z}-\frac{\partial}{\partial \overline{z}}\right)\mathfrak{F}_{a_{1},a_{2}}(v,s)dv\\ &=\int_{-1}^{u}\iota\left(-\frac{a_{1}}{m}\mathfrak{F}_{a_{1}+m,a _{2}}(v,s)+\frac{a_{2}}{m}\mathfrak{F}_{a_{1},a_{2}+m}(v,s)\right)\,dv\\ &=\frac{\iota}{m}(-a_{1}\mathfrak{G}_{a_{1}+m,a_{2}}(u,s)+a_{2} \mathfrak{G}_{a_{1},a_{2}+m}(u,s)).\end{split}\]
In view of (3.23), this gives (4.1). It follows that for every \(n\geq 1\),
\[\frac{d^{n}}{ds^{n}}\mathfrak{G}_{a_{1},a_{2}}(1,s)=n!(-2\iota)^{n}\binom{- \frac{a_{2}}{m}}{n}\mathfrak{G}_{a_{1},a_{2}+nm}(1,s)+C^{\omega}((0,1])\cap C^{ \omega}([-1,0)) \tag{4.2}\]
and
\[\frac{d^{n}}{ds^{n}}\mathfrak{G}_{a_{1},a_{2}}=\iota^{n}n!\sum_{0\leq j\leq n}(-1)^ {n-j}\binom{-\frac{a_{1}}{m}}{j}\binom{-\frac{a_{2}}{m}}{n-j}\mathfrak{G}_{a_{1} +jm,a_{2}+(n-j)m}. \tag{4.3}\]
Suppose that \(a_{1},a_{2}\) are integers such that \(a_{1}+a_{2}>m\). Then for every \(s\in(0,1]\),
\[\mathfrak{G}^{0}_{a_{1},a_{2}}(1,s) =\int_{-1}^{1}G_{0}(v+\iota s)^{-a_{1}}\overline{G_{0}(v+\iota s) }^{-a_{2}}dv\] \[=s^{-\frac{a_{1}+a_{2}-m}{m}}\int_{-1/s}^{1/s}G_{0}(x+\iota)^{-a_ {1}}\overline{G_{0}(x+\iota)}^{-a_{2}}dx.\]
Therefore,
\[\lim_{s\to 0^{+}}s^{\frac{a_{1}+a_{2}-m}{m}}\mathfrak{G}^{0}_{a_{1},a_{2}}(1,s )=\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m}):=\int_{\mathbb{R}}G_{0}(x+ \iota)^{-a_{1}}\overline{G_{0}(x+\iota)}^{-a_{2}}dx.\]
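For instance, in the simplest case \(m=2\) and \(a_{1}=a_{2}=2\), we have \(G_{0}(v+\iota s)^{-2}\overline{G_{0}(v+\iota s)}^{-2}=(v^{2}+s^{2})^{-1}\), so everything can be computed by hand:
\[\mathfrak{G}^{0}_{2,2}(1,s)=\int_{-1}^{1}\frac{dv}{v^{2}+s^{2}}=\frac{2}{s}\arctan\frac{1}{s}\quad\text{and}\quad\lim_{s\to 0^{+}}s\,\mathfrak{G}^{0}_{2,2}(1,s)=\pi=\int_{\mathbb{R}}\frac{dx}{x^{2}+1}=\mathfrak{B}(1,1).\]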
Note that, by change of variables,
\[\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})=\int_{0}^{\pi}\frac{e^{\iota \frac{a_{2}-a_{1}}{m}t}}{\sin^{-\frac{a_{1}+a_{2}}{m}+2}t}dt\]
and for \(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m}\notin\mathbb{Z}_{\leq 0}\),
\[\int_{0}^{\pi}\frac{e^{\iota\frac{a_{2}-a_{1}}{m}t}}{\sin^{-\frac{a_{1}+a_{2}}{m}+2}t}dt=\frac{\pi e^{\iota\frac{a_{2}-a_{1}}{2m}\pi}}{2^{\frac{a_{1}+a_{2}}{m}-2}}\frac{\Gamma(\tfrac{a_{1}}{m}+\tfrac{a_{2}}{m}-1)}{\Gamma(\tfrac{a_{1}}{m})\Gamma(\tfrac{a_{2}}{m})}.\]
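The limit defining \(\mathfrak{B}\) can be sanity-checked numerically. The sketch below is not part of the argument; it assumes, as in §3, that \(G_{0}(z)=z^{1/m}\) is the principal branch of the \(m\)-th root, and the function names are ours.

```python
import cmath
import math

def frak_B_integral(a1, a2, m, N=4000):
    """Simpson approximation of the integral of G_0(x+i)^{-a1} conj(G_0(x+i))^{-a2}
    over the real line, with G_0(z) = z^(1/m) the principal branch, via x = tan(t)."""
    eps = 1e-9
    lo, hi = -math.pi / 2 + eps, math.pi / 2 - eps
    h = (hi - lo) / N
    total = 0j
    for k in range(N + 1):
        t = lo + k * h
        x = math.tan(t)
        z = complex(x, 1.0)
        g = z ** (-a1 / m) * z.conjugate() ** (-a2 / m)
        w = 1 if k in (0, N) else (4 if k % 2 == 1 else 2)  # Simpson weights
        total += w * g / math.cos(t) ** 2                   # sec^2(t) is the Jacobian dx/dt
    return total * h / 3

def frak_B_closed(a1, a2, m):
    """Closed form: pi e^{i pi (a2-a1)/(2m)} / 2^{(a1+a2)/m - 2}
    * Gamma((a1+a2)/m - 1) / (Gamma(a1/m) Gamma(a2/m))."""
    x, y = a1 / m, a2 / m
    return (math.pi * cmath.exp(1j * math.pi * (y - x) / 2) / 2 ** (x + y - 2)
            * math.gamma(x + y - 1) / (math.gamma(x) * math.gamma(y)))

# m = 2, a1 = a2 = 2: the integrand is 1/(x^2+1), so the integral is pi.
I1 = frak_B_integral(2, 2, 2)
# m = 2, a1 = 1, a2 = 3: the integral of (x+i)^{-1/2}(x-i)^{-3/2} equals 2i.
I2 = frak_B_integral(1, 3, 2)
```

Both quadrature values agree with the closed form to well below the quadrature tolerance.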
For any pair \(x,y\) of real numbers such that \(x,y\notin\mathbb{Z}_{\leq 0}\) and \(x+y\notin\mathbb{Z}_{\leq 1}\) let
\[\mathfrak{B}(x,y)=\frac{\pi e^{\iota\frac{\pi}{2}(y-x)}}{2^{x+y-2}(x+y-1)B(x,y)}=\frac{\pi e^{\iota\frac{\pi}{2}(y-x)}}{2^{x+y-2}}\frac{\Gamma(x+y-1)}{\Gamma(x)\Gamma(y)}\neq 0.\]
Note that
\[\overline{\mathfrak{B}(x,y)}=\mathfrak{B}(y,x)=e^{-\iota\pi(y-x)}\mathfrak{B }(x,y). \tag{4.4}\]
By (3.21), for any pair of integers \(a_{1},a_{2}\) such that \(a_{1}+a_{2}>m\) and \(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m}\notin\mathbb{Z}_{\leq 0}\),
\[\begin{split}\lim_{s\to 0^{+}}|s|^{\frac{a_{1}+a_{2}-m}{m}} \mathfrak{G}^{l}_{a_{1},a_{2}}(1,s)&=\theta_{0}^{2l(a_{2}-a_{1 })}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})\\ \lim_{s\to 0^{-}}|s|^{\frac{a_{1}+a_{2}-m}{m}}\mathfrak{G}^{l}_{a_ {1},a_{2}}(1,s)&=\theta_{0}^{(2l+1)(a_{2}-a_{1})}\mathfrak{B}( \tfrac{a_{1}}{m},\tfrac{a_{2}}{m}).\end{split} \tag{4.5}\]
In view of (3.26), if \(\tfrac{a_{1}}{m}\in\mathbb{Z}_{\leq 0}\) or \(\tfrac{a_{2}}{m}\in\mathbb{Z}_{\leq 0}\) then the limit is zero. For this reason, we extend the definition of the function \(\mathfrak{B}\) by letting
\[\mathfrak{B}(x,y)=0\text{ if }x\in\mathbb{Z}_{\leq 0}\text{ or }y\in\mathbb{Z}_{\leq 0}. \tag{4.6}\]
**Lemma 4.1**.: _Suppose that \(a=a_{1}+a_{2}>m\). For every \(0<r<1\) there exist \(\rho^{\pm},\varrho^{\pm}\in C^{\omega}([0,r])\) such that if \(0<u\leq 1\) and \(0<|s|\leq ru\) then_
\[\begin{split}\mathfrak{G}^{l}_{a_{1},a_{2}}(u,s)& =\!\theta_{0}^{2l(a_{2}-a_{1})}\big{(}\mathfrak{B}(\tfrac{a_{1}} {m},\tfrac{a_{2}}{m})|s|^{-\frac{a-m}{m}}+\rho^{+}(|s|)+u^{-\frac{a-m}{m}}\varrho ^{+}(\tfrac{|s|}{u})\big{)}\text{ if }s>0\\ \mathfrak{G}^{l}_{a_{1},a_{2}}(u,s)&\!=\!\theta_{0} ^{(2l+1)(a_{2}-a_{1})}\big{(}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})|s| ^{-\frac{a-m}{m}}\!+\!\rho^{-}(|s|)\!+\!u^{-\frac{a-m}{m}}\varrho^{-}(\tfrac{ |s|}{u})\big{)}\text{ if }s<0.\end{split} \tag{4.7}\]
Proof.: In view of (3.22) and (4.4), it suffices to show the first line of (4.7) for \(l=0\).
By change of variables used twice, for every \(s,u\in(0,1]\),
\[\mathfrak{G}^{0}_{a_{1},a_{2}}(u,s) =s^{-\frac{a-m}{m}}\int_{-1/s}^{u/s}G_{0}(x+\iota)^{-a_{1}} \overline{G_{0}(x+\iota)}^{-a_{2}}dx\] \[=s^{-\frac{a-m}{m}}\Big{(}\int_{-\infty}^{+\infty}-\int_{-\infty} ^{-1/s}-\int_{u/s}^{+\infty}\Big{)}G_{0}(x+\iota)^{-a_{1}}\overline{G_{0}(x+ \iota)}^{-a_{2}}dx\] \[=s^{-\frac{a-m}{m}}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m })+s^{1-\frac{a}{m}}\Big{(}\int_{0}^{s/u}\frac{\xi_{+}(t)}{t^{2-\frac{a}{m}}} dt+\int_{0}^{s}\frac{\xi_{-}(t)}{t^{2-\frac{a}{m}}}dt\Big{)},\]
where \(\xi_{\pm}:\mathbb{R}\to\mathbb{C}\), \(\xi_{\pm}(t)=-G_{0}(\pm 1+\iota t)^{-a_{1}}\overline{G_{0}(\pm 1+\iota t)}^{-a_{2}}\), is an analytic map whose Taylor series at \(0\) has radius of convergence \(1\). Hence for every \(0<r<1\) there exist coefficients \(c_{n}^{\pm}\) with \(\sum_{n\geq 0}|c_{n}^{\pm}|r^{n}<+\infty\) such that \(\sum_{n\geq 0}c_{n}^{\pm}t^{n}\) tends to \(\xi_{\pm}(t)\) uniformly on \([0,r]\). As \(\frac{a}{m}>1\),
\[\frac{1}{t^{2-\frac{a}{m}}}\sum_{n\geq 1}c_{n}^{\pm}t^{n}=\sum_{n\geq 1}c_{n}^{\pm} t^{(n-1)+\frac{a}{m}-1}\text{ tends on }[0,r]\text{ uniformly to }\frac{\xi_{\pm}(t)-c_{0}^{\pm}}{t^{2-\frac{a}{m}}}.\]
It follows that
\[s^{1-\frac{a}{m}}\int_{0}^{s}\frac{(\xi_{\pm}(t)-c_{0}^{\pm})}{t^{2-\frac{a}{ m}}}dt=s^{1-\frac{a}{m}}\sum_{n\geq 1}c_{n}^{\pm}\int_{0}^{s}t^{(n-1)+\frac{a}{m }-1}dt=\sum_{n\geq 1}\frac{c_{n}^{\pm}s^{n}}{n+\frac{a}{m}-1}.\]
Since \(\sum_{n\geq 0}|c_{n}^{\pm}|r^{n}<+\infty\), the map \(s^{1-\frac{a}{m}}\int_{0}^{s}\frac{(\xi_{\pm}(t)-c_{0}^{\pm})}{t^{2-\frac{a}{ m}}}dt\in C^{\omega}([0,r])\). Moreover,
\[s^{1-\frac{a}{m}}\int_{0}^{s}\frac{\xi_{\pm}(t)}{t^{2-\frac{a}{m}}}dt=\frac{c _{0}^{\pm}}{\frac{a}{m}-1}+s^{1-\frac{a}{m}}\int_{0}^{s}\frac{(\xi_{\pm}(t)-c_ {0}^{\pm})}{t^{2-\frac{a}{m}}}dt,\]
so \(\widetilde{\xi}_{\pm}(s)=s^{1-\frac{a}{m}}\int_{0}^{s}\frac{\xi_{\pm}(t)}{t^ {2-\frac{a}{m}}}dt\in C^{\omega}([0,r])\). As
\[\mathfrak{G}^{0}_{a_{1},a_{2}}(u,s)=s^{-\frac{a-m}{m}}\mathfrak{B}(\tfrac{a_ {1}}{m},\tfrac{a_{2}}{m})+u^{-\frac{a-m}{m}}\widetilde{\xi}_{+}(s/u)+ \widetilde{\xi}_{-}(s)\text{ if }0\leq s/u\leq r,\]
this completes the proof of (4.7).
By definition, for every natural number \(n\) if \(x,y\notin\mathbb{Z}\) and \(x+y\notin\mathbb{Z}_{\leq 1}\) then
\[(2\iota)^{n}\binom{-y}{n}\mathfrak{B}(x,y+n)=\binom{x+y+n-2}{n}\mathfrak{B}(x, y)=(-2\iota)^{n}\binom{-x}{n}\mathfrak{B}(x+n,y). \tag{4.8}\]
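As a quick numerical sanity check of (4.8), using only the closed formula for \(\mathfrak{B}\) above (the helper names are ours, not the paper's):

```python
import cmath
import math

def binom(a, n):
    """Generalized binomial coefficient a(a-1)...(a-n+1)/n!."""
    p = 1.0
    for i in range(n):
        p *= a - i
    return p / math.factorial(n)

def frak_B(x, y):
    """frak_B(x,y) = pi e^{i pi (y-x)/2} / 2^{x+y-2} * Gamma(x+y-1)/(Gamma(x)Gamma(y))."""
    return (math.pi * cmath.exp(1j * math.pi * (y - x) / 2) / 2 ** (x + y - 2)
            * math.gamma(x + y - 1) / (math.gamma(x) * math.gamma(y)))

# Check the three expressions in (4.8) at a generic non-integer point.
x, y, n = 0.3, 0.9, 2
lhs = (2j) ** n * binom(-y, n) * frak_B(x, y + n)
mid = binom(x + y + n - 2, n) * frak_B(x, y)
rhs = (-2j) ** n * binom(-x, n) * frak_B(x + n, y)
```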
We can extend the domain of the function \(\mathfrak{B}\) once more by adding the pairs \((x,y)\) such that \(x,y\notin\mathbb{Z}\) and \(x+y\in\mathbb{Z}_{\leq 1}\). For every such pair we let \(\mathfrak{B}(x,y)=\frac{\pi e^{\iota\frac{\pi}{2}(y-x)}}{2^{x+y-2}}\frac{\Gamma(x+y-1)}{\Gamma(x)\Gamma(y)}\), where we adopt the convention \(\Gamma(0):=\lim_{x\to 0}x\Gamma(x)=1\) and \(\Gamma(-n):=\frac{1}{(-1)\cdots(-n)}=\frac{(-1)^{n}}{n!}\) for any \(n\in\mathbb{N}\). Then for any \(n\in\mathbb{N}\) we also have
\[\binom{-x}{n}\mathfrak{B}(x+n,y-n)=(-1)^{n}\binom{-y+n}{n}\mathfrak{B}(x,y). \tag{4.9}\]
The extended \(\Gamma\)-function satisfies \(\Gamma(x+1)=x\Gamma(x)\) for all \(x\in\mathbb{R}\setminus\{0\}\) and \(\Gamma(1)=\Gamma(0)=1\). It follows that (4.8) holds even when \(x+y+n\in\mathbb{Z}_{\leq 1}\).
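For instance, \(\Gamma(-1)=\frac{1}{-1}=-1\) and \(\Gamma(-2)=\frac{1}{(-1)(-2)}=\frac{1}{2}\), and indeed \(\Gamma(0)=(-1)\cdot\Gamma(-1)=1\) and \(\Gamma(-1)=(-2)\cdot\Gamma(-2)=-1\), in accordance with \(\Gamma(x+1)=x\Gamma(x)\).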
Finally note that, if \(x+y=1\) then we also have
\[\mathfrak{B}(x,y)=\frac{2\pi e^{\iota\pi(y-1/2)}}{\Gamma(1-y)\Gamma(y)}=-2\iota e^{\iota\pi y}\sin(\pi y)=1-e^{2\iota\pi y}=1+e^{\iota\pi(y-x)}. \tag{4.10}\]
**Lemma 4.2**.: _Suppose that \(a_{1}+a_{2}=m\). There exist \(\rho^{\pm},\varrho^{\pm}\in C^{\omega}([0,1])\) such that for all \(0<u\leq 1\) and \(0<|s|\leq u\),_
\[\mathfrak{G}^{l}_{a_{1},a_{2}}(u,s) =\!\theta_{0}^{2l(a_{2}-a_{1})}\!\left(\!-\mathfrak{B}(\tfrac{a_{ 1}}{m},\tfrac{a_{2}}{m})\!\log|s|\!+\!\log u\!+\!\rho^{+}(|s|)\!+\!\varrho^{+} (\tfrac{|s|}{u})\right)\text{ if }s>0\] \[\mathfrak{G}^{l}_{a_{1},a_{2}}(u,s) =\!\theta_{0}^{(2l+1)(a_{2}-a_{1})}\!\left(\!-\mathfrak{B}( \tfrac{a_{1}}{m},\tfrac{a_{2}}{m})\!\log|s|\!+\!\theta_{0}^{a_{2}-a_{1}}\log u \!+\!\rho^{-}(|s|)\!+\!\varrho^{-}(\tfrac{|s|}{u})\right)\text{ if }s<0. \tag{4.11}\]
Proof.: In view of (3.22) and (4.4), it suffices to show the first line of (4.11) for \(l=0\).
By change of variables, for every \(u,s\in(0,1]\),
\[\psi(u,s) :=\int_{0}^{u}G_{0}(v+\iota s)^{-a_{1}}\overline{G_{0}(v+\iota s)}^{-a_{2}}dv-\int_{0}^{u}\frac{1}{v-\iota s}dv\] \[=\int_{0}^{u}\left(\left(\frac{\overline{G_{0}(v+\iota s)}}{G_{0}(v+\iota s)}\right)^{a_{1}}-1\right)\frac{dv}{v-\iota s}=\int_{0}^{u/s}\left(\left(\frac{\overline{G_{0}(x+\iota)}}{G_{0}(x+\iota)}\right)^{a_{1}}-1\right)\frac{dx}{x-\iota}.\]
It follows that \(\psi(u,s)=\widetilde{\psi}(s/u)\), where
\[\widetilde{\psi}^{\prime}(x)=-\frac{1}{x^{2}}\frac{\left(\frac{\overline{G_{0}(1/x+\iota)}}{G_{0}(1/x+\iota)}\right)^{a_{1}}-1}{1/x-\iota}=\frac{\left(\frac{\overline{G_{0}(1+\iota x)}}{G_{0}(1+\iota x)}\right)^{a_{1}}-1}{x}\frac{1}{\iota x-1}.\]
As the map \(x\mapsto\left(\frac{\overline{G_{0}(1+\iota x)}}{G_{0}(1+\iota x)}\right)^{a_{1}}-1\) is real analytic and vanishes at \(0\), \(\widetilde{\psi}^{\prime}\) is also analytic. Hence \(\widetilde{\psi}\in C^{\omega}(\mathbb{R})\). Moreover,
\[\int_{0}^{u}\frac{1}{v-\iota s}dv=\int_{0}^{u}\frac{v+\iota s}{v^{2}+s^{2}}dv =\log\sqrt{u^{2}+s^{2}}-\log s+\iota\operatorname{arccot}(s/u).\]
Hence
\[\int_{0}^{u}G_{0}(v+\iota s)^{-a_{1}}\overline{G_{0}(v+\iota s)}^{-a_{2}}dv=- \log(s/u)+\varrho^{+}(s/u),\]
where \(\varrho^{+}(x)=\log\sqrt{1+x^{2}}+\widetilde{\psi}(x)+\iota\operatorname{ arccot}(x)\) is analytic. In particular,
\[\int_{0}^{1}G_{0}(v+\iota s)^{-a_{1}}\overline{G_{0}(v+\iota s)}^{-a_{2}}dv=- \log s+\varrho^{+}(s). \tag{4.12}\]
Since
\[\int_{-1}^{0}G_{0}(v+\iota s)^{-a_{1}}\overline{G_{0}(v+\iota s)}^{-a_{2}}dv= \int_{0}^{1}G_{0}(-v+\iota s)^{-a_{1}}\overline{G_{0}(-v+\iota s)}^{-a_{2}}dv\]
and \(G_{0}(-v+\iota s)=\theta_{0}\overline{G_{0}(v+\iota s)}\) if \(s,v\in(0,1]\), we get
\[\int_{-1}^{0}G_{0}(v+\iota s)^{-a_{1}}\overline{G_{0}(v+\iota s)}^{-a_{2}}dv= \theta_{0}^{(a_{2}-a_{1})}\int_{0}^{1}G_{0}(v+\iota s)^{-a_{2}}\overline{G_{0 }(v+\iota s)}^{-a_{1}}dv.\]
In view of (4.12), this gives
\[\mathfrak{G}^{0}_{a_{1},a_{2}}(u,s)=-(1+\theta_{0}^{(a_{2}-a_{1})})\log s+ \log u+\theta_{0}^{(a_{2}-a_{1})}\overline{\varrho^{+}(s)}+\varrho^{+}(s/u).\]
By (4.10), \(1+\theta_{0}^{(a_{2}-a_{1})}=1+e^{\iota\pi(\frac{a_{2}}{m}-\frac{a_{1}}{m})}=\mathfrak{B}(\frac{a_{1}}{m},\frac{a_{2}}{m})\), which gives the first line of (4.11).
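To illustrate Lemma 4.2, take \(m=2\) and \(a_{1}=a_{2}=1\), so that \(G_{0}(v+\iota s)^{-1}\overline{G_{0}(v+\iota s)}^{-1}=(v^{2}+s^{2})^{-1/2}\). Then, for \(s>0\),
\[\mathfrak{G}^{0}_{1,1}(1,s)=\int_{-1}^{1}\frac{dv}{\sqrt{v^{2}+s^{2}}}=2\operatorname{arcsinh}\tfrac{1}{s}=-2\log|s|+2\log 2+O(s^{2}),\]
in agreement with (4.11), since \(\mathfrak{B}(\tfrac{1}{2},\tfrac{1}{2})=1+e^{0}=2\) by (4.10).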
**Lemma 4.3**.: _Suppose that \(a=a_{1}+a_{2}<m\) and \(\frac{a_{1}}{m},\frac{a_{2}}{m}\notin\mathbb{Z}\). If \(\frac{a}{m}\notin\mathbb{Z}\) then for every \(0<r<1\) there exist \(\rho^{\pm},\varrho^{\pm}\in C^{\omega}([0,r])\) such that if \(0<u\leq 1\) and \(0<|s|\leq ru\) then_
\[\begin{split}\mathfrak{G}^{l}_{a_{1},a_{2}}(u,s)& =\theta_{0}^{2l(a_{2}-a_{1})}\big{(}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})|s|^{\frac{m-a}{m}}+\rho^{+}(|s|)+u^{\frac{m-a}{m}}\varrho^{+ }(\tfrac{|s|}{u})\big{)}\text{ if }s>0\\ \mathfrak{G}^{l}_{a_{1},a_{2}}(u,s)&=\theta_{0}^{(2 l+1)(a_{2}-a_{1})}\big{(}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})|s|^{ \frac{m-a}{m}}+\rho^{-}(|s|)+u^{\frac{m-a}{m}}\varrho^{-}(\tfrac{|s|}{u}) \big{)}\text{ if }s<0.\end{split} \tag{4.13}\]
_If \(\frac{a}{m}\in\mathbb{Z}\) then there exist \(\rho^{\pm},\varrho^{\pm}\in C^{\omega}([0,1])\) and \(c_{\pm}\in\mathbb{C}\) such that if \(0<u\leq 1\) and \(0<|s|\leq u\) then_
\[\begin{split}\mathfrak{G}^{l}_{a_{1},a_{2}}(u,s)& =\theta_{0}^{2l(a_{2}-a_{1})}\Big{(}-\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})|s|^{\frac{m-a}{m}}\log|s|\\ &\quad+c_{+}|s|^{\frac{m-a}{m}}\log u+\rho^{+}(|s|)+u^{\frac{m-a }{m}}\varrho^{+}(\tfrac{|s|}{u})\Big{)}\text{ if }s>0\\ \mathfrak{G}^{l}_{a_{1},a_{2}}(u,s)&=\theta_{0}^{(2 l+1)(a_{2}-a_{1})}\Big{(}-\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})|s|^{ \frac{m-a}{m}}\log|s|\\ &\quad+c_{-}|s|^{\frac{m-a}{m}}\log u+\rho^{-}(|s|)+u^{\frac{m-a }{m}}\varrho^{-}(\tfrac{|s|}{u})\Big{)}\text{ if }s<0.\end{split} \tag{4.14}\]
Proof.: In view of (3.22) and (4.4), it suffices to show the first line of (4.13) and (4.14) for \(l=0\). Let \(n=\lceil\frac{m-a}{m}\rceil\). By (4.3), for every \(k\geq 0\),
\[\frac{d^{k}}{ds^{k}}\mathfrak{G}^{0}_{a_{1},a_{2}}=\iota^{k}\sum_{0\leq j\leq k }k!(-1)^{k-j}\binom{-\frac{a_{1}}{m}}{j}\binom{-\frac{a_{2}}{m}}{k-j}\mathfrak{ G}^{0}_{a_{1}+jm,a_{2}+(k-j)m}.\]
A direct computation shows that if \(a=a_{1}+a_{2}<m\) and \(u\in[0,1]\) then
\[\mathfrak{G}^{0}_{a_{1},a_{2}}(u,0)=\frac{m}{m-a}(u^{\frac{m-a}{m}}+\theta_{0 }^{a_{2}-a_{1}}).\]
It follows that if \(k<\frac{m-a}{m}\) (i.e. \(k<n\)) then there exist \(c_{k,1},c_{k,0}\in\mathbb{C}\) such that
\[\frac{d^{k}}{ds^{k}}\mathfrak{G}^{0}_{a_{1},a_{2}}(u,0)=c_{k,1}u^{\frac{m-a}{m }-k}+c_{k,0}. \tag{4.15}\]
If \(\frac{a}{m}\notin\mathbb{Z}\) then for any \(0\leq j\leq n\) we have \(a_{1}+jm+a_{2}+(n-j)m=a+nm>m\). Hence, by Lemma 4.1, there exist \(\rho^{+}_{n},\varrho^{+}_{n}\in C^{\omega}([0,r])\) such that for all \(0<u\leq 1\) and \(0<s\leq ru\),
\[\begin{split}\frac{d^{n}}{ds^{n}}&\mathfrak{G}^{0}_ {a_{1},a_{2}}(u,s)=\rho^{+}_{n}(s)+u^{\frac{m-a}{m}-n}\varrho^{+}_{n}(\tfrac{s }{u})\\ &\quad+\iota^{n}\sum_{0\leq j\leq n}n!(-1)^{n-j}\binom{-\frac{a_{1 }}{m}}{j}\binom{-\frac{a_{2}}{m}}{n-j}\mathfrak{B}(\tfrac{a_{1}}{m}+j,\tfrac {a_{2}}{m}+n-j)s^{\frac{m-a}{m}-n}.\end{split}\]
If \(\frac{a}{m}\in\mathbb{Z}\) then \(n=\frac{m-a}{m}\) and \(a_{1}+jm+a_{2}+(n-j)m=a+nm=m\). Hence, by Lemma 4.2, there exist \(\rho^{+}_{n},\varrho^{+}_{n}\in C^{\omega}([0,1])\) such that for all \(0<u\leq 1\) and \(0<s\leq u\),
\[\begin{split}\frac{d^{n}}{ds^{n}}&\mathfrak{G}^{0}_ {a_{1},a_{2}}(u,s)=\rho^{+}_{n}(s)+\varrho^{+}_{n}(\tfrac{s}{u})\\ &\quad+\iota^{n}\sum_{0\leq j\leq n}n!(-1)^{n-j}\binom{-\frac{a_ {1}}{m}}{j}\binom{-\frac{a_{2}}{m}}{n-j}\big{(}-\mathfrak{B}(\tfrac{a_{1}}{m} +j,\tfrac{a_{2}}{m}+n-j)\log s+\log u\big{)}.\end{split}\]
By (4.8) (and its extension in the integer case),
\[\iota^{n}\sum_{0\leq j\leq n}n!(-1)^{n-j}\binom{-\frac{a_{1}}{m}}{j} \binom{-\frac{a_{2}}{m}}{n-j}\mathfrak{B}(\tfrac{a_{1}}{m}+j,\tfrac{a_{2}}{m}+n-j)\] \[=(-1/2)^{n}\sum_{0\leq j\leq n}n!\binom{n}{j}\binom{\frac{a}{m}+n-2 }{n}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})=(-1)^{n}n!\binom{\frac{a}{ m}+n-2}{n}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m}).\]
Therefore, in the non-integer case, for all \(0<u\leq 1\) and \(0<s\leq ru\),
\[\frac{d^{n}}{ds^{n}}\mathfrak{G}^{0}_{a_{1},a_{2}}(u,s)=\rho_{n}^{+}(s)+u^{\frac{m-a}{m}-n}\varrho_{n}^{+}(\tfrac{s}{u})+(-1)^{n}n!\binom{-(\frac{m-a}{m}-(n-1))}{n}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})s^{\frac{m-a}{m}-n}.\]
In the integer case, for all \(0<u\leq 1\) and \(0<s\leq u\),
\[\frac{d^{n}}{ds^{n}}\mathfrak{G}^{0}_{a_{1},a_{2}}(u,s)=\rho_{n}^{+}(s)+\varrho _{n}^{+}(\tfrac{s}{u})-n!\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac{a_{2}}{m})\log s +c_{n}\log u.\]
Since
\[\frac{d^{k}}{ds^{k}}\mathfrak{G}^{0}_{a_{1},a_{2}}(u,s)=\frac{d^{k}}{ds^{k}} \mathfrak{G}^{0}_{a_{1},a_{2}}(u,0)+\int_{0}^{s}\frac{d^{k+1}}{ds^{k+1}} \mathfrak{G}^{0}_{a_{1},a_{2}}(u,t)dt\text{ for all }0\leq k<n,\]
using the formulae for \(\frac{d^{n}}{ds^{n}}\mathfrak{G}^{0}_{a_{1},a_{2}}\) together with (4.15) and induction, we obtain (4.13) and (4.14).
_Remark 4.4_.: To summarize, by Lemmas 4.1, 4.2 and 4.3, for any pair of integers \(a_{1},a_{2}\) with \(\frac{a_{1}}{m},\frac{a_{2}}{m}\notin\mathbb{Z}\), if \(\frac{m-a}{m}\notin\mathbb{Z}\) (where \(a=a_{1}+a_{2}\)) or \(\frac{m-a}{m}\in\mathbb{Z}_{<0}\), then
\[\mathfrak{G}^{l}_{a_{1},a_{2}}(1,s) =\theta_{0}^{2l(a_{2}-a_{1})}\mathfrak{B}(\tfrac{a_{1}}{m},\tfrac {a_{2}}{m})|s|^{\frac{m-a}{m}}+C^{\omega}((0,1])\] \[\mathfrak{G}^{l}_{a_{1},a_{2}}(1,s) =\theta_{0}^{(2l+1)(a_{2}-a_{1})}\mathfrak{B}(\tfrac{a_{1}}{m}, \tfrac{a_{2}}{m})|s|^{\frac{m-a}{m}}+C^{\omega}([-1,0)). \tag{4.16}\]
If \(\frac{m-a}{m}\in\mathbb{Z}_{\geq 0}\) then
\[\mathfrak{G}^{l}_{a_{1},a_{2}}(1,s) =-\theta_{0}^{2l(a_{2}-a_{1})}\mathfrak{B}(\tfrac{a_{1}}{m}, \tfrac{a_{2}}{m})|s|^{\frac{m-a}{m}}\log|s|+C^{\omega}((0,1])\] \[\mathfrak{G}^{l}_{a_{1},a_{2}}(1,s) =-\theta_{0}^{(2l+1)(a_{2}-a_{1})}\mathfrak{B}(\tfrac{a_{1}}{m}, \tfrac{a_{2}}{m})|s|^{\frac{m-a}{m}}\log|s|+C^{\omega}([-1,0)). \tag{4.17}\]
Indeed, in the non-integer case, we obtain the analyticity of the remainder only on intervals \([-r,0]\) and \([0,r]\) for any \(0<r<1\). Nevertheless, for any choice of integer \(a_{1},a_{2}\), the function \(\mathfrak{G}^{l}_{a_{1},a_{2}}(1,s)\) is analytic on \([r,1]\) and \([-1,-r]\) for any \(0<r<1\). This gives our claim.
### Evaluation of asymptotic factors for \(\varphi_{f,l}\)
The behaviour of higher derivatives of \(\varphi_{f,l}\) at zero is governed by linear combinations of the invariant distributions \(\partial_{j}^{k}\). For this reason, we define a family of new functionals \(\mathscr{C}_{l}^{k}:C^{k}(\mathcal{D})\to\mathbb{C}\) for \(k\geq 0\) and \(0\leq l<2m\) given by
\[\mathscr{C}_{l}^{k}(f)=\sum_{\begin{subarray}{c}0\leq j\leq k\wedge(m-2)\\ j\neq k-(m-1)\,\mathrm{mod}\,m\end{subarray}}\theta_{0}^{l(2j-k)}\mathfrak{B}( \tfrac{(m-1)-j}{m},\tfrac{(m-1)-(k-j)}{m})\partial_{j}^{k}(f).\]
Comparing with (1.5), the functionals \(\mathscr{C}_{l}^{k}\) play a key role in understanding the meaning of the distribution \(\mathfrak{C}_{\sigma,l}^{k}\).
From now on, we adopt the convention \(\binom{0}{n}:=\lim_{x\to 0}\binom{x}{n}/x=\frac{(-1)^{n-1}}{n}\).
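For instance, \(\binom{0}{2}=\lim_{x\to 0}\binom{x}{2}/x=\lim_{x\to 0}\frac{x-1}{2}=-\frac{1}{2}\), in accordance with \(\frac{(-1)^{n-1}}{n}\) for \(n=2\).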
**Theorem 4.5**.: _For any \(k\geq 0\) let \(n=\lceil\frac{k-(m-2)}{m}\rceil\) and \(b=n-\frac{k-(m-2)}{m}\). Suppose that \(f\in C^{k\vee(n+1)}(\mathcal{D})\) is such that \(\partial_{i}^{j}(f)=0\) for all \(0\leq j<k\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\). Then \(\varphi_{f,l}\in C^{n+\operatorname{P_{b}}}([-1,0)\cup(0,1])\) and there exists \(C>0\) such that \(\|\varphi_{f,l}\|_{C^{n+\operatorname{P_{b}}}([-1,0)\cup(0,1])}\leq C\|f\|_{C^ {k\vee(n+1)}(\mathcal{D})}\). Moreover, for every \(0\leq l<m\),_
\[\lim_{s\to 0^{+}}|s|^{b+1}D^{n+1}\varphi_{f,l}(s) =(-1)^{n+1}\frac{(n+1)!}{k!}\binom{b}{n+1}\mathscr{C}_{2l}^{k}(f) \tag{4.18}\] \[\lim_{s\to 0^{-}}|s|^{b+1}D^{n+1}\varphi_{f,l}(s) =\frac{(n+1)!}{k!}\binom{b}{n+1}\mathscr{C}_{2l+1}^{k}(f). \tag{4.19}\]
_Remark 4.6_.: Before the proof, let us note that
\[k\vee(n+1)=\left\{\begin{array}{cl}k+1&\text{if $k=0$ or $(k=1$ with $m=2$)}\\ k&\text{otherwise.}\end{array}\right.\]
Indeed, the inequality \(\frac{k-(m-2)}{m}+1\leq k\) is equivalent to \(2\leq k(m-1)\). It follows that if \(k\geq 1\) with \(m\geq 3\) or \(k\geq 2\) then \(n<k\), so \(k\vee(n+1)=k\).
Proof.: Let us decompose \(f=f_{<k}+f_{k}+e_{f}\) with
\[f_{<k}(\omega,\overline{\omega})=\sum_{0\leq j<k}\frac{1}{j!} \sum_{0\leq i\leq j}\binom{j}{i}\frac{\partial^{j}f}{\partial\omega^{i} \partial\overline{\omega}^{j-i}}(0,0)\omega^{i}\overline{\omega}^{j-i},\] \[f_{k}(\omega,\overline{\omega})=\frac{1}{k!}\sum_{0\leq i\leq k} \binom{k}{i}\frac{\partial^{k}f}{\partial\omega^{i}\partial\overline{\omega} ^{k-i}}(0,0)\omega^{i}\overline{\omega}^{k-i}.\]
By Lemma 3.10,
\[\varphi_{f_{<k},l}\in C^{\omega}([-1,0))\cap C^{\omega}((0,1]) \text{ for }0\leq l<m, \tag{4.20}\] \[\|\varphi_{f_{<k},l}\|_{C^{n+1}([-1,0)\cup(0,1])}\leq C_{k}^{n+1}\|f\|_{C^{k}(\mathcal{D})}. \tag{4.21}\]
Since \(D^{j}(f_{k}+e_{f})=0\) for every \(0\leq j<k\), in view of (3.16), if \((j_{1},j_{2})\in\mathbb{Z}_{\geq 0}^{2}\) is such that \(j_{1}+j_{2}=j\leq n+1\) then
\[\left|\frac{\partial^{j}\mathscr{F}_{f_{k}+e_{f},l}(z,\overline{z })}{\partial z^{j_{1}}\partial\overline{z}^{j_{2}}}\right| =O\big{(}\|f_{k}+e_{f}\|_{C^{k\vee(n+1)}(\mathcal{D})}\langle| \Im z|\rangle^{-(\frac{(m-2)-k}{m}+j)}\big{)}\] \[=O\big{(}\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\langle|\Im z|\rangle ^{(n+1-j)-(b+1)}\big{)}.\]
As \(\frac{d^{j}}{ds^{j}}=\big{(}\iota\left(\frac{\partial}{\partial z}-\frac{ \partial}{\partial\overline{z}}\right)\big{)}^{j}\), this gives
\[|D^{j}\varphi_{f_{k}+e_{f},l}(s)| =O\big{(}\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\big{)}\text{ if }0\leq j\leq n-1 \tag{4.22}\] \[|D^{n}\varphi_{f_{k}+e_{f},l}(s)| =O\big{(}\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\langle|s|\rangle^{-b}\big{)} \tag{4.23}\] \[|D^{n+1}\varphi_{f_{k}+e_{f},l}(s)| =O\big{(}\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\langle|s|\rangle^{-(b+1)}\big{)}. \tag{4.24}\]
By (4.23),
\[\|D^{n}\varphi_{f_{k}+e_{f},l}\|_{L^{1}}=O\big{(}\|f\|_{C^{k\vee(n+1)}( \mathcal{D})}\big{)}.\]
In view of (4.21), (4.22) and (4.24), this gives
\[\|\varphi_{f,l}\|_{C^{n+\operatorname{P_{b}}}}\leq\|\varphi_{f_{<k},l}\|_{C^{n +\operatorname{P_{b}}}}+\|\varphi_{f_{k}+e_{f}}\|_{C^{n+\operatorname{P_{b}}}}=O \big{(}\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\big{)}.\]
Since \(D^{j}(e_{f})=0\) for every \(0\leq j\leq k\), we also have
\[|D^{n+1}\varphi_{e_{f},l}(s)|=o(|s|^{-(b+1)}). \tag{4.25}\]
Indeed, if \(k\vee(n+1)=k+1\), i.e. \(k=n\) then, again by (3.16),
\[\big{\|}D^{n+1}\mathscr{F}_{e_{f},l}(z,\overline{z})\big{\|}=O(\langle|\Im z|\rangle^{-(\frac{(m-2)-(k+1)}{m}+n+1)})=O(|\Im z|^{-(b+1)+\frac{1}{m}}).\]
If \(k\vee(n+1)=k\), i.e. \(n+1\leq k\) then, by (3.18),
\[\big{\|}D^{n+1}\mathscr{F}_{e_{f},l}(z,\overline{z})\big{\|}=o(|\Im z|^{-( \frac{(m-2)-k}{m}+n+1)})=o(|\Im z|^{-(b+1)}).\]
Both yield (4.25).
Therefore, by (4.20) and (4.25),
\[|s|^{b+1}D^{n+1}\varphi_{f,l}(s)=|s|^{b+1}D^{n+1}\varphi_{f_{k},l}(s)+o(1). \tag{4.26}\]
By (3.30) (see the proof of Lemma 3.10),
\[\mathscr{F}_{f_{k},l}(1,s)\!=\!\frac{1}{k!}\sum_{\begin{subarray}{c}0\leq j \leq k\wedge(m-2)\\ j\neq k-(m-1)\,\mathrm{mod}\,m\end{subarray}}\partial_{j}^{k}(f)\mathfrak{G}_{( m-1)-j,(m-1)-(k-j)}^{l}(1,s)\!+\!C^{\omega}((0,1])\!\cap\!C^{\omega}([-1,0)).\]
As \(\frac{m-((m-1)-j+(m-1)-(k-j))}{m}=\frac{k-(m-2)}{m}=n-b\), by (4.16), (4.17) and the definition of \(\mathscr{C}_{l}^{k}(f)\),
\[\varphi_{f_{k},l}(s)=\frac{\mathscr{C}_{2l}^{k}(f)}{k!}|s|^{n-b}+C^{\omega}(( 0,1]),\ \varphi_{f_{k},l}(s)=\frac{\mathscr{C}_{2l+1}^{k}(f)}{k!}|s|^{n-b}+C^{\omega}( [-1,0)) \tag{4.27}\]
if \(0<b<1\) and
\[\varphi_{f_{k},l}(s) =-\frac{\mathscr{C}_{2l}^{k}(f)}{k!}|s|^{n}\log|s|+C^{\omega}((0, 1])\] \[\varphi_{f_{k},l}(s) =-\frac{\mathscr{C}_{2l+1}^{k}(f)}{k!}|s|^{n}\log|s|+C^{\omega}([ -1,0)) \tag{4.28}\]
if \(b=0\). Differentiating \(n+1\) times, it follows that
\[D^{n+1}\varphi_{f_{k},l}(s) =|s|^{-(b+1)}(-1)^{n+1}\frac{(n+1)!}{k!}\binom{b}{n+1}\mathscr{C }_{2l}^{k}(f)+C^{\omega}((0,1])\] \[D^{n+1}\varphi_{f_{k},l}(s) =|s|^{-(b+1)}\frac{(n+1)!}{k!}\binom{b}{n+1}\mathscr{C}_{2l+1}^{ k}(f)+C^{\omega}([-1,0)).\]
Finally, by (4.20) and (4.26), this yields (4.18) and (4.19).
**Theorem 4.7**.: _Let \(k\geq 0\), \(0\leq l<m\) and \(\epsilon\in\{0,1\}\), and let \(n=\lceil\frac{k-(m-2)}{m}\rceil\) and \(b=n-\frac{k-(m-2)}{m}\) be as in Theorem 4.5. Suppose that \(f\in C^{k\vee(n+1)}(\mathcal{D})\) and \(\mathscr{C}_{2l+\epsilon}^{j}(f)=0\) for all \(0\leq j<k\). Then \(\varphi_{f,l}\in C^{n+\mathrm{P}_{b}}((0,(-1)^{\epsilon}])\) with_
\[\lim_{\begin{subarray}{c}s\to 0\\ s\in(0,(-1)^{\epsilon}]\end{subarray}}|s|^{b+1}D^{n+1}\varphi_{f,l}(s)=(-1)^ {(1-\epsilon)(n+1)}\frac{(n+1)!}{k!}\binom{b}{n+1}\mathscr{C}_{2l+\epsilon}^{ k}(f) \tag{4.29}\]
_and_
\[\text{there exists }C>0\text{ such that }\|\varphi_{f,l}\|_{C^{n+\mathrm{P}_{b}}((0,(-1)^{ \epsilon}])}\leq C\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}. \tag{4.30}\]
_In particular, if \(k\geq m-1\) then \(\varphi_{f,l}\in C^{\mathfrak{c}(\sigma,k)}((0,(-1)^{\epsilon}])\) and there exists \(C>0\) such that \(\|\varphi_{f,l}\|_{C^{\mathfrak{c}(\sigma,k)}((0,(-1)^{\epsilon}])}\leq C\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\)._
_On the other hand, if \(f\in C^{k\vee(n+1)}(\mathcal{D})\) is such that \(\varphi_{f,l}\in C^{r}((0,(-1)^{\epsilon}])\) for some \(r\in\mathbb{R}_{\eta}\) with \(0<v(r)\leq\mathfrak{o}(\sigma,k)\) then \(\mathscr{C}_{2l+\epsilon}^{j}(f)=0\) for all \(j\geq 0\) such that \(\mathfrak{o}(\sigma,j)<v(r)\)._
Proof.: We will focus only on the even case, when \(\epsilon=0\). The proof in the odd case proceeds in the same way. Let us decompose \(f=f_{<k}+f_{k}+e_{f}\), where \(f_{<k}=\sum_{0\leq j<k}f_{j}\) with
\[f_{j}(\omega,\overline{\omega})=\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i} \frac{\partial^{j}f}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0) \omega^{i}\overline{\omega}^{j-i}.\]
By (4.27), (4.28),
\[\begin{split}\varphi_{f_{j},l}(s)&=\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^{\frac{j-(m-2)}{m}}+C^{\omega}((0,1])\text{ if }j\neq m-2\ \operatorname{mod}m\\ \varphi_{f_{j},l}(s)&=-\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^{\frac{j-(m-2)}{m}}\log s+C^{\omega}((0,1])\text{ if }j=m-2\ \operatorname{mod}m.\end{split} \tag{4.31}\]
Since the operator \(f\mapsto f_{j}\) takes values in the finite-dimensional space of homogeneous polynomials of degree \(j\), for every \(0\leq j<k\) there exists \(C_{j}>0\) such that
\[\|\varphi_{f_{j},l}(s)-\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^{\frac{j-(m-2)}{m} }\|_{C^{n+\operatorname{P_{b}}}((0,1])}\leq C_{j}\|f\|_{C^{k}(\mathcal{D})} \text{ or }\]
\[\|\varphi_{f_{j},l}(s)+\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^{\frac{j-(m-2)}{m} }\log s\|_{C^{n+\operatorname{P_{b}}}((0,1])}\leq C_{j}\|f\|_{C^{k}(\mathcal{D })}.\]
If \(\mathscr{C}_{2l}^{j}(f)=0\) for all \(0\leq j<k\) then
\[\varphi_{f_{<k},l}\in C^{\omega}((0,1])\text{ and }\|\varphi_{f_{<k},l}\|_{C^{n+ \operatorname{P_{b}}}((0,1])}\leq\sum_{0\leq j<k}C_{j}\|f\|_{C^{k}(\mathcal{D})}. \tag{4.32}\]
Again, by Theorem 4.5 applied to \(f_{k}+e_{f}\), we have \(\varphi_{f_{k}+e_{f},l}\in C^{n+\operatorname{P_{b}}}((0,1])\),
\[\|\varphi_{f_{k}+e_{f},l}\|_{C^{n+\operatorname{P_{b}}}((0,1])}\leq C\|f_{k}+e _{f}\|_{C^{k\vee(n+1)}(\mathcal{D})}\leq C^{\prime}\|f\|_{C^{k\vee(n+1)}( \mathcal{D})}\]
and
\[\lim_{s\to 0^{+}}s^{b+1}D^{n+1}\varphi_{f_{k}+e_{f},l}(s)=(-1)^{(n+1)}\frac{(n+ 1)!}{k!}\binom{b}{n+1}\mathscr{C}_{2l}^{k}(f).\]
Since \(\varphi_{f,l}=\varphi_{f_{<k},l}+\varphi_{f_{k}+e_{f},l}\), in view of (4.32), this yields (4.29) and (4.30). As \(n-b=\mathfrak{o}(\sigma,k)\), by Remark 2.1, this gives \(\varphi_{f,l}\in C^{\mathfrak{e}(\sigma,k)}((0,1])\).
Now suppose that \(f\in C^{k\vee(n+1)}(\mathcal{D})\) is such that \(\varphi_{f,l}\in C^{r}((0,1])\) for some \(r\in\mathbb{R}_{\eta}\) with \(0<v(r)\leq\mathfrak{o}(\sigma,k)\). Choose \(m-2<j_{0}\leq k\) such that \(\mathfrak{o}(\sigma,j_{0}-1)<v(r)\leq\mathfrak{o}(\sigma,j_{0})\). By the first part of the theorem, \(\varphi_{f-f_{<j_{0}},l}\in C^{\mathfrak{e}(\sigma,j_{0})}((0,1])\). As \(\varphi_{f,l}\in C^{r}((0,1])\) and \(v(r)\leq\mathfrak{o}(\sigma,j_{0})\), it follows that \(\varphi_{f_{<j_{0}},l}\in C^{r}((0,1])\). In view of (4.31),
\[\varphi_{f_{<j_{0}},l}(s)=\sum_{\begin{subarray}{c}0\leq j<j_{0}\\ j\neq m-2\operatorname{mod}m\end{subarray}}\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^ {\frac{j-(m-2)}{m}}+\sum_{\begin{subarray}{c}0\leq j<j_{0}\\ j=m-2\operatorname{mod}m\end{subarray}}\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^{ \frac{j-(m-2)}{m}}(-\log s)+C^{\omega}((0,1]).\]
Therefore,
\[\sum_{\begin{subarray}{c}0\leq j<j_{0}\\ j\neq m-2\operatorname{mod}m\end{subarray}}\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^ {\frac{j-(m-2)}{m}}-\sum_{\begin{subarray}{c}0\leq j<j_{0}\\ j=m-2\operatorname{mod}m\end{subarray}}\frac{\mathscr{C}_{2l}^{j}(f)}{j!}s^{ \frac{j-(m-2)}{m}}\log s\in C^{r}((0,1])\]
with \(\frac{j-(m-2)}{m}\leq\mathfrak{o}(\sigma,j_{0}-1)<v(r)\) for \(0\leq j<j_{0}\). It follows that \(\mathscr{C}_{2l}^{j}(f)=0\) for \(0\leq j<j_{0}\).
By the proof of Theorem 4.7, we also have the following.
**Corollary 4.8**.: _Let \(k\geq 0\), \(0\leq l<m\) and \(\epsilon\in\{0,1\}\). Suppose that \(f\in C^{k\vee(n+1)}(\mathcal{D})\). Then_
\[\begin{split}\varphi_{f,l}(s)=&-\sum_{ \begin{subarray}{c}0\leq j<k\\ j=m-2\,\mathrm{mod}\,m\end{subarray}}\frac{\mathscr{C}_{2l+\epsilon}^{j}(f)}{j!}|s|^{\frac{j-(m-2)}{m}}\log|s|\\ &+\sum_{\begin{subarray}{c}0\leq j<k\\ j\neq m-2\,\mathrm{mod}\,m\end{subarray}}\frac{\mathscr{C}_{2l+\epsilon}^{j}(f)}{j!}|s|^{\frac{j-(m-2)}{m}}+C^{n+\mathrm{P}_{\mathrm{b}}}((0,(-1)^{\epsilon}]). \end{split} \tag{4.33}\]
### Basic properties of \(\mathscr{C}_{l}^{k}\)
Recall that \(\mathscr{C}_{l}^{k}:C^{k}(\mathcal{D})\to\mathbb{C}\) for \(k\geq 0\) and \(0\leq l<2m\) are given by
\[\mathscr{C}_{l}^{k}(f)=\sum_{\begin{subarray}{c}0\leq j\leq k\wedge(m-2)\\ j\neq k-(m-1)\,\mathrm{mod}\,m\end{subarray}}\theta_{0}^{l(2j-k)}\mathfrak{B}( \tfrac{(m-1)-j}{m},\tfrac{(m-1)-(k-j)}{m})\partial_{j}^{k}(f).\]
The functionals \(\mathscr{C}_{l}^{k}\), \(0\leq l<2m\) are not independent. By definition,
\[\mathscr{C}_{l+m}^{k}=(-1)^{k}\mathscr{C}_{l}^{k}\text{ for any }0\leq l<m. \tag{4.34}\]
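To see this, note that both (4.34) and the formulas below are consistent with \(\theta_{0}\) being a primitive \(2m\)-th root of unity (e.g. \(\theta_{0}=e^{\pi\iota/m}\)); we record this assumption here only for the reader's convenience. Then \(\theta_{0}^{m}=-1\) and

\[\mathscr{C}_{l+m}^{k}(f)=\sum_{j}\theta_{0}^{(l+m)(2j-k)}(\cdots)=\sum_{j}(-1)^{2j-k}\,\theta_{0}^{l(2j-k)}(\cdots)=(-1)^{k}\mathscr{C}_{l}^{k}(f),\]

where \((\cdots)\) abbreviates the factor \(\mathfrak{B}(\tfrac{(m-1)-j}{m},\tfrac{(m-1)-(k-j)}{m})\partial_{j}^{k}(f)\).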
Moreover, we can also recover the value of \(\partial_{j}^{k}\) from \(\mathscr{C}_{l}^{k}\). Indeed, for every \(0\leq j\leq k\wedge(m-2)\) with \(j\neq k-(m-1)\,\mathrm{mod}\,m\),
\[\mathfrak{B}(\tfrac{(m-1)-j}{m},\tfrac{(m-1)-(k-j)}{m})\partial_{j}^{k}=\frac{1}{2m}\sum_{0\leq l<2m}\theta_{0}^{l(k-2j)}\mathscr{C}_{l}^{k}=\frac{1}{m}\sum_{0\leq l<m}\theta_{0}^{l(k-2j)}\mathscr{C}_{l}^{k}.\]
Similarly, if \(k\wedge(m-2)<j\leq m-2\) or \(j=m-1\) or \(j=k-(m-1)\,\mathrm{mod}\,m\), then
\[\sum_{0\leq l<2m}\theta_{0}^{l(k-2j)}\mathscr{C}_{l}^{k}=2\sum_{0\leq l<m} \theta_{0}^{l(k-2j)}\mathscr{C}_{l}^{k}=0.\]
Together with (4.34) this gives all linear relations involving the functionals \(\mathscr{C}_{l}^{k}\).
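These inversion formulas rest on the orthogonality of the characters \(l\mapsto\theta_{0}^{2lj}\). Assuming again that \(\theta_{0}\) is a primitive \(2m\)-th root of unity, the underlying computation is

\[\frac{1}{2m}\sum_{0\leq l<2m}\theta_{0}^{l(k-2j)}\,\theta_{0}^{l(2j^{\prime}-k)}=\frac{1}{2m}\sum_{0\leq l<2m}\big(\theta_{0}^{2(j^{\prime}-j)}\big)^{l}=\begin{cases}1&\text{if }j^{\prime}=j\,\mathrm{mod}\,m,\\ 0&\text{otherwise},\end{cases}\]

since \(\theta_{0}^{2}\) is a primitive \(m\)-th root of unity, so the geometric sum vanishes whenever \(m\nmid j^{\prime}-j\). As the summation range in the definition of \(\mathscr{C}_{l}^{k}\) is contained in \([0,m-2]\), only the term \(j^{\prime}=j\) survives.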
Moreover, using (3.27), we obtain an elegant formula for \(\mathscr{C}_{l}^{k}\) in terms of the partial derivatives of the function \(f\). Indeed, if \(0\leq j\leq m-2\), \(j\neq k-(m-1)\,\mathrm{mod}\,m\) and \(0\leq n\leq\frac{k-j}{m}\) then, by (4.9),
\[\begin{split}\mathfrak{B}(\tfrac{(m-1)-j}{m},\tfrac{(m-1)-(k-j)}{m})\frac{\binom{\frac{(m-1)-j}{m}-1}{n}}{\binom{\frac{(k-j)-(m-1)}{m}}{n}}&=\mathfrak{B}(\tfrac{(m-1)-j-nm}{m}+n,\tfrac{(m-1)-(k-j)+nm}{m}-n)(-1)^{n}\frac{\binom{-\frac{(m-1)-j-nm}{m}}{n}}{\binom{-\frac{(m-1)-(k-j)+nm}{m}}{n}}\\ &=\mathfrak{B}(\tfrac{(m-1)-j-nm}{m},\tfrac{(m-1)-(k-j)+nm}{m}).\end{split}\]
By the definition of \(\partial_{j}^{k}\), it follows that
\[\begin{split}\mathscr{C}_{l}^{k}(f)&=\sum_{\begin{subarray}{c}0\leq j\leq k\wedge(m-2)\\ j\neq k-(m-1)\,\mathrm{mod}\,m\end{subarray}}\Bigg(\theta_{0}^{l(2j-k)}\mathfrak{B}(\tfrac{(m-1)-j}{m},\tfrac{(m-1)-(k-j)}{m})\\ &\qquad\qquad\sum_{0\leq n\leq\frac{k-j}{m}}\frac{\binom{\frac{(m-1)-j}{m}-1}{n}}{\binom{\frac{(k-j)-(m-1)}{m}}{n}}\binom{k}{j+nm}\frac{\partial^{k}f}{\partial\omega^{j+nm}\partial\overline{\omega}^{k-j-nm}}(0,0)\Bigg)\\ &=\sum_{\begin{subarray}{c}0\leq i\leq k\\ i\neq m-1\,\mathrm{mod}\,m\\ i\neq k-(m-1)\,\mathrm{mod}\,m\end{subarray}}\theta_{0}^{l(2i-k)}\binom{k}{i}\mathfrak{B}(\tfrac{(m-1)-i}{m},\tfrac{(m-1)-(k-i)}{m})\frac{\partial^{k}f}{\partial\omega^{i}\partial\overline{\omega}^{k-i}}(0,0).\end{split}\]
According to (4.6),
\[\mathscr{C}_{l}^{k}(f)=\sum_{0\leq i\leq k}\theta_{0}^{l(2i-k)}\binom{k}{i} \mathfrak{B}(\tfrac{(m-1)-i}{m},\tfrac{(m-1)-(k-i)}{m})\frac{\partial^{k}f}{ \partial\omega^{i}\partial\overline{\omega}^{k-i}}(0,0). \tag{4.35}\]
_Remark 4.9_.: This formula generalizes the one for \(C_{\alpha}^{\pm}(\varphi_{f,l})\), \(\alpha\in\mathcal{A}\), in [4, Theorem 9.1], with new functionals accounting for higher-order derivatives.
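For orientation, at the two lowest orders (4.35) specializes (granting the convention (4.6) invoked above for the degenerate values of \(\mathfrak{B}\)) to

\[\mathscr{C}_{l}^{0}(f)=\mathfrak{B}(\tfrac{m-1}{m},\tfrac{m-1}{m})f(0,0),\qquad\mathscr{C}_{l}^{1}(f)=\theta_{0}^{-l}\mathfrak{B}(\tfrac{m-1}{m},\tfrac{m-2}{m})\frac{\partial f}{\partial\overline{\omega}}(0,0)+\theta_{0}^{l}\mathfrak{B}(\tfrac{m-2}{m},\tfrac{m-1}{m})\frac{\partial f}{\partial\omega}(0,0),\]

so, up to the \(\mathfrak{B}\)-constants, the lowest-order functionals evaluate \(f\) and its first derivatives at the saddle.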
We now strengthen Theorem 3.11 by proving that \(F_{f}\) is also smooth (with some drop of regularity) on the closed sectors \(\overline{\mathcal{D}}(\frac{l}{2m},\frac{l+1}{2m})\).
**Theorem 4.10**.: _Fix \(k\geq m-1\) and \(0\leq l<2m\). Let \(m-1\leq\underline{k}\leq k\) be the natural number given by \(\widehat{\mathfrak{o}}(\sigma,\underline{k})=\underline{k}-(m-2)=\lceil \frac{k-(m-2)}{m}\rceil=\lceil\mathfrak{o}(\sigma,k)\rceil=:n\). Suppose that \(f\in C^{k\vee(n+1)}(\mathcal{D})\) is such that \(\partial_{i}^{j}(f)=0\) for all \(0\leq j<\underline{k}\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\,\mathrm{mod}\,m\) and \(\mathscr{C}_{l}^{j}(f)=0\) for all \(0\leq j<k\). Then the map \(F_{f}:\mathcal{D}(\frac{l}{2m},\frac{l+1}{2m})\to\mathbb{C}\) has a \(C^{\mathfrak{e}(\sigma,k)}\)-extension on \(\overline{\mathcal{D}}(\frac{l}{2m},\frac{l+1}{2m})\) and there exists \(C>0\) such that \(\|F_{f}\|_{C^{\mathfrak{e}(\sigma,k)}(\overline{\mathcal{D}}(\frac{l}{2m}, \frac{l+1}{2m}))}\leq C\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}\)._
Proof.: We focus only on the even sectors \(\mathcal{D}(\frac{2l}{2m},\frac{2l+1}{2m})\). The proof in the odd case proceeds in the same way. By Theorem 3.11, for every \(0<\varepsilon<1/2\) the map \(F_{f}\) has a \(C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}\)-extension on \(\overline{\mathcal{D}}(\frac{2l+\varepsilon}{2m},\frac{2l+1}{2m})\subset \overline{\mathcal{D}}(\frac{2l+\varepsilon}{2m},\frac{2l+2-\varepsilon}{2m})\) and there exists \(C_{\varepsilon}>0\) so that \(\|F_{f}\|_{C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}(\overline{\mathcal{D}}(\frac{2l+\varepsilon}{2m},\frac{2l+1}{2m}))}\leq C_{\varepsilon}\|f\|_{C^{\underline{k}}(\mathcal{D})}\). Moreover,
\[\mathscr{F}_{f,l}(u,s) =\int_{-1}^{u}\frac{f(G_{l}(v,s))}{(v^{2}+s^{2})^{\frac{m-1}{m}}} dv=\varphi_{f,l}(s)-\int_{u}^{1}\frac{f(G_{l}(v,s))}{(v^{2}+s^{2})^{\frac{m-1}{m}}}dv\] \[=\varphi_{f,l}(s)-\int_{-1}^{-u}\frac{f(G_{l}(-v,s))}{(v^{2}+s^{2 })^{\frac{m-1}{m}}}dv.\]
As \(G_{l}(-v,s)=\theta_{0}^{-1}G_{l}(v,-s)\) for \(s>0\), this gives
\[\mathscr{F}_{f,l}(z,\overline{z})=\varphi_{f,l}(\Im z)-\mathscr{F}_{f\circ \theta_{0}^{-1},l}(-z,-\overline{z})\text{ if }\Im z>0.\]
It follows that
\[F_{f}(\omega,\overline{\omega})=\varphi_{f,l}(\Im\omega^{m})-F_{f\circ\theta_ {0}^{-1}}(\theta_{0}\omega,\theta_{0}^{-1}\overline{\omega})\text{ on }\mathcal{D}(\frac{2l}{2m},\frac{2l+1}{2m}). \tag{4.36}\]
Note that
\[\frac{\partial^{j}(f\circ\theta_{0}^{-1})}{\partial\omega^{i}\partial\overline {\omega}^{j-i}}(0,0)=\theta_{0}^{-(2i-j)}\frac{\partial^{j}f}{\partial\omega^ {i}\partial\overline{\omega}^{j-i}}(0,0).\]
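This is simply the chain rule: writing \((f\circ\theta_{0}^{-1})(\omega,\overline{\omega})=f(\theta_{0}^{-1}\omega,\theta_{0}\overline{\omega})\) (recall \(\overline{\theta_{0}^{-1}}=\theta_{0}\), as \(|\theta_{0}|=1\)), each derivative \(\partial_{\omega}\) produces a factor \(\theta_{0}^{-1}\) and each \(\partial_{\overline{\omega}}\) a factor \(\theta_{0}\), whence

\[\frac{\partial^{j}(f\circ\theta_{0}^{-1})}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0)=\theta_{0}^{-i}\,\theta_{0}^{\,j-i}\,\frac{\partial^{j}f}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0)=\theta_{0}^{-(2i-j)}\frac{\partial^{j}f}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0).\]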
By (3.27), it follows that \(\partial_{i}^{j}(f\circ\theta_{0}^{-1})=\theta_{0}^{-(2i-j)}\partial_{i}^{j}(f)\). Therefore, by assumption, \(\partial_{i}^{j}(f\circ\theta_{0}^{-1})=0\) for all \(0\leq j<\underline{k}\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\). Using Theorem 3.11 again, we obtain that the map \(F_{f\circ\theta_{0}^{-1}}\) has a \(C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}\)-extension on \(\overline{\mathcal{D}}(\frac{2l+1}{2m},\frac{2l+2-\varepsilon}{2m})\subset \overline{\mathcal{D}}(\frac{2l+\varepsilon}{2m},\frac{2l+2-\varepsilon}{2m})\) and \(\|F_{f\circ\theta_{0}^{-1}}\|_{C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}(\overline{\mathcal{D}}(\frac{2l+1}{2m},\frac{2l+2-\varepsilon}{2m}))}\leq C_{\varepsilon}\|f\circ\theta_{0}^{-1}\|_{C^{\underline{k}}(\mathcal{D})}\). In particular,
\[\begin{split}& F_{f\circ\theta_{0}^{-1}}(\theta_{0}\omega, \theta_{0}^{-1}\overline{\omega})\text{ is of the class }C^{\widehat{\mathfrak{e}}(\sigma, \underline{k})}\text{ on }\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1- \varepsilon}{2m})\text{ and }\\ &\|F_{f\circ\theta_{0}^{-1}}(\theta_{0}\omega,\theta_{0}^{-1} \overline{\omega})\|_{C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}( \overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1-\varepsilon}{2m}))}\leq C_{ \varepsilon}\|f\|_{C^{\underline{k}}(\mathcal{D})}.\end{split} \tag{4.37}\]
By Theorem 4.7, \(\varphi_{f,l}\) has a \(C^{\mathfrak{e}(\sigma,k)}\)-extension on \([0,1]\) with
\[\|\varphi_{f,l}\|_{C^{\mathfrak{e}(\sigma,k)}([0,1])}\leq C\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}.\]
Therefore, \(\omega\mapsto\varphi_{f,l}(\Im\omega^{m})\) has a \(C^{\mathfrak{e}(\sigma,k)}\)-extension on \(\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1}{2m})\) with
\[\|\varphi_{f,l}(\Im\omega^{m})\|_{C^{\mathfrak{e}(\sigma,k)}(\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1}{2m}))}\leq C^{\prime}\|f\|_{C^{k\vee(n+1)}(\mathcal{D})}.\]
As \(F_{f}\) is a \(C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}\)-map on \(\overline{\mathcal{D}}(\frac{2l+\varepsilon}{2m},\frac{2l+1}{2m})\) with \(\|F_{f}\|_{C^{\widehat{\mathfrak{e}}(\sigma,\underline{k})}(\overline{ \mathcal{D}}(\frac{2l+\varepsilon}{2m},\frac{2l+1}{2m}))}\leq C_{\varepsilon} \|f\|_{C^{\underline{k}}(\mathcal{D})}\), \(\mathfrak{o}(\sigma,k)\leq\lceil\mathfrak{o}(\sigma,k)\rceil=\widehat{ \mathfrak{o}}(\sigma,\underline{k})\) and \(\underline{k}\leq k\), in view of (4.36) and (4.37), this gives our claim.
We now show that Theorem 4.10 is optimal.
**Theorem 4.11**.: _Fix \(k\geq m-1\) and \(0\leq l<2m\). If \(f\in C^{k\vee(n+1)}(\mathcal{D})\) is such that \(F_{f}\in C^{r}(\overline{\mathcal{D}}(\frac{l}{2m},\frac{l+1}{2m}))\) for some \(r\in\mathbb{R}_{\eta}\) with \(0<v(r)\leq\mathfrak{o}(\sigma,k)\) then \(\mathscr{C}_{l}^{j}(f)=0\) for all \(j\geq 0\) such that \(\mathfrak{o}(\sigma,j)<v(r)\) and \(\partial_{i}^{j}(f)=0\) for all \(j\geq 0\) with \(\widehat{\mathfrak{o}}(\sigma,j)<v(r)\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\)._
Proof.: We will focus only on the even sectors \(\mathcal{D}(\frac{2l}{2m},\frac{2l+1}{2m})\). The proof in the odd case proceeds in the same way.
By definition, \(\varphi_{f,l}(s)=\mathscr{F}_{f,l}(1,s)=F_{f}(G_{l}(1+\iota s),\overline{G_{l} (1+\iota s)})\) on \((0,1]\). As \(F_{f}\in C^{r}(\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1}{2m}))\), it follows that \(\varphi_{f,l}\in C^{r}((0,1])\). In view of Theorem 4.7, \(\mathscr{C}_{2l}^{j}(f)=0\) for all \(j\geq 0\) such that \(\mathfrak{o}(\sigma,j)<v(r)\).
The proof of the vanishing of \(\partial_{i}^{j}\) is much more involved. Choose \(m-1\leq\underline{k}\leq\overline{k}\leq k\) such that \(\mathfrak{o}(\sigma,\overline{k}-1)<v(r)\leq\mathfrak{o}(\sigma,\overline{k})\) and \(\widehat{\mathfrak{o}}(\sigma,\underline{k}-1)<v(r)\leq\widehat{\mathfrak{o}}(\sigma,\underline{k})\). By the first part of the theorem, \(\mathscr{C}_{2l}^{j}(f)=0\) for all \(0\leq j<\overline{k}\). Let us decompose \(f=f_{<\underline{k}}+e_{f}\), where \(f_{<\underline{k}}=\sum_{0\leq j<\underline{k}}f_{j}\) with
\[f_{j}(\omega,\overline{\omega})=\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i} \frac{\partial^{j}f}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}(0,0) \omega^{i}\overline{\omega}^{j-i}.\]
Then for every \(0\leq j<\underline{k}\) we have \(D^{j}e_{f}(0,0)=0\) and \(\mathscr{C}_{2l}^{j}(e_{f})=0\), and for \(\underline{k}\leq j<\overline{k}\) we have \(\mathscr{C}_{2l}^{j}(e_{f})=\mathscr{C}_{2l}^{j}(f)=0\). Since \(\widehat{\mathfrak{o}}(\sigma,\underline{k})=\lceil\mathfrak{o}(\sigma,\overline{k})\rceil\), in view of Theorem 4.10, this gives \(F_{e_{f}}\in C^{\mathfrak{e}(\sigma,\overline{k})}(\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1}{2m}))\). As \(F_{f}\in C^{r}(\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1}{2m}))\) and \(v(r)\leq\mathfrak{o}(\sigma,\overline{k})\), this yields \(F_{f_{<\underline{k}}}=F_{f}-F_{e_{f}}\in C^{r}(\overline{\mathcal{D}}(\frac{2l}{2m},\frac{2l+1}{2m}))\).
For every \(0<a<1\) let \(\Delta_{a}=\{(u,s):0<u\leq 1,0<s\leq au\}\). By Lemmas 4.1, 4.2, 4.3 and (4.35), for every \(0<a<1\), there exist \(\varrho_{j}\in C^{\omega}([0,a])\) and \(c_{j}\in\mathbb{C}\) for
\(0\leq j<\underline{k}\) and \(\rho\in C^{\omega}([0,a])\) such that for any \((u,s)\in\Delta_{a}\),
\[\begin{split}\mathscr{F}_{f_{<\underline{k}},l}(u,s)&=\sum_{0\leq j<\underline{k}}\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i}\frac{\partial^{j}f(0,0)}{\partial\omega^{i}\partial\overline{\omega}^{j-i}}\mathfrak{G}_{(m-1)-i,(m-1)-(j-i)}^{l}(u,s)\\ &=\sum_{0\leq j<\underline{k}}u^{\frac{j-(m-2)}{m}}\varrho_{j}(s/u)+\log u\sum_{0\leq j<\underline{k}}c_{j}s^{\frac{j-(m-2)}{m}}+\rho(s).\end{split}\]
Let \(\alpha\in(0,1/4)\) be such that \(\tan(\pi\alpha)=a\). Fix any \(0<\beta<\alpha\) and let \(\omega_{0}=e^{2\pi\iota\frac{2l+\beta}{2m}}\). Then for any \(t\in(0,a]\),
\[\mathscr{F}_{f_{<\underline{k}},l}((t\omega_{0})^{m},\overline{t \omega_{0}}^{m}) =\mathscr{F}_{f_{<\underline{k}},l}(t^{m}\cos(\pi\beta),t^{m}\sin (\pi\beta))\] \[=\sum_{0\leq j<\underline{k}}\cos(\pi\beta)^{\frac{j-(m-2)}{m}} \varrho_{j}(\tan(\pi\beta))t^{j-(m-2)}+\rho(t^{m}\sin(\pi\beta))\] \[\quad+\sum_{0\leq j<\underline{k}}c_{j}\sin(\pi\beta)^{\frac{j-(m -2)}{m}}t^{j-(m-2)}\log(t^{m}\cos(\pi\beta)).\]
Since \((0,a]\ni t\mapsto\mathscr{F}_{f_{<\underline{k}},l}((t\omega_{0})^{m}, \overline{t\omega_{0}}^{m})\in\mathbb{C}\) is of class \(C^{r}\), \([0,a]\ni t\mapsto\rho(t^{m}\sin(\pi\beta))\in\mathbb{C}\) is analytic and \(v(r)>\widehat{\mathfrak{o}}(\sigma,\underline{k}-1)=\underline{k}-1-(m-2)\geq j-(m-2)\) for every \(0\leq j<\underline{k}\), it follows that \(c_{j}=0\) for all \(0\leq j<\underline{k}\), so \(\mathscr{F}_{f_{<\underline{k}},l}(u,s)=\sum_{0\leq j<\underline{k}}u^{\frac{j-(m-2)}{m}}\varrho_{j}(s/u)+\rho(s)\).
For every \(0\leq j<\underline{k}\) let \(\Upsilon_{j}:\Delta_{a}\to\mathbb{C}\) be the real analytic homogeneous map of degree \(\frac{j-(m-2)}{m}\) given by \(\Upsilon_{j}(u,s)=u^{\frac{j-(m-2)}{m}}\varrho_{j}(s/u)\). Then
\[\mathscr{F}_{f_{<\underline{k}},l}(z,\overline{z})=\sum_{0\leq j<\underline{ k}}\Upsilon_{j}(z,\overline{z})+\rho(\Im z)\text{ on }\Delta_{a}\]
and
\[\mathscr{F}_{f_{<\underline{k}},l}(\omega^{m},\overline{\omega}^{m})=\sum_{0 \leq j<\underline{k}}\Upsilon_{j}(\omega^{m},\overline{\omega}^{m})+\rho( \Im\omega^{m})\text{ on }\mathcal{D}(\tfrac{2l}{2m},\tfrac{2l+\alpha}{2m}).\]
Since \(F_{f_{<\underline{k}}}\in C^{r}(\overline{\mathcal{D}}(\tfrac{2l}{2m},\tfrac {2l+1}{2m}))\) and \(\rho(\Im\omega^{m})\in C^{\omega}(\overline{\mathcal{D}}(\tfrac{2l}{2m}, \tfrac{2l+\alpha}{2m}))\), we have
\[\sum_{0\leq j<\underline{k}}\Upsilon_{j}(\omega^{m},\overline{\omega}^{m})\in C ^{r}(\overline{\mathcal{D}}(\tfrac{2l}{2m},\tfrac{2l+\alpha}{2m}))\]
and \(\Upsilon_{j}(\omega^{m},\overline{\omega}^{m})\) is a homogeneous map of degree \(j-(m-2)<v(r)\) for \(0\leq j<\underline{k}\). Then standard arguments for smooth homogeneous maps show that \(\Upsilon_{j}=0\) for \(0\leq j<m-2\) and \(\Upsilon_{j}(\omega^{m},\overline{\omega}^{m})\) is a homogeneous polynomial of degree \(j-(m-2)\) for \(m-2\leq j<\underline{k}\). Suppose that
\[\Upsilon_{j}(\omega^{m},\overline{\omega}^{m})=\sum_{0\leq i\leq j-(m-2)}a_{j,i }\omega^{i}\overline{\omega}^{(j-i)-(m-2)}\text{ for }m-2\leq j<\underline{k}.\]
Then
\[\sum_{0\leq j<\underline{k}}\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i}\frac{ \partial^{j}f(0,0)}{\partial\omega^{i}\partial\overline{\omega}^{j-i}} \mathfrak{G}_{(m-1)-i,(m-1)-(j-i)}^{l}(u,s)=\mathscr{F}_{f_{<\underline{k}},l}( u,s)\]
\[= \sum_{0\leq j<\underline{k}}\Upsilon_{j}(u,s)+\rho(s)=\sum_{m-2\leq j< \underline{k}}\sum_{0\leq i\leq j-(m-2)}a_{j,i}G_{l}(u,s)^{i}\overline{G_{l}( u,s)}^{(j-i)-(m-2)}+\rho(s).\]
Differentiating with respect to \(u\), we get
\[\sum_{0\leq j<\underline{k}} \frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i}\frac{\partial^{j}f(0,0)} {\partial\omega^{i}\partial\overline{\omega}^{j-i}}G_{l}^{i-(m-1)}\overline{G}_ {l}^{(j-i)-(m-1)}\] \[=\sum_{m-2\leq j<\underline{k}}\sum_{0\leq i\leq j-(m-2)}a_{j,i} \big{(}\tfrac{i}{m}G_{l}^{i-m}\overline{G}_{l}^{(j-i)-(m-2)}+\tfrac{(j-i)-(m-2) }{m}G_{l}^{i}\overline{G}_{l}^{(j-i)-2(m-1)}\big{)}\] \[=\sum_{m-1\leq j<\underline{k}}\Big{(}\sum_{0\leq i\leq j-(m-1)}a _{j,i+1}\tfrac{i+1}{m}G_{l}^{i-(m-1)}\overline{G}_{l}^{(j-i)-(m-1)}\] \[\qquad\qquad+\sum_{m-1\leq i\leq j}a_{j,i-(m-1)}\tfrac{(j-i)+1}{ m}G_{l}^{i-(m-1)}\overline{G}_{l}^{(j-i)-(m-1)}\Big{)}.\]
It follows that \(D^{j}f(0,0)=0\) for \(0\leq j\leq m-2\) and for every \(m-1\leq j<\underline{k}\) and \(0\leq i\leq j\),
\[\frac{1}{j!}\binom{j}{i}\frac{\partial^{j}f(0,0)}{\partial\omega^{i}\partial \overline{\omega}^{j-i}}=a_{j,i+1}\tfrac{i+1}{m}+a_{j,i-(m-1)}\tfrac{(j-i)+1}{ m},\]
where we adhere to the convention that \(a_{j,i}=0\) if \(i<0\) or \(i>j-(m-2)\). It follows that for any \(m-1\leq j<\underline{k}\) and \(0\leq i\leq m-2\) with \(i\neq j-(m-1)\operatorname{mod}m\),
\[\begin{split}\frac{\partial_{i}^{j}(f)}{j!}&=\sum_{0\leq n\leq\frac{j-i}{m}}\frac{\binom{\frac{(m-1)-i}{m}-1}{n}}{\binom{\frac{(j-i)-(m-1)}{m}}{n}}\frac{1}{j!}\binom{j}{mn+i}\frac{\partial^{j}f}{\partial\omega^{mn+i}\partial\overline{\omega}^{j-(mn+i)}}(0,0)\\ &=\sum_{n\geq 0}\frac{\binom{\frac{i-(m-1)}{m}+n}{n}}{\binom{-\frac{(j-i)-(m-1)}{m}+(n-1)}{n}}\Big(a_{j,i-(m-1)+m(n+1)}\big(\tfrac{i-(m-1)}{m}+n+1\big)\\ &\qquad\qquad+a_{j,i-(m-1)+mn}\big(\tfrac{(j-i)-(m-1)}{m}-(n-1)\big)\Big)\\ &=\sum_{n\geq 0}\frac{\binom{\frac{i-(m-1)}{m}+n+1}{n+1}(n+1)}{\binom{-\frac{(j-i)-(m-1)}{m}+(n-1)}{n}}a_{j,i-(m-1)+m(n+1)}\\ &\qquad-\sum_{n\geq 1}\frac{\binom{\frac{i-(m-1)}{m}+n}{n}\,n}{\binom{-\frac{(j-i)-(m-1)}{m}+(n-2)}{n-1}}a_{j,i-(m-1)+mn}+a_{j,i-(m-1)}\big(\tfrac{(j-i)-(m-1)}{m}+1\big).\end{split}\]
After the index shift \(n\mapsto n-1\) in the first sum, the two sums cancel. As \(i-(m-1)<0\), we have \(a_{j,i-(m-1)}=0\). Hence \(\partial_{i}^{j}(f)=0\) for every \(0\leq j<\underline{k}\) and \(0\leq i\leq j\wedge(m-2)\) with \(i\neq j-(m-1)\operatorname{mod}m\).
## 5. Global properties
In this section, by combining the local analysis near the singularities developed above, we finally obtain solutions of cohomological equations with an optimal loss of regularity.
### Transition from local to global results
Let \(M\) be a compact connected orientable \(C^{\infty}\)-surface. Let \(\psi_{\mathbb{R}}\) be a locally Hamiltonian \(C^{\infty}\)-flow on \(M\) with isolated fixed points and such that all its saddles are perfect and all saddle connections are loops. Let \(M^{\prime}\subset M\) be a minimal component of the flow and let \(I\subset M^{\prime}\) be a transversal curve. The corresponding IET \(T:I\to I\) exchanges the intervals \(\{I_{\alpha}:\alpha\in\mathcal{A}\}\). There exists \(0<\varepsilon\leq 1\) such that for every \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\) we have \(\mathcal{D}_{\sigma,\varepsilon}\subset U_{\sigma}\), where \(\mathcal{D}_{\sigma,\varepsilon}\) is the pre-image of the square \([-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]\) via the map \(z\mapsto z^{m_{\sigma}}\) in local singular coordinates. Moreover, we can assume that every orbit
starting from \(I\) meets at most one set \(\mathcal{D}_{\sigma,\varepsilon}\) (maybe many times) before return to \(I\). For every \(0\leq l<2m_{\sigma}\) let \(\mathcal{D}^{l}_{\sigma,\varepsilon}=\overline{\mathcal{D}}_{\sigma, \varepsilon}(\frac{l}{2m_{\sigma}},\frac{l+1}{2m_{\sigma}})\) be the \(l\)-th closed angular sector of \(\mathcal{D}_{\sigma,\varepsilon}\).
_Remark 5.1_.: By Lemma 8.2 in [4], the entrance and exit sets of \(\mathcal{D}_{\sigma,\varepsilon}(\frac{l}{m_{\sigma}},\frac{l+1}{m_{\sigma}})\) are \(C^{\infty}\)-curves with standard parametrizations
\[[-\varepsilon,\varepsilon]\ni s\mapsto G_{l}(-\varepsilon-\iota s)\in \mathcal{D}_{\sigma,\varepsilon}\text{ and }[-\varepsilon,0)\cup(0,\varepsilon]\ni s \mapsto G_{l}(\varepsilon-\iota s)\in\mathcal{D}_{\sigma,\varepsilon}\text{ resp.}\]
Every \(\omega\in\mathcal{D}_{\sigma,\varepsilon}(\frac{l}{m_{\sigma}},\frac{l+1}{m_{ \sigma}})\) lies on the positive semi-orbit of \(G_{l}(-\varepsilon-\iota s)\), \(s\in[-\varepsilon,\varepsilon]\) so that \(\psi_{\xi_{l}(\omega)}G_{l}(-\varepsilon-\iota s)=\omega\) for some \(\xi_{l}(\omega)>0\). By the proof of Lemma 8.2 in [4], \(z=\omega^{m_{\sigma}}=u-\iota s\) for some \(u\in[-\varepsilon,\varepsilon]\) and for any \(f\in C(M)\),
\[\begin{split}\int_{0}^{\xi_{l}(\omega)}& f(\psi_{t}G_{l}(- \varepsilon-\iota s))dt=\frac{1}{m_{\sigma}^{2}}\int_{-\varepsilon}^{u}\frac{ (f\cdot V)(G_{l}(v-\iota s))}{(v^{2}+s^{2})^{\frac{m_{\sigma}-1}{m_{\sigma}}}}dv \\ &=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma }^{2}}\mathscr{F}_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}},l}(z/\varepsilon) =\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}F_{(f \cdot V)\circ\varepsilon^{1/m_{\sigma}}}(\varepsilon^{-1/m_{\sigma}}\omega). \end{split} \tag{5.1}\]
In particular, if \(\omega=G_{l}(\varepsilon-\iota s)\) for \(s\in[-\varepsilon,\varepsilon]\setminus\{0\}\) then \(\tau_{l}(s):=\xi_{l}(G_{l}(-\varepsilon-\iota s))\) is the transit time of \(G_{l}(-\varepsilon-\iota s)\) through the set \(\mathcal{D}_{\sigma,\varepsilon}(\frac{l}{m_{\sigma}},\frac{l+1}{m_{\sigma}})\), \(u-\iota s=G_{l}(\varepsilon-\iota s)^{m_{\sigma}}=\varepsilon-\iota s\) and
\[\begin{split}\int_{0}^{\tau_{l}(s)}f(\psi_{t}G_{l}(-\varepsilon- \iota s))dt&=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}} }{m_{\sigma}^{2}}\mathscr{F}_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}},l}(( \varepsilon-\iota s)/\varepsilon)\\ &=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma }^{2}}\varphi_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}},l}(-s/\varepsilon). \end{split} \tag{5.2}\]
_Remark 5.2_.: Recall that \(m\geq 2\) is the maximal multiplicity of saddles in \(\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\). Then for any \(r\geq-\frac{m-2}{m}\) we have \(\lceil r\rceil+1\leq k_{r}\). Indeed, if \(-\frac{m-2}{m}\leq r\leq-\frac{m-3}{m}\) then \(-\frac{m-2}{m-1}\leq r\). Hence \(r+1\leq mr+(m-1)\), which yields \(\lceil r\rceil+1\leq\lceil mr+(m-1)\rceil=k_{r}\). If \(-\frac{m-3}{m}<r\) with \(m\geq 3\) or \(1\leq r\) with \(m=2\) then \(-\frac{m-3}{m-1}\leq r\). Hence \(r+1\leq mr+(m-2)\), which yields \(\lceil r\rceil+1\leq\lceil mr+(m-2)\rceil=k_{r}\). Suppose that \(m=2\) and \(\frac{1}{2}=-\frac{m-3}{m}<r<1\). Then \(\lceil r\rceil+1=2\leq\lceil 2r\rceil=\lceil mr+(m-2)\rceil=k_{r}\).
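Taking the formulas for \(k_{r}\) in the two regimes as in the remark, a concrete instance with \(m=3\) (so that \(-\frac{m-2}{m}=-\frac{1}{3}\)) reads:

\[r=-\tfrac{1}{4}\in[-\tfrac{1}{3},0]:\quad k_{r}=\lceil 3r+2\rceil=2\geq 1=\lceil r\rceil+1;\qquad r=\tfrac{1}{4}>0:\quad k_{r}=\lceil 3r+1\rceil=2=\lceil r\rceil+1.\]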
_Remark 5.3_.: For any \(r\geq-\frac{m-2}{m}\) and \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\) let \(k\geq 0\) be such that \(\mathfrak{o}(\sigma,k-1)<r\leq\mathfrak{o}(\sigma,k)\). It follows that \(n:=\left\lceil\mathfrak{o}(\sigma,k)\right\rceil=\left\lceil r\right\rceil\). In view of (1.6), \(k\leq\left\lceil mr+(m-2)\right\rceil\leq k_{r}\). Moreover, by Remark 5.2, \(n+1=\left\lceil r\right\rceil+1\leq k_{r}\). Therefore, \(k\vee(n+1)\leq k_{r}\).
Proof of Theorem 1.1.: Let \(\tau:I\to\mathbb{R}_{>0}\cup\{+\infty\}\) be the first return time map for the flow \(\psi_{\mathbb{R}}\) restricted to \(M^{\prime}\). For any interval (set) \(J\subset I\) avoiding the set \(disc(T)\) of discontinuities of \(T\) let \(J^{\tau}=\{\psi_{t}s:s\in J,0\leq t\leq\tau(s)\}\). If an interval \(J\) contains some elements of \(disc(T)\) then \(J^{\tau}\) is the closure of \((J\setminus disc(T))^{\tau}\).
**Case 1.** Suppose that \(J\subset I_{\alpha}\) is a closed interval such that \(\sup\tau(J)<\infty\) and \(\max\tau(J)<2\min\tau(J)\). Choose any \(t_{J}<\min\tau(J)\) so that \(2t_{J}>\max\tau(J)\). Let us consider the set \(J^{\tau}\) and its two subsets
\[J^{\tau}_{+}=\{\psi_{t}s:s\in J,0\leq t\leq t_{J}\},\quad J^{\tau}_{-}=\{\psi_{-t} (Ts):s\in J,0\leq t\leq t_{J}\}. \tag{5.3}\]
By assumption, \(J_{+}^{\tau}\cup J_{-}^{\tau}=J^{\tau}\). Let \(\rho_{+},\rho_{-}:J^{\tau}\to[0,1]\) be the corresponding \(C^{\infty}\)-partition of unity, i.e. \(\rho_{\pm}\) are \(C^{\infty}\)-maps such that \(\rho_{+}+\rho_{-}=1\) and \(\rho_{\pm}=0\) on \(J^{\tau}\setminus J_{\pm}^{\tau}\). Let \(v_{\pm}:J\times[0,t_{J}]\to J_{\pm}^{\tau}\) be given by \(v_{+}(s,t)=\psi_{t}s\) and \(v_{-}(s,t)=\psi_{-t}(Ts)\). Then
\[\varphi_{f}(s)=\int_{0}^{t_{J}}(\rho_{+}\cdot f)\circ v_{+}(s,t)dt+\int_{0}^{t _{J}}(\rho_{-}\cdot f)\circ v_{-}(s,t)dt.\]
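To unpack this identity, recall that \(\varphi_{f}\) denotes the first-return integral, \(\varphi_{f}(s)=\int_{0}^{\tau(s)}f(\psi_{t}s)\,dt\). Since \(\rho_{+}+\rho_{-}=1\), we may split the integral and substitute \(t\mapsto\tau(s)-t\) in the \(\rho_{-}\)-part:

\[\varphi_{f}(s)=\int_{0}^{\tau(s)}(\rho_{+}\cdot f)(\psi_{t}s)\,dt+\int_{0}^{\tau(s)}(\rho_{-}\cdot f)(\psi_{\tau(s)-t}s)\,dt.\]

As \(\psi_{\tau(s)-t}s=\psi_{-t}(Ts)\) while \(\rho_{\pm}\) vanish off \(J_{\pm}^{\tau}=v_{\pm}(J\times[0,t_{J}])\), both integrals truncate to \([0,t_{J}]\), which gives the displayed formula.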
Since \(v_{\pm}\) are of class \(C^{\infty}\), it follows that for every \(q>0\) if \(f\in C^{q}(M)\) then \(\varphi_{f}\in C^{q}(J)\) and there exists \(C_{J}^{q}>0\) such that \(\|\varphi_{f}\|_{C^{q}(J)}\leq C_{J}^{q}\|f\|_{C^{q}(M)}\) for any \(f\in C^{q}(M)\). Suppose that \(f\in C^{k_{r}}(M)\). In view of Remark 5.3, \(n+1\leq k_{r}\), and hence \(\varphi_{f}\in C^{n+1}(J)\) with
\[\|\varphi_{f}\|_{C^{n+1}(J)}\leq C_{J}^{n+1}\|f\|_{C^{k_{r}}(M)}\text{ for any }f\in C^{k_{r}}(M). \tag{5.4}\]
**Case 2.** Suppose that \(J\subset I_{\alpha}\) is of the form \(J=[l_{\alpha},l_{\alpha}+\varepsilon]\). Suppose that \(l_{\alpha}\) is the first backward meeting point of a separatrix incoming to \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\). It follows that the orbits starting from \(J\) meet the set \(\mathcal{D}_{\sigma,\varepsilon}\) before return to \(I\). Suppose that each such orbit meets \(\mathcal{D}_{\sigma,\varepsilon}\) only once and it meets a sector \(\mathcal{D}_{\sigma,\varepsilon}^{2l+1}\) for some \(0\leq l<m_{\sigma}\). In general, the orbits of \(J\) can meet \(\mathcal{D}_{\sigma,\varepsilon}\) several times in different sectors. This case arises when the saddle \(\sigma\) has saddle loops, but this situation is discussed later.
For every \(s\in J\) denote by \(\tau_{+}(s)\) the first forward entrance time of the orbit of \(s\) to \(\mathcal{D}_{\sigma,\varepsilon}\) and by \(\tau_{-}(s)\) the first backward entrance time of the orbit of \(Ts\) to \(\mathcal{D}_{\sigma,\varepsilon}\). Then \(\psi_{\tau_{+}(s)}(s)=G_{l}(-\varepsilon-\iota(s-l_{\alpha}))\). Since \(\tau(s)\to+\infty\) as \(s\to l_{\alpha}\) and \(\tau_{\pm}\) are bounded, decreasing \(\varepsilon\), if necessary, we can assume that \(\min\tau(J)>\max\tau_{\pm}(J)\). Choose \(\max\tau_{\pm}(J)<t_{J}<\min\tau(J)\) and let us consider two subsets \(J_{\pm}^{\tau}\subset J^{\tau}\) given by (5.3). Then \(J^{\tau}=J_{+}^{\tau}\cup\mathcal{D}_{\sigma,\varepsilon}^{2l+1}\cup J_{-}^{\tau}\). Let us consider the corresponding \(C^{\infty}\)-partition of unity \(\rho_{+},\rho_{\sigma},\rho_{-}:J^{\tau}\to[0,1]\), i.e. \(\rho_{+}\), \(\rho_{\sigma}\), \(\rho_{-}\) are \(C^{\infty}\)-maps such that \(\rho_{+}+\rho_{\sigma}+\rho_{-}=1\), \(\rho_{\pm}=0\) on \(J^{\tau}\setminus J_{\pm}^{\tau}\) and \(\rho_{\sigma}=0\) on \(J^{\tau}\setminus\mathcal{D}_{\sigma,\varepsilon}\). Then
\[\varphi_{f}(s)=\int_{0}^{t_{J}}(\rho_{+}\cdot f)\circ v_{+}(s,t)dt+\int_{0}^{t _{J}}(\rho_{-}\cdot f)\circ v_{-}(s,t)dt+\int_{\tau_{+}(s)}^{\tau(s)-\tau_{-}( s)}(\rho_{\sigma}\cdot f)(\psi_{t}s)dt.\]
Repeating the arguments used in Case 1, for any \(q>0\) we get \(C_{J}^{q}>0\) such that
\[\Big{\|}\int_{0}^{t_{J}}(\rho_{+}\cdot f)(\psi_{t}\cdot)dt+\int_{0}^{t_{J}}(\rho_{-}\cdot f)(\psi_{-t}(T\cdot))dt\Big{\|}_{C^{q}(J)}\leq C_{J}^{q}\|f\|_{C^{q}(M)} \tag{5.5}\]
for any \(f\in C^{q}(M)\).
Note that for every \(s\in(0,\varepsilon]\),
\[\begin{split}\varphi_{f}^{\sigma}(l_{\alpha}+s):&=\int_{\tau_{+}(l_{\alpha}+s)}^{\tau(l_{\alpha}+s)-\tau_{-}(l_{\alpha}+s)}(\rho_{\sigma}\cdot f)(\psi_{t}(l_{\alpha}+s))dt\\ &=\int_{0}^{\tau_{l}(s)}(\rho_{\sigma}\cdot f)(\psi_{t}G_{l}(-\varepsilon-\iota s))dt.\end{split}\]
By (5.2), it follows that for any \(s\in(0,1]\), \(\varphi_{f}^{\sigma}(l_{\alpha}+\varepsilon s)=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\varphi_{\tilde{f},l}(-s)\), where \(\tilde{f}(\omega,\overline{\omega})=(\rho_{\sigma}\cdot f\cdot V)(\varepsilon^{\frac{1}{m_{\sigma}}}\omega,\varepsilon^{\frac{1}{m_{\sigma}}}\overline{\omega})\).
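In more detail, applying (5.2) to \(\rho_{\sigma}\cdot f\) (which is legitimate, as \(\rho_{\sigma}\cdot f\) vanishes off \(\mathcal{D}_{\sigma,\varepsilon}\)) and then substituting \(s\mapsto\varepsilon s\) gives

\[\varphi_{f}^{\sigma}(l_{\alpha}+\varepsilon s)=\int_{0}^{\tau_{l}(\varepsilon s)}(\rho_{\sigma}\cdot f)(\psi_{t}G_{l}(-\varepsilon-\iota\varepsilon s))dt=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\varphi_{(\rho_{\sigma}\cdot f\cdot V)\circ\varepsilon^{1/m_{\sigma}},l}(-s),\]

which is the claimed formula with \(\tilde{f}=(\rho_{\sigma}\cdot f\cdot V)\circ\varepsilon^{1/m_{\sigma}}\).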
Suppose that \(f\in C^{k_{r}}(M)\) for some \(r\geq-\frac{m-2}{m}\). Choose \(k\geq 0\) such that \(\mathfrak{o}(\sigma,k-1)<r\leq\mathfrak{o}(\sigma,k)\). By Remark 5.3, we have \(\lceil\mathfrak{o}(\sigma,k)\rceil=\lceil r\rceil=n\) and \(k\vee(n+1)\leq k_{r}\). Assume that \(\mathfrak{C}_{\sigma,2l+1}^{j}(f)=0\) for all \(0\leq j<k\), or equivalently for all \(j\geq 0\) such that \(\mathfrak{o}(\mathfrak{C}_{\sigma,2l+1}^{j})<r\). Since \(\rho_{\sigma}=1\) in a neighborhood of \(\sigma\), it follows
that \(\mathscr{C}^{j}_{2l+1}(\tilde{f})=\varepsilon^{\frac{j}{m_{\sigma}}}\mathscr{C}^{j}_{2l+1}(f\cdot V)=\varepsilon^{\frac{j}{m_{\sigma}}}\mathfrak{C}^{j}_{\sigma,2l+1}(f)=0\) for all \(0\leq j<k\). Let \(a_{0}:=\lceil\mathfrak{o}(\sigma,k)\rceil-\mathfrak{o}(\sigma,k)=n-\mathfrak{o}(\sigma,k)\). Then \(n-a_{0}=\mathfrak{o}(\sigma,k)\geq r=n-a\).
As \(k\vee(n+1)\leq k_{r}\) and both \(f\) and \(\tilde{f}\) are of class \(C^{k_{r}}\), in view of Theorem 4.7, \(\varphi_{\tilde{f},l}\in C^{n+\mathrm{P}_{a_{0}}}([-1,0))\) and there exists \(C^{r}_{\sigma,l}>0\) such that
\[\|\varphi_{\tilde{f},l}\|_{C^{n+\mathrm{P}_{a_{0}}}([-1,0))}\leq C^{r}_{\sigma,l}\|\tilde{f}\|_{C^{k\vee(n+1)}(\mathcal{D})}\leq C^{r}_{\sigma,l}\|\rho_{\sigma}\cdot V\|_{C^{k_{r}}(\mathcal{D})}\|f\|_{C^{k_{r}}(\mathcal{D})}.\]
As \(\varphi^{\sigma}_{f}(l_{\alpha}+\varepsilon s)=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\varphi_{\tilde{f},l}(-s)\) and \(n-a\leq n-a_{0}\), in view of Remark 2.2, for any \(f\in C^{k_{r}}(M)\) with \(\mathfrak{C}^{j}_{\sigma,2l+1}(f)=0\) for all \(0\leq j<k\),
\[\varphi^{\sigma}_{f}\in C^{n+\mathrm{P}_{a_{0}}}(J)\text{ and }\|\varphi^{\sigma}_{f}\|_{C^{n+\mathrm{P}_{a_{0}}}(J)}\leq\widetilde{C}^{r}_{\sigma,l}\|f\|_{C^{k_{r}}(\mathcal{D})}.\]
In view of (5.5) and Remark 2.2, it follows that for any \(f\in C^{k_{r}}(M)\cap\bigcap\limits_{0\leq j<k}\ker(\mathfrak{C}^{j}_{\sigma,2 l+1})\),
\[\|\varphi_{f}\|_{C^{n+\mathrm{P}_{a_{0}}}(J)}\leq\widetilde{C}^{r}_{\sigma,l}\|f\|_{C^{k_{r}}(M)}+C^{n+1}_{J}\|f\|_{C^{n+1}(M)}\leq(\widetilde{C}^{r}_{\sigma,l}+C^{n+1}_{J})\|f\|_{C^{k_{r}}(M)}.\]
**Case 3.** Suppose that \(J\subset I_{\alpha}\) is of the form \(J=[l_{\alpha},l_{\alpha}+\varepsilon]\), where \(l_{\alpha}\) is the first backward meeting point of a separatrix incoming to \(\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\). Suppose that \(\sigma\) has some saddle loops and \(J^{\tau}\) meets \(\sigma\) \(N\) times (\(1<N=N_{J}<m_{\sigma}\)). Then all orbits starting from \(\mathrm{Int}\,J\) meet the set \(\mathcal{D}_{\sigma,\varepsilon}\) \(N\) times before return to \(I\). Assume that each such orbit meets the sectors \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) for \(1\leq i\leq N\) consecutively. Then \(\sigma\) has \(N-1\) saddle loops \(sl_{i}\) connecting the sector \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) with \(\mathcal{D}^{2l_{i+1}+1}_{\sigma,\varepsilon}\) for \(1\leq i\leq N-1\). In particular,
\[J^{\tau}=J^{\tau}_{+}\cup J^{\tau}_{-}\cup\bigcup\limits_{i=1}^{N}\mathcal{D} ^{2l_{i}+1}_{\sigma,\varepsilon}\cup\bigcup\limits_{i=1}^{N-1}J^{\tau}_{i}, \tag{5.6}\]
where \(J^{\tau}_{i}=\{\psi_{t}\gamma_{i}(s):s\in[0,\varepsilon],t\in[0,t_{i}]\}\) is a rectangle whose base is a \(C^{\infty}\)-curve \(\gamma_{i}([0,\varepsilon])\subset\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) with a standard parametrization while its left side \(\{\psi_{t}\gamma_{i}(0):t\in[0,t_{i}]\}\) is a part of the loop \(sl_{i}\). Using a partition of unity associated to the cover (5.6) and repeating the arguments used in Case 1 and 2, for every \(r>0\) we get \(C^{r}_{J}>0\) such that
\[\|\varphi_{f}\|_{C^{n+\mathrm{P}_{a_{0}}}(J)}\leq C^{r}_{J}\|f\|_{C^{k_{r}}(M)}\text{ for }f\in C^{k_{r}}(M)\cap\bigcap\limits_{1\leq i\leq N_{J}}\bigcap\limits_{0\leq j<k}\ker(\mathfrak{C}^{j}_{\sigma,2l_{i}+1}). \tag{5.7}\]
**Case 4.** Suppose that \(J\subset I_{\alpha}\) is of the form \(J=[r_{\alpha}-\varepsilon,r_{\alpha}]\), where \(r_{\alpha}\) is the first backward meeting point of a separatrix incoming to \(\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\). Suppose that \(J^{\tau}\) meets \(\sigma\)\(N\)-times (\(N=N_{J}\)) and the orbits starting from \(\mathrm{Int}\,J\) meet the set \(\mathcal{D}^{2l_{i}}_{\sigma,\varepsilon}\) for \(1\leq i\leq N\) consecutively before returning to \(I\). Then repeating the arguments used in Cases 1, 2 and 3, for every \(r>0\) we get \(C^{r}_{J}>0\) such that
\[\|\varphi_{f}\|_{C^{n+\mathrm{P}_{a}}(J)}\leq C^{r}_{J}\|f\|_{C^{k_{r}}(M)}\text{ for }f\in C^{k_{r}}(M)\cap\bigcap\limits_{1\leq i\leq N_{J}}\bigcap\limits_{0\leq j<k}\ker(\mathfrak{C}^{j}_{\sigma,2l_{i}}). \tag{5.8}\]
**Final step.** We can find a finite family of closed subintervals \(\{J_{q}\}_{q=1}^{Q}\) of \(I\) which covers the whole interval \(I\) and such that every \(J_{q}\) is of the form \([l_{\alpha},l_{\alpha}+\varepsilon]\) (or \([r_{\alpha}-\varepsilon,r_{\alpha}]\)) with \(\min\tau_{\pm}(J_{q})>\max\tau(J_{q})\), or \(J_{q}\subset\mathrm{Int}\,I_{\alpha}\) with \(2\min\tau(J_{q})>\max\tau(J_{q})\).
If \(J_{q}\) is an interval of the form \([l_{\alpha},l_{\alpha}+\varepsilon]\) or \([r_{\alpha}-\varepsilon,r_{\alpha}]\) then by (5.7) and (5.8),
\[\|\varphi_{f}\|_{C^{n+\mathrm{P}_{\mathrm{a}}}(J_{q})}\leq C_{J_{q}}^{r}\|f\|_{ C^{k_{r}}(M)}\text{ for }f\in C^{k_{r}}(M)\cap\bigcap_{\begin{subarray}{c}(\sigma,j,l)\in \mathscr{T}\mathscr{C}\\ \mathfrak{o}(\sigma,j)<r\end{subarray}}\ker(\mathfrak{C}_{\sigma,l}^{j}).\]
If \(J_{q}\subset\operatorname{Int}I_{\alpha}\) then, by Remark 2.2 and (5.4),
\[\|\varphi_{f}\|_{C^{n+\mathrm{P}_{\mathrm{a}}}(J_{q})}\leq\|\varphi_{f}\|_{C^ {n+1}(J_{q})}\leq C_{J_{q}}^{n+1}\|f\|_{C^{n+1}(M)}\leq C_{J_{q}}^{n+1}\|f\|_{ C^{k_{r}}(M)}\text{ for }f\in C^{k_{r}}(M).\]
This yields \(\varphi_{f}\in C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_ {\alpha})\) and
\[\|\varphi_{f}\|_{C^{n+\mathrm{P}_{\mathrm{a}}}}\leq\sum_{q=1}^{Q}\|\varphi_{ f}\|_{C^{n+\mathrm{P}_{\mathrm{a}}}(J_{q})}\leq C\|f\|_{C^{k_{r}}(M)}\]
for all \(f\in C^{k_{r}}(M)\) such that \(\mathfrak{C}_{\sigma,l}^{j}(f)=0\) for \((\sigma,j,l)\in\mathscr{T}\mathscr{C}\) with \(\mathfrak{o}(\sigma,j)<r\).
Recall that, by assumption, the right end of \(I\) is the first meeting point of a separatrix (that is not a saddle connection) emanating from a fixed point \(\sigma\) (incoming or outgoing) with the interval \(I\). Suppose that the right end is the first backward meeting point of a separatrix incoming to \(\sigma\). Let \(\alpha=\pi_{1}^{-1}(d)\), i.e. the interval \(I_{\alpha}=[l_{\alpha},r_{\alpha})\) is the latest after the exchange. It follows that for every \(0<\varepsilon<|I_{\alpha}|\) the strip \([r_{\alpha}-\varepsilon,r_{\alpha}]^{\tau}\) avoids all fixed points, so \(\sup\tau([r_{\alpha}-\varepsilon,r_{\alpha}])<\infty\). By the continuity of \(\tau\), we can choose \(\varepsilon>0\) so that \(\max\tau([r_{\alpha}-\varepsilon,r_{\alpha}])<2\min\tau([r_{\alpha}-\varepsilon,r_{\alpha}])\). In view of Case 1, \(\varphi_{f}\in C^{n+1}([r_{\alpha}-\varepsilon,r_{\alpha}])\). Hence, \(C_{\alpha,n}^{a,-}(\varphi_{f})=\lim_{x\nearrow r_{\alpha}}D^{n+1}\varphi_{f}(x)(r_{\alpha}-x)^{1+a}=0\). The same argument shows that if the right end is the first forward meeting point of a separatrix outgoing from \(\sigma\) then \(C_{\alpha,n}^{a,-}(\varphi_{f})=0\) for \(\alpha=\pi_{0}^{-1}(d)\). Finally we have \(C_{\pi_{0}^{-1}(d),n}^{a,-}(\varphi_{f})\cdot C_{\pi_{1}^{-1}(d),n}^{a,-}(\varphi_{f})=0\). Analyzing the orbit of the left end in the same way, we get \(C_{\pi_{0}^{-1}(1),n}^{a,+}(\varphi_{f})\cdot C_{\pi_{1}^{-1}(1),n}^{a,+}(\varphi_{f})=0\), which shows that \(\varphi_{f}\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\).
For all \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) let \(\chi_{\sigma,j}^{k}:M\to\mathbb{C}\) be a \(C^{\infty}\)-map such that \(\chi_{\sigma,j}^{k}(\omega,\overline{\omega})=\omega^{j}\overline{\omega}^{k-j }/(k!V(\omega,\overline{\omega}))\) on \(U_{\sigma}\) and it is equal to zero on all \(U_{\sigma^{\prime}}\) for \(\sigma^{\prime}\neq\sigma\). By definition, \(\mathfrak{d}_{\sigma,j}^{k}(\chi_{\sigma,j}^{k})=1\) and \(\mathfrak{d}_{\sigma^{\prime},j^{\prime}}^{k^{\prime}}(\chi_{\sigma,j}^{k})=0\) if \((\sigma^{\prime},k^{\prime},j^{\prime})\neq(\sigma,k,j)\).
In view of Theorem 1.1, we get the following result.
**Corollary 5.4**.: _For every \(r\geq-\frac{m-2}{m}\) and any \(f\in C^{k_{r}}(M)\) we have a decomposition_
\[f=\sum_{\begin{subarray}{c}(\sigma,k,j)\in\mathscr{T}\mathscr{D}\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{d}_{\sigma,j}^{k}(f)\chi_{ \sigma,j}^{k}+\mathfrak{R}_{r}(f) \tag{5.9}\]
_such that \(\varphi_{\mathfrak{R}_{r}(f)}\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup _{\alpha\in\mathcal{A}}I_{\alpha})\) with \(n=\lceil r\rceil\) and \(a=n-r\). Moreover, the operators \(\mathfrak{R}_{r}:C^{k_{r}}(M)\to C^{k_{r}}(M)\) and \(C^{k_{r}}(M)\ni f\mapsto\varphi_{\mathfrak{R}_{r}(f)}\in C^{n+\mathrm{P}_{ \mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) are bounded._
Let us consider an equivalence relation \(\sim\) on \(\mathscr{T}\mathscr{C}\) as follows: \((\sigma,k,l)\sim(\sigma,k,l^{\prime})\) if the angular sectors \(U_{\sigma,l}\) and \(U_{\sigma,l^{\prime}}\) are connected through a chain of saddle loops emanating from the saddle \(\sigma\). For every equivalence class \([(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/\sim\), let
\[\mathfrak{C}_{[(\sigma,k,l)]}(f):=\sum_{(\sigma,k,l^{\prime})\sim(\sigma,k,l)} \mathfrak{C}_{\sigma,l}^{k}(f).\]
For any \([(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/\sim\) there exists \(\alpha\in\mathcal{A}\) and an interval \(J\) of the form \([l_{\alpha},l_{\alpha}+\varepsilon]\) or \([r_{\alpha}-\varepsilon,r_{\alpha}]\) such that \(l_{\alpha}\) or \(r_{\alpha}\) is the first backward meeting point of
a separatrix incoming to \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\) and \(J^{\tau}\) contains all angular sectors \(U_{\sigma,l^{\prime}}\) for which \((\sigma,k,l^{\prime})\sim(\sigma,k,l)\). Let \(\xi_{[(\sigma,k,l)]}:I\to\mathbb{R}\) be given as follows:
* \(\xi_{[(\sigma,k,l)]}\) is zero on any interval \(I_{\beta}\) with \(\beta\neq\alpha\);
* if \(J=[l_{\alpha},l_{\alpha}+\varepsilon]\) then for any \(s\in I_{\alpha}\), \[\xi_{[(\sigma,k,l)]}(s) =\frac{(s-l_{\alpha})^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}}{m_{ \sigma}^{2}k!}\text{ if }k\neq m_{\sigma}-2\ \operatorname{mod}m_{\sigma}\] \[\xi_{[(\sigma,k,l)]}(s) =-\frac{(s-l_{\alpha})^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}\log (s-l_{\alpha})}{m_{\sigma}^{2}k!}\text{ if }k=m_{\sigma}-2\ \operatorname{mod}m_{\sigma};\]
* if \(J=[r_{\alpha}-\varepsilon,r_{\alpha}]\) then for any \(s\in I_{\alpha}\), \[\xi_{[(\sigma,k,l)]}(s) =\frac{(r_{\alpha}-s)^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}}{m_{ \sigma}^{2}k!}\text{ if }k\neq m_{\sigma}-2\ \operatorname{mod}m_{\sigma}\] \[\xi_{[(\sigma,k,l)]}(s) =-\frac{(r_{\alpha}-s)^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}\log (r_{\alpha}-s)}{m_{\sigma}^{2}k!}\text{ if }k=m_{\sigma}-2\ \operatorname{mod}m_{\sigma}.\]
Of course, \(\xi_{[(\sigma,k,l)]}\in C^{n+\operatorname{P_{a}G}}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\) with \(n:=\lceil\mathfrak{o}(\sigma,k)\rceil\) and \(a:=n-\mathfrak{o}(\sigma,k)\).
In view of the proof of Theorem 1.1 we also have the following.
**Corollary 5.5**.: _Fix \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\), \(k\geq 0\) and let \(n:=\lceil\mathfrak{o}(\sigma,k)\rceil\) and \(a:=n-\mathfrak{o}(\sigma,k)\). Suppose that \(f\in C^{k\vee(n+1)}(M)\) is such that it is equal to zero on \(U_{\sigma^{\prime}}\) for \(\sigma^{\prime}\neq\sigma\). Then_
\[\varphi_{f}=\sum_{[(\sigma,j,l)]\in\mathscr{T}\mathscr{C}/\sim}\mathfrak{C}_{[(\sigma,j,l)]}(f)\xi_{[(\sigma,j,l)]}+C^{n+\operatorname{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}). \tag{5.10}\]
Proof.: The proof proceeds in the same way as the proof of Theorem 1.1, except that we use Corollary 4.8 instead of Theorem 4.7 in the key reasoning. For example, using the notation introduced in the proof of Theorem 1.1, for any \(s\in(0,1]\)
\[\varphi_{f}^{\sigma}(l_{\alpha}+\varepsilon s)=\frac{\varepsilon^{-\frac{m_{ \sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\varphi_{\tilde{f},l}(-s)\text{ with }\tilde{f}(\omega,\overline{\omega})=(\rho_{\sigma}\cdot f \cdot V)(\varepsilon^{\frac{1}{m_{\sigma}}}\omega,\varepsilon^{\frac{1}{m_{ \sigma}}}\overline{\omega})\]
and \(\mathscr{C}_{2l+1}^{j}(\tilde{f})=\varepsilon^{\frac{j}{m_{\sigma}}}\mathscr{ C}_{2l+1}^{j}(f\cdot V)=\varepsilon^{\frac{j}{m_{\sigma}}}\mathfrak{C}_{ \sigma,2l+1}^{j}(f)\). In view of Corollary 4.8, for \(s\in[-1,0)\),
\[\varphi_{\tilde{f},l}(s) =-\sum_{\begin{subarray}{c}0\leq j<k\\ j=m_{\sigma}-2\operatorname{mod}m_{\sigma}\end{subarray}}\frac{\mathscr{C}_{2l +1}^{j}(\tilde{f})}{j!}(-s)^{\frac{j-(m_{\sigma}-2)}{m_{\sigma}}}\log(-s)\] \[\quad+\sum_{\begin{subarray}{c}0\leq j<k\\ j\neq m_{\sigma}-2\operatorname{mod}m_{\sigma}\end{subarray}}\frac{\mathscr{C}_{2l +1}^{j}(\tilde{f})}{j!}(-s)^{\frac{j-(m_{\sigma}-2)}{m_{\sigma}}}+C^{n+ \operatorname{P_{a}}}([-1,0)).\]
It follows that for \(s\in(l_{\alpha},l_{\alpha}+\varepsilon]\),
\[\varphi_{f}^{\sigma}(s) =\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\varphi_{\tilde{f},l}((l_{\alpha}-s)/\varepsilon)\] \[=-\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\sum_{\begin{subarray}{c}0\leq j<k\\ j=m_{\sigma}-2\,\mathrm{mod}\,m_{\sigma}\end{subarray}}\varepsilon^{\frac{j}{m_{\sigma}}}\mathfrak{C}^{j}_{\sigma,2l+1}(f)\frac{1}{j!}\Big{(}\frac{s-l_{\alpha}}{\varepsilon}\Big{)}^{\frac{j-(m_{\sigma}-2)}{m_{\sigma}}}\log\Big{(}\frac{s-l_{\alpha}}{\varepsilon}\Big{)}\] \[\quad+\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}\sum_{\begin{subarray}{c}0\leq j<k\\ j\neq m_{\sigma}-2\,\mathrm{mod}\,m_{\sigma}\end{subarray}}\frac{\varepsilon^{\frac{j}{m_{\sigma}}}\mathfrak{C}^{j}_{\sigma,2l+1}(f)}{j!}\Big{(}\frac{s-l_{\alpha}}{\varepsilon}\Big{)}^{\frac{j-(m_{\sigma}-2)}{m_{\sigma}}}+C^{n+\mathrm{P}_{\mathrm{a}}}((l_{\alpha},l_{\alpha}+\varepsilon])\] \[=-\sum_{\begin{subarray}{c}0\leq j<k\\ j=m_{\sigma}-2\,\mathrm{mod}\,m_{\sigma}\end{subarray}}\mathfrak{C}^{j}_{\sigma,2l+1}(f)\frac{(s-l_{\alpha})^{\frac{j-(m_{\sigma}-2)}{m_{\sigma}}}\log(s-l_{\alpha})}{m_{\sigma}^{2}j!}\] \[\quad+\sum_{\begin{subarray}{c}0\leq j<k\\ j\neq m_{\sigma}-2\,\mathrm{mod}\,m_{\sigma}\end{subarray}}\mathfrak{C}^{j}_{\sigma,2l+1}(f)\frac{(s-l_{\alpha})^{\frac{j-(m_{\sigma}-2)}{m_{\sigma}}}}{m_{\sigma}^{2}j!}+C^{n+\mathrm{P}_{\mathrm{a}}}((l_{\alpha},l_{\alpha}+\varepsilon]).\]
This key observation makes it possible to get (5.10) proceeding further as in the proof of Theorem 1.1.
**Theorem 5.6**.: _For any \(r\geq-\frac{m-2}{m}\) let \(n=\lceil r\rceil\) and \(a=n-r\). Then for any \(f\in C^{k_{r}}(M)\) we have_
\[\mathfrak{s}_{r}(f)=\varphi_{f}-\sum_{\begin{subarray}{c}[(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/\sim\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[(\sigma,k,l)]}\in C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}) \tag{5.11}\]
_and the operator \(\mathfrak{s}_{r}:C^{k_{r}}(M)\to C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{ \alpha\in\mathcal{A}}I_{\alpha})\) is bounded._
Proof.: Let \(\{\rho_{\sigma}:\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\}\) be a \(C^{\infty}\)-partition of unity of \(M\) such that \(\rho_{\sigma}=1\) on \(U_{\sigma}\). For any \(\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\) choose \(k_{\sigma}\geq 1\) so that \(\mathfrak{o}(\sigma,k_{\sigma}-1)<r\leq\mathfrak{o}(\sigma,k_{\sigma})\). Let \(n_{\sigma,k_{\sigma}}=\lceil\mathfrak{o}(\sigma,k_{\sigma})\rceil\) and \(a_{\sigma,k_{\sigma}}=n_{\sigma,k_{\sigma}}-\mathfrak{o}(\sigma,k_{\sigma})\). By Remark 5.3, \(k_{\sigma}\vee(n_{\sigma,k_{\sigma}}+1)\leq k_{r}\). Therefore, Corollary 5.5 applied to \(f\cdot\rho_{\sigma}\), shows that
\[\varphi_{f\cdot\rho_{\sigma}}=\sum_{\begin{subarray}{c}[(\sigma,j,l)]\in\mathscr{T}\mathscr{C}/\sim\\ 0\leq j<k_{\sigma}\end{subarray}}\mathfrak{C}_{[(\sigma,j,l)]}(f\cdot\rho_{\sigma})\xi_{[(\sigma,j,l)]}+C^{n_{\sigma,k_{\sigma}}+\mathrm{P}_{\mathrm{a}_{\sigma,k_{\sigma}}}\,\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}).\]
As \(r\leq\mathfrak{o}(\sigma,k_{\sigma})\), by Remark 2.2, \(C^{n_{\sigma,k_{\sigma}}+\mathrm{P}_{\mathrm{a}_{\sigma,k_{\sigma}}}\,\mathrm{G}} \subset C^{n+\mathrm{P}_{\mathrm{a}}}\). Since \(\mathfrak{C}_{[(\sigma,j,l)]}(f\cdot\rho_{\sigma})=\mathfrak{C}_{[(\sigma,j,l )]}(f)\), this gives
\[\varphi_{f\cdot\rho_{\sigma}}=\sum_{\begin{subarray}{c}[(\sigma,j,l)]\in\mathscr{T}\mathscr{C}/\sim\\ 0\leq j<k_{\sigma}\end{subarray}}\mathfrak{C}_{[(\sigma,j,l)]}(f)\xi_{[(\sigma,j,l)]}+C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}).\]
Summing over \(\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\), this yields (5.11).
To prove that the operator \(\mathfrak{s}_{r}\) is bounded, we use the decomposition (5.9). Indeed,
\[\mathfrak{s}_{r}(f)=\sum_{\begin{subarray}{c}(\sigma,k,j)\in\mathscr{T}\mathscr{D}\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{d}^{k}_{\sigma,j}(f)\mathfrak{s}_{r}(\chi^{k}_{\sigma,j})+\mathfrak{s}_{r}(\mathfrak{R}_{r}(f))=\sum_{\begin{subarray}{c}(\sigma,k,j)\in\mathscr{T}\mathscr{D}\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{d}^{k}_{\sigma,j}(f)\mathfrak{s}_{r}(\chi^{k}_{\sigma,j})+\varphi_{\mathfrak{R}_{r}(f)}.\]
Since the functionals \(\mathfrak{d}^{k}_{\sigma,j}\) and the operator \(f\mapsto\varphi_{\mathfrak{R}_{r}(f)}\) (by Corollary 5.4) are bounded, this gives that \(\mathfrak{s}_{r}\) is bounded.
Proof of Theorem 1.2.: Arguments presented in Section 1.2 show that if \(g\in C^{r}(I)\) is a solution of the cohomological equation \(g\circ T-g=\varphi_{f}\), then the corresponding function \(u=u_{g,f}:M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup \operatorname{SL}(\psi_{\mathbb{R}}))\to\mathbb{C}\) given by
\[u(x):=g(\psi_{t}x)-\int_{0}^{t}f(\psi_{s}x)\,ds\]
whenever \(\psi_{t}x\in I\) for some \(t\in\mathbb{R}\), is of class \(C^{r}\) on \(M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup\operatorname{SL }(\psi_{\mathbb{R}}))\). We need to show that if \(\mathfrak{d}^{k}_{\sigma,j}(f)=0\) for all \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) such that \(\widehat{\mathfrak{o}}(\mathfrak{d}^{k}_{\sigma,j})<v(r)\) and \(\mathfrak{C}^{k}_{\sigma,l}(f)=0\) for all \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) such that \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<v(r)\) then \(u\) has a \(C^{r}\)-extension to \(M^{\prime}_{e}\) and
\[\|u\|_{C^{r}(M^{\prime}_{e})}\leq C(\|g\|_{C^{r}(I)}+\|f\|_{C^{k_{v(r)}}(M)}). \tag{5.12}\]
We split the proof of our claim into several steps. In fact, we split \(M^{\prime}_{e}\) into subsets of two kinds: subsets which are far from saddles and saddle loops, and sets surrounding saddles or saddle loops.
**Step 1. Sets far from saddles and saddle loops.** We will show that for any compact subset \(A\subset M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup \operatorname{SL}(\psi_{\mathbb{R}}))\) there exists \(C_{A}>0\) such that
\[\|u\|_{C^{r}(A)}\leq C_{A}(\|g\|_{C^{r}(I)}+\|f\|_{C^{k_{v(r)}}(M)}). \tag{5.13}\]
Recall that, by arguments from Section 1.2, for any \(x_{0}\in M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup \operatorname{SL}(\psi_{\mathbb{R}}))\) there exist closed intervals \([\tau_{1},\tau_{2}]\) and \(J\subset\operatorname{Int}I\) such that the set \(R(x_{0})=\{\psi_{t}x:x\in J,t\in[\tau_{1},\tau_{2}]\}\) is a rectangle in \(M^{\prime}\), i.e. the map
\[J\times[\tau_{1},\tau_{2}]\ni(x,t)\mapsto\nu(x,t)=\psi_{t}x\in R(x_{0})\]
is a \(C^{\infty}\)-diffeomorphism and \(x_{0}\in\operatorname{Int}R(x_{0})\). Moreover,
\[u\circ\nu(x,t)=g(x)+\int_{0}^{t}f\circ\nu(x,s)ds\text{ on }J\times[\tau_{1}, \tau_{2}].\]
By Remark 5.2, it follows that there exists \(C_{x_{0}}>0\) such that
\[\|u\|_{C^{r}(R(x_{0}))}\leq C_{x_{0}}(\|g\|_{C^{r}(I)}+\|f\|_{C^{r}(M)})\leq C_{x_{0}}(\|g\|_{C^{r}(I)}+\|f\|_{C^{k_{v(r)}}(M)}).\]
Covering \(A\) by a finite number of rectangles, this yields (5.13).
**Step 2. Some sets far from saddles.** Suppose that \(\gamma:[a,b]\to M\setminus\operatorname{Fix}(\psi_{\mathbb{R}})\) is a standard \(C^{\infty}\)-parametrization of a curve and \(\xi:[a,b]\to\mathbb{R}_{>0}\) is a \(C^{\infty}\) map such that
\[[a,b]^{\xi}\ni(x,t)\mapsto\nu(x,t)=\psi_{t}x\in\nu([a,b]^{\xi})=:(\gamma[a,b] )^{\xi}\]
is a \(C^{\infty}\)-diffeomorphism, where \([a,b]^{\xi}=\{(x,t):x\in[a,b],0\leq t\leq\xi(x)\}\). Then the arguments used in Step 1 show that if \(u\circ\gamma\in C^{r}([a,b])\) then \(u\in C^{r}((\gamma[a,b])^{\xi})\) and there exists \(C_{\gamma,\xi}>0\) such that
\[\|u\|_{C^{r}(\gamma([a,b])^{\xi})}\leq C_{\gamma,\xi}(\|u\circ\gamma\|_{C^{r}([a,b])}+\|f\|_{C^{k_{v(r)}}(M)}). \tag{5.14}\]
**Step 3. Strips touching saddles and saddle loops and their decomposition.** From now on we will use the notation introduced in the proof of Theorem 1.1. Let \(\tau:I\to\mathbb{R}_{>0}\cup\{+\infty\}\) be the first return time map. Suppose that \(J\subset I_{\alpha}\) is of the form \(J=[l_{\alpha},l_{\alpha}+\varepsilon]\), where \(l_{\alpha}\) is the first backward meeting point of a separatrix incoming to \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\). Suppose that \(J^{\tau}\) meets \(\sigma\) exactly \(N\)-times (\(1\leq N=N_{J}<m_{\sigma}\)) and the orbits starting from \(\operatorname{Int}J\) meet \(\mathcal{D}_{\sigma,\varepsilon}\) in its sectors \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) for \(1\leq i\leq N\) consecutively before returning to \(I\). Then \(\sigma\) has \(N-1\) saddle
loops \(sl_{i}\) connecting the sector \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) with \(\mathcal{D}^{2l_{i+1}+1}_{\sigma,\varepsilon}\) for \(1\leq i\leq N-1\). Recall that \(J^{\tau}\) is the closure of \((\operatorname{Int}J)^{\tau}\). Then
\[J^{\tau}=\bigcup_{i=1}^{N}\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\cup \bigcup_{i=0}^{N}E_{i}, \tag{5.15}\]
where each \(E_{i}\) is of the form \(\gamma_{i}([0,\varepsilon])^{\xi_{i}}\) with
* \(\gamma_{0}(s)=\gamma(l_{\alpha}+s)\) (here \(\gamma\) is the parametrization of \(I\)) and \(\xi_{0}(s)\) is the time spent to go from \(J\) to \(\mathcal{D}^{2l_{1}+1}_{\sigma,\varepsilon}\);
* for \(1\leq i\leq N-1\), \(\gamma_{i}(s)=G_{l_{i}}(\varepsilon-\iota s)\) and \(\xi_{i}(s)\) is the time spent to go from \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) to \(\mathcal{D}^{2l_{i+1}+1}_{\sigma,\varepsilon}\);
* \(\gamma_{N}(s)=G_{l_{N}}(\varepsilon-\iota s)\) and \(\xi_{N}(s)\) is the time spent to go from \(\mathcal{D}^{2l_{N}+1}_{\sigma,\varepsilon}\) to \(I\).
**Step 4.0. The set \(E_{0}\).** In view of (5.14) in Step 2,
\[u\in C^{r}(E_{0})\text{ and }\|u\|_{C^{r}(E_{0})}\leq C_{\gamma_{0},\xi_{0}}(\|g\|_{C^{r}(I)}+\|f\|_{C^{k_{v(r)}}(M)}). \tag{5.16}\]
**Step 4.1. The sets \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) surrounding the saddle \(\sigma\).** We will show that for every \(1\leq i\leq N\) there exist \(C_{i},C^{\prime}_{i}>0\) such that if \(u\) has a \(C^{r}\)-extension on \(E_{i-1}\) then it has \(C^{r}\)-extension on \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) and
\[\|u\|_{C^{r}(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon})}\leq C_{i}\|u\|_{C^{r}(E_{i-1})}+C^{\prime}_{i}\|f\|_{C^{k_{v(r)}}(M)}. \tag{5.17}\]
This is the main inductive step leading to the proof of (5.12) restricted to \(J^{\tau}\).
By Remark 5.1, for every \(\omega\in\mathcal{D}_{\sigma,\varepsilon}(\frac{l_{i}}{m_{\sigma}},\frac{l_{ i}+1}{m_{\sigma}})\) we have \(\psi_{\xi_{l}(\omega)}G_{l_{i}}(-\varepsilon-\iota s)=\omega\) for \(s=-\Im\omega^{m_{\sigma}}\in[0,\varepsilon]\) and
\[u(\omega)-u(G_{l_{i}}(-\varepsilon-\iota s))=\int_{0}^{\xi_{l}(\omega)}f(\psi _{t}G_{l_{i}}(-\varepsilon-\iota s))dt.\]
In view of (5.1), for \(\omega\in\mathcal{D}_{\sigma,\varepsilon}(\frac{2l_{i}+1}{2m_{\sigma}},\frac{ 2l_{i}+2}{2m_{\sigma}})\),
\[u(\omega)-u(G_{l_{i}}(-\varepsilon+\iota\Im\omega^{m_{\sigma}}))=\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}F_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}}}(\varepsilon^{-1/m_{\sigma}}\omega). \tag{5.18}\]
Choose \(m_{\sigma}-1\leq\underline{k}\leq k\leq k_{v(r)}\) such that \(\mathfrak{o}(\sigma,k-1)<v(r)\leq\mathfrak{o}(\sigma,k)\) and \(\widehat{\mathfrak{o}}(\sigma,\underline{k}-1)<v(r)\leq\widehat{\mathfrak{o}}(\sigma,\underline{k})\). Then \(\widehat{\mathfrak{o}}(\sigma,\underline{k})=\lceil\mathfrak{o}(\sigma,k)\rceil\). Moreover, by Remark 5.3, \(n:=\lceil v(r)\rceil=\lceil\mathfrak{o}(\sigma,k)\rceil\) and \(k\vee(n+1)\leq k_{v(r)}\).
By assumption, for every \(0\leq j<k\) and \(0\leq i\leq j\wedge(m_{\sigma}-2)\) with \(i\neq j-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\) we have \(\partial_{i}^{j}(f\cdot V)=\mathfrak{d}_{\sigma,i}^{j}(f)=0\) and \(\mathcal{C}_{l}^{j}(f\cdot V)=\mathfrak{C}_{\sigma,l}^{j}(f)=0\) for all \(0\leq j<k\) and \(l=2l_{i}+1\), \(1\leq i\leq N\).
In view of Theorem 4.10, the map \(F_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}}}\circ\varepsilon^{-1/m_{\sigma}}:\mathcal{D}_{\sigma,\varepsilon}(\frac{2l_{i}+1}{2m_{\sigma}},\frac{2l_{i}+2}{2m_{\sigma}})\to\mathbb{C}\) has a \(C^{\mathfrak{o}(\sigma,k)}\)-extension on \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}=\overline{\mathcal{D}}_{\sigma,\varepsilon}(\frac{2l_{i}+1}{2m_{\sigma}},\frac{2l_{i}+2}{2m_{\sigma}})\) and there exists \(C^{\prime}_{i}>0\) such that
\[\Big{\|}\frac{\varepsilon^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}F_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}}}\circ\varepsilon^{-1/m_{\sigma}}\Big{\|}_{C^{\mathfrak{o}(\sigma,k)}(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon})}\leq C^{\prime}_{i}\|f\|_{C^{k\vee(n+1)}(M)}. \tag{5.19}\]
Moreover, the map \(\mathcal{D}_{\sigma,\varepsilon}(\frac{2l_{i}+1}{2m_{\sigma}},\frac{2l_{i}+2}{ 2m_{\sigma}})\ni\omega\mapsto G_{l_{i}}(-\varepsilon+\iota\Im\omega^{m_{ \sigma}})\in\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\cap E_{i-1}\) has an obvious analytic extension on \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\). It follows that there exists \(C_{i}>0\) such that if \(u\) is of class \(C^{r}\) on \(E_{i-1}\) then \(u\circ G_{l_{i}}(-\varepsilon+\iota\Im\omega^{m_{\sigma}})\) has a \(C^{r}\)-extension to \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) and
\[\|u\circ G_{l_{i}}(-\varepsilon+\iota\Im\omega^{m_{\sigma}})\|_{C^{r}(\mathcal{D}^ {2l_{i}+1}_{\sigma,\varepsilon})}\leq C_{i}\|u\|_{C^{r}(E_{i-1})}. \tag{5.20}\]
As \(v(r)\leq\mathfrak{o}(\sigma,k)\) and \(k\vee(n+1)\leq k_{v(r)}\), by (5.18), (5.19) and (5.20), \(u\) has a \(C^{r}\)-extension on \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) and (5.17) holds.
**Step 4.2. The sets \(E_{i}\) surrounding the saddle loops.** We will show that for every \(1\leq i\leq N\) there exist \(C^{\prime\prime}_{i},C^{\prime\prime\prime}_{i}>0\) such that if \(u\) has a \(C^{r}\)-extension on \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) then it has a \(C^{r}\)-extension on \(E_{i}\) and
\[\|u\|_{C^{r}(E_{i})}\leq C^{\prime\prime}_{i}\|u\|_{C^{r}(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon})}+C^{\prime\prime\prime}_{i}\|f\|_{C^{k_{v(r)}}(M)}.\]
This is an easy inductive step leading to the proof of (5.12) restricted to \(J^{\tau}\), which follows directly from (5.14). Indeed, as \(\gamma_{i}:[0,\varepsilon]\to\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) is an analytic curve, there exists \(C>0\) such that if \(u\) is of class \(C^{r}\) on \(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon}\) then \(\|u\circ\gamma_{i}\|_{C^{r}([0,\varepsilon])}\leq C\|u\|_{C^{r}(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon})}\). As \(E_{i}=\gamma_{i}([0,\varepsilon])^{\xi_{i}}\), in view of (5.14), \(u\) has a \(C^{r}\)-extension on \(E_{i}\) and
\[\|u\|_{C^{r}(E_{i})}\leq C_{\gamma_{i},\xi_{i}}(\|u\circ\gamma_{i}\|_{C^{r}([0,\varepsilon])}+\|f\|_{C^{k_{v(r)}}(M)})\leq C_{\gamma_{i},\xi_{i}}(C\|u\|_{C^{r}(\mathcal{D}^{2l_{i}+1}_{\sigma,\varepsilon})}+\|f\|_{C^{k_{v(r)}}(M)}).\]
**Step 4.3. Induction.** Starting from Step 4.0 (as the initial inductive step) and then repeating Steps 4.1 and 4.2 alternately \(N\) times, we obtain \(C_{J}>0\) such that \(u\) has a \(C^{r}\)-extension on \(J^{\tau}\) and
\[\|u\|_{C^{r}(J^{\tau})}\leq C_{J}(\|g\|_{C^{r}(J)}+\|f\|_{C^{k_{v(r)}}(M)}). \tag{5.21}\]
**Step 5. Summary.** Using the arguments from Step 4, we obtain (5.21) also in the case where \(J=[r_{\alpha}-\varepsilon,r_{\alpha}]\). Then the strip \(J^{\tau}\) touches a saddle on its right side. Let \(A\subset M^{\prime}\setminus(\operatorname{Sd}(\psi_{\mathbb{R}})\cup\operatorname{SL}(\psi_{\mathbb{R}}))\) be the closure of
\[M^{\prime}\setminus\bigcup_{\alpha\in\mathcal{A}}([l_{\alpha},l_{\alpha}+ \varepsilon]^{\tau}\cup[r_{\alpha}-\varepsilon,r_{\alpha}]^{\tau}).\]
Then by Step 1 applied to \(A\) and Step 4 applied to the intervals \([l_{\alpha},l_{\alpha}+\varepsilon]\) and \([r_{\alpha}-\varepsilon,r_{\alpha}]\) for all \(\alpha\in\mathcal{A}\), we have that \(u\) has a \(C^{r}\)-extension on \(M^{\prime}_{e}\) and (5.12) holds with \(C=C_{A}+\sum_{\alpha\in\mathcal{A}}(C_{[l_{\alpha},l_{\alpha}+\varepsilon]}+C_ {[r_{\alpha}-\varepsilon,r_{\alpha}]})\).
Proof of Theorem 1.3.: Suppose that there exists \(u\in C^{r}(M^{\prime}_{e})\) such that \(Xu=f\) for some \(r\in\mathbb{R}_{\eta}\) with \(v(r)>0\). Choose \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\) and \(0\leq l<m_{\sigma}\) such that \(U_{\sigma,2l+1}\cap M^{\prime}\neq\emptyset\). We will show that \(\mathfrak{C}^{j}_{\sigma,2l+1}(f)=0\) for all \(j\geq 0\) such that \(\mathfrak{o}(\sigma,j)<v(r)\) and \(\mathfrak{d}^{j}_{\sigma,i}(f)=0\) for all \(j\geq 0\) such that \(\widehat{\mathfrak{o}}(\sigma,j)<v(r)\) and \(0\leq i\leq j\wedge(m_{\sigma}-2)\) with \(i\neq j-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\). The proof for even sectors proceeds in the same way as for odd sectors, so we focus only on the latter.
In view of (5.18), for \(\omega\in\mathcal{D}_{\sigma,\varepsilon}(\frac{2l+1}{2m_{\sigma}},\frac{2l+2 }{2m_{\sigma}})\),
\[u(\omega)-u(G_{l}(-\varepsilon+\iota\Im\omega^{m_{\sigma}}))=\frac{\varepsilon ^{-\frac{m_{\sigma}-2}{m_{\sigma}}}}{m_{\sigma}^{2}}F_{(f\cdot V)\circ \varepsilon^{1/m_{\sigma}}}(\varepsilon^{-1/m_{\sigma}}\omega).\]
By assumption, \(u\) is of class \(C^{r}\) on \(\overline{\mathcal{D}}_{\sigma,\varepsilon}(\frac{2l+1}{2m_{\sigma}},\frac{2l+2}{2m_{\sigma}})\), and hence \(u(G_{l}(-\varepsilon+\iota\Im\omega^{m_{\sigma}}))\) is of class \(C^{r}\) on \(\overline{\mathcal{D}}_{\sigma,\varepsilon}(\frac{2l+1}{2m_{\sigma}},\frac{2l+2}{2m_{\sigma}})\). Therefore, \(F_{(f\cdot V)\circ\varepsilon^{1/m_{\sigma}}}\) has a \(C^{r}\)-extension on \(\overline{\mathcal{D}}(\frac{2l+1}{2m_{\sigma}},\frac{2l+2}{2m_{\sigma}})\).
Choose \(k\geq m_{\sigma}-1\) such that \(\mathfrak{o}(\sigma,k-1)<v(r)\leq\mathfrak{o}(\sigma,k)\). By Remark 5.3, we have \(n:=\lceil v(r)\rceil=\lceil\mathfrak{o}(\sigma,k)\rceil\) and \(k\vee(n+1)\leq k_{v(r)}\). Therefore, by Theorem 4.11, \(\varepsilon^{j/m_{\sigma}}\mathfrak{C}^{j}_{\sigma,2l+1}(f)=\mathscr{C}^{j}_{2l+1}((f\cdot V)\circ\varepsilon^{1/m_{\sigma}})=0\) for all \(j\geq 0\) such that \(\mathfrak{o}(\sigma,j)<v(r)\), and \(\varepsilon^{j/m_{\sigma}}\mathfrak{d}^{j}_{\sigma,i}(f)=\partial^{j}_{i}((f\cdot V)\circ\varepsilon^{1/m_{\sigma}})=0\) for all \(j\geq 0\) with \(\widehat{\mathfrak{o}}(\sigma,j)<v(r)\) and \(0\leq i\leq j\wedge(m_{\sigma}-2)\) with \(i\neq j-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\).
## Acknowledgements
The authors would like to thank Alexander Gomilko for his help in understanding some analytical issues used in Section 4.1 and Giovanni Forni for an interesting discussion of potentially alternative approaches to solving cohomological equations. The authors acknowledge the Center of Excellence "Dynamics, mathematical analysis and artificial intelligence" at the Nicolaus Copernicus University in Torun and Centro di Ricerca Matematica Ennio De Giorgi - Scuola Normale Superiore, Pisa for hospitality during their visits. Research was partially supported by the Narodowe Centrum Nauki Grant 2022/45/B/ST1/00179.
# Private and Communication-Efficient Algorithms for Entropy Estimation

Gecia Bravo-Hermsdorff, Róbert Busa-Fekete, Mohammad Ghavamzadeh, Andres Muñoz Medina, Umar Syed

arXiv:2305.07751v1, 2023-05-12, http://arxiv.org/abs/2305.07751v1
###### Abstract
Modern statistical estimation is often performed in a distributed setting where each sample belongs to a single user who shares their data with a central server. Users are typically concerned with preserving the privacy of their samples, and also with minimizing the amount of data they must transmit to the server. We give improved private and communication-efficient algorithms for estimating several popular measures of the entropy of a distribution. All of our algorithms have constant communication cost and satisfy local differential privacy. For a joint distribution over many variables whose conditional independence is given by a tree, we describe algorithms for estimating Shannon entropy that require a number of samples that is linear in the number of variables, compared to the quadratic sample complexity of prior work. We also describe an algorithm for estimating Gini entropy whose sample complexity has no dependence on the support size of the distribution and can be implemented using a single round of concurrent communication between the users and the server. In contrast, the previously best-known algorithm has high communication cost and requires the server to facilitate interaction between the users. Finally, we describe an algorithm for estimating collision entropy that generalizes the best known algorithm to the private and communication-efficient setting.
Originally published at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). This version corrects some errors in the original version.
## 1 Introduction

The study of entropies has an extensive and rich history in mathematics and the sciences. Related quantities called "entropy" appear in many contexts (thermodynamics, information theory, dynamical systems [36], category theory [10], etc.). These may be broadly thought of as measures of the information of a system or process obeying certain properties, which, in turn, lead to natural measures of disorder, randomness, outcome diversity, information content, uniformity, etc.

For example, consider the problem of detecting fingerprinting on the web. Many websites track users across the web without their consent by recording (enough) information about their devices (e.g., installed fonts, operating system, timezone, etc.), a practice known as "browser fingerprinting". Entropy is the standard metric used to quantify the identifiability of the collected fingerprints. So a private and distributed method for estimating entropy can be used by a browser to warn users that this covert tracking may be occurring, without ever storing the fingerprints themselves.
In this paper, we study private and communication-efficient algorithms for estimating certain entropies of a distribution. Specifically, we give algorithms for estimating the following entropies, which are widely-used in many scientific fields to quantify the uncertainty, diversity and spread of a discrete distribution:
* _Shannon entropy_[34], a fundamental quantity in information theory.
* _Gini entropy_ (also known as Tsallis entropy [39] of order 2, or (one minus the) second frequency moment). Some of its applications include measuring ecological diversity [35, 29], market competition among firms [23], effective size of political parties [28], and suitability of features to split on during decision tree learning [32].
* _Collision entropy_ (also known as Renyi entropy [33] of order 2). Some of its applications include measuring the quality of random number generators [37], and determining the number of reads needed to reconstruct a DNA sequence [30].
Our algorithms are implemented in either the _non-interactive_ model (for the Gini and collision entropies), in which all users simultaneously exchange information with the server during a single round of communication, or the (stronger) _sequentially interactive_ model (for the Shannon entropy), in which the server queries users one at a time, possibly in an adaptive manner [24]. When analyzing the communication complexity of an algorithm, we prove bounds on the number of bits that each user transmits to the server. However, the server is allowed to broadcast an arbitrary amount of information to the users (this is also called the _blackboard_ model [18]), including shared random bits (also known as the _public coin_ model [1, 2, 25]). When analyzing the privacy of our algorithms we use the framework of _local differential privacy_[19], which ensures that the server learns very little about each user's data.
### Our contributions
* A sequentially interactive \(\alpha\)-local differentially private algorithm for estimating the Shannon entropy of a joint distribution on \(d\) variables within \(\epsilon d\) error using \(\tilde{O}(d/(\alpha^{2}\epsilon^{5}))\) samples and \(O(1)\) bits per sample. Our analysis assumes that each of the \(d\) variables has a constant support size and that their conditional independence graph is a tree. We also describe algorithms that have better dependence on \(1/\epsilon\) in certain special cases, such as when the tree has low diameter or is a chain. Our algorithms achieve \(O(1)\) communication complexity by observing only two or three of the \(d\) variables in any single sample; we call these _pair_ and _triplet observations_. The only previously known algorithm for estimating the Shannon entropy of a tree-structured distribution from _pair_ observations is a non-interactive algorithm due to Chow and Liu [13]. We prove that any non-interactive algorithm requires \(\Omega(d^{2})\)_pair_ observations to achieve \(O(d)\) error. We also prove that, for any sequentially interactive algorithm, \(\Omega(d/\epsilon)\) pair observations are necessary to achieve \(O(\epsilon d)\) error.
* A non-interactive \(\alpha\)-local differentially private algorithm for estimating the Gini entropy of a distribution within \(\epsilon\) error using \(O(1/(\alpha^{4}\epsilon^{2}))\) samples, \(O(1)\) bits per sample, and \(\tilde{O}(1)\) space. The best previous algorithms [11] either have a sample complexity that depends on the support size \(k\) of the distribution, or are sequentially interactive, and also require \(\Omega(k)\) bits per sample and \(\Omega(k)\) space. Also our error bound holds with high probability instead of only in expectation.
* A non-interactive \(\alpha\)-local differentially private algorithm for estimating the collision entropy of a distribution with support size \(k\) within \(\epsilon\) error using \(\tilde{O}(k^{2}/(\alpha^{4}\epsilon^{2}))\) samples, \(O(1)\) bits per sample, and \(\tilde{O}(1)\) space. Our algorithm generalizes the previously best known non-interactive algorithm [37] to the private and communication-efficient setting.
## 2 Related Work
There is a very extensive literature on distributed statistical estimation under communication constraints (see [41] for the paper that appears to have started this thread). Variations on the problem include whether communication is allowed between users, whether communication happens in one or multiple rounds, whether there is a shared source of randomness among the users, and whether communication is limited per-user or only cumulatively across all users.
Many previous results in this area bound the sample and communication complexity of estimating the parameters of a distribution \(P_{\theta}\), where \(\theta\in\Theta\) (see e.g. [22]). This problem class includes discrete distribution estimation, where the guarantees are usually stated as bounds on the relative entropy or total variation distance between the estimated and true distribution (see e.g. [5]). Other problems of interest are mean estimation [38] and heavy hitter estimation [4].
There has also been significant interest in differentially private statistical estimation, and of particular relevance is the work by [3], who gave private algorithms for estimating certain functionals of a distribution, including the Shannon entropy. However, they used the central model of differential privacy, while in this paper we prove guarantees using the (stronger) local model.
## 3 Entropy Measures
The _Shannon_, _Tsallis_, and _Renyi_ entropy of a discrete random variable \(X\) are defined as
\[\begin{aligned}
\text{(Shannon)}\qquad H(X) &= -\sum_{x}\Pr[X=x]\log\Pr[X=x], &&\text{(1)}\\
\text{(Tsallis)}\qquad T_{\gamma}(X) &= \frac{1}{\gamma-1}\Big(1-\sum_{x}\Pr[X=x]^{\gamma}\Big), &&\text{(2)}\\
\text{(Renyi)}\qquad R_{\gamma}(X) &= \frac{1}{1-\gamma}\log\Big(\sum_{x}\Pr[X=x]^{\gamma}\Big), &&\text{(3)}
\end{aligned}\]
where \(\gamma\) in (2) and (3) is a free parameter satisfying \(\gamma>0\) and \(\gamma\neq 1\). Both Tsallis and Renyi entropy are generalizations of Shannon entropy in the sense that \(\lim_{\gamma\to 1}T_{\gamma}(X)=\lim_{\gamma\to 1}R_{\gamma}(X)=H(X)\).
In this paper, we describe algorithms for estimating the Shannon entropy and special cases of the Tsallis and the Renyi entropy that are widely used in many scientific fields: \(T_{2}(X)\), also known as the _Gini entropy_, and \(R_{2}(X)\), also known as the _collision entropy_. Substituting \(\gamma=2\) into the definitions above and using the abbreviations \(G(X)\equiv T_{2}(X)\) and \(C(X)\equiv R_{2}(X)\), we have:
\[\begin{aligned}
\text{(Gini)}\qquad G(X) &\equiv T_{2}(X)=1-\sum_{x}\Pr[X=x]^{2},\\
\text{(Collision)}\qquad C(X) &\equiv R_{2}(X)=-\log\Big(\sum_{x}\Pr[X=x]^{2}\Big).
\end{aligned}\]
Gini entropy is so-called because it is equivalent to the Gini diversity index, a statistic proposed by Corrado Gini in 1912 to measure income and wealth inequality [21]. Collision entropy takes its name from the observation that if \(X\) and \(X^{\prime}\) are independent and identically distributed, then \(C(X)=-\log\Pr[X=X^{\prime}]\).
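As a numerical sanity check (our own illustration, not from the paper), all three entropies can be computed directly from a probability vector; for the uniform distribution over \(k\) outcomes, Shannon and collision entropy both equal \(\log k\), while Gini entropy equals \(1-1/k\):

```python
import math

def shannon(p):
    """H(X) = -sum_x p(x) log p(x), in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

def gini(p):
    """G(X) = T_2(X) = 1 - sum_x p(x)^2."""
    return 1.0 - sum(q * q for q in p)

def collision(p):
    """C(X) = R_2(X) = -log sum_x p(x)^2."""
    return -math.log(sum(q * q for q in p))

u = [0.25] * 4                      # uniform over k = 4 outcomes
assert abs(shannon(u) - math.log(4)) < 1e-12
assert abs(collision(u) - math.log(4)) < 1e-12
assert abs(gini(u) - 0.75) < 1e-12

# Renyi entropy is non-increasing in its order, so C(X) <= H(X) in general.
p = [0.5, 0.3, 0.2]
assert collision(p) <= shannon(p)
```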
For the problem of estimating Shannon entropy, we specialize to a high-dimensional setting, where we only observe a _pair_ (or _triplet_) of the dimensions at a time. That is, \(X\) is a random vector of \(d\) discrete variables, where \(d\) is large, but each \(X_{i}\) has a constant support size (e.g., they are binary), and we only observe two (or three) dimensions per sample. Without making any assumption about the joint distribution, the problem is intractable. One of the most common assumptions, which we also adopt in this work, is that the joint distribution is tree-structured. In this case, the distribution can be estimated by the celebrated Chow-Liu algorithm [13], which is known to be optimal for this task [8]. While the Chow-Liu algorithm requires \(\Omega(d^{2})\) pair observations to estimate the Shannon entropy, our sequential algorithm requires only \(\mathcal{O}(d)\) pair observations (see Section 5.2 for more details).
The _joint Shannon entropy_ \(H(X_{1},\ldots,X_{d})\) of a set of random variables \(X_{1},\ldots,X_{d}\) is the Shannon entropy \(H(X)\) of the random variable \(X=(X_{1},\ldots,X_{d})\). We write the abbreviated term _joint entropy_ when the use of Shannon entropy is obvious from context.
The _mutual information_ between two random variables \(X\) and \(Y\) and their _conditional mutual information_ given another random variable \(Z\) are defined as:
\[\begin{aligned}
I(X;Y) &= H(X)+H(Y)-H(X,Y), &&\text{(4)}\\
I(X;Y\mid Z) &= H(X,Z)+H(Y,Z)-H(X,Y,Z)-H(Z). &&\text{(5)}
\end{aligned}\]
## 4 Estimation Algorithms and Evaluation Criteria
A set of \(n\) users and a central server cooperate according to the following protocol to estimate the entropy of a random variable \(X\):
1. Each user \(i\in[n]\) draws an independent sample \(x_{i}\) according to the distribution of \(X\).
2. For \(r\) rounds:
    1. The server sends information to a subset of the users.
    2. Those users send (partial) information about their sample back to the server.
3. The server outputs an estimate of the Shannon entropy (Algorithms 1, 2, and 4) or of the Gini and collision entropies (Algorithm 5) of \(X\).
An _estimation algorithm_ specifies the steps that each user and the server perform to implement the above protocol. The algorithm is _non-interactive_ if the protocol consists of a single round in which all users participate. In a non-interactive algorithm the server cannot adapt its queries to users based on responses from other users, since the server communicates with all the users concurrently. An algorithm is _sequentially interactive_ if each round consists of communication with a single user, who is never contacted again. Sequential interactivity enables the server to query users adaptively [24].
We evaluate estimation algorithms according to the following criteria:
* _Sample complexity_: The number of users from whom the server requests data.
* _Space complexity_: The space used by the server when executing the algorithm.
* _Communication complexity_: The maximum number of bits transmitted by any single user to the server. Note that the amount of information sent by the server to the users is not counted when determining communication complexity.
* _Privacy_: Let \(x_{i}\) be the sample belonging to user \(i\) and \(o_{i}\) be the data observed by the server from user \(i\). We say that an algorithm satisfies _\(\alpha\)-local differential privacy_ if \[\Pr[o_{i}\in O\ |\ x_{i}=x]\leq e^{\alpha}\Pr[o_{i}\in O\ |\ x_{i}=x^{\prime}]\] for any user \(i\), measurable set \(O\), and possible sample values \(x,x^{\prime}\).
* _Error_: The absolute difference between the true entropy of the distribution and the estimate output by the server.
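The privacy criterion above can be verified in closed form for the randomized-response step used later in Algorithm 5, where a user reports their true \(b\)-bit hash value with probability \(\lambda=(e^{\alpha}-1)/(2^{b}+e^{\alpha}-1)\) and a uniformly random value otherwise. The following sketch (our own check; the function name is ours) confirms that the worst-case likelihood ratio between any two inputs is exactly \(e^{\alpha}\):

```python
import math

def response_probs(alpha, b):
    """Return (p_same, p_other): probability the mechanism outputs a given
    value when it equals / differs from the user's true hash value."""
    k = 2 ** b
    lam = (math.exp(alpha) - 1) / (k + math.exp(alpha) - 1)
    p_same = lam + (1 - lam) / k      # kept, or redrawn and hit by chance
    p_other = (1 - lam) / k           # only reachable via the uniform redraw
    return p_same, p_other

alpha, b = 1.0, 8
p_same, p_other = response_probs(alpha, b)
# alpha-LDP requires p_same / p_other <= e^alpha; here it holds with equality.
assert abs(p_same / p_other - math.exp(alpha)) < 1e-9
```

The algebra behind the assertion: \(p_{\text{same}}/p_{\text{other}} = 1 + \lambda 2^{b}/(1-\lambda) = e^{\alpha}\) for this choice of \(\lambda\).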
## 5 Estimating Shannon Entropy of Tree-structured Joint Distributions
In this section we assume that \(X=(X_{1},\ldots,X_{d})\) is a vector of \(d\) discrete variables, and that the support size of each variable \(X_{i}\) is constant (e.g., each variable is Boolean). We also assume that \(X\) has a _tree-structured_ distribution, which means that there exists a rooted tree \(T\) with \(d\) nodes such that for any node \(i\in[d]\) we have \(\Pr[X_{i}\ |\ X_{-i}]=\Pr[X_{i}\ |\ X_{\mathrm{pa}_{T}(i)}]\), where \(X_{-i}=(X_{1},\ldots X_{i-1},X_{i+1},\ldots,X_{d})\) and the node \(\mathrm{pa}_{T}(i)\) is the parent of node \(i\) in tree \(T\). If \(i\) is the root node, then we define \(\Pr[X_{i}\ |\ X_{\mathrm{pa}_{T}(i)}]=\Pr[X_{i}]\). Equivalently, a tree-structured distribution is a Markov random field with a tree as the underlying undirected graph. Essentially, the tree-structured assumption implies that the only correlations among the \(X_{i}\)'s are pairwise correlations. If \(T\) is a chain or a star we say that \(X\) is _chain-structured_ and _star-structured_, respectively. We will treat these two special cases at the end of this section (Algorithms 2 and 4, respectively).
### Estimating Entropy of a Marginal Distribution When the Support Size is Small
Before describing algorithms for estimating the Shannon entropy of tree-structured distributions, we use existing results for private distribution estimation to devise a locally differentially private estimator of the Shannon entropy that is sample and communication efficient when the support size of the distribution is small (as is the case for the individual marginals). The server will repeatedly invoke this estimator as a subroutine in the sections below.
First we recall that the difference in Shannon entropy of two random variables can be upper bounded according to Theorem 17.3.3 of [14] as
\[|H(X_{\mathbf{p}})-H(X_{\mathbf{p}^{\prime}})|\leq\|\mathbf{p}-\mathbf{p}^{ \prime}\|_{1}\log\frac{c}{\|\mathbf{p}-\mathbf{p}^{\prime}\|_{1}}\]
where \(X_{\mathbf{p}}\) and \(X_{\mathbf{p}^{\prime}}\) are two discrete random variables with support size \(c\) and distributions \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\). Next, we apply a locally differentially private learning algorithm for discrete distributions due to [4] that learns the parameters of a discrete distribution with small \(L_{1}\) error. The following theorem combines these two results by using the fact that \(x/\log(1/x)\leq 1\) whenever \(0<x\leq 1/2\).
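This continuity bound is easy to check numerically. The sketch below (our own illustration) evaluates both sides of the inequality for a small \(L_{1}\) perturbation:

```python
import math

def shannon(p):
    """H(X) = -sum_x p(x) log p(x), in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

p = [0.40, 0.30, 0.20, 0.10]
q = [0.42, 0.28, 0.19, 0.11]    # small perturbation of p
c = len(p)
l1 = sum(abs(a - b) for a, b in zip(p, q))      # here l1 = 0.06 <= 1/2
bound = l1 * math.log(c / l1)
assert abs(shannon(p) - shannon(q)) <= bound
```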
**Theorem 5.1**.: _For any discrete distribution \(X\) with support size \(c\) and for any \(1/2\geq\epsilon>0\), there exists an estimator satisfying \(\alpha\)-local differential privacy that estimates \(H(X)\) within \(\epsilon\) error using \(n=O(c^{2}\log\frac{1}{\delta}/(\epsilon^{2}\alpha^{2}))\) samples with probability \(1-\delta\) when \(\alpha\in(0,1)\)._
The algorithm resulting from Theorem 5.1 can be used to privately estimate the entropy \(H(X_{i})\), mutual information \(I(X_{i};X_{j})\), and conditional mutual information \(I(X_{i};X_{j}\ |\ X_{k})\) of any variables \(X_{i},X_{j}\) and \(X_{k}\). This can be done using \(O\left(\frac{\log\frac{1}{\delta}}{\alpha^{2}\epsilon^{2}}\right)\) samples per estimate and \(O(1)\) bits per sample, since each of these variables has constant support size, and both mutual information and conditional mutual information can be expressed in terms of entropies (Eqs. (4) and (5)). We call such an estimate \((\alpha,\epsilon,\delta)\)_-good_.
### Our Algorithm for Tree-structured Joint Distributions
Note that the support size of \(X\) can be exponential in \(d\). In the worst case, estimating the entropy of a distribution with support size \(k\) within constant error requires \(\tilde{\Theta}(k)\) samples [20]. However, the tree structure of \(X\) can be exploited to significantly reduce the sample complexity. In their seminal paper, Chow and Liu [13] proved the identity
\[H(X)=\sum_{i=1}^{d}H(X_{i})-\max_{T}\sum_{i=1}^{d}I(X_{i};X_{\text{pa}_{T}(i)}), \tag{6}\]
for any tree-structured random variable \(X\), where the maximization is taken over all possible trees connecting the \(d\) variables.
Eq. (6) suggests a communication-efficient algorithm for estimating the entropy of \(X\), which is known as the _Chow-Liu algorithm_: First, estimate each marginal entropy \(H(X_{i})\) and each mutual information \(I(X_{i};X_{j})\). Next, compute a maximum spanning tree on the \(d\) variables, where the weight of each edge \((X_{i},X_{j})\) is the estimate of the mutual information \(I(X_{i};X_{j})\). Finally, plug these estimators into Eq. (6).
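The identity in Eq. (6) can be exercised end to end on a toy example. The sketch below (our own illustration; it uses full samples to form the pairwise counts for simplicity, and omits the privacy mechanism) computes plugin estimates of the marginal entropies and pairwise mutual informations, finds a maximum spanning tree with Prim's algorithm, and plugs the results into Eq. (6):

```python
import math
import random
from collections import Counter
from itertools import combinations

def entropy(counts, n):
    """Plugin Shannon entropy (in nats) from a Counter of observations."""
    return -sum(c / n * math.log(c / n) for c in counts.values() if c)

random.seed(0)
n, d, flip = 50_000, 3, 0.2

# Sample a binary chain X1 -> X2 -> X3; each variable copies its parent
# with probability 1 - flip, so the joint distribution is tree-structured.
data = []
for _ in range(n):
    x = [random.random() < 0.5]
    for _ in range(d - 1):
        x.append(x[-1] if random.random() > flip else not x[-1])
    data.append(tuple(x))

H = [entropy(Counter(row[i] for row in data), n) for i in range(d)]
I = {}
for i, j in combinations(range(d), 2):
    H_ij = entropy(Counter((row[i], row[j]) for row in data), n)
    I[i, j] = H[i] + H[j] - H_ij          # plugin mutual information

# Prim's algorithm for the weight of the maximum spanning tree.
in_tree, W = {0}, 0.0
while len(in_tree) < d:
    u, v = max(((i, j) for i, j in I if (i in in_tree) ^ (j in in_tree)),
               key=lambda e: I[e])
    in_tree |= {u, v}
    W += I[u, v]

H_hat = sum(H) - W                        # Eq. (6)
h_flip = -(flip * math.log(flip) + (1 - flip) * math.log(1 - flip))
true_H = math.log(2) + (d - 1) * h_flip   # exact joint entropy of the chain
assert abs(H_hat - true_H) < 0.05
```

As the paper notes, only the spanning tree's *weight* matters for Eq. (6), which is what Algorithm 1 estimates without computing all pairwise mutual informations.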
The Chow-Liu algorithm requires \(\Omega(d^{2})\) samples, since it computes the mutual information between every pair of variables in order to compute a maximum spanning tree. However, estimating the right-hand side of Eq. (6) only requires estimating the _weight_ of the maximum spanning tree, which is significantly easier than finding the tree itself. Algorithm 1 adapts a technique from [12] that estimates the weight of the maximum spanning tree of a graph in time that is sublinear in the number of edges in the graph. The basic idea is to select nodes of the graph at random and use breadth-first search to determine the size of each of their connected components if we were to drop edges that do not meet a weight threshold, short-circuiting the search when the size becomes too large. These quantities are combined to estimate the weight of the maximum spanning tree. In our case, an edge weight is a mutual information between a pair of variables, which we estimate from pair observations.
```
1:Let \(M=\left\lceil\frac{2\log c}{\epsilon}\right\rceil\) and \(R=\left\lceil\frac{\log\frac{1}{\epsilon^{2}}}{\epsilon^{2}}\right\rceil\), where \(c\) is an upper bound on the support size of each \(X_{i}\).
2:for\(m=1,\ldots,M\)do
3:for\(r=1,\ldots,R\)do
4: Choose positive integer \(Z\) randomly according to \(\Pr[Z\geq z]=1/z\).
5: Choose \(i^{*}\) uniformly at random from \([d]\).
6: Initialize queue to contain \(i^{*}\) and a set \(V=\{i^{*}\}\).
7:while queue length is non-zero and shorter than \(\min\left\{\frac{2}{\epsilon},Z\right\}\)do\(\triangleright\) Breadth-first search
8: Remove \(i\) from front of queue and \(V=V\cup\{i\}\).
9:for\(j=[d]\setminus V\)do
10: Server computes \(\left(\alpha,\frac{\epsilon}{2},\frac{\delta}{d^{2}}\right)\)-good estimate \(\hat{I}_{ij}\) of \(I(X_{i};X_{j})\).
11: (Only compute this estimate once per pair.)
12:if\(\hat{I}_{ij}\geq\epsilon m\)then add \(j\) to back of queue.
13:if queue length is zero then\(\gamma_{mr}\gets 1\)else\(\gamma_{mr}\gets 0\).
14:\(\hat{\eta}_{m}\leftarrow\frac{d}{R}\sum_{r=1}^{R}\gamma_{mr}\).
15:\(\hat{W}\gets\epsilon Md-\epsilon\sum_{m=1}^{M}\hat{\eta}_{m}\)
16: Server computes \(\left(\alpha,\epsilon,\frac{\delta}{d}\right)\)-good estimate of each marginal entropy \(H(X_{i})\).
17: Let \(\hat{S}\) be the sum of the marginal entropy estimates.
18: Return \(\hat{H}=\hat{S}-\hat{W}\).
```
**Algorithm 1** Shannon entropy estimation for tree-structured distribution
**Theorem 5.2**.: _Algorithm 1 is \(\alpha\)-locally differentially private and has \(O(1)\) communication complexity. Assume that each \(X_{i}\) has constant support size. The expected number of samples requested by the server is \(O\left(\frac{d\log\frac{d}{\delta}}{\alpha^{2}\epsilon^{3}}\right)\). Let \(\hat{H}\) be the entropy estimate output by the algorithm. If \(X\) is tree-structured, then \(|\hat{H}-H(X)|\leq\epsilon d\) with probability \(1-\delta\)._
### Our Algorithm for Chain-structured Joint Distributions
Verma and Pearl [40] observed that if \(X\) is chain-structured with chain \(T\) then for any triplet (\(X_{i}\), \(X_{j},X_{k}\)), if \(X_{k}\) is on the unique path in \(T\) between \(X_{i}\) and \(X_{j}\), then \(I(X_{i};X_{j}|X_{k})=0\). Thus, for any triplet \((X_{i},X_{j},X_{k})\) in \(T\), at least one of \(I(X_{i};X_{j}|X_{k})\), \(I(X_{i};X_{k}|X_{j})\), or \(I(X_{j};X_{k}|X_{i})\) has to be zero. This observation alone is not enough to recover the chain, since the conditional mutual information \(I(X_{i};X_{j}|X_{k})\) can also be zero for \(X_{i},X_{j}\) and \(X_{k}\) when \(X_{k}\) is not on the path between \(X_{i}\) and \(X_{j}\) in the chain \(T\). Nevertheless, under the mild assumption that the mutual information \(I(X_{i},X_{j})\) between every pair of variables is distinct, we can recover the chain \(T\) by estimating the conditional mutual information of triplets of variables.
Our algorithm is similar to sorting algorithms such as _mergesort_ [26], which require \(O(d\log_{2}d)\) pairwise comparisons over \(d\) items. While we cannot compare pairs explicitly as in a sorting problem, for any triplet \((X_{i},X_{j},X_{k})\), we can use their conditional mutual information estimates to locally decide which "item" lies between the other two: i.e., \(X_{i}\leftrightarrow X_{j}\leftrightarrow X_{k}\), \(X_{i}\leftrightarrow X_{k}\leftrightarrow X_{j}\) or \(X_{k}\leftrightarrow X_{i}\leftrightarrow X_{j}\) in the chain \(T\). This suggests our Algorithm 2, which inserts the variables into a chain one by one in a sequential manner. Algorithm 2 calls Algorithm 3 as a subroutine to find the position at which to insert each new variable.
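The Verma-Pearl criterion behind these comparisons can be checked on data with plugin estimates. In the sketch below (our own illustration, without the privacy mechanism), samples from a binary chain \(X\leftrightarrow Y\leftrightarrow Z\) yield a conditional mutual information \(I(X;Z\,|\,Y)\) near zero, while conditioning on the endpoint \(Z\) does not vanish; this is exactly the signal used to place each variable:

```python
import math
import random
from collections import Counter

def H(samples):
    """Plugin Shannon entropy (in nats) of a list of observations."""
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in Counter(samples).values())

def cmi(data, i, j, k):
    """Plugin estimate of I(X_i; X_j | X_k) via Eq. (5)."""
    return (H([(r[i], r[k]) for r in data]) + H([(r[j], r[k]) for r in data])
            - H([(r[i], r[j], r[k]) for r in data]) - H([r[k] for r in data]))

random.seed(0)
data = []
for _ in range(50_000):
    x = random.random() < 0.5
    y = x if random.random() > 0.2 else not x   # chain x <-> y <-> z
    z = y if random.random() > 0.2 else not y
    data.append((x, y, z))

# Y separates X and Z, so I(X;Z|Y) ~ 0, while I(X;Y|Z) stays bounded away
# from zero; this contrast decides where each new variable is inserted.
assert cmi(data, 0, 2, 1) < 0.01
assert cmi(data, 0, 1, 2) > 0.05
```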
**Theorem 5.3**.: _Algorithm 2 is \(\alpha\)-locally differentially private and has \(O(1)\) communication complexity. The number of samples requested by the server is \(O\left(\frac{d\log\frac{d}{\delta}}{\alpha^{2}\epsilon^{2}}\right)\). Let \(\hat{H}\) be the entropy estimate output by the algorithm. If \(X\) is chain-structured and \(|I(X_{i};X_{j})-I(X_{j};X_{k})|\geq\epsilon\), then \(|\hat{H}-H(X)|\leq\epsilon d\) with probability \(1-\delta\)._
### Our Algorithm for Star-structured Joint Distributions
If \(X\) is star-structured then recovering the star \(T\) is a matter of identifying its center, which can be done by computing the mutual information between only \(\tilde{\mathcal{O}}(d)\) pairs of variables. The algorithm picks a random marginal \(X_{i}\) and takes a "Prim's step" [31], i.e., chooses the neighboring node (say \(X_{k}\)) that has the largest mutual information with \(X_{i}\). Assuming that the mutual information \(I(X_{i},X_{j})\) between every pair of variables is distinct, the edge between \(X_{i}\) and \(X_{k}\) is in the maximal spanning tree. Next, the algorithm estimates \(\sum_{j\neq i}I(X_{i},X_{j})\) and \(\sum_{j\neq k}I(X_{k},X_{j})\) to decide whether \(X_{i}\) or \(X_{k}\) is the center node of the star. Algorithm 4 presents the procedure, and Theorem 5.4 gives its sample complexity.
**Theorem 5.4**.: _Algorithm 4 is \(\alpha\)-locally differentially private and has \(O(1)\) communication complexity. The number of samples requested by the server is \(O\left(\frac{d\log\frac{d}{\delta}}{\alpha^{2}\epsilon^{2}}\right)\). Let \(\hat{H}\) be the entropy estimate output by the algorithm. If \(X\) is star-structured and \(|I(X_{i};X_{j})-I(X_{j};X_{k})|\geq\epsilon\), then \(|\hat{H}-H(X)|\leq\epsilon d\) with probability \(1-\delta\)._
### Lower Bounds
We prove sample complexity lower bounds for estimating the Shannon entropy of a tree-structured joint distribution from pair observations. Our first lower bound focuses on the non-interactive case, when the algorithm must select all the pairs in advance. The second claim is more general, and holds for all sequentially interactive algorithms.
```
1:\(S=[d]\), \(C=\emptyset\), pick an arbitrary \(i,j,k\in S\) and set \(S=S\setminus\{i,j,k\}\).
2:Server computes \((\alpha,\epsilon,\delta)\)-good estimates \(\hat{I}(X_{i};X_{j}\,|\,X_{k})\), \(\hat{I}(X_{i};X_{k}\,|\,X_{j})\) and \(\hat{I}(X_{k};X_{j}\,|\,X_{i})\).
3:if\(\hat{I}(X_{i};X_{j}\,|\,X_{k})>\epsilon\)then\(\mathbf{x}_{1}=(i,k,j)\)
4:elseif\(\hat{I}(X_{i};X_{k}\,|\,X_{j})>\epsilon\)then\(\mathbf{x}_{1}=(i,j,k)\)
5:elseif\(\hat{I}(X_{k};X_{j}\,|\,X_{i})>\epsilon\)then\(\mathbf{x}_{1}=(j,i,k)\)
6:for\(i\in(1,\ldots,d-3)\)do
7: Pick item \(j\) from \(S\) and set \(S=S\setminus\{j\}\), \(r=x_{i,1}\) and \(p=x_{i,i+2}\)
8: Server computes \((\alpha,\epsilon,\delta)\)-good estimates \(\hat{I}(X_{j};X_{p}\,|\,X_{r})\), \(\hat{I}(X_{r};X_{j}\,|\,X_{p})\).
9:if\(\hat{I}(X_{j};X_{p}\,|\,X_{r})>\epsilon\)then\(\mathbf{x}_{i+1}=(j,\mathbf{x}_{i})\)\(\triangleright\) Attach \(X_{j}\) to the head of the chain
10:elseif\(\hat{I}(X_{r};X_{j}\,|\,X_{p})>\epsilon\)then\(\mathbf{x}_{i+1}=(\mathbf{x}_{i},j)\)\(\triangleright\) Attach \(X_{j}\) to the tail of the chain
11:else\(\triangleright\) Insert \(X_{j}\) into the chain defined by \(\mathbf{x}_{i}\)
12:\(\ell=\textsc{TernarySearch}(\mathbf{x}_{i},1,i+2,j)\)\(\triangleright\) Defined in Algorithm 3
13:\(\mathbf{x}_{i+1}=(\mathbf{x}_{i}[1,\ldots,\ell],j,\mathbf{x}_{i}[\ell+1,\ldots,i+2])\)
14: Create chain \(T\) according to the order defined by \(\mathbf{x}_{d-2}\).
15:Server computes \((\alpha,\epsilon,\delta)\)-good estimate of each term in Eq. (6) using \(T\) and returns \(\hat{H}\).
```
**Algorithm 2** Shannon entropy estimation for chain-structured distribution
```
1:if\(\ell_{l}=\ell_{h}-1\)thenreturn\(\ell_{l}\)
2: Pick the median position \(k=\lceil(\ell_{l}+\ell_{h})/2\rceil\) and set \(i=x_{\ell_{l}}\).
3: Server computes \((\alpha,\epsilon,\delta)\)-good estimate \(\hat{I}(X_{i};X_{j}\,|\,X_{x_{k}})\).
4:if\(\hat{I}(X_{i};X_{j}\,|\,X_{x_{k}})>\epsilon\)thenreturnTernarySearch\((\mathbf{x},\ell_{l},k,j)\)
5:elsereturnTernarySearch\((\mathbf{x},k,\ell_{h},j)\)
```
**Algorithm 3** TernarySearch\((\mathbf{x},\ell_{l},\ell_{h},j)\)
```
1:Pick \(i\in[d]\) uniformly at random.
2:Server computes \((\alpha,\epsilon,\delta)\)-good estimate \(\hat{I}(X_{i},X_{j})\) for all \(j\in[d]\setminus\{i\}\).
3: Find \(k=\arg\max_{j\in[d]\setminus\{i\}}\hat{I}(X_{i},X_{j})\)
4:Server computes \((\alpha,\epsilon,\delta)\)-good estimate \(\hat{I}(X_{k},X_{j})\) for all \(j\in[d]\setminus\{k\}\).
5:if\(\sum_{j}\hat{I}(X_{i},X_{j})>\sum_{j}\hat{I}(X_{k},X_{j})\)then let \(T\) be a star with \(X_{i}\) as center
6:else let \(T\) be a star with \(X_{k}\) as the center
7:Server computes \((\alpha,\epsilon,\delta)\)-good estimate of each term in Eq. (6) using \(T\) and returns \(\hat{H}\).
```
**Algorithm 4** Shannon entropy estimation for star-structured distribution
**Theorem 5.5**.: _For any non-interactive algorithm that uses \(o(d^{2})\) pair observations to estimate Shannon entropy, there exists a tree-structured distribution over \(\{0,1\}^{d}\) such that the error of the algorithm is \(\Omega(d)\) with constant probability._
**Theorem 5.6**.: _For any \(\epsilon>0\) and for any sequentially interactive algorithm that uses \(o(d/\epsilon)\) pair observations to estimate Shannon entropy, there exists a tree-structured distribution on \(\{0,1\}^{d}\) such that the error of the algorithm is \(\Omega(\epsilon\cdot d)\) with constant probability._
The lower bound given in Theorem 5.5 is based on Turán's theorem [9], which we use to show that for any algorithm with sub-quadratic sample complexity and for any constant \(C\in(0,1)\), there is a graph with \(d\) nodes containing a \(C\cdot d\)-clique (when \(d\) is large enough) such that the algorithm does not observe any edge of that clique. This implies that the additive error of the algorithm is linear in \(d\). The lower bound for sequentially interactive algorithms in Theorem 5.6 is based on an information-theoretic approach. Interestingly, our construction of problem instances to which we apply Le Cam's method is fairly simple, since it contains \(d\) independent random variables in every case. Nevertheless, this lower bound shows that Algorithm 1 is optimal in \(d\).
### Comparison to Prior Work
To the best of our knowledge, the Chow-Liu algorithm is the only published method for estimating the entropy of a distribution that takes advantage of its tree structure. Since that algorithm is non-interactive, the lower bound in Theorem 5.5 shows that our algorithms have provably better sample complexity when the number of variables \(d\) is large (note that the dependence on \(d\) in each of Theorems 5.2, 5.3 and 5.4 is sub-quadratic). The Chow-Liu algorithm can also be used to estimate the distribution itself, not just its entropy, and it has recently been shown [8, 17] that the algorithm has optimal sample complexity when given full observations (i.e., samples of the entire vector \((X_{1},\ldots,X_{d})\) and not just pairs or triplets of the variables). Thus the Chow-Liu algorithm is optimal for estimating a tree-structured distribution, but suboptimal for estimating the _entropy_ of a tree-structured distribution. The root cause of this difference appears to be that it is significantly easier to estimate the weight of the maximum spanning tree than to find the tree itself.
## 6 Estimating Gini and Collision Entropies
In this section, we describe a non-interactive protocol (Algorithm 5) that estimates both the Gini and collision entropies of a discrete random variable \(X\) while observing only \(b\) bits per sample from its distribution, and does not require any extra assumption on the distribution of \(X\) (such as assuming it is tree-structured). First, the server partitions all users into pairs (assume for simplicity that the number of users is even). The server then distributes a \(b\)-bit hash function to each user, along with a distinct salt to each user pair. Each user then hashes their sample along with their salt, and returns the hash value to the server. The server computes entropy estimates based on the number of observed hash collisions across all pairs. In Algorithm 5, we let \(\langle x,y\rangle\) denote a binary string that encodes \(x\), followed by a delimiter, followed by \(y\).
```
1: Each user \(i\in[n]\) draws a sample \(x_{i}\) independently from the distribution of \(X\).
2: Server partitions the \(n\) users into \(\frac{n}{2}\) disjoint pairs.
3: Let \(q_{i}\in\left[\frac{n}{2}\right]\) be the index of the pair containing user \(i\).
4: Server transmits \(q_{i}\) and hash function \(h:\{0,1\}^{*}\mapsto\{0,1\}^{b}\) to each user \(i\).
5: Each user \(i\) generates a \(b\)-bit hash value \(v_{i}=h(\langle q_{i},x_{i}\rangle)\) for their sample \(x_{i}\).
6: Each user \(i\) lets \(\hat{v}_{i}=v_{i}\) with probability \(\lambda=\frac{e^{\alpha}-1}{2^{b}+e^{\alpha}-1}\) and else draws \(\hat{v}_{i}\) uniformly from \([2^{b}]\).
7: Server receives \(\hat{v}_{i}\) from each user \(i\).
8: If pair \(q\) contains users \(i\) and \(j\) then let \(c_{q}=\mathbf{1}[\hat{v}_{i}=\hat{v}_{j}]\) indicate whether a hash collision was observed for pair \(q\).
9: Server computes \(\bar{c}=\frac{2}{n}\sum_{q}c_{q}\).
10: Server outputs \(\hat{G}=1-\frac{2^{b}\bar{c}-1}{\lambda^{2}(2^{b}-1)}\) and \(\hat{C}=-\log\left(1-\hat{G}\right)\).
```
**Algorithm 5** Gini and collision entropy estimation
Algorithm 5 is based on the observation that if \(X\) and \(X^{\prime}\) are independent and identically distributed then the Gini entropy is equal to \(1-\Pr[X=X^{\prime}]\) and the collision entropy is equal to \(-\log\Pr[X=X^{\prime}]\). If the server observed each sample directly then it could estimate \(\Pr[X=X^{\prime}]\) using the collision frequency, i.e., the fraction of sample pairs \((x_{i},x_{j})\) such that \(x_{i}=x_{j}\). However, the server only observes a \(b\)-bit hash of each sample. Every sample pair with a true collision also produces a hash collision, while among sample pairs without a true collision, about a \(\frac{1}{2^{b}}\) fraction produce a hash collision. Therefore the true collision frequency
can be estimated using an appropriately bias-corrected hash collision frequency, and the server uses this estimate to approximate the Gini and collision entropies.
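The bias correction just described can be written out directly. The sketch below (our own simplification of Algorithm 5: a salted SHA-256 hash stands in for the shared hash function \(h\), and names like `estimate_gini` are ours) simulates the pairing, hashing, and randomized-response steps, then inverts the hash-collision frequency to recover \(\Pr[X=X^{\prime}]\):

```python
import hashlib
import math
import random

def hash_b(salt, x, b):
    """b-bit hash of sample x under the pair's salt (stand-in for h(<q, x>))."""
    digest = hashlib.sha256(f"{salt}|{x}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** b)

def estimate_gini(samples, alpha, b, rng):
    k = 2 ** b
    lam = (math.exp(alpha) - 1) / (k + math.exp(alpha) - 1)
    pairs = len(samples) // 2
    collisions = 0
    for q in range(pairs):
        reported = []
        for x in (samples[2 * q], samples[2 * q + 1]):
            v = hash_b(q, x, b)
            # Randomized response: keep v w.p. lam, else report a uniform value.
            reported.append(v if rng.random() < lam else rng.randrange(k))
        collisions += reported[0] == reported[1]
    c_bar = collisions / pairs
    p_collide = (k * c_bar - 1) / (lam ** 2 * (k - 1))  # debiased Pr[X = X']
    return 1.0 - p_collide

rng = random.Random(0)
p = [0.5, 0.3, 0.2]
samples = rng.choices(range(len(p)), weights=p, k=200_000)
g_hat = estimate_gini(samples, alpha=5.0, b=8, rng=rng)
true_gini = 1 - sum(q * q for q in p)    # 0.62
assert abs(g_hat - true_gini) < 0.1
```

The debiasing step follows from \(\mathbb{E}[\bar{c}]=\lambda^{2}\Pr[X=X^{\prime}]\frac{2^{b}-1}{2^{b}}+\frac{1}{2^{b}}\); the large \(\alpha\) in the demo merely keeps the variance small enough for a quick check.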
The analysis of Algorithm 5 is given in Theorem 6.1 below. As is customary, for the analysis we assume that the hash function \(h\) is constructed by assigning each element of its domain to a uniform random element of its range [7]. See Appendix G for the proof of the theorem.
**Theorem 6.1**.: _Algorithm 5 is \(\alpha\)-locally differentially private, has \(\tilde{O}(b)\) communication complexity and \(\tilde{O}(b)\) space complexity. Let \(\hat{G}\) and \(\hat{C}\) be the outputs of the algorithm. Let \(\alpha,\epsilon,\delta\in(0,1)\). If \(n=\Omega\left(\frac{2^{4b}\max\{1-G(X),2^{-b},\epsilon\}\log\frac{1}{\delta}}{\alpha^{4}\epsilon^{2}}\right)\), then \(|\hat{G}-G(X)|\leq\epsilon\) with probability at least \(1-\delta\). Also, if \(X\) has support size \(k\) and \(n=\Omega\left(\frac{2^{4b}k^{2}\log\frac{1}{\delta}}{\alpha^{4}\epsilon^{2}}\right)\), then \(|\hat{C}-C(X)|\leq\epsilon\) with probability at least \(1-\delta\)._
### Comparison to Prior Work
Recall that Gini entropy is proportional to the second frequency moment. Local differentially private algorithms for estimating frequency moments were recently studied in [11]. Letting \(b=1\) in Algorithm 5 yields a sample complexity of \(\tilde{O}(1/(\alpha^{4}\epsilon^{2}))\), which is independent of the distribution's support size, unlike the sample complexity of the non-interactive algorithm for estimating the second frequency moment from [11]. Also our algorithm only uses \(1\) bit per sample and \(\tilde{O}(1)\) space, while the previous algorithm uses \(\Omega(k)\) bits per sample and \(\Omega(k)\) space, where \(k\) is the support size of the distribution. The authors in [11] asked whether there is a non-interactive algorithm for privately estimating frequency moments with a sample complexity that is independent of the distribution's support size. Here we affirmatively answer this open question for the second frequency moment.
The best known algorithm for estimating collision entropy using \(\tilde{O}(1)\) space is due to [37]. The sample complexity of their algorithm is \(\tilde{O}\left(k/\epsilon^{2}\right)\) and its communication complexity is \(O(\log k)\) bits per user. Letting \(b=1\) in Algorithm 5 generalizes the previous algorithm to the private and communication-efficient setting. It was shown in [15] that (conditioned on a plausible conjecture) any algorithm that estimates collision entropy to within \(O(1)\) error using \(O(1)\) space requires \(\Omega(k)\) samples.
## 7 Experiments
In this section we present two sets of experiments to support our theoretical findings. First, we demonstrate that Algorithm 1 is indeed able to estimate the Shannon entropy of tree-structured distributions with linear sample complexity in \(d\). Thus it has a superior sample complexity compared to the state-of-the-art non-interactive method [13, 8], which has a quadratic sample complexity in \(d\). The sample complexity is defined here in terms of number of observations from pairs of marginals. In the second set of experiments, we use our Algorithm 5 to estimate the collision entropy of discrete distributions, and compare its performance to that of the best-known communication efficient, non-private algorithm for this task, Skorski's algorithm [37].
**Estimating Shannon entropy:** To estimate the Shannon entropy of a tree-structured distribution as given by Eq. (6), the marginal entropies and the mutual information between certain pairs of marginals have to be estimated. The Chow-Liu algorithm estimates the mutual information between all pairs of marginals, which results in a quadratic sample complexity, whereas Algorithm 1 estimates the mutual information only for a linear fraction of pairs. Both algorithms estimate the marginal entropy values by sampling the marginals independently from the mutual information estimations, and they both use the same \(\epsilon\) additive error and privacy budget \(\alpha\) to estimate the mutual information between pairs of marginals. Thus it is fair to compare their performance in terms of the number of pairs for which they estimate the mutual information. For the Chow-Liu algorithm this is always \(d^{2}\), whereas our Algorithm 1 is randomized, thus we evaluate it over \(100\) repetitions and report the average.
We ran this experiment on random tree-structured joint distributions over \(\{0,1\}^{d}\). To create a random tree-structured distribution, we first sample the structure of the distribution by taking the maximum spanning tree of a complete graph with \(d\) nodes and edge weights distributed according to a standard normal. Next, we sample the parameters for each marginal uniformly at random from \([0,1]\), and then we impose the tree structure by inducing dependence between pairs of variables while preserving the marginals. Specifically, for two marginals \(X_{i}\) and \(X_{j}\) with sampled parameters \(p_{i}\) and \(p_{j}\), we set: \(P(X_{i}=0,X_{j}=0)=(1-p_{i})\cdot(1-p_{j})+r_{ij}\), \(P(X_{i}=1,X_{j}=1)=p_{i}\cdot p_{j}+r_{ij}\), \(P(X_{i}=0,X_{j}=1)=(1-p_{i})\cdot p_{j}-r_{ij}\), and \(P(X_{i}=1,X_{j}=0)=p_{i}\cdot(1-p_{j})-r_{ij}\), where \(r_{ij}\) is sampled uniformly at random from a range of values such that each probability stays positive. The results are displayed in Figure 1(a). It is clear that the sample size (_i.e._, the number of mutual information estimates) is close to linear for our algorithm, whereas the Chow-Liu algorithm requires a larger sample size.
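The sampling procedure described above can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the maximum spanning tree is found with a simple Prim-style greedy pass, and `r_ij` is drawn from the largest range keeping all four cell probabilities nonnegative.

```python
import random

def random_tree_distribution(d, seed=0):
    """Sample a random tree-structured distribution over {0,1}^d."""
    rng = random.Random(seed)
    # Standard-normal edge weights on the complete graph over d nodes.
    weights = {(i, j): rng.gauss(0, 1) for i in range(d) for j in range(i + 1, d)}
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        # Greedily add the heaviest edge leaving the current tree (Prim).
        u, v = max(((i, j) for (i, j) in weights
                    if (i in in_tree) != (j in in_tree)),
                   key=lambda e: weights[e])
        edges.append((u, v))
        in_tree |= {u, v}
    # Marginal parameters p_i = P(X_i = 1), uniform on [0, 1].
    p = [rng.random() for _ in range(d)]
    joints = {}
    for (i, j) in edges:
        # r_ij must keep the two "minus" cells of the pairwise joint positive.
        hi = min((1 - p[i]) * p[j], p[i] * (1 - p[j]))
        r = rng.uniform(0, hi)
        joints[(i, j)] = {(0, 0): (1 - p[i]) * (1 - p[j]) + r,
                          (1, 1): p[i] * p[j] + r,
                          (0, 1): (1 - p[i]) * p[j] - r,
                          (1, 0): p[i] * (1 - p[j]) - r}
    return edges, p, joints
```

By construction each pairwise joint sums to one and its row/column sums reproduce the sampled marginals, which is the property the experiment relies on.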
**Estimating collision entropy:** In this set of experiments, we drew samples from a discrete exponential distribution \(p_{i}\propto e^{-i}\) with support size \(k=1000\) and estimated the collision entropy using the algorithm of [37] (the previously best-known communication-efficient algorithm for this task) and our Algorithm 5 (with and without local differential privacy). The algorithm of [37] requires \(O(\log k)\) bits per sample and is not private, while our algorithm only requires 1 bit per sample and is differentially private. The results are displayed in Figure 1(b). The previous algorithm has 5% estimation error after observing 10000 bits, while our algorithm has less than 3.5% estimation error. Thus our algorithm has lower error for the same communication cost while also being locally differentially private.
## 8 Conclusion and Future Work
Estimating entropy is of importance in many practical applications. In this paper, we studied three widely used entropy measures: Shannon, Gini and collision entropy. We described estimation algorithms for each entropy that require minimal communication and satisfy local differential privacy. We also validated our theoretical results with simulations.
Our sequentially interactive algorithm for estimating Shannon entropy of high-dimensional tree-structured distributions observes only two of these dimensions per sample and has a sample complexity \(O(d/\epsilon^{5})\). Our approach relies on the celebrated Chow-Liu approximation [13], providing a substantial improvement on the \(\Omega(d^{2})\) sample complexity of the original non-interactive Chow-Liu algorithm. We also identified two special cases (viz., when the underlying graphical model of the joint distribution is either a chain or star graph) and provided algorithms with a sample complexity of \(\tilde{O}(d\log d/\epsilon^{2})\).
Our algorithm for Gini and collision entropy estimation also improved on the state-of-the-art, either by improving the sample complexity and communication complexity of previous work, or by generalizing the best known algorithm to the private and communication-efficient setting.
A natural extension of our work on Shannon entropy estimation is to consider higher-order correlations in the Chow-Liu decomposition [27]. In contrast to the second-order case which reduces to a maximum spanning tree problem, discovering the underlying structure of the joint distribution is already computationally challenging. However, efficiently estimating the entropy of the resulting distribution might still be possible.
# Revisited \(\mathcal{T}\), \(\mathcal{P}\)-odd spin-rotational Hamiltonian of HfF\({}^{+}\) for precise \(e\)EDM measurements
Alexander N. Petrov, Leonid V. Skripnikov, Anatoly V. Titov
arXiv:2302.02856v2, 2023-02-06, http://arxiv.org/abs/2302.02856v2
###### Abstract
The current constraint on the electron electric dipole moment (\(e\)EDM), \(|d_{e}|<4.1\times 10^{-30}\)\(e\)-cm (90% confidence), was recently established using the trapped \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) molecular ions in the \(J=1\) rotational level of its \({}^{3}\Delta_{1}\) electronic state [T. S. Roussy, L. Caldwell, T. Wright, et al., arXiv:2212.11841]. The extensive experimental study of the HfF\({}^{+}\) cation provides detailed spectroscopy of the \(\Omega-\)doublet levels in the external rotating electric and magnetic fields. We showed that previously developed theoretical approaches can fully reproduce the latest experimental data. Their justification from the first principles is very important for the examination of both modern molecular theory and possible systematic uncertainties in the interpretation of the experimental data obtained with high accuracy.
## I Introduction
At the current level of experimental sensitivity, the measurement of a non-zero electron electric dipole moment (\(e\)EDM, \(d_{e}\)) would be a clear signature of physics beyond the Standard model (SM) [1; 2; 3; 4; 5; 6]. Recently the JILA group obtained a new constraint on the \(e\)EDM, \(|d_{e}|<4.1\times 10^{-30}\)\(e\cdot\)cm (90% confidence) [7], using \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) ions trapped by a rotating electric field. The measurements were performed on the ground rotational level of the metastable electronic \({}^{3}\Delta_{1}\) state. This result improves on the 2018 ACME bound \(|d_{e}|\lesssim 1.1\times 10^{-29}\)\(e\cdot\)cm [8] by a factor of 2.4 and on the first \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) result \(|d_{e}|\lesssim 1.3\times 10^{-28}\) [9] by a factor of about 32.
According to estimates within the Standard model, the \(e\)EDM value is roughly ten orders of magnitude smaller [10; 11], so there is still a wide room for more sensitive experiments to search for new physics before encountering the SM background. A few experiments to search for the \(e\)EDM with other molecules are under preparation now, including ThF\({}^{+}\)[12], BaF [13], YbF [14] and YbOH [15; 16].
Considering the great potential for investigations of \(\mathcal{T}\), \(\mathcal{P}\)-violating effects (\(\mathcal{T}\) is the time reversal, \(\mathcal{P}\) is the space parity) on HfF\({}^{+}\) ions, it was proposed in Ref. [17] to use \({}^{177}\)Hf\({}^{19}\)F\({}^{+}\) and \({}^{179}\)Hf\({}^{19}\)F\({}^{+}\) ions to measure the nuclear magnetic quadrupole moment (MQM) of the \({}^{177}\)Hf and \({}^{179}\)Hf nuclei, which have spins \(I=7/2\) and \(I=9/2\), respectively. Then the \(\mathcal{T}\),\(\mathcal{P}\)-violating effects arising from the MQM and \(e\)EDM in \({}^{177}\)Hf\({}^{19}\)F\({}^{+}\) and in \({}^{179}\)Hf\({}^{19}\)F\({}^{+}\) were studied in detail in Refs. [18; 19; 20; 21]. The MQM shift was calculated as a function of the external static electric field, and it was shown that MQM effects can be distinguished from the \(e\)EDM because the MQM shift differs between hyperfine-structure levels.
## II Level scheme of \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) for the electron EDM search
The \({}^{180}\)Hf isotope is spinless whereas the \({}^{19}\)F isotope has a non-zero nuclear spin \(I=1/2\), which gives rise to the hyperfine energy splitting between the levels with total angular momentum \(F=3/2\) and \(F=1/2\), \(\mathbf{F}=\mathbf{J}+\mathbf{I}\) (see e.g. Fig. 1 in Ref. [7]). In the absence of external fields, each hyperfine level has two parity eigenstates known as the \(\Omega\)-doublet. In the external rotating electric field the \(F=3/2\) state splits into four Stark doublet levels. Two of them, with the projection of the total momentum on the rotating field direction \(m_{F}\!=\!\pm 3/2\), are of interest for the \(e\)EDM search experiment. The rotating magnetic field, which is parallel or antiparallel to the rotating electric field, further splits each Stark doublet into a pair of Zeeman sublevels. The energy splitting, \(f\), between the sublevels \(m_{F}\!=\!\pm 3/2\) is measured in the experiments. The measurement of \(f\) is repeated under different conditions, which can be characterized by binary switch parameters such as \(\mathcal{\tilde{B}}\), \(\mathcal{\tilde{D}}\), \(\mathcal{\tilde{R}}\) being switched from \(+1\) to \(-1\) (see Refs. [7; 9] for details). \(\mathcal{\tilde{B}}=+1(-1)\) means that the rotating magnetic field, \(\mathbf{B}_{\mathrm{rot}}\), is parallel (antiparallel) to the rotating electric field \(\mathbf{E}_{\mathrm{rot}}\); \(\mathcal{\tilde{D}}=+1(-1)\) means that the measurement was performed for the lower (upper) Stark level; and \(\mathcal{\tilde{R}}\) defines the direction of rotation of the fields. Here the notation \(f^{S_{1},S_{2}\cdots}\) denotes a component which is odd under the switches \(S_{1},S_{2},\ldots\). The \(e\)EDM signal manifests as the main contribution to the \(f^{\mathcal{BD}}\) channel according to
\[f^{\mathcal{BD}}=2d_{e}E_{\mathrm{eff}}, \tag{1}\]
where \(E_{\mathrm{eff}}\) is the effective electric field, which can be obtained only in precise calculations of the electronic structure. The values \(E_{\mathrm{eff}}=24\) GV/cm [22; 23], 22.5(0.9) GV/cm [24], 22.7(1.4) GV/cm [25] were obtained.
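As a rough order-of-magnitude check of Eq. (1): taking \(E_{\mathrm{eff}}\approx 23\) GV/cm (a representative value from the range quoted above, an assumption for this estimate), the current bound \(|d_{e}|<4.1\times 10^{-30}\)\(e\cdot\)cm corresponds to a frequency shift of only a few tens of microhertz.

```python
# d_e * E_eff is an energy in eV when d_e is in e*cm and E_eff in V/cm.
h_eV_s = 4.135667696e-15            # Planck constant, eV*s (CODATA)
d_e_bound = 4.1e-30                 # |d_e| bound, e*cm
E_eff = 23e9                        # effective field, V/cm (representative value)
shift_hz = 2 * d_e_bound * E_eff / h_eV_s   # Eq. (1) converted to Hz
# shift_hz is about 4.6e-5 Hz, i.e. tens of microhertz
```

This illustrates why the \(f^{\mathcal{BD}}\) channel must be controlled to extreme precision.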
Beyond the \(f^{\mathcal{BD}}\) other components \(f^{0}\) (even under
all switches), \(f^{\mathcal{D}}\), \(f^{\mathcal{B}}\) are measured with high accuracy [7; 26], which, in particular, is required to control a number of systematic effects. As a matter of fact, all the components are measured with the same scheme but with different treatment of the raw experimental data. In turn, a precise scheme for the theoretical calculation of the Stark and Zeeman effects in rotating fields was developed in Refs. [27; 28]. Recently the method was extended to the case of linear triatomic molecules [29].
The main goal of the paper is to calculate the parameters \(f^{\mathcal{BD}}\), \(f^{0}\), \(f^{\mathcal{D}}\) and \(f^{\mathcal{B}}\) from the _first principles_ and to compare them with the experimental data. Perfect agreement of the theoretical values with the experimental data is very important for the examination of both modern molecular theory and possible systematic uncertainties in the interpretation of highly accurate experimental data.
## III Theoretical methods
Following Refs. [27; 28; 30; 31], the energy levels and wave functions of the \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) ion are obtained by a numerical diagonalization of the molecular Hamiltonian (\(\mathbf{\hat{H}}_{\mathrm{mol}}\)) in the external rotating electric \(\mathbf{E}(t)\) and magnetic \(\mathbf{B}(t)\) fields over the basis set of the electronic-rotational wavefunctions
\[\Psi_{\Omega}\theta^{J}_{M,\Omega}(\alpha,\beta)U^{\mathrm{F}}_{M_{I}}. \tag{2}\]
Here \(\Psi_{\Omega}\) is the electronic wavefunction, \(\theta^{J}_{M,\Omega}(\alpha,\beta)=\sqrt{(2J+1)/4\pi}D^{J}_{M,\Omega}(\alpha,\beta,\gamma=0)\) is the rotational wavefunction, \(\alpha,\beta,\gamma\) are Euler angles, \(U^{\mathrm{F}}_{M_{I}}\) is the fluorine nuclear spin wavefunction, and \(M\) (\(\Omega\)) is the projection of the molecular angular momentum, \(\mathbf{J}\), on the lab \(\hat{z}\) (internuclear \(\hat{n}\)) axis; \(M_{I}=\pm 1/2\) is the projection of the nuclear angular momentum on the same axis. Note that \(M_{F}=M_{I}+M\) is not equal to \(m_{F}\). The latter, as stated above, is the projection of the total momentum on the rotating electric field direction.
We write the molecular Hamiltonian for \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) in the form:
\[\mathbf{\hat{H}}_{\mathrm{mol}}=\mathbf{\hat{H}}_{\mathrm{el}}+\mathbf{\hat{H }}_{\mathrm{rot}}+\mathbf{\hat{H}}_{\mathrm{hfs}}+\mathbf{\hat{H}}_{\mathrm{ ext}}. \tag{3}\]
Here \(\mathbf{\hat{H}}_{\mathrm{el}}\) is the electronic Hamiltonian, \(\mathbf{\hat{H}}_{\mathrm{rot}}\) is the Hamiltonian of the rotation of the molecule, \(\mathbf{\hat{H}}_{\mathrm{hfs}}\) is the hyperfine interaction between the electrons and the fluorine nucleus, as described in Ref. [27], and \(\mathbf{\hat{H}}_{\mathrm{ext}}\) describes the interaction of the molecule with the rotating magnetic and electric fields, as described in Ref. [28].
The rotating fields are expressed in terms of components that rotate in the \(xy\)-plane:
\[\mathbf{E}_{\mathrm{rot}}(t)=\mathcal{E}_{\mathrm{rot}}\left(\hat{x}\cos(\omega_{\mathrm{rot}}t)+\mathcal{\tilde{R}}\,\hat{y}\sin(\omega_{\mathrm{rot}}t)\right), \tag{4}\]
\[\mathbf{B}_{\mathrm{rot}}(t)=\mathcal{B}_{\mathrm{rot}}\left(\hat{x}\cos(\omega_{\mathrm{rot}}t)+\mathcal{\tilde{R}}\,\hat{y}\sin(\omega_{\mathrm{rot}}t)\right), \tag{5}\]
where \(\mathcal{\tilde{R}}=\pm 1\), as described above, defines the direction of rotation along the \(\hat{z}\) axis: \(\vec{\omega}_{\mathrm{rot}}=\mathcal{\tilde{R}}\omega_{\mathrm{rot}}\hat{z}\). \(\mathcal{\tilde{R}}=+1(-1)\) if the fields rotate counter-clockwise (clockwise) around the \(\hat{z}\) axis. Below we put \(\omega_{\mathrm{rot}}/2\pi=+375\) kHz, \(\mathcal{E}_{\mathrm{rot}}=+58\) V/cm, which are the values used in the experiment [7]. Note that \(\omega_{\mathrm{rot}}\) and \(\mathcal{E}_{\mathrm{rot}}\) are always positive. In this paper the time-dependence of the external fields is accounted for by the transition to the rotating frame, which corresponds to the first approach described in Ref. [28].
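For concreteness, Eq. (4) with the experimental parameters can be written down directly (a minimal sketch; units V/cm and rad/s, with the lab \(\hat{z}\) axis as the third component):

```python
import math

OMEGA = 2 * math.pi * 375e3   # rad/s, rotation frequency used in [7]
E0 = 58.0                     # V/cm, rotating-field amplitude used in [7]

def E_rot(t, R=+1):
    """Rotating electric field of Eq. (4); R = +1 (-1) for
    counter-clockwise (clockwise) rotation about z."""
    return (E0 * math.cos(OMEGA * t), R * E0 * math.sin(OMEGA * t), 0.0)
```

The field magnitude is constant in time; only its direction rotates, which is what makes the rotating-frame transformation convenient.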
Following Ref. [27], we considered the low-lying electronic basis states \({}^{3}\Delta_{1}\), \({}^{3}\Delta_{2}\), \({}^{3}\Pi_{0^{+}}\) and \({}^{3}\Pi_{0^{-}}\). \(\mathbf{\hat{H}}_{\mathrm{el}}\) is diagonal in the basis set (2); its eigenvalues are the transition energies of these states, which were calculated and measured in Ref. [32]:
\[{}^{3}\Delta_{\mathrm{1}}:T_{e} =976.930\ \mathrm{cm}^{-1}\,\] \[{}^{3}\Delta_{\mathrm{2}}:T_{e} =2149.432\ \mathrm{cm}^{-1}\,\] \[{}^{3}\Pi_{0^{-}}:T_{e} =10212.623\ \mathrm{cm}^{-1}\,\] \[{}^{3}\Pi_{0^{+}}:T_{e} =10401.723\ \mathrm{cm}^{-1}. \tag{6}\]
Electronic matrix elements required to evaluate interaction with external magnetic field (Zeeman or magnetic interaction) are [27]:
\[G_{\parallel}=\langle{}^{3}\Delta_{\mathrm{1}}|\hat{L}^{e}_{\hat{n}}-\mathrm{g }_{S}\hat{S}^{e}_{\hat{n}}|{}^{3}\Delta_{\mathrm{1}}\rangle, \tag{7}\]
\[G_{\perp}^{(1)}=\langle{}^{3}\Delta_{\mathrm{1}}|\hat{L}^{e}_{-}-\mathrm{g}_{S }\hat{S}^{e}_{-}|{}^{3}\Delta_{\mathrm{2}}\rangle=-2.617, \tag{8}\]
\[G_{\perp}^{(2a)}=\langle{}^{3}\Delta_{\mathrm{1}}|\hat{L}^{e}_{+}-\mathrm{g}_{S }\hat{S}^{e}_{+}|{}^{3}\Pi_{0^{+}}\rangle=1.3456, \tag{9}\]
\[G_{\perp}^{(2b)}=\langle{}^{3}\Delta_{\mathrm{1}}|\hat{L}^{e}_{+}-\mathrm{g}_{ S}\hat{S}^{e}_{+}|{}^{3}\Pi_{0^{-}}\rangle=1.5524. \tag{10}\]
Here \(\mathrm{g}_{S}=-2.0023\) is the free-electron \(g\)-factor, and \(\mathbf{L}^{e}\) and \(\mathbf{S}^{e}\) are the electronic orbital and electronic spin momentum operators, respectively.
We performed calculations for the cases when magnetic interactions with both \({}^{3}\Pi_{0^{\pm}}\) and \({}^{3}\Delta_{\mathrm{2}}\) were taken into account and for the case when the interactions were omitted. For the first case the body-fixed g-factor is \(G_{\parallel}=0.011768\), for the latter \(G_{\parallel}=0.012043\) and matrix elements (8-10) are set to zero. Parameters \(G_{\parallel}\) were chosen in such a way that the g-factor for \(J=1\)\({}^{3}\Delta_{\mathrm{1}}\) exactly corresponds to the experimental value \(\mathrm{g}=0.00306\)[33].
Other electronic matrix elements for calculation of the molecular Hamiltonian were taken from Ref. [27], except for the hyperfine structure constant \(A_{\parallel}=-62.0\) MHz measured in Ref. [9] and dipole moment \(D_{\parallel}\) for \({}^{3}\Delta_{\mathrm{1}}\) which was recalculated in the present work in the accurate quantum chemical calculation (see the next section) and independently confirmed by comparison of the experimental and theoretical data.
### Electronic structure calculation details
To obtain the purely _ab-initio_ value of the body-fixed dipole moment we used the following scheme. First we calculated the value of the dipole moment within the relativistic two-component (2c) coupled cluster method with single, double and perturbative triple cluster amplitudes, CCSD(T). The (valence) part of the generalized relativistic effective core potential (GRECP) [34; 35] was employed in the electronic Hamiltonian. In the correlation calculation 52 outer-core and valence electrons were correlated, i.e. the 52e-CCSD(T) approach was employed. We used the basis set constructed in Ref. [27] which includes 25 \(s-\), 25 \(p-\), 21 \(d-\), 14 \(f-\), 10 \(g-\), 5 \(h-\) and 5 \(i-\) type Gaussians for Hf and corresponds to the aug-ccpVQZ basis set [36; 37] for F which contains 6 \(s-\), 5 \(p-\), 4 \(d-\), 3 \(f-\) and 2 \(g-\) contracted Gaussians and can be briefly written as (13,7,4,3,2)/[6,5,4,3,2]. The contribution of higher order correlation effects was obtained as the difference in the values of the dipole moment calculated within the coupled cluster with single, double, triple and non-iterative quadruple amplitudes, CCSDT(Q) [38; 39], and the CCSD(T) method. In the calculations 20 valence and outer core electrons of HfF\({}^{+}\) were correlated and the reduced basis set was used: [12; 16; 16; 10; 8]/(6,5,5,3,1) [22; 23; 40] basis set for Hf and [14; 9; 4,3]/(4,3,2,1) ANO-I basis set for F [41]. Finally, we calculated a basis set correction. For this, we turned off the spin-orbit part of the GRECP operator, i.e. switched to the scalar-relativistic approximation (for outer electrons) and calculated the correction as a difference between the values obtained within the extended basis set and the basis set used at the first step employing the coupled cluster method with single and double cluster amplitudes correlating 52 electrons. 
The extended basis set for Hf contains 30 \(s-\), 30 \(p-\), 30 \(d-\), 30 \(f-\), 15 \(g-\), 15 \(h-\) and 15 \(i-\) type functions for Hf and the uncontracted AAE4Z (19,11,6,4,2) basis set [42] for F. Calculations described above were performed for the equilibrium geometry of the \({}^{3}\Delta_{1}\) state of the HfF\({}^{+}\) cation. To obtain the value of the dipole moment for the zero vibrational level we calculated a vibration correction as in Ref. [27].
Electronic calculations were performed within the dirac[43; 44], mrcc[39; 45; 46] and cfour[47] codes. We also employed the code developed in Refs. [48; 49] to calculate property matrix elements.
## IV Results
The calculated value of the body-fixed dipole moment is given in Table 1. One can see that the correlation effects beyond the CCSD(T) model only modestly contribute to the value of the dipole moment. One can also see good convergence with respect to the basis set size: even a significant increase in the number of basis functions (see previous section) does not change the value of the dipole moment. The uncertainty of the final _ab-initio_ value of the dipole moment was calculated as the square root of the sum of squares of the corrections for higher-order correlation effects, the extended basis set, and the vibration correction.
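The bookkeeping behind the quoted total and uncertainty is a one-line arithmetic check (values from Table 1; the quadrature sum of the corrections rounds to the quoted 0.02 a.u.):

```python
import math

# Contributions to the body-fixed dipole moment, in a.u. (Table 1).
base = -1.50          # 52e-CCSD(T)
corrections = {
    "higher-order correlation, CCSDT(Q) - CCSD(T)": -0.02,
    "basis set correction": 0.00,
    "vibration correction": -0.01,
}
total = base + sum(corrections.values())                       # -1.53
# Uncertainty: quadrature sum of the corrections.
uncertainty = math.sqrt(sum(c ** 2 for c in corrections.values()))
```

This reproduces the quoted final value \(D_{\parallel}=-1.53(2)\) a.u.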
In Fig. 1 the calculated values of \(f^{\mathcal{D}}\) as a function of \(f^{0}\) are given. Fig. 2 presents the values of \(f^{\mathcal{BD}}\) as a function of \(f^{\mathcal{B}}\). In both figures the experimental values [26] are given for comparison. To plot Figs. 1 and 2, \(f^{0}\), \(f^{\mathcal{D}}\), \(f^{\mathcal{B}}\), and \(f^{\mathcal{BD}}\) are treated as functions of \(\mathcal{B}_{\rm rot}\). For Fig. 2 the non-reversing component of the magnetic field is also added; it gives the main contribution to the \(f^{\mathcal{B}}\) component. \(f^{0}=150.6\) Hz in Fig. 2.
We present results for the cases when magnetic interactions with both \({}^{3}\Pi_{0^{\pm}}\) and \({}^{3}\Delta_{2}\) were taken into account and for the case when the interactions are omitted. Calculations are also performed for different values of the body-fixed dipole moment, \(D_{\parallel}\), of the \({}^{3}\Delta_{1}\) state. The negative value of \(D_{\parallel}\) means that the unit vector \(\hat{n}\) along the molecular axis is directed from Hf to F. One can see that the calculation that takes into account the interactions with the \({}^{3}\Delta_{2}\), \({}^{3}\Pi_{0^{+}}\) and \({}^{3}\Pi_{0^{-}}\) electronic states and uses the dipole moment \(D_{\parallel}=-1.53\) a.u. leads to a perfect agreement between the measured and calculated values for \(f^{\mathcal{D}}\) as a function of \(f^{0}\) and a very good agreement for \(f^{\mathcal{BD}}\) as a function of \(f^{\mathcal{B}}\). As stated above, \(D_{\parallel}=-1.53\) a.u. coincides with the value calculated in Ref. [27] (though calculated with better accuracy in this work) and is in good agreement with the experimental value \(D_{\parallel}=-1.54(1)\) a.u. [50]. When the interactions with the \({}^{3}\Delta_{2}\), \({}^{3}\Pi_{0^{+}}\) and \({}^{3}\Pi_{0^{-}}\) states were omitted, we were not able to fit all experimental data in Figs. 1 and 2. As an example, calculations with \(D_{\parallel}=-1.27\) a.u., which are in good agreement with the experimental data for the points \(f^{0}=100.67,151.175,198.514\) Hz in Fig. 1, are given. Nevertheless, from the _ab initio_ calculations performed in this work and the experiment in Ref. [50] it is clear
\begin{table}
\begin{tabular}{l r}
\hline \hline
Contribution & \(D_{\parallel}\), a.u. \\
\hline
52e-CCSD(T) & \(-1.50\) \\
20e-CCSDT(Q) \(-\) 20e-CCSD(T) & \(-0.02\) \\
basis set correction & \(0.00\) \\
vibration correction & \(-0.01\) \\
Total & \(-1.53(2)\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The calculated value of the body-fixed dipole moment \(D_{\parallel}\) of HfF\({}^{+}\) in the \({}^{3}\Delta_{1}\) electronic state with the origin at the center of mass.
that the value \(D_{\parallel}=-1.27\) a.u. is far from the real one and that accounting for the interactions with \({}^{3}\Pi_{0\pm}\) and \({}^{3}\Delta_{2}\) is very important for an accurate calculation of the \(J=1\) levels in the electronic \({}^{3}\Delta_{1}\) state.
The components \(f^{\mathcal{D}}\) as a function of \(f^{0}\) and \(f^{\mathcal{BD}}\) as a function of \(f^{\mathcal{B}}\) are closely related to the g-factors of the upper, g\({}^{u}\), and lower, g\({}^{l}\), Stark doublets in the external _static_ electric field. According to Refs. [9; 26],
\[f^{\mathcal{D}}=\frac{\mathrm{g}^{u}-\mathrm{g}^{l}}{\mathrm{g}^{u}+\mathrm{ g}^{l}}f^{0}+\frac{\Delta^{0}\Delta^{\mathcal{D}}}{f^{0}}, \tag{11}\]
\[f^{\mathcal{BD}}=\left(\frac{\mathrm{g}^{u}-\mathrm{g}^{l}}{\mathrm{g}^{u}+ \mathrm{g}^{l}}-\frac{\Delta^{0}\Delta^{\mathcal{D}}}{f^{0^{2}}}\right)f^{ \mathcal{B}}, \tag{12}\]
where \(\Delta\) is the splitting between the Zeeman sublevels at zero magnetic field (which is nonzero due to the rotation of the electric field). For a static electric field of 58 V/cm and \(D_{\parallel}=-1.53\) a.u. our calculation gives \(\mathrm{g}^{u}=-3.05376\times 10^{-3}\), \(\mathrm{g}^{l}=-3.06670\times 10^{-3}\), \((\mathrm{g}^{u}-\mathrm{g}^{l})/(\mathrm{g}^{u}+\mathrm{g}^{l})=-0.002114\), \(\Delta^{0}=0.7710\) Hz and \(\Delta^{\mathcal{D}}=-0.5501\) Hz. If the magnetic interactions with the \({}^{3}\Pi_{0\pm}\) and \({}^{3}\Delta_{2}\) states are not taken into account we have \(\mathrm{g}^{u}=-3.05432\times 10^{-3}\), \(\mathrm{g}^{l}=-3.06958\times 10^{-3}\), \((\mathrm{g}^{u}-\mathrm{g}^{l})/(\mathrm{g}^{u}+\mathrm{g}^{l})=-0.002491\), \(\Delta^{0}=0.7709\) Hz and \(\Delta^{\mathcal{D}}=-0.5501\) Hz. The calculated \(f^{\mathcal{D}}\) as a function of \(f^{0}\) and \(f^{\mathcal{BD}}\) as a function of \(f^{\mathcal{B}}\), given in Figs. 1 and 2, can be approximated with high accuracy by
\[f^{\mathcal{D}}=k_{1}f^{0}+\frac{\Delta^{0}\Delta^{\mathcal{D}}}{f^{0}} \tag{13}\]
and
\[f^{\mathcal{BD}}=k_{2}f^{\mathcal{B}} \tag{14}\]
respectively. For the calculation with \(D_{\parallel}=-1.53\) a.u., \(k_{1}=-0.002151\) and \(k_{2}=-0.002133\), which are in good agreement with the experimental values \(k_{1}=-0.002149(3)\) and \(k_{2}=-0.002100(20)\) obtained from \(f^{\mathcal{D}}\) as a function of \(f^{0}\) and \(f^{\mathcal{BD}}\) as a function of \(f^{\mathcal{B}}\), respectively [26].
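The slope quoted for Eq. (11) can be checked directly from the calculated g-factors (a small numerical check using the values for \(D_{\parallel}=-1.53\) a.u. given above):

```python
g_u = -3.05376e-3     # g-factor of the upper Stark doublet
g_l = -3.06670e-3     # g-factor of the lower Stark doublet
ratio = (g_u - g_l) / (g_u + g_l)   # slope term of Eq. (11), ~ -0.002114

delta0, deltaD = 0.7710, -0.5501    # zero-field splittings, Hz

def f_D(f0):
    """Eq. (11): f^D as a function of f^0; the second term slightly
    shifts the pure-slope prediction at finite f^0."""
    return ratio * f0 + delta0 * deltaD / f0
```

The tiny difference between this ratio and the fitted \(k_{1}\) comes from the \(\Delta^{0}\Delta^{\mathcal{D}}/f^{0}\) term.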
## IV Conclusion
We calculated the frequency components \(f^{0}\), \(f^{\mathcal{D}}\), \(f^{\mathcal{B}}\), and \(f^{\mathcal{BD}}\) of the \(\Omega\)-doublet structure of the \(J=1\) rotational levels of the \({}^{3}\Delta_{1}\) electronic state in external rotating electric and magnetic fields. The high accuracy of the theoretical model introduced in Refs. [27; 28] is demonstrated; it can now be considered a powerful tool for studying systematic effects on HfF\({}^{+}\) ions in experimental searches for new physics beyond the Standard model. An accurate _ab initio_ value of the body-frame dipole moment of the \({}^{3}\Delta_{1}\) electronic state, \(D_{\parallel}=-1.53\) a.u., confirmed by comparison of the calculated and experimental values of \(f^{0}\), \(f^{\mathcal{D}}\), \(f^{\mathcal{B}}\), and \(f^{\mathcal{BD}}\), is obtained.
###### Acknowledgements.
We thank Luke Caldwell, Trevor Wright, Jun Ye, and Eric Cornell for useful discussion and providing experimental data.
Figure 2: (Color online) \(f^{\mathcal{BD}}\) as a function of \(f^{\mathcal{B}}\). Horizontal bands – the experimental values; bandwidths correspond to the experimental uncertainty [26]. Solid (black) curve: \(D_{\parallel}=-1.53\) a.u. Dotted (red) curve: \(D_{\parallel}=-1.53\) a.u., but interactions with both \({}^{3}\Delta_{2}\) and \({}^{3}\Pi_{0\pm}\) states are omitted. Dashed (blue) curve: \(D_{\parallel}=-1.27\) a.u. Dotted-dashed (green) curve: \(D_{\parallel}=-1.27\) a.u., but interactions with both \({}^{3}\Delta_{2}\) and \({}^{3}\Pi_{0\pm}\) states are omitted.
Electronic structure calculations were carried out using computing resources of the federal collective usage center Complex for Simulation and Data Processing for Megascience Facilities at National Research Centre "Kurchatov Institute", [http://ckp.nrcki.ru/](http://ckp.nrcki.ru/).
Calculations of the Stark and Zeeman effects in rotating fields were supported by the Russian Science Foundation Grant No. 18-12-00227. Calculations of property integrals were supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS" Grant according to Project No. 21-1-2-47-1.
# Non-Sequential Graph Script Induction via Multimedia Grounding
Yu Zhou, Sha Li, Manling Li, Xudong Lin, Shih-Fu Chang, Mohit Bansal, Heng Ji
arXiv:2305.17542v1, 2023-05-27, http://arxiv.org/abs/2305.17542v1
###### Abstract
Online resources such as wikiHow compile a wide range of scripts for performing everyday tasks, which can assist models in learning to reason about procedures. 1 However, the scripts are always presented in a linear manner, which does not reflect the flexibility displayed by people executing tasks in real life. For example, in the CrossTask Dataset, 64.5% of consecutive step pairs are also observed in the reverse order, suggesting their ordering is not fixed. In addition, each step has an average of 2.56 frequent2 next steps, demonstrating "branching". In this paper, we propose a new challenging task of non-sequential graph script induction, aiming to capture _optional_ and _interchangeable_ steps in procedural planning. To automate the induction of such graph scripts for given tasks, we propose to take advantage of loosely aligned videos of people performing the tasks. In particular, we design a multimodal framework to ground procedural videos to wikiHow textual steps and thus transform each video into an observed step path on the latent ground truth graph script. This key transformation enables us to train a script knowledge model capable of both generating explicit graph scripts for learnt tasks and predicting future steps given a partial step sequence. Our best model outperforms the strongest pure text/vision baselines by 17.52% absolute gains on F\({}_{1}\)@3 for next step prediction and 13.8% absolute gains on Acc@1 for partial sequence completion. Human evaluation shows our model outperforming the wikiHow linear baseline by 48.76% absolute gains in capturing sequential and non-sequential step relations.
Footnote 1: Our data and code are publicly available for research purposes at [https://github.com/bryanzhou08/Multimodal-Graph-Script-Learning/](https://github.com/bryanzhou08/Multimodal-Graph-Script-Learning/)
Footnote 2: Occurred in more than 10 videos.
## 1 Introduction
A script consists of typical actions that are performed to complete a given task. Online resources such as wikiHow3 provide a wide variety of community-edited scripts for everyday tasks (Fig.1). Such a large library of linear scripts can serve as a starting point for learning goal-step knowledge (Zhang et al., 2020; Yang et al., 2021). However, as the saying goes, "all roads lead to Rome". There is usually more than one way to achieve any given goal. Practically speaking, users should be presented with multiple alternative step sequences so that they can pick the most suitable route according to their unique situations and preferences. Robots and virtual assistants also stand to gain the crucial abilities of global planning optimization and on-the-spot improvisation from alternative step paths.
Footnote 3: www.wikiHow.com
In particular, we observe that two types of steps
Figure 1: Example of wikiHow linear script of the procedural task _make egg fried rice_ compared to an ideal example of our non-sequential graph script consisting of optional and interchangeable steps.
are overlooked by linear scripts: _optional steps_ and _interchangeable steps_. Optional steps such as Add some chili peppers can be skipped based on the users' preference or item availability. Interchangeable steps such as Pre-cook some eggs and Cut some green onions can be performed in either order without affecting the overall task completion. After accounting for these two step types, the original linear script is converted into a 'non-sequential graph script', as shown in Fig.1 (right).
Previous efforts like Proscript Sakaguchi et al. (2021) obtained non-linear graph scripts via crowdsourcing, which is not scalable. In this work, we automate the process of transforming a linear text script into a non-linear graph script by grounding it to visual observations (videos) of people executing the task. If we observe that people often skip a certain step, then it is natural to denote that step as optional. Similarly, if people tend to swap the ordering of a group of steps, these steps are likely interchangeable. Since wikiHow does not contain such empirical observations, we align wikiHow scripts with procedural video datasets such as Crosstask Zhukov et al. (2019) and Howto100M Miech et al. (2019) (see Fig.2).
To map a video to a sequence of wikiHow steps, we perform alignment on both task-level and step-level. On the task level, we use a title matching algorithm based on Sentence-BERT similarity to select videos and wikiHow documents for the task. Then, we propose an effective pre-processing strategy (simplification + deduplication) to create the wikiHow step library. At the step level, we consider two situations based on whether the video has been segmented into steps. When manual segmentation is provided, we directly map video annotations to the wikiHow step library. Otherwise, we first segment the video into clips based on ASR sentence groups (Fig.2), and then map them to wikiHow steps using a fault-tolerant grounding strategy (SS3.1) that is robust to inaccurate ASR sentence boundaries. When grounding is complete, we obtain the set of observed step sequences for each task.
Next, to obtain the desired graph script from the observed step sequences, we use auto-regressive seq2seq models Sutskever et al. (2014) to learn the distribution of valid paths (step sequences) along the graph (SS3.2). As opposed to directly training a graph generation model, our path generation learning format is better aligned with existing procedural video data and also takes advantage of pretrained seq2seq models to improve generalization across tasks. Since the cross-entropy loss used for training auto-regressive models focuses on penalizing local "one-step" errors (the errors in predicting each single step), we further introduce a Path-level Constraint Loss to reduce global inconsistencies of the entire path. To generate hard negative contrastive paths that fail to complete the task, we manipulate the video-grounded positive paths through _global reordering_, _shuffling_, and _re-sampling_ (SS3.2).
After training, our model is able to produce complete paths given input step libraries from various domains, including cooking, car maintenance, and handcrafting. To automatically generate explicit graph scripts, we implement step-level constrained beam decoding to sample multiple generated step sequences and record a step-adjacency matrix for constructing the final graph script.
For downstream evaluation, we adapt the existing CrossTask dataset Zhukov et al. (2019) to set up two new evaluation sub-tasks: _Next Step Prediction_ and _Partial Sequence Completion_. Compared against top-performing text/vision-only baselines, our best model achieves 17.52% absolute gains in overall F1@3 for next step prediction and 13.8% absolute gains on Accuracy@1 for partial sequence completion. Moreover, we use MTurk to perform _Human Evaluation_ on the correctness and expressiveness of our auto-generated graph scripts. Results show our model can correctly capture optional, interchangeable and sequential step relationships with up to 82.69% overall accuracy.
Key contributions of this paper include:
* We introduce an automatic method for converting sequential text scripts into non-sequential graph scripts by aligning / grounding textual scripts to video datasets.
* We propose a path generation model capable of learning from video-grounded step sequences with Path-Level Constraint Loss.
* Experiments show our non-sequential path generation model to be more effective than existing text/vision baselines in next step prediction and partial sequence completion.
* Human evaluation of generated graph scripts demonstrates our non-sequential graph scripts to be more accurate and expressive in capturing step-relationships.
## 2 Task Formulation
In this paper, we propose a new challenge of graph script induction for procedural tasks: Given a procedural task \(\mathcal{T}\) represented by a task name, our goal is to induce a graph script for the task using the steps in the linear script. In particular, the graph script should capture the following relations between steps: (1) _sequential_\(\langle s_{i}\to s_{j}\rangle\) where two steps should be completed sequentially; (2) _interchangeable_\(\langle s_{i}\leftrightarrow s_{j}\rangle\) where two steps can be completed in either order or at the same time; (3) _optional_\(\langle s_{i}\to s_{k},s_{i}\to s_{j}\to s_{k}\rangle\) where a step can be optionally added between other steps.
To achieve this goal, we assume that we have access to a large repository of textual scripts (wikiHow) and a set of videos that record people carrying out the tasks.4 The videos might have step-level annotations or accompanying narration which we can convert into text using ASR tools.
Footnote 4: Or a large repository of videos from which we can find matching videos using retrieval.
## 3 Methodology
To learn a graph script induction model, we first ground the video dataset to textual steps on both task-level and step-level (Fig. 2). After grounding, each video can be seen as a valid step sequence sampled from the ground truth graph script. Then, we use such grounded step sequences to train our graph script model and enhance model learning by introducing a Path-Level Constraint Loss over carefully designed contrastive step sequences.
### Video to Script Grounding
For each video, we first perform task-level alignment to find the top-\(m\) most relevant wikiHow documents and then step-level alignment to ground the video to specific wikiHow steps. We consider the following two cases based on whether the video dataset includes step-level annotation:
**Labelled Video Datasets:** Labelled video datasets like Crosstask Zhukov et al. (2019) contain procedural videos grouped by human-annotated task names. In addition, the videos are labelled with temporal step segmentation and relatively accurate step annotations in the form of short imperative English sentences. The example video in Fig.2 for task: _"Make BLT Sandwich"_ is annotated with steps: "cook the bacon in a pan", "put mayo on bread", etc.
At the task level, we first use keyword matching to quickly find all relevant wikiHow documents whose title contains \(\geq\) 85% of keywords in the task name. For example in Fig. 2, the task name: _"Make BLT Sandwich"_ is matched to wikiHow documents: _"Make a BLT Sandwich1"_, _"Make a Breakfast Sandwich"_, etc. After we retrieve a list of relevant wikiHow documents, they are further ranked by cosine similarity between Sentence-BERT embeddings of document title and
Figure 2: Example of grounding procedural videos of the Making BLT Sandwich task to wikiHow steps. We create a wikiHow step library through task-level matching and step pre-processing, and then ground video step annotations/asr-narrations to textual steps from the wikiHow step library.
the task name. Finally, the steps of the top \(m\) wikiHow documents are selected to form the initial wikiHow step candidate pool.
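As a sketch, this two-stage title matching might look like the following, where a simple bag-of-words cosine stands in for the Sentence-BERT embeddings used in the paper (function names and the default value of \(m\) are illustrative):

```python
import math
from collections import Counter

def keyword_match(task_name, doc_title, threshold=0.85):
    # First-pass filter: keep titles containing >= 85% of the task-name keywords.
    keywords = set(task_name.lower().split())
    title_words = set(doc_title.lower().split())
    return bool(keywords) and len(keywords & title_words) / len(keywords) >= threshold

def cosine(u, v):
    # Cosine similarity between two bag-of-words Counters
    # (a stand-in for Sentence-BERT embedding similarity).
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def top_m_documents(task_name, titles, m=2):
    # Rank keyword-matched titles by similarity to the task name; keep top-m.
    task_vec = Counter(task_name.lower().split())
    matched = [t for t in titles if keyword_match(task_name, t)]
    matched.sort(key=lambda t: cosine(task_vec, Counter(t.lower().split())),
                 reverse=True)
    return matched[:m]
```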
In step-level grounding, we first record Sentence-BERT Similarity scores between each video step annotation and all processed steps in the wikiHow step library. Then, we do greedy matching between video step annotations and wikiHow steps with priority given to higher scoring pairs. Here we keep video steps with best score \(\geq k_{1}\)5, while lower scoring video steps are considered ungroundable. When all videos have been grounded, unused steps from the wikiHow step library are removed.
Footnote 5: Hyperparameters in the grounding section are empirically selected based on qualitative evaluation over a small subset.
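A minimal sketch of this greedy step-level matching follows. One-to-one matching and the default threshold are our assumptions; the paper only fixes the behaviour of keeping pairs with score \(\geq k_{1}\) and marking the rest ungroundable:

```python
def greedy_ground(video_steps, wiki_steps, score, k1=0.5):
    """Greedily pair video step annotations with wikiHow steps,
    taking the highest-scoring pairs first."""
    pairs = sorted(
        ((score(v, w), vi, wi)
         for vi, v in enumerate(video_steps)
         for wi, w in enumerate(wiki_steps)),
        reverse=True)
    grounded, used_video, used_wiki = {}, set(), set()
    for s, vi, wi in pairs:
        if s < k1:          # remaining video steps are ungroundable
            break
        if vi in used_video or wi in used_wiki:
            continue
        grounded[vi] = wi   # map video step index -> wikiHow step index
        used_video.add(vi)
        used_wiki.add(wi)
    return grounded
```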
**Unlabelled Video Datasets:** Although we achieve high grounding quality for annotated video datasets, step-level annotation is quite costly and often not available for a wide range of tasks that we are interested in. A more practical scenario is when we have a large repository of videos like Howto100M from which we can retrieve videos corresponding to the target task. Task-level alignment for Howto100M is different from that of annotated video datasets due to questionable video grouping. In Howto100M, videos for each task are selected purely based on Youtube search ranking. This ranking often prioritizes popular videos that have low correlation to the task at hand. To ensure high video-task correlation, we re-select Howto100M videos for each task based on BERT-Similarity between video title and the task name (only videos with similarity score \(\geq k_{2}\) are selected).
Step-level alignment also becomes much more challenging as we must rely on video ASR transcriptions without human step-level annotations. ASR narrations usually consist of short partial sentence pieces without strict temporal step boundary labels (Fig.2). In addition, since Howto100M videos are collected from Youtube, some ASR narrations contain task-irrelevant information such as subscription requests (Fig.2). To address these challenges, we use a more fault-tolerant grounding strategy shown in Fig.2: First, we remove all sentence pieces containing Youtube stop words including "subscribe", "channel", "sponsor", etc. Then, we expand each ASR sentence piece by concatenating it with surrounding pieces until the length of the resulting piece exceeds 10 words6. Finally, we ground each resulting ASR step to wikiHow steps with a higher match threshold \(k_{3}\).
Footnote 6: This parameter is borrowed from Lin et al. (2022) which uses the same length threshold
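The cleanup-and-expansion strategy above can be sketched as follows. The stop-word list and the 10-word threshold come from the text; the handling of trailing leftover words is our assumption:

```python
YOUTUBE_STOP_WORDS = ("subscribe", "channel", "sponsor")

def clean_and_expand(asr_pieces, min_words=10):
    """Drop pieces containing Youtube stop words, then merge consecutive
    pieces until each resulting segment exceeds min_words words."""
    kept = [p for p in asr_pieces
            if not any(w in p.lower() for w in YOUTUBE_STOP_WORDS)]
    segments, buffer = [], []
    for piece in kept:
        buffer.append(piece)
        if sum(len(p.split()) for p in buffer) > min_words:
            segments.append(" ".join(buffer))
            buffer = []
    if buffer:  # attach leftover words to the last segment
        if segments:
            segments[-1] += " " + " ".join(buffer)
        else:
            segments.append(" ".join(buffer))
    return segments
```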
**Processing the wikiHow Step Library:** High-quality step-level alignment requires the wikiHow Step Library used for grounding to contain clean, non-overlapping steps that match the video step annotations in format and granularity. Since the vanilla wikiHow dataset (Koupaee and Wang, 2018) does not meet these criteria, we perform a series of pre-processing steps before step-level alignment:
1. First, we put the steps in the initial wikiHow step library through a series of regex-based parsing to standardise stylistic elements like capitalization, punctuation and bracket/parentheses usage.
2. Then, we use a seq2seq text simplification model (Maddela et al., 2021) to reduce granularity in wikiHow steps which are often more fine-grained than video step annotations.
3. Finally, we deduplicate the wikiHow Step Library by enforcing a minimum weighted Levenshtein distance of 0.1 between any two steps and removing overly similar duplicate steps.
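The deduplication step can be sketched with a plain (unweighted) edit distance; the paper uses a weighted Levenshtein distance whose weights are not specified, so treat the normalization below as one possible reading:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def deduplicate(steps, min_dist=0.1):
    """Keep a step only if its length-normalized edit distance to every
    already-kept step is at least min_dist (0.1, as in the paper)."""
    kept = []
    for step in steps:
        if all(levenshtein(step, k) / max(len(step), len(k), 1) >= min_dist
               for k in kept):
            kept.append(step)
    return kept
```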
### Model Training
**Graph Script Learning** Inspired by Bojchevski et al. (2018), we transform the graph script learning problem into a path learning problem by treating steps as nodes and temporal relationships between the steps as directed edges (edges point to the future step). For each procedural task \(\mathcal{T}\), the wikiHow step library of task-relevant steps \(\mathcal{W}_{\mathcal{T}}\) generated in SS3.1 represents the set of nodes used to construct the latent ground-truth graph script. In SS3.1, we grounded each procedural video to a wikiHow step sequence. These step sequences can be regarded as observed step node paths that lead to successful completion of \(\mathcal{T}\). In this formulation, learning the latent graph script for a task can be regarded as learning the weights of valid paths through \(\mathcal{W}_{\mathcal{T}}\).
For our basic architecture, we train a BART-base model (Lewis et al., 2019) to generate complete step sequences given a wikiHow step library. As illustrated in Fig.3, for each task \(\mathcal{T}\), we first shuffle the corresponding wikiHow step library to remove any pre-existing step ordering. Then, we concatenate the shuffled step library with a special separator token7 appended to the end of every step to indicate step boundary. The resulting sequence is used as the input sequence for all training data regarding \(\mathcal{T}\). For each target output, we first collect all grounded step sequences of videos completing \(\mathcal{T}\). Similar to input sequences, steps in the output are also appended with the same separator token. Finally, each processed video-grounded step sequence is used individually as a target output sequence for our model.
Footnote 7: We define the separator token as <->.
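The input/output serialization described above, in sketch form (the shuffling seed and helper names are ours; the separator token `<->` is the paper's):

```python
import random

SEP = "<->"  # the paper's step-separator token

def serialize(steps):
    # Append the separator token to the end of every step, then concatenate.
    return " ".join(s + " " + SEP for s in steps)

def build_training_pair(step_library, grounded_path, seed=0):
    """Source: the shuffled, serialized step library.  Target: one
    video-grounded step path, serialized the same way."""
    library = list(step_library)
    random.Random(seed).shuffle(library)  # remove pre-existing ordering
    return serialize(library), serialize(grounded_path)
```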
**Path-Level Constraint** Besides being able to generate valid step sequences that lead to successful task completion, we also enable our model to differentiate valid step sequences from invalid ones that fail to complete the task. We accomplish this by introducing a Path-Level Constraint in the form of a contrastive loss. For each positive step sequence, we generate \(n\) negative contrastive sequences using the following 3 methods (Fig.3):
1. _Re-sample:_ randomly re-sample a step sequence of the same length from the wikiHow step library. Both step selection and step ordering are wrong.
2. _Shuffle:_ shuffle the sequence until no longer valid. Step selection is preserved, but local/global step ordering are wrong.
3. _Cut & Swap:_ cut the sequence at a random position and swap the latter part to the front. Step selection and local step ordering are preserved, but global step ordering is wrong.
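The three negative-generation strategies can be sketched as follows (note that a resampled path can coincidentally remain valid; a production version would reject such draws):

```python
import random

def resample(path, library, rng):
    # Re-sample: usually breaks both step selection and ordering.
    return rng.sample(library, len(path))

def shuffle_path(path, rng):
    # Shuffle: step selection preserved, ordering broken.
    # Assumes the path contains at least two distinct steps.
    negative = path[:]
    while negative == path:
        rng.shuffle(negative)
    return negative

def cut_and_swap(path, rng):
    # Cut & Swap: local order preserved, global order broken.
    cut = rng.randrange(1, len(path))
    return path[cut:] + path[:cut]
```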
To maximize the model's learning potential, we follow the paradigm of curriculum learning [1] when introducing contrastive examples: we start with contrastive sequences generated via _Re-sample_ because they are most dissimilar from valid sequences. As training progresses, we shift toward _Shuffled_ and _Cut & Swap_ by gradually increasing the probability of sampling from those contrastive sequence groups.
Inspired by [1], we use the last layer of the decoder in BART as the representation of each token in the sequence and obtain the sequence representation by averaging over the constituent token representations. Let the hidden representations of our generated sequence \(\mathbf{s}^{(g)}\), true grounded sequence \(\mathbf{s}^{(p)}\) and negative contrastive sequence \(\mathbf{s}^{(n)}\) be denoted by \(\mathbf{z}^{(g)}\), \(\mathbf{z}^{(p)}\) and \(\mathbf{z}^{(n)}\), respectively. Let \(\mathcal{Z}=\left\{\mathbf{z}^{(p)}\right\}\bigcup\left\{\mathbf{z}_{i}^{(n) }\right\}_{i=1}^{M}\), with \(M\) as the number of negative contrastive sequences. Hence, we define our Path-level Contrastive Loss: 8:
Footnote 8: Based on the InfoNCE Contrastive Loss[1]
\[\mathcal{L}_{PC}=-\log\frac{\exp[\mathrm{sim}\left(\mathbf{z}^{(g)},\mathbf{z}^{(p)}\right)/\tau]}{\sum_{\mathbf{z}^{(i)}\in\mathcal{Z}}\exp[\mathrm{sim}\left(\mathbf{z}^{(g)},\mathbf{z}^{(i)}\right)/\tau]}, \tag{1}\]
where the temperature \(\tau\) is a hyperparameter and \(\mathrm{sim}\) denotes cosine similarity. Finally, our overall loss combines the Path-level Contrastive Loss with the Cross-Entropy Loss of seq2seq models:
\[\mathcal{L}_{CE}=\sum_{i}-\log P\left(\mathbf{s}_{i}^{(p)}\mid\mathbf{s}_{<i }^{(p)},\mathcal{W}_{\mathcal{T}}\right), \tag{2}\]
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{CE}}+\alpha\mathcal{L}_{\text{ PC}}, \tag{3}\]
Figure 3: **Example input/output sequence in model training. We create the input sequence by shuffling and concatenating the wikiHow step library. We use the concatenated grounded sequence as the target output (positive example) and its permuted/resampled versions as the contrastive output (negative example).**
where \(\alpha\) is a hyperparameter and \(\mathcal{W_{T}}\) denotes the task-specific wikiHow step library.
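Eq. (1) in plain Python, as a tensor-free sketch (a real implementation would operate on batched decoder representations; the temperature value below is illustrative):

```python
import math

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def path_contrastive_loss(z_g, z_p, z_negs, tau=0.1):
    """InfoNCE-style loss of Eq. (1): pull the generated-sequence
    representation z_g toward the grounded positive z_p and away from
    the negative contrastive sequences."""
    candidates = [z_p] + list(z_negs)  # the set Z
    logits = [cosine_sim(z_g, z) / tau for z in candidates]
    log_denominator = math.log(sum(math.exp(l) for l in logits))
    return -(logits[0] - log_denominator)
```

The loss is small when the generated sequence sits close to the grounded positive and far from the negatives, and grows as that relationship inverts.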
### Graph Script Generation
In SS3.2, we transformed the graph script learning problem into a path learning problem by treating procedural step relationships as edges between nodes and abstracting the latent ground truth graph as the collection of paths through node-set \(\mathcal{W_{T}}\) that lead to successful task completion. After our model has learnt the latent ground truth graph scripts for a set of tasks, we use it to reconstruct explicit graph scripts through the following procedure:
For each task \(\mathcal{T}\), we use \(\mathcal{W_{T}}\) as model input and have the model generate output step sequences consisting only of steps within \(\mathcal{W_{T}}\). We enforce this by implementing Step-constrained Beam Search, an extension of Constrained Beam Search (De Cao et al., 2021), where the model is only allowed to generate valid next words that lead to entities stemming from a fixed prefix trie \(\mathcal{P}\). Here, we construct \(\mathcal{P_{T}}\) containing all steps in \(\mathcal{W_{T}}\) and ask the model to repeatedly decode from \(\mathcal{P_{T}}\) to generate step sequences. After each step is fully generated, the model is given the choice to end generation by producing the end-of-sentence (eos) token or continue decoding the next step by producing a token from the root of \(\mathcal{P_{T}}\). After generating the predicted step sequences, we break them down and record the edges between all generated step nodes in a graph adjacency matrix. The low-frequency edges representing unlikely paths are removed to improve graph script confidence. Finally, we reconstruct the output graph script from the graph adjacency matrix. An example of this process on the task _"Make Lemonade"_ is detailed in Fig.4.
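The path-to-graph aggregation step can be sketched as follows; the pruning threshold is illustrative, since the paper does not give an exact cutoff for "low-frequency" edges:

```python
from collections import Counter

def build_graph_script(decoded_paths, min_count=2):
    """Aggregate beam-decoded step sequences into directed edge counts
    and prune low-frequency edges before reconstructing the graph."""
    edge_counts = Counter()
    for path in decoded_paths:
        for src, dst in zip(path, path[1:]):
            edge_counts[(src, dst)] += 1
    return {edge: c for edge, c in edge_counts.items() if c >= min_count}
```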
## 4 Experiments
To evaluate our non-sequential graph script induction model, we propose 3 new downstream tasks:
1. Graph Script Generation: for each task \(\mathcal{T}\), the system is asked to produce a 2-dimensional probabilistic graph script similar to Fig.1 that captures the step relationships introduced in Section 1. The model is scored based on human evaluation of its generated graph scripts.
2. Next Step Prediction: given a partial step sequence \(S_{p}=(s_{1}\rightarrow...\to s_{t-1})\), the model is asked to predict the top-k most likely choices for the next step \(s_{t}\) from \(\mathcal{W_{T}}\). For each partial step sequence, there can be a variable number of correct next steps.
3. Partial Sequence Completion: given a partial step sequence \(S_{p}=(s_{1}\rightarrow...\to s_{t-1})\), the model is asked to produce a sequence \(S=(s_{1}\rightarrow...\to s_{n})\) using steps from \(\mathcal{W_{T}}\) that completes the task \(\mathcal{T}\). This task is particularly challenging because the model is asked to predict a variable-length step sequence that best completes the task at hand.
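Since each prefix can have several correct next steps, F1@3 plausibly compares the top-3 predictions against the gold next-step set; the exact definition is not spelled out in the paper, so the following is our reading:

```python
def f1_at_k(predictions, gold_next_steps, k=3):
    """F1 between the top-k predicted next steps and the gold set of
    correct next steps for a given prefix."""
    top_k = predictions[:k]
    hits = len(set(top_k) & set(gold_next_steps))
    if hits == 0:
        return 0.0
    precision = hits / len(top_k)
    recall = hits / len(gold_next_steps)
    return 2 * precision * recall / (precision + recall)
```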
Figure 4: **Example of Graph Script Generation. To decode a graph from our generator, we first ask the model to generate alternative step sequences via beam-decoding and record them in a step-adjacency matrix, which is then used to reconstruct the non-sequential graph script (with low-frequency edges removed).**
### Baselines
**Baseline: TimeSformer+DS.** TimeSformer (Bertasius et al., 2021) trained with unsupervised distant supervision Lin et al. (2022) provides the state-of-the-art step-level video representation for pure-video-based step forecasting. We fine-tuned the model on CrossTask videos before testing.
**Baseline: wikiHow Linear.** This model is trained on all wikiHow linear step sequences selected during title-matching (SS3.1). To ensure fairness in comparison, the training sequences undergo the same step processing as that of the non-sequential model. For each training sequence, the model takes the complete wikiHow step library as input and one linear sequence from the selected wikiHow documents as target output.
**Baseline: ReBART.** ReBART Chowdhury et al. (2021) is the state-of-the-art sentence re-ordering method that uses a text-to-marker generation format. Numbered markers are inserted before each step in the training data, and the target output step sequence is translated into corresponding marker sequences.
**Ablation Study: Direct Next Step Prediction & Direct Partial Sequence Completion.** These two task-specific models are included as variants of our model (SS3.2) where the input training sequence is a partial start sequence and the target output sequence is just the next step (for next step prediction) or the remaining sequence (for partial sequence completion). The training data for these two models are also constructed from our grounded video step sequences (SS3.1).
### Automatic Evaluation
**Evaluation Dataset** Inspired by Chen et al. (2022), we build our evaluation dataset on top of the existing CrossTask Dataset Zhukov et al. (2019) and reuse their manual temporal step annotations. Using procedures in SS3.1, we ground annotated CrossTask videos (Fig.2) to sentence-simplified wikiHow Steps. Afterwards, we randomly select 40% of grounded step sequences to form the training set. Remaining sequences form the test set.
For each grounded step sequence \(S=(s_{1}\rightarrow...\to s_{n})\) in the test set, we split after all steps \((s_{t}|t\in[1,n-1])\) to produce partial start sequences \(S_{p}=(s_{1}\rightarrow...\to s_{t})\). For next step prediction, the correct output corresponding to \(S_{p}\) is the next step \(s_{t+1}\); while for partial sequence completion, the correct output corresponding to \(S_{p}\) is the remaining sequence \((s_{t+1}\rightarrow...\to s_{n})\). In the case where multiple grounded step sequences share the same partial start sequence \(S_{p}\) but have different next step / remaining steps, the input sequence \(S_{p}\) would have multiple correct answers for next step prediction / partial sequence completion.
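The prefix-splitting procedure above, with multiple correct answers accumulated per shared prefix, can be sketched as:

```python
def make_eval_examples(test_sequences):
    """Split every grounded sequence after each step.  Prefixes shared by
    several sequences accumulate multiple correct answers for both next
    step prediction and partial sequence completion."""
    examples = {}
    for seq in test_sequences:
        for t in range(1, len(seq)):
            prefix = tuple(seq[:t])
            entry = examples.setdefault(prefix, {"next": set(), "completions": set()})
            entry["next"].add(seq[t])
            entry["completions"].add(tuple(seq[t:]))
    return examples
```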
**Next Step Prediction** As shown in Table 1, our models trained using video-to-text grounded step sequences outperform other baselines trained with wikiHow linear step sequences by 15% \(\sim\) 20% absolute gains in all next step prediction metrics. This shows the advantage of our video-grounded step sequences over wikiHow linear sequences in improving the model's ability to predict next steps. Comparing our models trained on complete step sequences against models trained directly on next
| **Model** | HT100M | NSP Acc@1 ↑ | NSP Acc@3 ↑ | NSP Rec@3 ↑ | NSP F1@3 ↑ | PSC Acc@1 ↑ | PSC Edit Dist. ↓ | PSC Norm. Edit Dist. ↓ |
|---|---|---|---|---|---|---|---|---|
| TimeSformer+DS | ✗ | 59.91 | 60.82 | 52.98 | 43.83 | - | - | - |
| Random | ✗ | 31.34 | 50.32 | 28.84 | 38.04 | 1.20 | 2.398 | .6935 |
| wikiHow Linear | ✗ | 44.05 | 59.51 | 54.02 | 42.14 | 11.74 | 1.872 | .6061 |
| ReBART | ✗ | 49.07 | 58.00 | 61.39 | 44.38 | 18.28 | 1.802 | .4411 |
| Direct NSP (Grounding) | ✗ | 68.89 | 63.02 | 79.01 | 53.85 | - | - | - |
| Direct PSC (Grounding) | ✗ | - | - | - | - | 29.17 | 1.214 | .4118 |
| Ours (Grounding) | ✗ | 75.59 | 67.50 | **83.17** | 58.29 | 20.12 | 1.639 | .4296 |
| Ours (Grounding) | ✓ | 70.97 | **74.68** | 74.14 | 61.52 | 29.34 | 1.193 | .4093 |
| Ours (Grounding + PLC) | ✗ | 75.49 | 71.89 | 72.51 | 58.48 | 26.70 | 1.228 | .4267 |
| Ours (Grounding + PLC) | ✓ | **76.09** | 73.72 | 78.22 | **61.90** | **32.08** | **1.123** | **.3849** |

Table 1: **Automatic Evaluation Results on Next Step Prediction (NSP) and Partial Sequence Completion (PSC). Here “HT100M” denotes whether the model is pre-trained on the Howto100M dataset with temporal order information. “Norm. Edit Dist.” represents the average Levenshtein distance normalized by sequence length. “Grounding” denotes whether the model used our grounded video sequences for training. “PLC” represents Path-Level Constraint.**
step prediction without whole script knowledge, we see a large performance gap. This shows the importance of learning whole-script knowledge for next step prediction. When predicting top-3 most likely next steps, models pretrained on Howto100M significantly outperform models w/o pretraining. This can be attributed to the pretrained models having better knowledge of sequence "branching" from observing more diverse task executions.
**Partial Sequence Completion** Our best-performing models trained using video-to-text grounded step sequences typically achieve over 13% absolute gains on Accuracy@1 and over 14% relative gains on normalized edit distance against other baselines trained using wikiHow linear step sequences, showing that grounded video step sequences can boost models' ability in partial sequence completion. When comparing models trained with the Path-Level Constraint (Sec.3.2) to otherwise identical models trained without such a constraint, we see significant gains across all metrics. This demonstrates the effectiveness of our Path-Level Constraint in teaching the model to produce valid step sequences while avoiding their invalid counterparts. We also observe a performance gain for models pretrained on Howto100M vs the same models w/o such pretraining. This result combined with similar results in next step prediction shows that pretraining on a large unlabelled procedural video dataset can improve the model's ability to learn scripts for other tasks.
### Human Evaluation
Using the graph construction method in SS3.3, we generate two graph scripts for each procedural task in CrossTask using the wikiHow Linear baseline (SS4.1) and our non-sequential graph script induction model. To evaluate the correctness and expressiveness of generated graph scripts, we design T/F questions regarding _sequential_, _optional_, and _interchangeable_ relations. For optional and interchangeable step relationships indicated by the graph script, we ask annotators whether the relationship is appropriate. For other steps in the connected graph script, we ask annotators whether their previous and subsequent steps are sequentially appropriate.
Tables 2 and 3 show that our model achieves 46.68% \(\sim\) 52.23% absolute gains in Accuracy across all relation types and task categories. In addition, our model is able to accurately capture 74% more optional steps and 227% more interchangeable step pairs in generated graph scripts.
## 5 Related Work
**Text-based Script Induction** Temporal relations have always been the core of script (schema) related tasks, which can either be learned from data or human annotation. When human-written scripts are available, previous works have typically assumed that the human-provided ordering of steps is the only correct order Jung et al. (2010); Ostermann et al. (2017); Nguyen et al. (2017); Lyu et al. (2021); Sakaguchi et al. (2021). Another line of work has attempted to learn event ordering from data alone, either by assuming that the events follow narrative order Chambers and Jurafsky (2008, 2009); Jans et al. (2012); Rudinger et al. (2015); Ahrendt and Demberg (2016); Wang et al. (2017) or by using an event-event temporal relation classifier to predict the true ordering of events Li et al. (2020, 2021). Our work is distinct from both paradigms as we use human-written scripts as a basis and learn the event ordering from observed sequences in videos.
| **Relation Type** | Linear #/task | Linear Acc | Ours #/task | Ours Acc |
|---|---|---|---|---|
| Sequential | 10.56 | 35.79 | 12.50 | 88.02 |
| Optional | 1.40 | 19.23 | 2.44 | 65.91 |
| Interchangeable | 0.44 | 37.50 | 1.44 | 88.46 |
| Overall | 12.40 | 33.93 | 16.38 | 82.69 |

Table 2: Human Evaluation results by step-relation type.

| **Task Category** | Linear #/task | Linear Acc | Ours #/task | Ours Acc |
|---|---|---|---|---|
| Cooking | 12.1 | 35.16 | 16.2 | 81.07 |
| Household | 12.5 | 28.33 | 16.0 | 75.00 |
| Car Maintenance | 15.0 | 36.67 | 17.5 | 88.89 |

Table 3: Human Evaluation results by task category. #/task denotes average number of relations per task.

**Video-based Script Induction** Existing efforts that utilize visual information in script induction can be mainly classified into implicit script knowledge models and explicit sequential script induction models. Some previous efforts have focused on training models with implicit script knowledge that can make step-level predictions based on textual Yang et al. (2021), visual Sener and Yao (2018); Lin et al. (2022); Zhou et al. (2023), or multimedia Zellers et al. (2021) input. Other models aim to produce explicit sequential graph scripts that only capture procedural relations between steps Salvador et al. (2018); Yang et al. (2021). Another line of work uses multimedia information to generate explicit graph scripts that model only pre-conditional/dependency relationships between events Logeswaran et al. (2023) and sub-events Jang et al. (2023). Ours is the first work to generate explicit non-sequential graph scripts that capture rich procedural, optional, and interchangeable relations through multimedia learning.
## 6 Conclusions and Future Work
We propose the new task of Non-sequential Graph Script Induction to capture optional and interchangeable steps in procedural tasks. Instead of relying on script annotations, we automatically induce graph scripts by grounding procedural videos to a wikiHow textual step library. We transform the graph generation problem into a path generation problem that can be better aligned with video observations, and train a seq2seq model on our grounded step sequences while imposing path-level constraints via a contrastive loss. Experiments demonstrate our model's superiority on downstream tasks including next step prediction and partial sequence completion. Human evaluation confirms our model's ability to generate graph scripts that correctly capture optional and interchangeable steps. Future work will focus on incorporating more video supervision signals, such as enriching steps from videos and adding repeatable steps.
## 7 Limitations
### Representation of Repeatable Steps
Our current approach is not able to capture repeatable steps due to data source constraints from our video datasets. The video datasets we use in this work, namely Howto100M and CrossTask, are both constructed from YouTube videos. At the end of many YouTube instructional videos, there is a brief recap of the whole task, where many steps are displayed a second time. Since CrossTask was originally proposed for step segmentation, its step annotations capture all video references to task-related steps, including the brief mentions at the end of the videos that are not actually part of the task execution. Similarly, the ASR segments near the end of Howto100M videos would also capture the voiceover going through such step references.
Therefore, to ensure that the grounded video step sequence only contains steps included in the execution of the task, we simply removed all repeated steps in the grounded step sequence and kept only the first occurrence. However, in this process, we also removed valid repeats of the same step, for example, when the step Add some salt was executed twice at different stages of the task. We leave this area of improvement for future work.
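The first-occurrence filter described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code, and `dedup_steps` is a hypothetical helper name:

```python
def dedup_steps(grounded_steps):
    """Keep only the first occurrence of each grounded step.

    This mirrors the simple filter described above; note that it also
    drops valid repeats, e.g. a second, legitimate "Add some salt"."""
    seen, kept = set(), []
    for step in grounded_steps:
        if step not in seen:
            seen.add(step)
            kept.append(step)
    return kept

print(dedup_steps(["Add some salt", "Stir", "Add some salt", "Bake"]))
# ['Add some salt', 'Stir', 'Bake']
```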
### Enrichment of Steps from Video
In our current model, all the steps in the wikiHow step library are processed steps from related wikiHow documents. However, it has been shown that textual sources can be prone to reporting bias, occasionally ignoring task-relevant information that is present only in the vision modality Chen et al. (2021).
Continuous frames from video data can capture details that text descriptions do not explicitly mention. If the model is able to make use of such vision-exclusive details and learn patterns from them, its overall ability can be improved. The challenge in utilizing such underlying visual information is to differentiate task-relevant video steps from their task-irrelevant counterparts. This area has not been covered by our current graph script induction pipeline; we hope to provide comprehensive solutions in future work.
## 8 Ethics and Broader Impact
### Datasets
In this work, we used publicly available text data from the wikiHow Dataset ([https://github.com/mahnazkoupagee/wikiHow-Dataset](https://github.com/mahnazkoupagee/wikiHow-Dataset)), which is released under the _Attribution-Noncommercial-Share Alike 3.0 Creative Commons License_ (CC-BY-NC-SA) and allows us to use the dataset for non-commercial purposes. For video data, we used the publicly available CrossTask Dataset ([https://github.com/DmZhukov/CrossTask](https://github.com/DmZhukov/CrossTask)) under the BSD 3-Clause "New" or "Revised" License and the Howto100M Dataset ([https://www.di.ens.fr/willow/research/howto100m/](https://www.di.ens.fr/willow/research/howto100m/)) under the Apache License 2.0. Both licenses allow us to use the datasets for non-commercial purposes.
The datasets we use consist of non-offensive instructional and procedural videos / text scripts about everyday tasks. Our usage of the datasets only concerns the task-related information and does not violate privacy.
### Human Evaluation
As detailed in §4.3, we conduct human evaluation of our generated graph scripts via Amazon Mechanical Turk ([https://www.mturk.com/](https://www.mturk.com/)). All annotators involved in the human evaluation are voluntary participants who receive a fair wage. All annotators were informed of the nature of the task and consented to complete the annotation via an online consent form. We applied for an IRB exemption and the request was approved.
### Model Usage
Our graph script induction framework is not intended to be used for any activity involving human subjects. Instead, it should only be used for generating graph scripts for everyday tasks that benefit people's learning and understanding. It may also be used for predicting or instructing future steps to facilitate the completion of relevant tasks. Note that our graph script induction framework is intended for wikiHow visual tasks and might not be applicable to other scenarios.
## Acknowledgement
We thank the anonymous reviewers for their helpful suggestions. This research is based upon work supported by U.S. DARPA KAIROS Program No. FA8750-19-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
---

# Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference

Wangchunshu Zhou, Ronan Le Bras, Yejin Choi

2023-06-04 | [http://arxiv.org/abs/2306.02379v1](http://arxiv.org/abs/2306.02379v1)
###### Abstract
Pre-trained Transformer models like T5 and BART have advanced the state of the art on a wide range of text generation tasks. Compressing these models into smaller ones has become critically important for practical use. Common neural network compression techniques such as knowledge distillation or quantization are limited to _static_ compression, where the compression ratio is fixed. In this paper, we introduce Modular Transformers, a modularized encoder-decoder framework for flexible sequence-to-sequence model compression. Modular Transformers trains modularized layers that have the same function as two or more consecutive layers in the original model via module replacing and knowledge distillation. After training, the modularized layers can be flexibly assembled into sequence-to-sequence models that meet different performance-efficiency trade-offs. Experimental results show that after a single training phase, by simply varying the assembling strategy, Modular Transformers can achieve flexible compression ratios from 1.1\(\times\) to 6\(\times\) with little to moderate relative performance drop.
## 1 Introduction
The ever increasing size of pre-trained sequence-to-sequence (seq2seq) models Lewis et al. (2020); Raffel et al. (2019); Zhang et al. (2020); Liu et al. (2020); Xue et al. (2021); Zhou et al. (2021) has supported advances in the state of the art in a wide range of natural language processing (NLP) tasks. For example, BART Lewis et al. (2020) has 400 million parameters while T5 Raffel et al. (2019) pushes this number to 11 billion. This makes large pre-trained seq2seq models hard to deploy and prone to negative environmental impacts Strubell et al. (2019); Schwartz et al. (2020); Xu et al. (2021), motivating researchers to investigate methods to compress large pre-trained models into smaller, faster ones that retain strong performance. Previous work has shown that BERT Devlin et al. (2019), a popular encoder-only pre-trained Transformer Vaswani et al. (2017), can be effectively compressed and accelerated via different neural network compression techniques Sanh et al. (2019); Sun et al. (2019); Jiao et al. (2020); Zhou et al. (2020); Gordon et al. (2020); Shen et al. (2020).
However, there is limited research on methods to compress pre-trained seq2seq models Shleifer and Rush (2020). On the other hand, pre-trained seq2seq models are generally more space and time-consuming compared to their encoder-only counterparts since they require storing the decoder as well and generally decode in an auto-regressive fashion. To satisfy ever-changing resource constraints varying in different applications and over time, existing seq2seq compression techniques must separately train and store many compact models with different compression ratios, which is computationally inefficient and may also violate space constraints. Moreover, as suggested by Kasai et al. (2020), as opposed to encoder-only models, it is nontrivial to find proper sizes for seq2seq models with a given resource constraint as the depth for the encoder and the decoder must be jointly tuned. As such, searching for compact seq2seq models meeting different resource constraints can be very costly. This motivates us to investigate the problem of _flexible_ seq2seq compression that can dynamically adjust compression ratio to meet varying resource constraints without training multiple compact models.
In this work, we present Modular Transformers, a framework to compress Transformers into modularized layers for flexible efficient inference. With guidance from the original model, Modular Transformers trains modularized layers that have the same function as different numbers of consecutive layers in the original model via multi-grained module replacing and knowledge distillation. Specifically, we first map each of the modularized Transformer layers to a sub-module
(i.e., a number of consecutive layers) of the encoder or decoder of the original model. We then train the modularized layers by randomly assembling a model for each training step (replacing sub-modules of the original model with their corresponding modularized layers), while keeping the original model parameters frozen. We propose a curriculum-replacing strategy to train the modularized layers in a fine-to-coarse fashion. We also use attention and representation distillation to further encourage the modularized layers to behave similarly to the original sub-modules.
After training, the modularized layers can be flexibly assembled into seq2seq Transformers that meet different performance-efficiency trade-offs. The compression ratio for encoders and decoders can also be seamlessly adjusted to find the optimal parameterization within a fixed computation budget without any additional training. In addition, we introduce two deterministic assembling strategies tailored for smaller sizes and lower latency. Both of them replace the original model following a decoder-to-encoder, top-to-bottom, and fine-to-coarse fashion.
We empirically verify the effectiveness of Modular Transformers by compressing T5 (Raffel et al., 2019), a popular pre-trained seq2seq model, on representative text generation tasks including text summarization, machine translation, and question generation. Empirical results show that our approach consistently outperforms prior seq2seq compression techniques across different datasets while also enabling users to flexibly adjust the efficiency-performance trade-off of the model to meet different requirements for deployment.
## 2 Related Work
**Pre-trained Model Compression.** Prior work has shown that BERT (Devlin et al., 2019), a popular encoder-only pre-trained Transformer (Vaswani et al., 2017), can be effectively compressed and accelerated. As summarized in Xu et al. (2021) and Xu et al. (2021), popular BERT compression techniques include knowledge distillation (Hinton et al., 2015; Sanh et al., 2019; Sun et al., 2019; Jiao et al., 2020; Zhou et al., 2022), which trains a compact student network to mimic the behavior of the original teacher model; pruning (LeCun et al., 1989; Michel et al., 2019; Gordon et al., 2020; Sanh et al., 2020; Wang et al., 2022), which prunes redundant neurons or structures in the original model; module replacing (Xu et al., 2020), which trains compact successor sub-modules to replace those in the original model; and quantization (Shen et al., 2020; Zafrir et al., 2019), which compresses a neural network by reducing the number of bits used to represent its parameters. In addition, a number of works have also investigated efficient inference with BERT-like models via dynamic inference methods, including early exit (Teerapittayanon et al., 2016; Xin et al., 2020; Liu et al., 2020; Schwartz et al., 2020; Zhou et al., 2020) and adaptive computation time (Graves, 2016; Eyzaguirre et al., 2021). These approaches only reduce inference latency while keeping the model size unchanged. Modular Transformers focuses on the first category, which reduces the size of the model while also accelerating inference.
**Sequence-to-sequence Compression.** Compared to conventional neural network compression, seq2seq compression is relatively less explored. The de-facto method for compressing seq2seq models is sequence-level knowledge distillation (Kim and Rush, 2016), which uses the original teacher model to generate pseudo-labels with beam search on the train set and trains the compact student model with the input-pseudo label pairs. A number of recent studies (Junczys-Dowmunt, 2019; Kasai et al., 2020; Zhang et al., 2021) have verified the effectiveness of the pseudo labeling method for distilling pre-trained Transformer-based seq2seq models. However, Shleifer and Rush (2020) show that a simple shrink and fine-tune method performs competitively with pseudo labeling while requiring fewer computations. Another work related to Modular Transformers is LayerDrop (Fan et al., 2020), which randomly drops out Transformer layers when pre-training a seq2seq model and then prunes certain layers during inference for improved efficiency. LayerDrop differs from our method for two main reasons. First, LayerDrop must be applied _during_ pre-training while Modular Transformers can be applied to any pre-trained model. This makes the scope of application of our method much broader because most existing models are not pre-trained with LayerDrop. Second, during inference, LayerDrop prunes layers while Modular Transformers replaces sub-networks with compact modules with similar functionality. This hopefully reduces the performance drop, especially when the target compression ratio is relatively large.
Moreover, there is also some prior work exploring pruning Michel et al. (2019); Li et al. (2021) and quantization Li et al. (2022) techniques for seq2seq model compression. Recently, Zhou et al. (2023) proposed dynamic in-context learning for efficient text generation with prompting. These lines of work are orthogonal to knowledge distillation, pseudo labeling, and Modular Transformers, and the approaches can be combined in a straightforward manner.
## 3 Methodology
In this section, we describe the proposed Modular Transformers framework in detail. We first recap module replacing in §3.1. We then describe the architecture design and training method for Modular Transformers in §3.2 and §3.3, respectively. Then, we present the idea of dynamic assembling in §3.4. Finally, we discuss the relationship between Modular Transformers and vanilla module replacing in §3.5.
### Preliminary: Module Replacing
The goal of module replacing is essentially similar to knowledge distillation: training a compact model that behaves like the original large model. Compared to knowledge distillation which trains a student model to mimic a teacher model by minimizing the discrepancy between the predictions or hidden representations of the student and the teacher, the idea of module replacing is more direct: a compact model should behave the same way as the original model if all of its sub-networks are interchangeable with those in the original model.
Specifically, for a desired compression ratio \(r\), we first specify a compact "successor" layer for each group of \(r\) consecutive layers, which we refer to as a sub-network, in the original model. Given a model \(T\) with \(n\times r\) layers, we can define a compact model \(S\) which has \(n\) layers. Let \(T=\{t_{1},\dots,t_{n}\}\) denote the original model, and let \(t_{i}\) and \(s_{i}\) denote the sub-networks and layers in the original model and the compact model, respectively. The output vector of the \(i\)-th sub-network or layer is denoted as \(\mathbf{y}_{i}\). During compression, in each step, we sample a random variable \(r_{i+1}\), which determines whether the \((i+1)\)-th sub-network in the original model is replaced by the corresponding compact layer in this step, from an independent Bernoulli distribution with a probability \(p\) of success:
\[r_{i+1}\sim\mathrm{Bernoulli}(p) \tag{1}\]
As such, the output of the \((i+1)\)-th layer is calculated as:

\[\mathbf{y}_{i+1}=r_{i+1}*s_{i+1}(\mathbf{y}_{i})+(1-r_{i+1})*t_{i+1}(\mathbf{y}_{i}) \tag{2}\]

where \(*\) denotes element-wise multiplication and \(r_{i+1}\in\{0,1\}\). In this way, the compact layers are trained to replace the corresponding sub-networks in the original model. After convergence, the compact layers are expected to be interchangeable (and thus have the same functionality) with the original sub-networks. Moreover, the interaction between compact layers and original sub-networks and the random permutation of the hybrid model also add extra regularization for the training of the compact model.

Figure 1: Illustration of the Modular Transformers framework. A set of modularized layers with different granularity are trained by multi-grained module replacing and knowledge distillation. During inference, the modularized layers are assembled to meet different resource budgets. \(t_{i}\) denotes the \(i\)-th layer in the original model. \(m_{ij}\) denotes the \(j\)-th modularized layer with a granularity of \(i\).
After training, we collect all compact layers and combine them to be the compressed model \(S=\{s_{1},\ldots,s_{n}\}\), which will be used for efficient inference. Finally, we fine-tune the compact model to bridge the gap between module-replacing training and actual inference with the compact model.
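One stochastic training step of vanilla module replacing can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; `original_subnets[i]` and `compact_layers[i]` stand in for \(t_{i+1}\) and \(s_{i+1}\), and any callables with matching input/output shapes would do:

```python
import random

def hybrid_forward(y, original_subnets, compact_layers, p=0.5):
    """One forward pass through a randomly assembled hybrid model.

    Each sub-network t_i is independently replaced by its compact
    successor s_i with probability p, as in Eqs. (1)-(2)."""
    for t_i, s_i in zip(original_subnets, compact_layers):
        r = 1 if random.random() < p else 0  # r_i ~ Bernoulli(p)
        y = s_i(y) if r else t_i(y)
    return y
```

With \(p=0\) this reduces to the original model and with \(p=1\) to the fully compact model; intermediate \(p\) yields the random hybrids used during training.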
### Modularized Seq2seq Models
Modular Transformers aims to train a set of modularized layers that can be flexibly assembled into compact models with different performance-efficiency trade-offs at test time. It can be directly adapted to meet different resource constraints without re-training compact models of different sizes. To achieve this goal, we propose to define modularized layers with different capacities, i.e., capturing the behavior of sub-networks of different sizes.
Specifically, given an encoder (or decoder) \(T=\{t_{1},\ldots,t_{n}\}\) with \(n\) layers and a target range of compression ratio from \(1\times\) to \(s\times\) (we assume n is divisible by s), we define a suite of modularized layers \(M=\{m_{ij},i\in[n],j\in\{1,\ldots,n/i\}\}\), where \([n]\) denotes all positive integer divisors (or factors) of \(n\) except 1 and \(n\) itself, and \(m_{ij}\) denotes a modularized layer that will be trained to match a sub-network that consists of \(i\) consecutive layers starting from the \(i\times(j-1)+1\)-th layer in the original model. For example, for a Transformer model with 12 encoder/decoder layers and a target maximum compression ratio of 6, Modular Transformers defines 6/4/3/2 modularized layers each corresponding to a sub-network representing 2/3/4/6 consecutive layers of the original model. Overall, Modular Transformers consists of 15 modularized layers for the encoder as well as for the decoder, which is comparable to the original model size. After training, we can combine useful modularized layers into a compact model to reach a target compression ratio and a desired level of inference efficiency (see section 3.4).
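For concreteness, the index set of modularized layers defined above can be enumerated as follows (an illustrative sketch; the helper name is ours):

```python
def modularized_layer_ids(n):
    """Enumerate the (i, j) indices of the modularized layers for an
    n-layer encoder or decoder: i ranges over the positive divisors of n
    except 1 and n itself, and m_ij covers layers i*(j-1)+1 through i*j
    of the original model."""
    grains = [i for i in range(2, n) if n % i == 0]
    return [(i, j) for i in grains for j in range(1, n // i + 1)]

ids = modularized_layer_ids(12)
# granularities {2, 3, 4, 6} -> 6 + 4 + 3 + 2 = 15 modularized layers
```

For a 12-layer stack this reproduces the 15 modularized layers per encoder/decoder mentioned in the text.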
### Multi-grained Module Replacing
After defining the modularized layers in Modular Transformers, we train them to have the same functionality as the original sub-networks via a combination of module replacing and knowledge distillation.
First, we extend the vanilla module replacing to multi-grained module replacing that can mix modularized layers with different granularity (i.e., target sub-network size) during training. Since the target sub-networks of modularized layers of different granularity often overlap with each other, we can not simply sample a Bernoulli random variable to determine the structure of the hybrid model used for training in the current step. Instead, we propose a greedy replacing strategy for multi-grained module replacement. Specifically, we start from the n-th (last) layer of the original model and an empty hybrid model \(H\). Assuming that we reach the k-th layer, we sample a random variable \(r_{k}\sim\mathrm{Bernoulli}(p)\) where \(p\) represents the probability of replacing that layer. If \(r_{k}=0\), we add \(t_{k}\) into the hybrid model \(H\) and move to the next layer. Otherwise, we perform a module replacement and sample a random variable \(r_{ki}\sim\mathrm{Cat}(\mathbf{p})\) where \(i\in[n]\) denotes the granularity of the modularized layer we want to add to the hybrid model. \(\mathrm{Cat}(\mathbf{p})\) denotes a categorical distribution on \([n]\) (or Multi-noulli distribution) and \(\mathbf{p}\) is the probability vector of the \(|[n]|\) possible outcomes. Then we determine the nearest suitable modularized layer of granularity \(i\) as \(m_{ij},j=\lfloor k/i\rfloor\). If \(i\) divides \(k\), we can directly add \(m_{ij}\) into \(H\) and advance to the \(k-i\)-th layer. Otherwise, we first append the original layers \(t_{i\times j+1:k}\) to \(H\) before adding \(m_{ij}\). In this way, for each training step, we sample a hybrid model and train its modularized layers with a task-specific objective (such as cross-entropy loss) to encourage the modularized layers to mimic the behavior of the replaced sub-networks in the original model.
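The greedy replacing procedure above can be sketched as follows. This is our illustrative reconstruction; the function name, the output representation, and the handling of the bottom boundary (when no modularized layer fits below layer \(k\)) are assumptions:

```python
import random

def sample_hybrid(n, p, grain_probs):
    """Greedily sample a hybrid model from layer n down to layer 1.

    Returns a top-to-bottom list of ('orig', k) original layers and
    ('mod', i, j) modularized layers. grain_probs maps each granularity
    i to its categorical probability Cat(p)."""
    hybrid, k = [], n
    grains, weights = zip(*grain_probs.items())
    while k >= 1:
        if random.random() >= p:           # keep the original layer t_k
            hybrid.append(("orig", k))
            k -= 1
            continue
        i = random.choices(grains, weights)[0]   # sample a granularity
        j = k // i                          # nearest suitable m_ij
        if j == 0:                          # no module fits below layer k
            hybrid.append(("orig", k))
            k -= 1
            continue
        for l in range(k, i * j, -1):       # append t_{i*j+1 : k} first
            hybrid.append(("orig", l))
        hybrid.append(("mod", i, j))        # m_ij covers layers i*(j-1)+1 .. i*j
        k = i * (j - 1)
    return hybrid
```

Setting \(p=0\) recovers the original model, while \(p=1\) with a single granularity yields a fully modularized stack.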
In addition to module replacing, we also propose to leverage attention and hidden representation distillation Sun et al. (2019); Jiao et al. (2020) to better align modularized layers and the corresponding sub-networks. Specifically, we train a modularized layer \(m_{ij}\) in the hybrid model to match the attention distribution and the output hidden representation of \(t_{i\times j}\) in the original model. During training, we simply combine task-specific loss with distillation loss with equal weights.
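The equal-weight combination of task and distillation losses could look like the following dependency-free sketch; in practice each term would be a tensor loss (e.g. MSE over attention maps and hidden states), and the function names are ours:

```python
def mse(a, b):
    # mean squared error between two flat vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def replacing_loss(task_loss, attn_student, attn_teacher,
                   hidden_student, hidden_teacher):
    """Task-specific loss plus attention and hidden-representation
    distillation terms, combined with equal weights as described above."""
    return (task_loss
            + mse(attn_student, attn_teacher)
            + mse(hidden_student, hidden_teacher))
```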
Moreover, we adopt the curriculum replacement strategy from Xu et al. (2020) and progressively increase the replacing probability \(p\) to 1 during training. However, as opposed to vanilla module replacing where the model becomes static as \(p=1\), the hybrid model is still randomly combined with modularized layers of different granularity. Also, motivated by the fact that modularized layers of larger granularity are more difficult to train, we propose to train the modularized layers of different
granularity in a fine-to-coarse fashion. Specifically, for a 12-layer model (such that the modularized layers are of granularity in {2,3,4,6}), we start the training with \(\textbf{p}=[1,0,0,0]\) so that only modularized layers of granularity \(2\) will be sampled in the hybrid model. We then progressively (linearly) change **p** to \([0.75,0.25,0,0]\), \([0.5,0.25,0.25,0]\), and finally to \([0.25,0.25,0.25,0.25]\) so that they are sampled uniformly. We empirically set each transition phase to a quarter of all training steps.
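The curriculum over **p** can be sketched as a piecewise-linear interpolation through the anchor vectors given above. The exact interpolation is our assumption; the text only specifies the anchors and quarter-length phases:

```python
def curriculum_p(step, total_steps):
    """Linearly interpolate the granularity distribution p between the
    anchor vectors [1,0,0,0] -> ... -> [0.25,0.25,0.25,0.25]."""
    anchors = [[1.0, 0.0, 0.0, 0.0],
               [0.75, 0.25, 0.0, 0.0],
               [0.5, 0.25, 0.25, 0.0],
               [0.25, 0.25, 0.25, 0.25]]
    if step >= total_steps:
        return anchors[-1]
    phase = step / total_steps * (len(anchors) - 1)  # 3 transitions
    lo, frac = int(phase), phase - int(phase)
    return [a + frac * (b - a) for a, b in zip(anchors[lo], anchors[lo + 1])]
```

Each returned vector still sums to one, so it remains a valid categorical distribution at every step.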
### Flexible Assembling
We describe how we can use the trained modularized layers for flexible inference. The proposed flexible assembling method starts with the original model \(T\) and gradually replaces it with modularized layers. In contrast to the random replacing strategy used for sampling the hybrid model during training, we need to take the target compression/speed-up ratio into account while also optimizing for the performance of the assembled model. To this end, we propose a deterministic decoder-first, top-down, fine-to-coarse replacing strategy for flexible assembling. The decoder-first and top-down strategy is inspired by the insight from prior work on seq2seq compression and module replacement. Specifically, Shleifer and Rush (2020) and Kasai et al. (2020) revealed that within a fixed parameter budget, seq2seq models with a deep encoder and a shallow decoder generally perform the best, and Xu et al. (2020) showed that top layers are more robust to module replacing, whereas replacing bottom layers often leads to a larger performance drop. The fine-to-coarse strategy is motivated by our conjecture that modularized layers with large granularity will more likely lead to a larger drop in performance.
In general, there exist two common compression strategies tailored for different kinds of resource budgets. The first setting focuses on the size of the compressed models and the second focuses on the inference speed. We devise two variants of our assembling strategy tailored for _speed-first_ and _size-first_ compression. Inspired by the previous observation of Shleifer and Rush (2020), we use a more uniform replacing strategy for size-first assembling while replacing decoder sub-networks more aggressively for speed-first assembling.
We illustrate our approach on the T5-base model with \(12\) encoder and \(12\) decoder layers. In the size-first assembling, we first replace the decoder from top to bottom with modularized layers \(m_{2j}\) corresponding to two consecutive layers. We then do the same for the encoder. After \(T\) is entirely replaced by \(m_{2j}\), we start replacing \(m_{2j}\) with \(m_{3j}\) from decoder to encoder, from top to bottom. We do this iteratively until the entire model consists only of \(m_{6j}\) layers. As for speed-first assembling, we first replace decoder layers until they have all been replaced by \(m_{4j}\) layers. We then replace the encoder with \(m_{2j}\) and then replace the decoder with \(m_{6j}\). Finally, we iteratively replace the encoder until the whole model consists only of \(m_{6j}\) layers.
During this process, each module replacing operation reduces the number of layers in the model by \(1\). As such, the compression ratio can be flexibly adjusted from \(1\) to the maximum granularity of the modularized layers according to different resource constraints and performance-efficiency trade-offs.
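Given that each operation removes exactly one layer, the reachable operating points can be enumerated directly. The sketch below assumes the 12+12-layer T5-base setting with maximum granularity 6, and the ratios are layer-count ratios, which only approximate parameter or latency ratios:

```python
def achievable_layer_ratios(n_enc=12, n_dec=12, max_grain=6):
    """Each assembling operation removes one layer, so every total layer
    count from n_enc + n_dec down to (n_enc + n_dec) // max_grain is
    reachable; return the corresponding layer-count compression ratios."""
    total = n_enc + n_dec
    return [total / k for k in range(total, total // max_grain - 1, -1)]

ratios = achievable_layer_ratios()
# 21 operating points, from 1.0x (24 layers) down to 6.0x (4 layers)
```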
### Relation with Module Replacing
Our approach differs from vanilla module replacing in four different ways: (1) module replacing is limited to compressing BERT-like encoder-only models in Xu et al. (2020) while Modular Transformers works with encoder-decoder models; (2) we propose to train modularized layers that can replace sub-networks of _different sizes_ in the original model and can be connected with others, whereas the successor layers are mapped to sub-networks of a fixed size; (3) we propose multi-grained module replacing and use representation distillation to better align modularized layers with their corresponding sub-networks, while vanilla module replacing only supports a fixed compression ratio and is trained with cross-entropy loss only; (4) we introduce the idea of dynamic assembling that enables _flexible_ adjustment of the performance-efficiency trade-off, whereas the compression ratio in Xu et al. (2020) is fixed since the successor layers are directly connected together. Essentially, we can view module replacing as a method for training modularized layers so that they are interchangeable with sub-networks in the original model.
## 4 Experiments
In this section, we empirically verify the effectiveness of Modular Transformers by using it to compress T5 (Raffel et al., 2019), a popular pre-trained seq2seq Transformer, on a number of representative natural language generation tasks and datasets.
### Experimental Settings
#### 4.1.1 Models and Datasets
We conduct experiments with the T5-base and T5-large models, which have 220M and 774M parameters respectively, as the backbone models for our base-size and large-size experiments.
As for tasks and datasets, we evaluate Modular Transformers on text summarization, question generation, and machine translation. For summarization, we select the CNN/DailyMail Hermann et al. (2015) and XSUM Narayan et al. (2018) datasets. We use SQuAD Rajpurkar et al. (2016), following the split in Du and Cardie (2018), for question generation, and use the WMT-14 En-De split Callison-Burch et al. (2009) for machine translation.
#### 4.1.2 Baselines
We compare Modular Transformers with the following seq2seq compression methods: (1) **Pseudo Labeling (PL)** Kim and Rush (2016), also called sequence-level knowledge distillation, which uses the original model to generate pseudo-labeled data for training compact models; (2) **Shrink and Fine-tune (SFT)** Shleifer and Rush (2020), which simply shrinks the teacher model to the student size and then fine-tunes the shrunk model; (3) **Knowledge Distillation (KD)** Hinton et al. (2015), which combines logit distillation, hidden representation distillation, and attention distillation to train a compact student model; (4) **Head Prune** Michel et al. (2019), which prunes attention heads in the model with a gradient-based head importance score; (5) **LayerDrop** Fan et al. (2020), which randomly drops Transformer layers during fine-tuning and selectively prunes certain layers for efficient inference; and (6) **Quant + KD** Li et al. (2022), which combines quantization-aware training and knowledge distillation to train a quantized seq2seq model.
#### 4.1.3 Training Details
We define modularized layers of granularity {2,3,4,6} for both T5-base and T5-large.1 We train Modular Transformers and all related models with a warm-up ratio of 0.05, a label smoothing Szegedy et al. (2016) rate of 0.1, and learning rate in {3e-5, 5e-5, 7e-5}. For Modular Transformers, we use a batch size of 128 and a max epoch of 24 for summarization and question generation datasets, and a batch size of 256 and a max epoch of 60 for machine translation datasets. We use the same batch size for all methods and the same number of epochs for all methods with KD. We train non-KD baselines with half the number of epochs which leads to similar performance and faster convergence. For method-specific hyperparameters, we adopt the values provided in the original contribution.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{**Summarization**} & \multicolumn{3}{c}{**Question Gen**} & \multicolumn{1}{c}{**Translation**} \\ & & & \multicolumn{3}{c}{**CNN/DM**} & \multicolumn{3}{c}{**XSum**} & \multicolumn{3}{c}{**SQuAD**} & \multicolumn{1}{c}{**En-De**} \\ \cline{4-13}
**Method** & **\#Layers** & **Ratio** & RG-1 & RG-2 & RG-L & RG-1 & RG-2 & RG-L & B-4 & M & RG-L & B-4 \\ \hline T5-base & 12-12 & 1.0/1.0\(\times\) & 42.25 & 20.35 & 39.45 & 43.39 & 20.66 & 35.15 & 22.216 & 25.31 & 51.40 & 30.3 \\ \hline \hline \multicolumn{11}{c}{_Compressed Models Focusing on Smaller Sizes_} \\ \hline Pseudo Labeling & 6-6 & 1.5/2.0\(\times\) & 41.08 & 19.24 & 38.15 & 42.34 & 19.91 & 34.09 & 21.36 & 24.52 & 50.55 & 29.1 \\ SFT & 6-6 & 1.5/2.0\(\times\) & 41.16 & 19.48 & 38.24 & 42.21 & 19.46 & 33.83 & 21.24 & 24.45 & 50.42 & 28.5 \\ KD & 6-6 & 1.5/2.0\(\times\) & 41.26 & 19.55 & 38.36 & 42.52 & 19.95 & 34.22 & 21.43 & 24.61 & 50.61 & 29.2 \\ Head Prune & 12-12 & 1.2/1.9\(\times\) & 38.86 & 17.83 & 36.75 & 39.73 & 17.45 & 31.92 & 19.45 & 23.41 & 48.63 & 27.2 \\ LayerDrop & 6-6 & 1.5/2.0\(\times\) & 37.45 & 16.38 & 35.62 & 38.62 & 16.61 & 30.78 & 18.86 & 22.82 & 47.25 & 25.1 \\ Quant + KD & 12-12 & 1.1/3.9\(\times\) & 41.18 & 19.65 & 38.37 & 42.05 & 19.51 & 33.89 & 21.38 & 24.45 & 50.43 & 29.0 \\
**Modular Transformers** & 6-6 & 1.5/2.0\(\times\) & **41.71** & **19.86** & **38.83** & **42.86** & **20.27** & **34.49** & **21.53** & **24.81** & **50.81** & **29.4** \\ \hline \hline \multicolumn{11}{c}{_Compressed Models Focusing on Lower Latency_} \\ \hline Pseudo Labeling & 12-3 & 1.9/1.6\(\times\) & 41.25 & 19.38 & 38.27 & 42.49 & 20.01 & 34.26 & 21.40 & 24.45 & 50.47 & 29.3 \\ SFT & 12-3 & 1.9/1.6\(\times\) & 41.45 & 19.63 & 38.55 & 42.41 & 19.74 & 33.96 & 21.38 & 24.52 & 50.51 & 28.8 \\ KD & 12-3 & 1.9/1.6\(\times\) & 41.52 & 19.65 & 38.60 & 42.60 & 20.10 & 34.39 & 21.51 & 24.60 & 50.63 & 29.4 \\ LayerDrop & 12-3 & 1.9/1.6\(\times\) & 38.31 & 17.37 & 36.44 & 39.44 & 17.23 & 31.46 & 19.28 & 23.31 & 47.93 & 25.9 \\
**Modular Transformers** & 12-3 & 1.9/1.6\(\times\) & **41.75** & **19.88** & **38.98** & **42.99** & **20.35** & **34.59** & **21.75** & **24.95** & **50.91** & **29.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Static compression results with T5-base in both size-first (top) and speed-first (bottom) settings. For compression ratio, \(a/b\times\) means the model is \(a\) times faster and \(b\) times smaller than the original model. Head Prune and Quant + KD are only considered in the size-first setting, as they cannot be adjusted for size-first or speed-first settings and their speed-up ratio is relatively low.
#### 4.1.4 Evaluation
Following previous work, we report ROUGE Lin (2004) and BLEU Papineni et al. (2002) for text summarization and machine translation, and report BLEU-4, METEOR Banerjee and Lavie (2005), and ROUGE-L for question generation. We compare Modular Transformers with baseline methods in both **fixed** and **flexible** compression ratio scenarios. In the fixed compression ratio scenario, we experiment with both the size-first and speed-first settings. The size-first setting focuses on the size of the compressed models and compresses both the encoder and the decoder by half, while the speed-first setting focuses on the inference speed and compresses the decoder by 3/4 but leaves the encoder uncompressed. The baselines are trained twice, while Modular Transformers can adjust to the two different trade-offs by simply varying the two aforementioned assembling strategies. In the flexible compression setting, we present both the size-performance and speed-performance trade-offs of Modular Transformers and compare them with several common static compression settings trained with pseudo labeling.
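The size component \(b\) of the \(a/b\times\) notation can be reproduced from the layer counts alone. The sketch below assumes model size is proportional to the number of Transformer layers (embedding parameters are ignored); the speed-up \(a\) is measured empirically and cannot be derived this way.

```python
def size_ratio(teacher_enc: int, teacher_dec: int,
               student_enc: int, student_dec: int) -> float:
    """Size-reduction factor b in the a/b-times notation, assuming model
    size scales with the total number of Transformer layers (our
    simplifying assumption; embeddings are ignored)."""
    return (teacher_enc + teacher_dec) / (student_enc + student_dec)

# Size-first setting: halve both encoder and decoder (12-12 -> 6-6)
print(size_ratio(12, 12, 6, 6))    # 2.0, matching the 1.5/2.0x rows
# Speed-first setting: keep the encoder, shrink the decoder to 3 layers
print(size_ratio(12, 12, 12, 3))   # 1.6, matching the 1.9/1.6x rows
```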
### Experimental Results
#### 4.2.1 Static Compression Results
We present static compression results with T5-base and T5-large as the teacher model in Table 1 and Table 2, respectively. We can see that Modular Transformers consistently outperforms all compared baselines in both size-first and speed-first settings without re-training the model. We also find that while previous literature Kim and Rush (2016); Shleifer and Rush (2020) observes that logit-based knowledge distillation underperforms the pseudo labeling and SFT baselines, adding attention distillation and hidden representation distillation makes KD perform slightly better than these baselines. Moreover, we find that LayerDrop, the other baseline that does not require re-training for different compression ratios, performs poorly when the number of layers to be dropped is relatively large, which is consistent with previous observations Fan et al. (2020). Combining quantization and knowledge distillation performs well in terms of the size-performance trade-off, which is consistent with the concurrent work of Li et al. (2022). However, the speed-up of this approach is much smaller compared to other baselines, which we believe can be partially attributed to common GPUs not being optimized for quantized models.
In addition, we find that models with 12-3 encoder/decoder layers consistently outperform their 6-6 counterparts while also leading to larger speed-ups. This is consistent with previous observations of Shleifer and Rush (2020) and it confirms the effectiveness of the speed-first assembling strategy.
#### 4.2.2 Flexible Compression Results
Similar to the static compression experiments, we compare Modular Transformers with the KD baseline, the current best-performing method, on both size-performance and speed-performance trade-offs. We present the results in Figure 2. The left figure measures the trend of Modular Transformers' performance after each replacing operation in the size-first assembling strategy. The right figure illustrates the performance of
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{**Summarization**} & \multicolumn{3}{c}{**Question Gen**} & \multicolumn{1}{c}{**Translation**} \\ & & & \multicolumn{3}{c}{**CNN/DM**} & \multicolumn{3}{c}{**XSum**} & \multicolumn{3}{c}{**SQUAD**} & \multicolumn{1}{c}{**En-De**} \\ \cline{4-13}
**Method** & **\#Layers** & **Ratio** & RG-1 & RG-2 & RG-L & RG-1 & RG-2 & RG-L & B-4 & M & RG-L & B-4 \\ \hline T5-large & 24-24 & 1.0/1.0\(\times\) & 42.61 & 20.72 & 39.81 & 43.83 & 21.05 & 35.44 & 22.75 & 25.45 & 51.62 & 31.2 \\ \hline \hline \multicolumn{11}{c}{_Compressed Models Focusing on Smaller Sizes_} \\ \hline Pseudo Labeling & 12-12 & 1.5/2.0\(\times\) & 41.54 & 19.61 & 38.51 & 42.75 & 20.23 & 34.44 & 21.61 & 24.73 & 50.65 & 29.6 \\ SFT & 12-12 & 1.5/2.0\(\times\) & 41.57 & 19.66 & 38.62 & 42.60 & 20.01 & 34.21 & 21.45 & 24.61 & 50.60 & 29.2 \\ KD & 12-12 & 1.5/2.0\(\times\) & 41.62 & 19.73 & 38.70 & 42.83 & 20.21 & 34.55 & 21.59 & 24.75 & 50.71 & 29.9 \\ Quant + KD & 24-24 & 1.1/3.9\(\times\) & 41.64 & 19.78 & 38.67 & 42.52 & 19.95 & 34.05 & 21.57 & 24.66 & 50.61 & 29.2 \\
**Modular Transformers** & 12-12 & 1.5/2.0\(\times\) & **41.92** & **19.97** & **39.01** & **43.22** & **20.51** & **34.76** & **21.79** & **24.95** & **50.86** & **30.1** \\ \hline \hline \multicolumn{11}{c}{_Compressed Models Focusing on Lower Latency_} \\ \hline Pseudo Labeling & 24-6 & 1.9/1.6\(\times\) & 41.76 & 19.79 & 38.68 & 42.92 & 20.28 & 34.51 & 21.72 & 24.94 & 50.81 & 30.1 \\ SFT & 24-6 & 1.9/1.6\(\times\) & 41.85 & 19.95 & 38.85 & 42.83 & 20.14 & 34.43 & 21.41 & 24.57 & 50.65 & 29.4 \\ KD & 24-6 & 1.9/1.6\(\times\) & 41.92 & 20.01 & 38.94 & 42.97 & 20.35 & 34.53 & 21.45 & 24.64 & 50.70 & **30.3** \\
**Modular Transformers** & 24-6 & 1.9/1.6\(\times\) & **42.12** & **20.26** & **39.25** & **43.33** & **20.68** & **34.81** & **21.79** & **25.01** & **50.95** & **30.3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Static compression results with T5-large. For compression ratio, \(a/b\times\) means the model is \(a\) times faster and \(b\) times smaller than the original model. We only include methods that are competitive in the T5-base experiments.
Modular Transformers and the KD baseline with configurations that correspond to a speed-up ratio from approximately \(1.25\times\) to \(2.5\times\). We can see that Modular Transformers retains the performance of the original model very well when only reducing a few layers or assembling for a relatively small speed-up ratio. More importantly, Modular Transformers consistently outperforms the KD baseline, which is trained 6 times (once for each specific compression ratio), and the improvement is even larger in the extreme compression regime. This confirms the effectiveness of our approach for flexible seq2seq compression and its robustness to relatively large compression ratios.
### Analysis
We then conduct a series of analyses to better understand the effectiveness of Modular Transformers. All experiments are conducted with T5-base on the CNN/DailyMail dataset.
**Impact of Assembling Strategy** We first analyze the impact of the proposed assembling strategies, which replace layers from fine to coarse, from encoder to decoder, and from top to bottom. We compare our strategies in both the size-first and speed-first settings with a random baseline and an oracle obtained by exhaustively searching all possibilities. The results are shown in Figure 3. We can see that our strategy is relatively close to the oracle, especially in the low compression ratio regime. In contrast, the random assembling baseline performs substantially worse, demonstrating the effectiveness of our assembling strategies.
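One way to make the "fine to coarse, encoder to decoder, top to bottom" ordering concrete is to enumerate candidate replacements as a sorted list. This is an illustrative sketch: the granularities {2, 3, 4, 6} come from the training setup above, but the exact tie-breaking and the reading of "top" as the layers nearest the output are our assumptions, not the authors' implementation.

```python
GRANULARITIES = [2, 3, 4, 6]  # modularized layer sizes used in the paper

def replacement_order(n_layers: int = 12):
    """Candidate module replacements, ordered fine-to-coarse, then
    encoder-before-decoder, then top-to-bottom. Each entry is
    (granularity, component, start): a module of granularity g stands
    in for the original layers [start, start + g)."""
    order = []
    for g in GRANULARITIES:                           # fine -> coarse
        for comp in ("encoder", "decoder"):           # encoder -> decoder
            for start in range(n_layers - g, -1, -g): # top -> bottom
                order.append((g, comp, start))
    return order

order = replacement_order()
print(order[:3])  # [(2, 'encoder', 10), (2, 'encoder', 8), (2, 'encoder', 6)]
```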
**Impact of Random Replacing** Since the assembling strategy at test time is deterministic, one may question the need for a random module replacing scheme during training. To this end, we compare Modular Transformers with a baseline where we only sample the model structures used during inference. The results are shown in Table 3. We can see that this variant performs substantially worse than the random replacing method. We believe that the interaction between the well-trained original layers and the modularized layers during training is crucial for improving performance.
**Impact of Knowledge Distillation** We then study the impact of combining knowledge distillation with module replacing for training Modular Transformers. We compare against a variant trained without knowledge distillation in Table 3. We find that incorporating knowledge distillation for training modularized layers improves the overall performance. This confirms the complementary nature of knowledge distillation and module replacement for the first time.
**Impact of Curriculum Replacing** In addition, we analyze the impact of curriculum replacing, which progressively increases the probability of including coarse-grained modularized layers in the hybrid model during training. We train a uniform-sampling variant for comparison. In Table 3, we find that the uniform variant performs worse than Modular Transformers, demonstrating the effectiveness of curriculum replacing.
Figure 3: Comparison between our flexible assembling method, random assembling baseline, and oracle performance with T5-base on the CNN/DailyMail dataset in both size-first (left) and speed-first (right) settings.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Methods & RG-1 & RG-2 & RG-L \\ \hline Ours & **41.53** & **19.74** & **38.62** \\ - w/o random replacing & 41.25 & 19.52 & 38.31 \\ - w/o curriculum replacing & 41.35 & 19.62 & 38.43 \\ - w/o knowledge distillation & 41.42 & 19.60 & 38.49 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study results with T5-base on the CNN/DailyMail dataset in the size-first setting.
Figure 2: Flexible compression results of Modular Transformers and KD with T5-base on the CNN/DailyMail dataset in both size-first (left) and speed-first (right) settings.
## 5 Conclusion
We introduce Modular Transformers, a framework for flexible and accurate seq2seq compression that adapts to different performance-efficiency trade-offs, which has important practical implications such as reducing the computation cost and environmental impact of deployed systems. Our method defines a set of modularized layers with different granularity and trains them to share the same functionality as the sub-networks of the original model they replace. We combine multi-grained module replacing and knowledge distillation, and design two flexible assembling strategies for size-first and speed-first inference. Empirical results on various text generation tasks show that our approach consistently outperforms previous methods while alleviating the need to re-train the compact model in order to adapt to new memory and inference time constraints.
## Limitations
Our experiments focus on the T5-base and T5-large models as these are widely used, representative pre-trained seq2seq models. However, there are other pre-trained seq2seq models such as BART that we did not experiment with. It would also be interesting to experiment with pre-trained models with more layers such as T5-3B and T5-11B. We have not conducted these experiments due to resource constraints.
## Ethics Statement
Our method is used to compress seq2seq Transformer models. Therefore, ethical considerations of text generation models generally apply to our method. We encourage users to assess potential biases before deploying text generation models.
2304.14362 | Intertwined relaxation processes maintain athermal electron distribution in laser-excited dielectrics | Nils Brouwer, Steffen Hirtle, Baerbel Rethfeld | 2023-04-27T17:38:47Z | http://arxiv.org/abs/2304.14362v1

# Intertwined relaxation processes maintain athermal electron distribution in laser-excited dielectrics
###### Abstract
We study the relaxation dynamics of laser-excited non-equilibrium electron distributions in the valence- and conduction band of a dielectric. We apply Boltzmann collision integrals to trace the influence of different scattering mechanisms on the energy- and particle density of electrons and holes. Our results show a two-timescale behavior of the equilibration process: Thermalization within each band towards a respective Fermi distribution as well as equilibration of the band-resolved temperatures occur within a few femtoseconds. In contrast, the equilibration of the respective chemical potentials, driven by scattering processes involving particle exchange like impact ionization and Auger recombination, requires timescales in the range of hundreds of femtoseconds. We evaluate the effect of our model assumptions for distinct material parameters on the extracted specific relaxation times. All these timescales, however, strongly increase, when the additional scattering channel with the cold phonon system is considered: Our simulations demonstrate that an athermal non-Fermi electron distribution can be maintained well up to the picosecond range.
## I Introduction
Ultrashort laser pulses are relevant in numerous applications due to their ability to deposit energy on strongly confined spatial and temporal scales, _e.g._, in laser processing [1], nanosurgery [2], and magnetic switching [3]. The very high intensities achieved with ultrashort laser pulses also enable energy deposition in initially transparent materials such as semiconductors and dielectrics by multi-photon absorption across the band gap [4; 5]. The interaction of ultrashort laser pulses with solids is also interesting from a more fundamental point of view, since the pulse durations are comparable to the intrinsic timescales of fundamental scattering processes within the electronic degrees of freedom and the lattice [6; 7; 8].
In the search for a theoretical description of laser-induced processes on multiple timescales, a short pulse duration allows the laser excitation to be separated from the following relaxation processes [9; 10]: At first the laser pulse excites the electrons, which may lead to a state of non-equilibrium inside as well as in between electronic bands, in which a common electron temperature is not defined. In addition, there is a non-equilibrium between the electronic and phononic systems, since only the electrons can be directly excited by optical lasers while the lattice remains cold. Several theoretical methods have been developed to describe laser excitation and the following relaxation processes. One very successful method is the two-temperature model (TTM) [11], in which the electronic and phononic systems are each assumed to be described by a specific temperature. Several extensions have been proposed to account, _e.g._, for multiple electron bands in semiconductors [12; 13] or in noble metals [14; 15], spin-resolution in itinerant ferromagnets [16; 17; 18], as well as several phonon modes [19]. However, these temperature-based models inherently assume equilibrium distribution functions (Fermi distributions for electrons, Bose-distributions for phonons) and are therefore unable to capture athermal electron distributions and to describe their thermalization. An approximate consideration of athermal electrons has been proposed with a separation of the electronic system, where the laser-excited electrons are treated separately from the unaffected electrons [20; 21; 22]. Another approach to describe the excitation of solids with ultrashort laser pulses is time-dependent density functional theory. It has been applied to the laser excitation of metals in combination with molecular dynamics [23] and was used to study the light-matter interaction in dielectrics [24]. However, these models lack the capability to track electron-electron relaxation.
In this work we apply Boltzmann collision integrals to track the energy-resolved electron and phonon distribution functions in a laser-excited dielectric during and after irradiation with an ultrashort laser pulse of a photon energy much smaller than the band gap [25; 26; 27]. Here, excitation of valence-band electrons is possible by multiphoton ionization as well as tunneling ionization. Further single-photon absorption of electrons may occur in the conduction as well as in the valence band through inverse Bremsstrahlung (intraband absorption). Electron-electron collisions can redistribute energy within one band (intraband), or may exchange energy as well as particles between both bands (interband). The electron systems lose energy due to interaction with the
cold lattice, which is described through the interaction of the band-resolved electrons with several phonon modes. This approach allows us to study the equilibration of energy inside each considered electron band as well as the interplay of equilibration between conduction band, valence band and phonon modes without assuming an equilibrium distribution within any subsystem. The model follows our previous works in Refs. [26; 27; 28].
We will briefly recall the applied model in the next section II, and explain the most relevant theoretical aspects for this work. In section III we show our results for different assumptions and starting conditions. Specifically, section III.1 deals with the time evolution of the laser-excited electronic energy distributions, when the influence of the phonon systems is neglected. Distinct features of the transient non-equilibrium electron distribution function, particularly its difference to a thermal distribution, are identified and attributed to the considered scattering processes. The observed different timescales of characteristic equilibration processes are studied in more detail in section III.2, where we initialize excited Fermi distributions for conduction and valence band, respectively, and analyze the influence of varying excitation strengths and material parameters on the resulting thermalization times. In section III.3 we trace the temporal evolution of laser-excited electron distribution including additionally the interaction with the phonons. This leads to cooling of the excited electronic systems, which continuously feeds a certain nonequilibrium in the electron distribution lasting up to picosecond timescales. Our results therefore show that the electron-phonon coupling causes the electrons' energy distribution to remain athermal on timescales well beyond their intrinsic thermalization times.
## II Theory
In this work, we apply Boltzmann collision integrals to trace the transient distribution functions of electrons and phonons in a dielectric. The considered systems are an electronic valence and conduction band as well as one longitudinal acoustic phonon mode and two longitudinal optical phonon modes. Each electronic band is described by an isotropic and parabolic dispersion relation, with differing effective masses with opposite signs. The band edges are separated by a band gap.
The change of the electronic energy distribution is given by the Boltzmann equation. Since we assume a homogeneous, isotropic material, we neglect spatial and angular dependencies. The energy distribution \(f\) then depends on energy \(E\) and time \(t\) only, and its changes are given by a sum of all considered collision integrals
\[\frac{\partial f_{\alpha}(E,t)}{\partial t}=\sum_{\rm coll}\left.\frac{ \partial f_{\alpha}(E,t)}{\partial t}\right|_{\rm coll}\,, \tag{1}\]
where the subscript \(\alpha\) indicates conduction or valence band, respectively.
Several collision processes, within each band as well as interactions between the bands, are considered in our model: Electron-electron intraband collisions allow for thermalization within one band. Energy is exchanged between the two bands by collisions between electrons of different bands, also termed as electron-hole collisions. In addition, Auger recombination and impact ionization allow for particle exchange between both bands. During impact ionization, a high-energy electron of the conduction band transfers part of its energy to an electron in the valence band, enabling the latter to overcome the band gap. This process is also known as secondary electron ionization and may induce an avalanche process [29; 5; 8]. It results in two electrons of low kinetic energy, which can, in turn, also be the initial situation of the opposite process, namely Auger recombination. The energy absorption from the laser irradiation is described by a photoionization term (including both, multiphoton and tunneling ionization) and by inverse Bremsstrahlung (intraband absorption) described by an electron-phonon-photon collision integral.
The Boltzmann collision equation for the phonon distribution \(s\) of mode \(\beta\) reads completely analogous to the one for the electrons (1), namely
\[\frac{\partial s_{\beta}(E,t)}{\partial t}=\sum_{\rm coll}\left.\frac{ \partial s_{\beta}(E,t)}{\partial t}\right|_{\rm coll}\,. \tag{2}\]
The phonon modes exchange energy with both electronic bands by electron-phonon (e-ph) collisions. They also serve as a momentum-reservoir for photon absorption during electron-phonon-photon (e-ph-pt) collisions. Phonon-phonon collisions are neglected in our approach as they are relevant on longer timescales than considered here [30; 31].
The distinct collision integrals on the right-hand side of Eqs. (1) and (2) are derived using first-order perturbation theory and Fermi's golden rule. Our approach is explained in detail in Refs. [26; 27]. Here, we repeat only a few expressions that will be most relevant for the results shown in this work.
For electron-electron collisions such as Auger recombination and impact ionization, we start with two electrons with initial wave vectors \(\mathbf{k}\) and \(\mathbf{k_{2}}\) which transition into states with wave vectors \(\mathbf{k_{1}}\) and \(\mathbf{k_{3}}\). Since both Auger recombination and impact ionization involve a transition of an electron from one band to another, the overlap integral of both bands, which accounts for the probability of band transition, must be included. The overlap integral between the Bloch factor \(u_{c,\mathbf{k}}\) of the conduction band state with momentum \(\mathbf{k}\) and \(u_{v,\mathbf{k_{i}}}\) of the valence band state with momentum \(\mathbf{k_{1}}\) is given by
\[I_{cv}(\mathbf{k},\mathbf{k_{1}})=\int_{\rm unit\ cell}d\mathbf{r}\,u_{v, \mathbf{k_{1}}}^{*}(\mathbf{r})u_{c,\mathbf{k}}(\mathbf{r})\,. \tag{3}\]
Using \(\mathbf{k}\cdot\mathbf{p}\)-theory and f-sum rule, it can be approximated by [32]
\[\left|I_{cv}(\mathbf{k},\mathbf{k_{1}})\right|^{2}\approx\frac{\hbar^{2}}{2 \Delta}\left(\frac{1}{m_{c}^{*}}+\frac{1}{m_{v}^{*}}\right)\kappa^{2}\,, \tag{4}\]
where \(\kappa=|\mathbf{k}-\mathbf{k_{1}}|=|\mathbf{k_{2}}-\mathbf{k_{3}}|\) is the exchanged momentum, \(\Delta\) is the band gap energy and \(m_{c}^{*}\) and \(m_{v}^{*}\) are the effective conduction and valence band masses, respectively. The overlap integral introduces a probability for band transition to the electron-electron interaction (\(e-e\)) and thus directly enters the matrix element for both Auger recombination (Aug) and impact ionization (imp) [32],
\[|M_{\rm Aug}|^{2}=|M_{\rm imp}|^{2}=|M_{e-e}|^{2}\cdot|I_{cv}(\kappa)|^{2}=\frac{e^{4}}{\epsilon_{0}^{2}V^{2}}\,\frac{|I_{cv}(\kappa)|^{2}}{q_{0}^{2}+\kappa^{2}}\,\delta_{\mathbf{k}+\mathbf{k_{2}},\mathbf{k_{1}}+\mathbf{k_{3}}}\,, \tag{5}\]
with the electron charge \(e\), vacuum permittivity \(\epsilon_{0}\), volume of the unit cell \(V\) and the inverse screening length \(q_{0}\).
For electron-phonon collisions, we apply two different matrix elements, depending on the considered phonon mode. Longitudinal acoustic (LA) phonons are described in the framework of the Debye model with a linear dispersion relation \(\omega_{\rm LA}(q)=c_{s}\,q\), where \(c_{s}\) is the velocity of sound and \(q\) the absolute value of the momentum wave vector. The matrix element for electron-phonon coupling with this mode is obtained by deformation potential theory and reads [25; 26; 27]
\[\big{|}M_{e-ph}^{\rm LA}(q)\big{|}^{2}=\frac{\hbar C^{2}q^{2}}{2\rho\omega_{ \rm LA}(q)(1+(q_{0}/q)^{2})}\ \,, \tag{6}\]
where \(C\) is a constant deformation potential and \(\rho\) is the mass density. Longitudinal optical (LO) phonons are approximated by the Einstein model where the phonon frequency is constant, \(\omega_{\rm LO_{1,2}}(q)=\omega_{\rm LO_{1,2}}\). For the interaction of electrons with these modes, we apply a polar matrix element, which reads [25; 26; 27]
\[\big{|}M_{e-ph}^{\rm LO}(q)\big{|}^{2}=\frac{e^{2}}{\varepsilon_{0}}\frac{ \hbar\omega_{\rm LO_{1,2}}}{2q^{2}(1+(q_{0}/q)^{2})}\left(\frac{1}{\varepsilon _{\infty}}-\frac{1}{\varepsilon_{\rm r}}\right)\ \,, \tag{7}\]
where \(\varepsilon_{r}\) is the relative permittivity and \(\varepsilon_{\infty}\) the optical permittivity.
When we analyze non-equilibrium distributions it can be useful to define the quasi-temperature and chemical potential which allow us to compare the non-equilibrium distribution to an equilibrium distribution of the same particle- and energy density. They are found by first calculating the particle density
\[n_{\rm CB,VB}=\int_{\rm CB,VB}dE\,{\rm DOS}(E)f(E) \tag{8}\]
and the energy density
\[u_{\rm CB,VB}=\int_{\rm CB,VB}dE\,{\rm DOS}(E)f(E)E \tag{9}\]
of the non-equilibrium distribution \(f(E)\) in either the conduction or valence band, where \({\rm DOS}(E)\) is the density of states. Then, a root-finding algorithm is used to find the temperature \(T\) and chemical potential \(\mu\) of a Fermi distribution \(f_{\rm Fermi}(E,T,\mu)\) which yields the same particle and energy density in the considered band.
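Equations (8)–(9) together with the root finding can be sketched in a few lines. This is a pure-Python illustration with a free-electron-like DOS \(\propto\sqrt{E}\) in arbitrary units; the Newton iteration, energy grid, and starting values are our own choices, not necessarily those of the authors' implementation.

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def fermi(E, T, mu):
    x = (E - mu) / (KB * T)
    if x > 500.0:  return 0.0   # guard against overflow in exp
    if x < -500.0: return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def dos(E):
    return math.sqrt(E)  # parabolic band, arbitrary prefactor

def moments(f, e_max=20.0, n_pts=2000):
    """Particle and energy density, Eqs. (8)-(9), via the trapezoidal rule."""
    dE = e_max / n_pts
    n = u = 0.0
    for i in range(n_pts + 1):
        E = i * dE
        w = 0.5 if i in (0, n_pts) else 1.0
        v = w * dos(E) * f(E) * dE
        n += v
        u += v * E
    return n, u

def fit_fermi(n_target, u_target, T=5000.0, mu=1.0):
    """Find (T, mu) of the Fermi distribution matching the given particle
    and energy density, via damped Newton with a numerical Jacobian."""
    for _ in range(300):
        n, u = moments(lambda E: fermi(E, T, mu))
        rn, ru = n - n_target, u - u_target
        if abs(rn) < 1e-9 * n_target and abs(ru) < 1e-9 * u_target:
            break
        hT, hm = 1e-4 * T, 1e-5
        nT, uT = moments(lambda E: fermi(E, T + hT, mu))
        nm, um = moments(lambda E: fermi(E, T, mu + hm))
        a, b = (nT - n) / hT, (nm - n) / hm   # dn/dT, dn/dmu
        c, d = (uT - u) / hT, (um - u) / hm   # du/dT, du/dmu
        det = a * d - b * c
        dT = (d * rn - b * ru) / det
        dmu = (a * ru - c * rn) / det
        # damp very large steps to keep the iteration in a sane region
        dT = max(min(dT, 0.5 * T), -0.5 * T)
        dmu = max(min(dmu, 0.5), -0.5)
        T, mu = T - dT, mu - dmu
    return T, mu

# Recover the quasi-temperature of a known Fermi distribution
n0, u0 = moments(lambda E: fermi(E, 8000.0, 0.5))
T_fit, mu_fit = fit_fermi(n0, u0)
```

Because the same grid is used to compute the target densities and to fit them, the quasi-temperature and chemical potential are recovered to high accuracy.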
## III Results
As in Refs. [26; 27], we consider for the electrons two parabolic bands, namely an initially filled valence band with an effective mass of \(-3.51\,m_{e}\) and an initially empty conduction band with an effective mass of \(m_{e}\). The extrema of both bands are separated by a band gap of 9 eV. The Fermi energy is located in the center of the band gap. Initially, all electrons are described by a single Fermi distribution of \(T_{e}=300\) K. The longitudinal acoustic phonon mode is considered with a sound velocity of \(c_{s}=5935\) m/s, whereas the optical modes are considered with the Einstein model at frequencies of \(\hbar\omega_{1}=63\) meV and \(\hbar\omega_{2}=153\) meV. We assume that all phonon modes extend over the whole spherical Brillouin zone with radius \(q_{B}=2.35\cdot 10^{10}\) m\({}^{-1}\).
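As a small consistency check on these parameters (our own calculation, not from the paper), the maximum LA phonon energy at the zone boundary follows directly from the Debye dispersion, \(\hbar\omega_{\rm LA}^{\max}=\hbar c_{s}q_{B}\), and lands between the two LO energies:

```python
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

c_s = 5935.0   # sound velocity, m/s
q_B = 2.35e10  # Brillouin zone radius, 1/m

# Debye dispersion: maximum LA phonon energy at the zone boundary
E_LA_max = HBAR_EV_S * c_s * q_B  # in eV
print(f"{E_LA_max * 1e3:.1f} meV")  # ~91.8 meV, between 63 and 153 meV
```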
To solve the system of coupled integro-differential equations given by Eqs. (1) and (2), the electronic energies are discretized by 3000 points and the phonon wavevectors by 75 points, respectively. To evaluate the electron-electron collision integrals, we apply the MISER Monte Carlo integration algorithm [33]. All other collision integrals are solved using the trapezoidal rule. The propagation in time is done by a forward Euler method with an adaptive time step.
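The time propagation can be sketched as a forward Euler scheme whose step size is limited by the relative change of the distribution per step. The paper only states that an adaptive time step is used; the specific step-control criterion below is a common choice and our assumption.

```python
import math

def euler_adaptive(y, rhs, t, t_end, max_rel=1e-3, floor=1e-12):
    """Forward Euler with an adaptive step: dt is chosen so that no
    component of y changes by more than max_rel (relatively) per step."""
    while t < t_end:
        dy = rhs(y, t)
        # largest relative rate of change over all grid points
        rate = max((abs(d) / max(abs(v), floor) for v, d in zip(y, dy)),
                   default=0.0)
        dt = t_end - t if rate == 0.0 else min(max_rel / rate, t_end - t)
        y = [v + dt * d for v, d in zip(y, dy)]
        t += dt
    return y

# Validate on exponential decay df/dt = -f over one unit of time
y_end = euler_adaptive([1.0, 2.0], lambda y, t: [-v for v in y], 0.0, 1.0)
err = abs(y_end[0] - math.exp(-1.0))
print(err)  # discretization error on the order of 1e-4
```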
In the following, we will present the electron distribution function \(f\) in the so-called \(\Phi\) representation [25; 34], which is related to the distribution function through a logarithm as
\[\Phi(E)=-\log\!\left(\frac{1}{f(E)}-1\right)\,. \tag{10}\]
In comparison to a simple logarithm, Eq. (10) has the advantage that a Fermi distribution is represented by a linear function, also including the electrons in the valence band. It has a slope inversely proportional to the temperature of the system. In the case of a non-equilibrium distribution, the \(\Phi\)-representation deviates from a straight line, allowing an easy and direct visual identification of deviations from equilibrium in both the conduction and the valence bands.
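A quick numerical check of Eq. (10): applied to a Fermi distribution, \(\Phi\) returns exactly the straight line \(-(E-\mu)/(k_{B}T)\) (illustrative sketch; units and parameter values are arbitrary).

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def fermi(E, T, mu):
    return 1.0 / (math.exp((E - mu) / (KB * T)) + 1.0)

def phi(f):
    """Phi-representation, Eq. (10)."""
    return -math.log(1.0 / f - 1.0)

# For a Fermi distribution, Phi(E) = -(E - mu) / (kB * T): a straight
# line whose slope is inversely proportional to the temperature.
T, mu = 10000.0, 0.0
for E in (-1.0, 0.0, 1.0, 2.0):
    assert abs(phi(fermi(E, T, mu)) + (E - mu) / (KB * T)) < 1e-9
```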
### Electron relaxation after laser irradiation
In a first step, we analyze the behavior of the conduction and valence band electron distribution functions in our two-band model dielectric under laser irradiation while neglecting the phonon system. To that end, we set the electron-phonon collision term in equation (1) to zero. Note that the e-ph-pt collision term is still considered, allowing for a realistic laser energy absorption.
As laser parameters, we choose a 150 fs FWHM Gaussian laser pulse with a wavelength of 400 nm and a peak intensity of \(1.33\cdot 10^{18}\) W/m\({}^{2}\).
Figure 1 shows the electron distribution in \(\Phi\)-representation during and shortly after laser irradiation. The
gray shaded area indicates the band gap. The energies above the band gap belong to the conduction band, while the energies below the band gap belong to the valence band. The distribution at \(-50\) fs, i.e. before the maximum laser intensity, shows peaks in the conduction band, arising from a combination of multi-photon ionization, including above-threshold ionization, and subsequent intraband single-photon absorption [26]. In the valence band, corresponding dips appear, indicating a lack of electrons, or the presence of holes, respectively. At the temporal maximum of the laser pulse the distribution has risen in the conduction band and fallen in the valence band, as more electrons have been promoted from one band to the other. The peak-and-dip structure still exists, but in addition a bump is visible at higher energies in the conduction band. This bump is caused by Auger recombination, which in this case exceeds impact ionization [35].
The electron distribution 50 fs after the maximum of the Gaussian laser pulse is shown in both parts of Fig. 1. The peaks and dips caused by multi-photon ionization as well as the bump at higher energies in the conduction band caused by Auger recombination are still present. In contrast to that, the electron distribution is nearly equilibrated after 100 fs, as represented by the nearly straight \(\Phi\)-functions, see Eq. (10).
To further study the equilibration of the electron systems, we introduce the configuration entropy [36]
\[S=-k_{B}\int dE\,\mathrm{DOS}(E)\left[f(E)\log(f(E))\right.\] \[+\left.(1-f(E))\log(1-f(E))\right], \tag{11}\]
where \(k_{B}\) is Boltzmann's constant. By comparing the configuration entropy of the conduction and valence band to the entropy of the corresponding Fermi distribution, the progress of intraband thermalization can be investigated. Figure 2 depicts the relative deviation of the configuration entropy compared to the equilibrium entropy. Both the deviations of the conduction band and valence band are falling shortly before the pulse maximum and thereafter, and an intraband equilibrium is reached about 100 fs after the pulse maximum for both bands. The initial deviation is higher for the conduction band due to its lower effective mass compared to the valence band.
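Eq. (11) is straightforward to evaluate numerically; the sketch below assumes a free-electron-like \(\sqrt{E}\) density of states (not the paper's actual DOS) and checks the expected growth of the configuration entropy with temperature:

```python
import numpy as np

kB = 1.0  # entropies in units of k_B

def fermi(E, mu, kT):
    return 1.0 / (np.exp(np.minimum((E - mu) / kT, 700.0)) + 1.0)

def configuration_entropy(E, dos, f):
    """Eq. (11): S = -kB * int dE DOS(E) [f ln f + (1-f) ln(1-f)],
    evaluated with a simple Riemann sum; f is clipped away from 0 and 1."""
    f = np.clip(f, 1e-300, 1.0 - 1e-16)
    integrand = dos * (f * np.log(f) + (1.0 - f) * np.log1p(-f))
    return -kB * np.sum(integrand) * (E[1] - E[0])

E = np.linspace(0.0, 10.0, 2001)   # band energies, eV
dos = np.sqrt(E)                   # free-electron-like DOS (assumed shape)

S_cold = configuration_entropy(E, dos, fermi(E, 1.0, 0.1))
S_hot = configuration_entropy(E, dos, fermi(E, 1.0, 1.0))
print(S_cold, S_hot)  # the hotter distribution carries more configuration entropy
```

Comparing such a value for a non-equilibrium distribution against the entropy of the Fermi distribution with the same particle and energy content yields the relative deviation plotted in Fig. 2.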
Next, we want to investigate the interband relaxation by comparing the temporal evolution of the quasi-temperatures and chemical potentials of both bands. The temporal evolution of the quasi-temperatures is shown in Fig. 3 (a). Both the conduction and valence band quasi-temperatures rise during the laser irradiation and reach the same level approximately 100 fs after the laser pulse maximum. Analogously, the temporal evolution of the chemical potentials is depicted in Fig. 3 (b). While both the conduction and valence band chemical potentials rise initially, their difference increases approximately up to the pulse maximum. For the chosen laser parameters, the laser irradiation excited more electrons to the conduction band than the absorbed energy would support in equilibrium. This results in a difference between the (quasi-) chemical potentials of the conduction and valence bands. The non-equilibrium between the chemical potentials is counteracted by Auger recombination, which decreases the conduction band electron density and increases the valence band electron density.
Figure 1: Electron distribution in \(\Phi\)-representation (10), (a) during excitation of a model dielectric with a 150 fs FWHM laser pulse and (b) shortly thereafter. The gray shadowed area indicates the band gap. Energies above the band gap belong to the conduction band, energies below the band gap to the valence band. For this simulation, the coupling to the phonon system was neglected. In this representation, straight lines correspond to a Fermi distribution.
Figure 2: Relative deviation of the configuration entropy of the non-equilibrium distribution from the entropy of a Fermi distribution with the same particle and energy content within the conduction and valence band, respectively.
During the falling tail of the laser pulse, linear intraband absorption by e-ph-pt collisions becomes more important than multi-photon absorption at lower intensities. The e-ph-pt collisions increase the energy of the conduction and valence bands, so that again more free electrons and holes are supported in equilibrium. This leads to a faster equilibration of the chemical potentials by Auger recombination alone. About 100 fs after the pulse maximum, the chemical potentials reach the same level and then slightly overshoot due to the heating by e-ph-pt collisions, so that the chemical potential of the valence band becomes larger than the chemical potential of the conduction band. Then impact ionization exceeds Auger recombination, so that electrons are transferred from the valence band to the conduction band. After about 300 fs the chemical potentials finally equilibrate.
In this section we investigated the intra- and interband thermalization of the electron bands during and shortly after laser irradiation, neglecting electron-phonon coupling. In those circumstances, we see fast intraband thermalization and equilibration of the quasi-temperatures between the bands on a timescale of about 100 fs. The equilibration of the chemical potentials takes slightly longer and is influenced by an interplay between nonlinear multiphoton ionization, linear e-ph-pt collisions and interband relaxation processes. In the next section, we separate these excitation and relaxation processes.
### Without Laser
In this section, we replace the initial laser excitation by excited Fermi distributions in the conduction and valence bands as initial condition. This allows us to determine the intrinsic relaxation times for the system under study. First we discuss the qualitative behavior of the dynamics of temperatures and chemical potentials during interband relaxation. Later in this section we will investigate the dependence of the relaxation times on different parameters.
Figure 4 shows the temporal evolution of the difference between the chemical potentials and temperatures, respectively, of the conduction and valence bands. Initially, 1% of the valence band electrons are excited into the conduction band, the conduction band temperature is set to \(T_{c}=20000\) K and the valence band temperature to \(T_{v}=10000\) K. For those parameters, both the temperature difference and the chemical potential difference exhibit a two-timescale behavior: first a faster relaxation on a timescale of less than 10 fs, followed by a slower relaxation on a timescale of about 150 fs. The first, faster relaxation can be attributed to energy-exchanging interband electron-hole collisions, which drive the quasi-temperatures of the two bands into equilibrium. The following slower relaxation is governed by particle-exchanging processes, that is, Auger recombination and impact ionization. While the latter processes contribute to the equilibration of the chemical potentials of valence and conduction band, they also have a delaying effect on the temperature relaxation due to the conversion between kinetic and potential energy.
Figure 3: (a) Electron (quasi-) temperature and (b) chemical potential of valence and conduction band electrons during and after laser excitation, neglecting the coupling to the phonon system. The shading indicates the temporal shape of the laser pulse.
Figure 4: Temporal evolution of the electron temperature difference and chemical potential difference during interband relaxation. Initial conditions: \(T_{c}=20000\) K and \(T_{v}=10000\) K; fraction of 1% of valence band electrons excited to the conduction band.
The fast and the slow relaxation underlying the behavior in Fig. 4 can each be described by a relaxation time, which we denote as \(\tau_{T}\) and \(\tau_{\mu}\), respectively. Here, \(\tau_{T}\) is the fast relaxation time, which we attributed to energy-exchanging electron-hole collisions driven by a temperature difference, and \(\tau_{\mu}\) is the slower relaxation time, which we attributed to particle exchange by Auger recombination and impact ionization driven by a difference in chemical potentials. We fit a double-exponential function
\[\Delta X(t)=Ae^{-t/\tau_{T}}+Be^{-t/\tau_{\mu}}, \tag{12}\]
to the temporal evolution of the difference of the temperatures or the chemical potentials, respectively, for the first 200 fs, in order to obtain the fit parameters \(A\), \(B\), \(\tau_{T}\) and \(\tau_{\mu}\). The resulting characteristic times \(\tau_{T}\) and \(\tau_{\mu}\) are similar for a fit to either \(X=\mu\) or \(X=T\). Therefore, we will show in the following the \(T\)-relaxation time \(\tau_{T}\) obtained by a fit to the temperature difference \(\Delta T\), and the \(\mu\)-relaxation time \(\tau_{\mu}\) obtained by a fit to the chemical potential difference \(\Delta\mu\). For the following results, we chose parameter ranges for the starting conditions for which the temporal evolution clearly shows a double-exponential behavior, as determined by an adjusted R-squared value above 0.99 for the fit (12). For example, at higher fractions of initially excited electrons at the same initial temperatures, the equilibration of the chemical potentials by Auger recombination can have a larger effect on the conduction band temperature than the equilibration of the temperatures due to electron-hole collisions. In this case, the temperature difference increases initially and the two timescales cannot be separated.
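The fitting procedure can be illustrated on synthetic data. The sketch below generates a noiseless double exponential with assumed parameters close to those reported (\(\tau_{T}\approx 8\) fs, \(\tau_{\mu}\approx 150\) fs), recovers the two timescales from log-slopes instead of a nonlinear least-squares fit of Eq. (12), and evaluates the adjusted R-squared criterion:

```python
import numpy as np

# Synthetic two-timescale decay mimicking Fig. 4 (assumed parameters):
# fast temperature relaxation tau_T ~ 8 fs, slow mu-relaxation tau_mu ~ 150 fs.
A, tau_T, B, tau_mu = 1000.0, 8.0, 500.0, 150.0
t = np.linspace(0.0, 200.0, 2001)                     # fs
dX = A * np.exp(-t / tau_T) + B * np.exp(-t / tau_mu)

# Step 1: the late-time tail (t > 100 fs) is dominated by the slow term,
# so ln(dX) is linear there with slope -1/tau_mu.
late = t > 100.0
s_slow, c_slow = np.polyfit(t[late], np.log(dX[late]), 1)
tau_mu_fit, B_fit = -1.0 / s_slow, np.exp(c_slow)

# Step 2: subtract the slow component and fit the early-time remainder.
resid = dX - B_fit * np.exp(-t / tau_mu_fit)
early = t < 5.0
s_fast, c_fast = np.polyfit(t[early], np.log(resid[early]), 1)
tau_T_fit, A_fit = -1.0 / s_fast, np.exp(c_fast)

# Adjusted R^2 of the reconstructed double exponential (p = 4 parameters),
# the criterion (>= 0.99) used above for clean timescale separation.
model = A_fit * np.exp(-t / tau_T_fit) + B_fit * np.exp(-t / tau_mu_fit)
ss_res = np.sum((dX - model) ** 2)
ss_tot = np.sum((dX - dX.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (len(t) - 1) / (len(t) - 4 - 1)
print(tau_T_fit, tau_mu_fit, r2_adj)
```

On data where the two timescales overlap (as in the strong-excitation case described above), the late-time window is no longer single-exponential and this separation breaks down, which is exactly what the adjusted R-squared threshold screens for.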
Figure 5 shows the relaxation times for the chemical potentials and temperatures in dependence on different parameters. For all panels, the initial valence band temperature is \(T_{v}=10000\) K and the initial conduction band temperature is \(T_{c}=T_{v}+\Delta T\). Unless a parameter is being varied, \(\Delta T=10000\) K, the band gap equals 9 eV and 1% of the valence band electrons are initially excited to the conduction band. The crosses represent the calculated values and the solid curves are fits shown to guide the eye.
In Fig. 5 (a) the percentage of initially excited electrons is varied. The \(\mu\)-relaxation time \(\tau_{\mu}\) shows a significant dependence on the percentage of excited electrons. While an interband relaxation time of \(\sim 300\) fs is observed for 0.5% initially excited electrons, the relaxation time is decreased to \(\sim 70\) fs for 2%. The \(T\)-relaxation time \(\tau_{T}\) only slightly varies around 8 fs.
The relaxation times in dependence on the band gap are depicted in Fig. 5 (b). A significant increase in the \(\mu\)-relaxation time is observed with increasing band gap, while the initial percentage of excited electrons and the temperatures are kept constant. It should be noted that the same laser pulse would excite more electrons to the conduction band for a material with a smaller band gap than for a material with a larger band gap. The \(T\)-relaxation time shows no significant dependence on the band gap energy.
Figure 5: Chemical potential relaxation time in dependence on (a) initial percentage of excited electrons, (b) band gap and (c) initial temperature difference between conduction and valence band. If the corresponding parameter is not varied, the initial temperatures are \(T_{c}=20000\) K and \(T_{v}=10000\) K, initially 1% of all electrons are excited into the conduction band and the band gap is set to 9 eV.
Figure 5 (c) shows the dependence of the relaxation times on the difference of the initial temperatures. The initial valence band temperature is thereby kept at 10000 K and the conduction band temperature is varied. Therefore, the energy content increases with \(\Delta T\). A decrease of the \(\mu\)-relaxation time with increasing temperature difference or increasing energy content can be observed. The \(T\)-relaxation time only varies slightly.
Our results show a significant influence of the excitation strength, as simulated by starting with varying percentages of excited electrons and temperature differences on the \(\mu\)-relaxation time. Generally, we observe a larger \(\mu\)-relaxation time for stronger excitations. Also, the \(\mu\)-relaxation time is larger for larger band gap energies. The \(T\)-relaxation time only varies slightly for all considered parameters. In all cases, the temperature relaxation is considerably faster than the equilibration of the chemical potentials, so that each band first establishes its own Fermi distribution at equal temperatures in the conduction and valence bands, before density-changing scattering processes lead to a common Fermi distribution across both bands.
### Influence of electron-phonon coupling
In this section we study the electron relaxation after laser irradiation when, in contrast to Section III.1, electron-phonon coupling is taken into account.
Figure 6 shows the electron distribution at different times (a) during laser excitation and (b) thereafter for the full simulation including electron-phonon coupling. Compared to the electronic distributions during laser excitation in Fig. 1 (a), where electron-phonon coupling was not taken into account, the same features can be observed. However, the bump at higher energies caused by Auger recombination is more noticeable for the curves shown in Fig. 6 (a).
The electron distributions for later times after laser excitation are presented in Fig. 6 (b). In contrast to the fast equilibration shown in Fig. 1 (b) without electron-phonon coupling, the electron distribution of the full simulation still deviates noticeably from a Fermi distribution, i.e., from a straight line in this representation, even after 1000 fs. Specifically, the Auger bump at high energies in the conduction band persists for a long time. Note that such an Auger bump has also been observed in XUV-excited aluminum [35].
Next, we want to investigate interband thermalization by considering the (quasi-) temperatures and chemical potentials of the different systems. Figure 7 (a) shows the temporal evolution of the temperatures of the conduction and valence band electrons as well as the three considered phonon modes. The electron temperatures first rise during laser excitation and reach a maximum about 100 fs after the pulse maximum. Thereafter, the electron temperatures decrease and phonon temperatures increase due to electron-phonon coupling. Compared to figure 3 (a) where electron-phonon coupling was neglected, the temperature difference between conduction and valence band persists for a much longer time.
The temporal evolution of the chemical potentials of conduction and valence band is shown in Fig. 7 (b). In contrast to the case of neglecting electron-phonon coupling as depicted in Fig. 3 (b), the difference between the chemical potentials increases during and after laser excitation. This can be explained by the thermalization of the electrons with the phonon system, which causes the energy content of the electrons to decrease after laser irradiation. As the energy content decreases, fewer excited electrons are supported by the corresponding equilibrium distribution, which counteracts the equilibration of the chemical potentials by Auger recombination. Additionally, the Auger recombination rate decreases with decreasing energy in the electron system, such that the equilibration of chemical potentials is further slowed down.
Our results show that both intra- and interband thermalization are considerably delayed by electron-phonon
Figure 6: Electron distribution in \(\Phi\) representation (10), (a) during excitation of a model dielectric with a 150 fs FWHM laser pulse and (b) shortly thereafter. Note that straight lines correspond to a Fermi distribution in this representation. In contrast to Fig. 1, the coupling to the phonon system was included here.
coupling. The chemical potentials of the conduction and valence bands are even driven further out of interband equilibrium shortly after laser excitation, leading to a situation far from equilibrium on the timescale of electron-phonon equilibration. Since the electron-phonon coupling itself depends on the density of excited electrons as well as directly on the electronic energy distribution [27], the complex full equilibration process needs further studies.
## IV Summary and Conclusions
In this work, we studied the relaxation of the conduction band, valence band and phonon system in a dielectric after excitation with an intense, ultrafast laser pulse. We have calculated the time evolution of the laser-excited electron distribution functions in the valence and conduction bands in a large bandgap material. We observe a peak-like structure of the energy distribution in the conduction and valence bands, and a fast relaxation to an equilibrium situation, which is a joint Fermi distribution across both bands. Both bands equilibrate to a thermal distribution already during the time of laser irradiation. Both temperatures equilibrate to a joint temperature within about ten femtoseconds, while an occupational nonequilibrium, characterized by two different chemical potentials, may last up to a few hundreds of femtoseconds.
We have quantified two distinct characteristic relaxation times, \(\tau_{T}\) and \(\tau_{\mu}\), which we then studied in dependence on material- and laser-parameters for well-defined initial situations. The relaxation time of the chemical potential turned out to be more sensitive to the studied parameters than the relaxation time of the temperature difference.
We then included electron-phonon coupling in our calculations, which strongly perturbs the separation of timescales. Instead of a fast equilibration towards a Fermi distribution at elevated temperature, the simultaneous cooling of the electron gas leads to a continuous overpopulation of the conduction band. This overpopulation boosts Auger recombination, which reduces the total number of excited electrons but feeds electrons at high energy states into the conduction band. The resulting energy distribution in both considered bands differs strongly from a Fermi distribution.
We therefore conclude that an athermal electron distribution can be present in the material far into the picosecond timescales.
###### Acknowledgements.
The authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 173 - 268565370 Spin + X: spin in its collective environment (Project A08), as well as computing resources on the Elwetritsch high performance computing cluster granted by the AHRP under project TUK-STREMON. We thank Sebastian T. Weber and Dirk O. Gericke for fruitful discussions.
# Semi-Supervised Semantic Segmentation via Marginal Contextual Information

Moshe Kimhi, Shai Kimhi, Evgenii Zheltonozhskii, Or Litany, Chaim Baskin. arXiv:2308.13900v2, 2023-08-26. [http://arxiv.org/abs/2308.13900v2](http://arxiv.org/abs/2308.13900v2)
###### Abstract
We present a novel confidence refinement scheme that enhances pseudo-labels in semi-supervised semantic segmentation. Unlike current leading methods, which filter pixels with low-confidence predictions in isolation, our approach leverages the spatial correlation of labels in segmentation maps by grouping neighboring pixels and considering their pseudo-labels collectively. With this contextual information, our method, named S4MC, increases the amount of unlabeled data used during training while maintaining the quality of the pseudo-labels, all with negligible computational overhead. Through extensive experiments on standard benchmarks, we demonstrate that S4MC outperforms existing state-of-the-art semi-supervised learning approaches, offering a promising solution for reducing the cost of acquiring dense annotations. For example, S4MC achieves a 1.29 mIoU improvement over the prior state-of-the-art method on PASCAL VOC 12 with 366 annotated images. The code to reproduce our experiments is available at [https://s4mcontext.github.io/](https://s4mcontext.github.io/).
## 1 Introduction
Supervised learning has been the driving force behind advancements in modern computer vision, including classification (Krizhevsky et al., 2012; Dai et al., 2021), object detection (Girshick, 2015; Zong et al., 2022), and segmentation (Zagornyko et al., 2016; Chen et al., 2018; Li et al., 2022; Kirillov et al., 2023). However, it requires extensive amounts of labeled data, which can be costly and time-consuming to obtain. In many practical scenarios, there is no shortage of available data, but only a fraction can be labeled due to resource constraints. This challenge has led to the development of semi-supervised learning (SSL; Rasmus et al., 2015; Berthelot et al., 2019; Sohn et al., 2020; Yang et al., 2022), a methodology that leverages both labeled and unlabeled data for model training.
This paper focuses on applying SSL to semantic segmentation, which has applications in various areas such as perception for autonomous vehicles (Bartolomei et al., 2020), mapping (Van Etten et al., 2018) and agriculture (Milioto et al., 2018). SSL is particularly appealing for segmentation tasks, as manual labeling can be prohibitively expensive.
A widely adopted approach for SSL is pseudo-labeling (Lee, 2013; Arazo et al., 2020). This technique dynamically assigns supervision targets to unlabeled data during training based on the model's predictions. To generate a meaningful training signal, it is essential to adapt the predictions before integrating them into the learning process. Several techniques have been proposed, such as using a teacher network to generate supervision to a student network (Hinton et al., 2015). The teacher network can be made more powerful during training by applying a moving average to the student network's weights (Tarvainen and Valpola, 2017). Additionally, the teacher may undergo weaker augmentations than the student (Berthelot et al., 2019), simplifying the teacher's task.
However, pseudo-labeling is intrinsically susceptible to confirmation bias, which tends to reinforce the model predictions instead of improving the student model. Mitigating confirmation bias becomes particularly important when dealing with erroneous predictions made by the teacher network.
Confidence-based filtering is a popular technique to address this issue (Sohn et al., 2020). This approach assigns pseudo-labels only when the model's confidence surpasses a specified threshold, thereby reducing the number of incorrect pseudo-labels. Though simple, this strategy was proven effective and inspired multiple improvements in semi-supervised classification (Zhang et al., 2021; Rizve et al., 2021), segmentation (Wang et al., 2022), and object detection in images (Sohn et al., 2020; Liu et al., 2021) and 3D scenes (Zhao et al., 2020; Wang et al., 2021). However, the strict filtering of the supervision signal leads to extended training periods and, potentially, to overfitting when the labeled instances used are insufficient to represent the entire sample distribution. Lowering the threshold would allow for higher training volumes at the cost of reduced quality, further hindering the performance (Sohn et al., 2020).
In response to these challenges, we introduce a novel confidence refinement scheme for the teacher network predictions in segmentation tasks, designed to increase the availability of pseudo-labels without sacrificing their accuracy. Drawing on the observation that labels in segmentation maps exhibit strong spatial correlation, we propose to group neighboring pixels and collectively consider their pseudo-labels. When considering pixels in spatial groups, we assess the event-union probability, which is the probability that at least one pixel belongs to a given class. We assign a pseudo-label if this probability is sufficiently larger than the event-union probability of any other class. By taking context into account, our approach, _Semi-Supervised Semantic Segmentation via Marginal Contextual Information_ (S4MC), enables a relaxed filtering criterion which increases the number of unlabeled pixels utilized for learning while maintaining high-quality labeling, as demonstrated in Fig. 1.
We evaluated S4MC on multiple semi-supervised segmentation benchmarks. S4MC achieves significant improvements in performance over previous state-of-the-art methods. In particular, we observed an increase of **+1.29 mIoU** on PASCAL VOC 12 (Everingham et al., 2010) using 366 annotated images and an increase of **+1.01 mIoU** on Cityscapes (Cordts et al., 2016) using only 186 annotated images. These findings highlight the effectiveness of S4MC in producing high-quality segmentation results with minimal labeled data.
## 2 Related Work
### Semi-Supervised Learning
Pseudo-labeling (Lee, 2013) is a popular and effective technique in SSL, where labels are assigned to unlabeled data based on model predictions. To make the most of these labels during training, it is essential to refine them (Laine and Aila, 2016; Berthelot et al., 2019, 2020; Xie et al., 2020). One way to achieve this is through consistency regularization (Laine and Aila, 2016; Tarvainen and Valpola, 2017; Miyato et al., 2018), which ensures consistent predictions between different views of
Figure 1: **Confidence refinement. Left: pseudo-labels generated by the teacher network without refinement. Middle: pseudo-labels obtained from the same model after refinement with marginal contextual information. Right Top: predicted probabilities of the top two classes of the pixel highlighted by the red square before, and Bottom: after refinement. S4MC allows additional correct pseudo labels to propagate.**
the unlabeled data. Alternatively, a teacher model can be used to obtain pseudo-labels, which are then used to train a student model. To ensure that the pseudo-labels are helpful, the temperature of the prediction (soft pseudo-labels; Berthelot et al., 2019) can be increased, or the label can be assigned to samples with high confidence (hard pseudo-labels; Xie et al., 2020; Sohn et al., 2020; Zhang et al., 2021). In this work, we follow the hard pseudo-label assignment approach and improve upon previous methods by proposing a confidence refinement scheme.
### Semi-Supervised Semantic Segmentation
In semantic segmentation, most SSL methods rely on consistency regularization and developing augmentation strategies compatible with segmentation tasks (French et al., 2020; Ke et al., 2020; Chen et al., 2021; Zhong et al., 2021; Xu et al., 2022). Given the uneven distribution of labels typically encountered in segmentation maps, techniques such as adaptive sampling, augmentation, and loss re-weighting are commonly employed (Hu et al., 2021). Feature perturbations (FP) on unlabeled data (Ouali et al., 2020; Zou et al., 2021; Liu et al., 2022; Yang et al., 2023b) are also used to enhance consistency and the virtual adversarial training (Liu et al., 2022). Curriculum learning strategies that incrementally increase the proportion of data used over time are beneficial in exploiting more unlabeled data (Yang et al., 2022; Wang et al., 2022). A recent approach introduced by Wang et al. (2022) used _unreliable_ predictions by employing contrastive loss with the least confident classes predicted by the model. Unimatch (Yang et al., 2023b) combined SSL (Sohn et al., 2020) with several self-supervision signals, i.e., two strong augmentations and one more with FP, obtained good results without complex losses or class-level heuristics. However, most existing works primarily focus on individual pixel label predictions. In contrast, we delve into the contextual information offered by spatial predictions on unlabeled data.
### Contextual Information
Contextual information encompasses environmental cues that assist in interpreting and extracting meaningful insights from visual perception (Toussaint, 1978; Elliman and Lancaster, 1990). Incorporating spatial context explicitly has been proven beneficial in segmentation tasks, for example, by encouraging smoothness like in the Conditional Random Fields method (Chen et al., 2018) and attention mechanisms (Vaswani et al., 2017; Dosovitskiy et al., 2021; Wang et al., 2020). Combating dependence on context has shown to be helpful by Nekrasov et al. (2021). This work uses the context from neighboring pixel predictions to enhance pseudo-label propagation.
## 3 Method
This section describes the proposed method using the teacher-student paradigm. Adjustments from teacher-student consistency to weak-strong image-level consistency are described in Appendix C.
### Overview
In semi-supervised semantic segmentation, we are given a labeled training set of images \(\mathcal{D}_{\ell}=\left\{(\mathbf{x}_{i}^{\ell},\mathbf{y}_{i})\right\}_{i=1}^ {N_{\ell}}\), and an unlabeled set \(\mathcal{D}_{u}=\left\{\mathbf{x}_{i}^{u}\right\}_{i=1}^{N_{u}}\) sampled from the same distribution, i.e., \(\left\{\mathbf{x}_{i}^{\ell},\mathbf{x}_{i}^{u}\right\}\sim D_{x}\). Here, \(\mathbf{y}\) are 2D tensors of shape \(H\times W\), assigning a semantic label to each pixel of \(\mathbf{x}\). We aim to train a neural network \(f_{\theta}\) to predict the semantic segmentation of unseen images sampled from \(D_{x}\).
We follow a teacher-student approach (Tarvainen and Valpola, 2017) and train two networks \(f_{\theta_{s}}\) and \(f_{\theta_{t}}\) that share the same architecture but update their parameters separately. The student network \(f_{\theta_{s}}\) is trained using supervision from the labeled samples and pseudo-labels created from the teacher's predictions for unlabeled ones. The teacher model \(f_{\theta_{t}}\) is updated as an exponential moving average (EMA) of the student weights. \(f_{\theta_{s}}(\mathbf{x}_{i})\) and \(f_{\theta_{t}}(\mathbf{x}_{i})\) denote the predictions of the student and teacher models for the sample \(\mathbf{x}_{i}\), respectively. At each training step, a batch of \(\mathcal{B}_{\ell}\) and \(\mathcal{B}_{u}\) images is sampled
from \(\mathcal{D}_{\ell}\) and \(\mathcal{D}_{u}\), respectively. The optimization objective can be written as the following loss:
\[\mathcal{L}=\mathcal{L}_{s}+\lambda\mathcal{L}_{u}, \tag{1}\] \[\mathcal{L}_{s}=\frac{1}{M_{l}}\sum_{\mathbf{x}_{i}^{\ell}\in\mathcal{B}_{\ell}}\ell_{CE}(f_{\theta_{s}}(\mathbf{x}_{i}^{\ell}),\mathbf{y}_{i}), \tag{2}\] \[\mathcal{L}_{u}=\frac{1}{M_{u}}\sum_{\mathbf{x}_{i}^{u}\in\mathcal{B}_{u}}\ell_{CE}(f_{\theta_{s}}(\mathbf{x}_{i}^{u}),\hat{\mathbf{y}}_{i}), \tag{3}\]
where \(\mathcal{L}_{s}\) and \(\mathcal{L}_{u}\) are the losses over the labeled and unlabeled data correspondingly, \(\lambda\) is a hyperparameter controlling their relative weight, and \(\hat{\mathbf{y}}_{i}\) is the pseudo-label for the \(i\)-th unlabeled image. Not every pixel of \(\mathbf{x}_{i}\) has a corresponding label or pseudo-label, and \(M_{l}\) and \(M_{u}\) denote the number of pixels with label and assigned pseudo-label in the image batch, respectively.
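The objective of Eqs. (1)-(3) reduces to two masked cross-entropy terms; the NumPy sketch below is illustrative (in practice one would use a framework loss with an ignore index, and all shapes and names here are our own):

```python
import numpy as np

IGNORE = 255  # pixels without a (pseudo-)label are excluded from the loss

def masked_ce(probs, labels):
    """Mean pixel-wise cross-entropy over labeled pixels only.
    probs: (N, C, H, W) softmax outputs; labels: (N, H, W) with IGNORE holes."""
    mask = labels != IGNORE
    if not mask.any():
        return 0.0
    n, h, w = np.nonzero(mask)
    picked = probs[n, labels[n, h, w], h, w]
    return float(-np.log(np.clip(picked, 1e-12, 1.0)).mean())

rng = np.random.default_rng(0)
C, H, W = 3, 4, 4
logits_l = rng.normal(size=(2, C, H, W))
logits_u = rng.normal(size=(2, C, H, W))
softmax = lambda z: np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

y = rng.integers(0, C, size=(2, H, W))            # dense ground truth
y_hat = rng.integers(0, C, size=(2, H, W))
y_hat[rng.random((2, H, W)) < 0.5] = IGNORE       # pseudo-labels with holes

lam = 1.0                                         # relative weight lambda
loss = masked_ce(softmax(logits_l), y) + lam * masked_ce(softmax(logits_u), y_hat)
print(loss)
```

Normalizing by \(M_{l}\) and \(M_{u}\) (the `.mean()` over unmasked pixels) keeps the two terms comparable even when few pixels receive pseudo-labels.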
#### 3.1.1 Pseudo-label Propagation
For a given image \(\mathbf{x}_{i}\), we denote by \(\mathbf{x}_{j,k}^{i}\) the pixel in the \(j\)-th row and \(k\)-th column. We adopt a thresholding-based criterion inspired by FixMatch (Sohn et al., 2020). By establishing a score, denoted as \(\kappa\), which is based on the class distribution predicted by the teacher network, we assign a pseudo-label to a pixel if its score exceeds a threshold \(\gamma_{t}\):
\[\hat{\mathbf{y}}_{j,k}^{i}=\begin{cases}\operatorname*{arg\,max}_{c}\{p_{c}(x_{j,k}^{i})\}&\text{if }\kappa(x_{j,k}^{i};\theta_{t})>\gamma_{t},\\ \text{ignore}&\text{otherwise},\end{cases} \tag{4}\]
where \(p_{c}(x_{j,k}^{i})\) is the pixel probability of class \(c\). A commonly used score is given by \(\kappa(x_{j,k}^{i};\theta_{t})=\max_{c}\{p_{c}(x_{j,k}^{i})\}\). However, we found that using a pixel-wise margin, similar to scores proposed by Scheffer et al. (2001) and Shin et al. (2021), produces more stable results. This approach calculates the margin as the difference between the highest and the second-highest values of the probability vector:
\[\kappa_{\text{margin}}(x_{j,k}^{i})=\max_{c}\{p_{c}(x_{j,k}^{i})\}-\operatorname{max2}_{c}\{p_{c}(x_{j,k}^{i})\}, \tag{5}\]
where \(\operatorname{max2}\) denotes the vector's second-highest value.
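In code, the margin score and the thresholding rule of Eq. (4) amount to a top-1 minus top-2 comparison per pixel. A hedged NumPy sketch (the ignore value is an assumption):

```python
import numpy as np

def margin_score(probs):
    """kappa_margin: top-1 minus top-2 class probability per pixel.
    probs: (H, W, C) softmax outputs -> (H, W) margins."""
    top2 = np.sort(probs, axis=-1)[..., -2:]
    return top2[..., 1] - top2[..., 0]

def assign_pseudo_labels(probs, gamma, ignore=255):
    """Eq. (4): the argmax class where the margin exceeds gamma, else ignore."""
    labels = probs.argmax(axis=-1)
    labels[margin_score(probs) <= gamma] = ignore
    return labels
```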
Figure 2: **Left:** S4MC employs a teacher-student paradigm for semi-supervised segmentation. Labeled images are used to supervise the student network directly; both teacher and student networks process unlabeled images. Predictions from the teacher network are refined and used to evaluate the margin value, which is then thresholded to produce pseudo-labels that guide the student network. The threshold, denoted as \(\gamma_{t}\), is dynamically adjusted based on the teacher network’s predictions. **Right:** Our confidence refinement module exploits neighboring pixels to adjust per-class predictions, as detailed in Section 3.2.1. The class distribution of the pixel marked by the yellow circle on the left is changed. Before refinement, the margin surpasses the threshold and erroneously assigns the blue class (dog) as a pseudo-label. However, after refinement, the margin significantly reduces, thereby preventing the propagation of this error.
#### 3.1.2 Dynamic Partition Adjustment (DPA)
Following Wang et al. (2022), we use a decaying threshold \(\gamma_{t}\). DPA replaces the fixed threshold with a quantile-based threshold that decreases with time. At each iteration, we set \(\gamma_{t}\) as the \(\alpha_{t}\)-th quantile of \(\kappa_{\text{margin}}\) over all pixels of all images in the batch. We use linearly decreasing \(\alpha_{t}\):
\[\alpha_{t}=\alpha_{0}\cdot(1-\nicefrac{{t}}{{\text{iterations}}}). \tag{6}\]
As the model predictions improve with each iteration, gradually lowering the threshold increases the number of propagated pseudo-labels without compromising their quality.
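Under the stated scheme, with the batch margin scores collected into a flat array, DPA reduces to a quantile computation per iteration. A minimal sketch:

```python
import numpy as np

def dpa_threshold(margins, t, total_iters, alpha0=0.4):
    """gamma_t = the alpha_t-quantile of the batch margin scores,
    with alpha_t = alpha0 * (1 - t / total_iters) as in Eq. (6)."""
    alpha_t = alpha0 * (1.0 - t / total_iters)
    return float(np.quantile(margins, alpha_t))
```

With \(\alpha_{0}=0.4\), 60% of predictions pass at \(t=0\), and the threshold decays toward the batch minimum (everything passes) as \(t\) approaches the final iteration.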
### Marginal Contextual Information
Utilizing contextual information (Section 2.3), we look at predictions on neighboring pixels to refine the semantic map at each pixel. We introduce the concept of "Marginal Contextual Information," which integrates additional information to enhance predictions across all classes, whereas reliability-based pseudo-label methods focus on the dominant class only (Sohn et al., 2020; Wang et al., 2023). Section 3.2.1 describes our confidence refinement, followed by our thresholding strategy and a description of the S4MC methodology.
#### 3.2.1 Confidence Margin Refinement
We refine the predicted pseudo-label of each pixel by considering the predictions of its neighboring pixels. Given a pixel \(x^{i}_{j,k}\) with a corresponding per-class prediction \(p_{c}(x^{i}_{j,k})\), we examine neighboring pixels \(x^{i}_{\ell,m}\) within an \(N\times N\) pixel neighborhood surrounding it. We then calculate the probability that at least one of the two pixels belongs to class \(c\):
\[\tilde{p}_{c}(x^{i}_{j,k})=p_{c}(x^{i}_{j,k})+p_{c}(x^{i}_{\ell,m})-p_{c}(x^{i }_{j,k},x^{i}_{\ell,m}), \tag{7}\]
where \(p_{c}(x^{i}_{j,k},x^{i}_{\ell,m})\) denotes the joint probability of both \(x^{i}_{j,k}\) and \(x^{i}_{\ell,m}\) belonging to class \(c\).
While the model does not predict joint probabilities, it is reasonable to assume a non-negative correlation between the probabilities of neighboring pixels. This is largely due to the nature of segmentation maps, which are typically piecewise constant. Consequently, any information regarding the model's prediction of neighboring pixels belonging to a specific class should not lead to a reduction in the posterior probability of the given pixel also falling into that class. The joint probability can thus be bounded from below by assuming independence: \(p_{c}(x^{i}_{j,k},x^{i}_{\ell,m})\geqslant p_{c}(x^{i}_{j,k})\cdot p_{c}(x^{ i}_{\ell,m})\). By substituting this into Eq. (7), we obtain an upper bound for the event union probability:
\[\tilde{p}_{c}(x^{i}_{j,k})\leq p_{c}(x^{i}_{j,k})+p_{c}(x^{i}_{\ell,m})-p_{c}( x^{i}_{j,k})\cdot p_{c}(x^{i}_{\ell,m}). \tag{8}\]
This formulation allows us to filter out confidence margins that do not exceed the threshold.
For each class \(c\), we select the neighbor with the maximal information gain using Eq. (8):
\[\tilde{p}_{c}^{\text{N}}(x^{i}_{j,k})=\max_{\ell,m}\tilde{p}_{c}(x^{i}_{j,k}). \tag{9}\]
Computing the event union over all classes employs neighboring predictions to amplify differences in ambiguous cases. Similarly, this prediction refinement prevents the creation of over-confident predictions not supported by additional spatial evidence and helps reduce confirmation bias. The refinement is visualized in Fig. 1. In our experiments, we used a neighborhood size of \(3\times 3\). To determine whether the incorporation of contextual information could be enhanced with larger neighborhoods, we conducted an ablation study focusing on the neighborhood size and the neighbor selection criterion, as detailed in Table 3(a). For larger neighborhoods, we decrease the probability contribution of the neighboring pixels with a distance-dependent factor:
\[\tilde{p}_{c}(x^{i}_{j,k})=p_{c}(x^{i}_{j,k})+\beta_{\ell,m}\big{[}p_{c}(x^{i}_ {\ell,m})-p_{c}(x^{i}_{j,k},x^{i}_{\ell,m})\big{]}, \tag{10}\]
where \(\beta_{\ell,m}=\exp\bigl{(}-\frac{1}{2}(|\ell-j|+|m-k|)\bigr{)}\) is a spatial weighting function. Empirically, contextual information refinement affects mainly the most probable one or two classes. This aligns well with our choice of the margin confidence score (Eq. (5)).
When considering more than two events (i.e., more than one neighbor), one can use the inclusion-exclusion formulation for the union of three or four events. In practice, we compute it iteratively: starting with the two-event union defined by Eq. (10), we use the result as \(p_{c}(x^{i}_{j,k})\), find the next desired event among the remaining neighbors using Eq. (9), and repeat the process.
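The refinement of Eqs. (8)-(10), combined with the per-class neighbor maximization of Eq. (9), can be sketched densely as follows. This is a simplified illustration, not the authors' implementation; in particular, `np.roll` wraps at image borders, which a real implementation would handle by padding:

```python
import numpy as np

def refine_probs(probs, n=3):
    """Marginal contextual refinement: for every class, raise each pixel's
    probability to the weighted union with its best neighbor in an n x n window.
    probs: (H, W, C) softmax outputs."""
    refined = probs.copy()
    r = n // 2
    for dj in range(-r, r + 1):
        for dk in range(-r, r + 1):
            if dj == 0 and dk == 0:
                continue
            beta = np.exp(-0.5 * (abs(dj) + abs(dk)))  # spatial weighting
            q = np.roll(np.roll(probs, dj, axis=0), dk, axis=1)
            union = probs + beta * (q - probs * q)  # Eq. (10), independence bound
            refined = np.maximum(refined, union)    # Eq. (9): best neighbor per class
    return refined
```

Note that the union bound keeps every refined value within \([p_{c},1]\), so the refinement can only raise, never lower, a class probability.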
#### 3.2.2 Threshold Setting
Setting a high threshold can prevent confirmation bias from the teacher model's "beliefs" transferring to the student model. However, this comes at the expense of learning from fewer examples, potentially resulting in a less comprehensive model. To determine the DPA threshold, we use the teacher predictions pre-refinement, \(p_{c}(x^{i}_{j,k})\), but filter values based on \(\tilde{p}_{c}(x^{i}_{j,k})\). Consequently, more pixels pass the threshold, which itself remains unaffected by the refinement. We set \(\alpha_{0}=0.4\), i.e., 60% of raw predictions pass the threshold at \(t=0\), as this value demonstrated superior performance in our experiments. An ablation study for \(\alpha_{0}\) is provided in Table 3(b).
### Putting it All Together
We perform semi-supervised learning for semantic segmentation by pseudo-labeling pixels using their neighbors' contextual information. Labeled images are only fed into the student model, producing the supervised loss (Eq. (2)). Unlabeled images are fed into both the student and teacher models. We sort the margin-based \(\kappa_{\text{margin}}\) (Eq. (5)) values of teacher predictions and set \(\gamma_{t}\) as described in Section 3.2.2. The per-class teacher predictions are refined using the _weighted union event_ relaxation, as defined in Eq. (10). Pixels with margin values higher than \(\gamma_{t}\) are assigned pseudo-labels as described in Eq. (4), producing the unsupervised loss (Eq. (3)). The entire pipeline is visualized in Fig. 2.
The impact of S4MC is demonstrated in Fig. 4, which compares the fraction of pixels that pass the threshold with and without refinement. Our method uses more unlabeled data during most of the training process (a), while the refinement ensures high-quality pseudo-labels (b). We further analyze the quality improvement by studying true positive (TP) and false positive (FP) rates, as shown in Fig. B.1 in the Appendix. Qualitative results are presented in Fig. 3, where one can see both the confidence heatmap and the pseudo-labels with and without the impact of S4MC.
## 4 Experiments
This section presents our experimental results. The setup for the different datasets and partition protocols is detailed in Section 4.1. Section 4.2 compares our method against existing approaches and Section 4.3 provides the ablation study. Further implementation details are given in Appendix G.
### Setup
DatasetsIn our experiments, we use PASCAL VOC 2012 (Everingham et al., 2010) and Cityscapes (Cordts et al., 2016) datasets.
Figure 3: **Qualitative results of S4MC.** The outputs of two trained models and the annotated ground truth. The segmentation map predicted by S4MC (_Ours_) compared to the segmentation map using no refinement module (_Baseline_) and to the ground truth. _Heat map_ represents the uncertainty of the model as \(\kappa^{-1}\), showing more confident predictions over certain areas, yielding smoother segmentation maps (compare the red boxes).
The PASCAL VOC dataset comprises 20 object classes (plus background). The dataset includes 2,913 annotated images, divided into a training set of 1,464 images and a validation set of 1,449 images. In addition, the dataset includes 9,118 coarsely annotated training images (Hariharan et al., 2011), in which only a subset of the pixels are labeled. Following previous research, we conducted two sets of experiments. The "classic" setup uses only the original training set (Wang et al., 2022; Zou et al., 2021), while the "coarse" setup uses all available data (Wang et al., 2022; Chen et al., 2021; Hu et al., 2021).
The Cityscapes (Cordts et al., 2016) dataset includes urban scenes from 50 different cities with 30 classes, of which only 19 are typically used for evaluation (Chen et al., 2018, 2018). Similarly to PASCAL, in addition to 2,975 training and 500 validation images, the dataset includes 19,998 coarsely annotated images, which we do not use in our experiment.
Implementation detailsWe implement S4MC on top of two framework variants: teacher-student (Tarvainen and Valpola, 2017; French et al., 2020) and consistency optimization (Sohn et al., 2020; Yang et al., 2023). Both use DeepLabv3+ (Chen et al., 2018) with an ImageNet-pre-trained (Russakovsky et al., 2015) ResNet-101 (He et al., 2016). For the teacher-student setup, the teacher parameters \(\theta_{t}\) are updated via an exponential moving average (EMA) of the student parameters
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Method** & **1/16 (92)** & **1/8 (183)** & **1/4 (366)** & **1/2 (732)** & **Full (1464)** \\ \hline CutMix-Seg (French et al., 2020) & 52.16 & 63.47 & 69.46 & 73.73 & 76.54 \\ ReCo (Liu et al., 2022) & 64.80 & 72.0 & 73.10 & 74.70 & - \\ ST++ (Yang et al., 2022) & 65.2 & 71.0 & 74.6 & 77.3 & 79.1 \\ U\({}^{2}\)PL (Wang et al., 2022) & 67.98 & 69.15 & 73.66 & 76.16 & 79.49 \\ PS-MT (Liu et al., 2022) & 65.8 & 69.6 & 76.6 & 78.4 & 80.0 \\ PCR (Xu et al., 2022) & 70.06 & 74.71 & 77.16 & 78.49 & 80.65 \\ FixMatch* (Yang et al., 2023) & 68.07 & 73.72 & 76.38 & 77.97 & 79.97 \\ UniMatch* (Yang et al., 2023) & 73.75 & 75.05 & 77.7 & 79.9 & 80.43 \\ \hline CutMix-Seg + S4MC & 70.96 & 71.69 & 75.41 & 77.73 & 80.58 \\ FixMatch + S4MC & 73.13 & 74.72 & 77.27 & 79.07 & 79.6 \\ UniMatch\({}^{\psi}\) + S4MC & **74.72** & **75.21** & **79.09** & **80.12** & **81.56** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between our method and prior art on the PASCAL VOC 2012 val set under different partition protocols. The column headers describe the share of the training set used as labeled data and, in parentheses, the actual number of labeled images. Larger improvements can be observed for partitions with extremely little annotated data, where other methods suffer from starvation due to poor teacher generalization. * denotes reproduced results; \(\psi\) denotes the use of UniMatch (Yang et al., 2023) without feature perturbation.
Figure 4: Pseudo-label quantity and quality on PASCAL VOC 2012 (Everingham et al., 2010) with 366 labeled images using our margin (5) confidence function.
(Tarvainen and Valpola, 2017): \(\theta_{t}^{\eta}=\tau\theta_{t}^{\eta-1}+(1-\tau)\theta_{s}^{\eta}\), where \(0\leq\tau\leq 1\) defines how close the teacher is to the student and \(\eta\) denotes the training iteration. We used \(\tau=0.99\). For the consistency paradigm, the teaching branch uses weak augmentations and the student branch uses strong ones. Additional details are provided in Appendix G.
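The EMA update is applied parameter-wise after each student step. As a sketch, with parameters shown as plain arrays rather than a framework-specific module:

```python
import numpy as np

def ema_update(teacher_params, student_params, tau=0.99):
    """theta_t <- tau * theta_t + (1 - tau) * theta_s for every parameter tensor."""
    return [tau * t + (1.0 - tau) * s
            for t, s in zip(teacher_params, student_params)]
```

A \(\tau\) close to 1 keeps the teacher a slowly moving average of the student, which stabilizes the pseudo-label targets.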
EvaluationWe compare S4MC with state-of-the-art methods and baselines under the common partition protocols, using \(1/2\), \(1/4\), \(1/8\), and \(1/16\) of the training set as labeled data. For the 'classic' setting of the PASCAL experiment, we additionally compare using all the finely annotated images. We follow standard protocols and use mean Intersection over Union (mIoU) as our evaluation metric. We use the data splits published by Wang et al. (2022) when available to ensure a fair comparison. We use PASCAL VOC 2012 val with the \(1/4\) partition for the ablation studies.
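For reference, the mIoU metric used throughout the evaluation is the mean of per-class intersection-over-union, skipping classes absent from both prediction and ground truth. A standard sketch (the ignore index is an assumption):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore=255):
    """Mean Intersection-over-Union over the classes that appear."""
    valid = gt != ignore
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        g = (gt == c) & valid
        union = (p | g).sum()
        if union == 0:
            continue  # class absent from both: leave it out of the mean
        ious.append((p & g).sum() / union)
    return float(np.mean(ious))
```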
### Results
PASCAL VOC 2012.Table 1 compares our method with state-of-the-art baselines on the PASCAL VOC 2012 dataset, while Table 2 shows the comparison on PASCAL VOC 2012 with additional coarsely annotated data from SBD (Hariharan et al., 2011). In both setups, S4MC outperforms all compared methods under standard partition protocols, both when using labels only from the original PASCAL VOC 2012 dataset and when also using SBD annotations. Qualitative
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **1/16 (662)** & **1/8 (1323)** & **1/4 (2646)** & **1/2 (5291)** \\ \hline CutMix-Seg (French et al., 2020) & 71.66 & 75.51 & 77.33 & 78.21 \\ AEL (Hu et al., 2021) & 77.20 & 77.57 & 78.06 & 80.29 \\ PS-MT (Liu et al., 2021) & 75.5 & 78.2 & 78.7 & - \\ U\({}^{2}\)PL (Wang et al., 2022) & 77.21 & 79.01 & 79.3 & 80.50 \\ PCR (Xu et al., 2022) & **78.6** & **80.71** & **80.78** & 80.91 \\ FixMatch* (Yang et al., 2023a) & 74.35 & 76.33 & 76.87 & 77.46 \\ UniMatch* (Yang et al., 2023a) & 76.6 & 77.0 & 77.32 & 77.9 \\ \hline S4MC + CutMix-Seg (Ours) & 78.49 & 79.67 & 79.85 & **81.11** \\ FixMatch + S4MC & 75.19 & 76.56 & 77.11 & 78.07 \\ UniMatch\({}^{\psi}\) + S4MC & 76.95 & 77.54 & 77.62 & 78.08 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between our method and prior art on the ‘coarse’ PASCAL VOC 2012 val dataset under different partition protocols, using additional unlabeled data from (Hariharan et al., 2011). We included the number of labeled images in parentheses for each partition ratio. As in Table 1, larger improvements are observed for partitions with less annotated data. * denotes reproduced results, \(\psi\) denotes the use of UniMatch (Yang et al., 2023a) without the use of feature perturbation.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **1/16 (186)** & **1/8 (372)** & **1/4 (744)** & **1/2 (1488)** \\ \hline CutMix-Seg (French et al., 2020) & 69.03 & 72.06 & 74.20 & 78.15 \\ AEL (Hu et al., 2021) & 74.45 & 75.55 & 77.48 & 79.01 \\ U\({}^{2}\)PL (Wang et al., 2022) & 70.30 & 74.37 & 76.47 & 79.05 \\ PS-MT (Liu et al., 2022b) & - & 76.89 & 77.6 & 79.09 \\ PCR (Xu et al., 2022) & 73.41 & 76.31 & 78.4 & 79.11 \\ FixMatch* (Yang et al., 2023a) & 74.17 & 76.2 & 77.14 & 78.43 \\ UniMatch* (Yang et al., 2023a) & 75.99 & 77.55 & 78.54 & 79.22 \\ \hline CutMix-Seg + S4MC & 75.03 & 77.02 & 78.78 & 78.86 \\ FixMatch + S4MC & 75.2 & 77.61 & 79.04 & 79.74 \\ UniMatch\({}^{\psi}\) + S4MC & **77.0** & **77.78** & **79.52** & **79.76** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between our method and prior art on the Cityscapes val dataset under different partition protocols. Labeled and unlabeled images are selected from the Cityscapes training dataset. For each partition protocol, the caption gives the share of the training set used as labeled data, in parentheses, the number of labeled images. * denotes reproduced results, \(\psi\) denotes the use of UniMatch (Yang et al., 2023a) without the use of feature perturbation.
results are shown in Fig. 3. As can be seen our refinement procedure aids in both adding falsely filtered pseudo-labels as well as removing erroneous ones.
Cityscapes.Table 3 compares our method with other state-of-the-art methods on the Cityscapes (Cordts et al., 2016) val dataset under various partition protocols. S4MC outperforms the compared methods in most partitions, except for the \(1/2\) setting, and combined with the FixMatch scheme, S4MC outperforms the compared approaches across all partitions.
Contextual information at inference.Given that our margin refinement scheme operates through prediction adjustments, we explored whether it could be employed at inference time to enhance performance. The results reveal a negligible improvement for the DeepLabv3+ model, from 85.7 to 85.71 mIoU. This underlines that the performance advantage of S4MC derives primarily from the adjusted margin, as the most confident class is rarely swapped. A heatmap of the predictions over several samples is presented in Fig. 3 and Appendix H.
### Ablation Study
We ablate the different components of our method using the CutMix-Seg framework variant, evaluated on the PASCAL VOC 2012 dataset with the 1/4 labeled-images partition protocol.
Neighborhood size and neighbor selection criterion.Our prediction refinement scheme employs event-union probability with neighboring pixels, which depends on the chosen neighbor to pair with the current pixel. To assess this, we tested varying neighborhood sizes (\(N=3,5,7\)) and criteria for selecting the neighboring pixel: (a) random, (b) maximal class probability, (c) minimal class probability, and (d) two neighbors, as described in Section 3.2.1. We also compare with \(1\times 1\) neighborhood, which corresponds to not using S4MC at all. As shown in Table 3(a), a small \(3\times 3\) neighborhood with one neighboring pixel of the highest class probability proved most efficient in our experiments.
We also examine the contributions of the proposed pseudo-label refinement (PLR) and of DPA. Results in Table 5 show that DPA improves the mask mIoU by 1.09%, while PLR alone slightly harms performance; combining both components yields the best result. This indicates that PLR helps semi-supervised learning mainly by enforcing more spatial dependence on the pseudo-labels once the threshold is adjusted dynamically.
Threshold parameter tuningAs outlined in Section 3.1.2, we utilize a dynamic threshold that depends on an initial value, \(\alpha_{0}\). In Table 3(b), we examine the effect of different initial quantiles to establish this threshold. A smaller \(\alpha_{0}\) would propagate too many errors, leading to significant confirmation bias. In contrast, a larger \(\alpha_{0}\) would mask most of the data, resulting in insufficient label
\begin{table}
\begin{tabular}{l r r r r} \hline \hline \multirow{2}{*}{**Selection criterion**} & \multicolumn{4}{c}{**Neighborhood size N**} \\ \cline{2-5} & \(\mathbf{1\times 1}\) & \(\mathbf{3\times 3}\) & \(\mathbf{5\times 5}\) & \(\mathbf{7\times 7}\) \\ \hline Random neighbor & & 73.25 & 71.1 & 70.41 \\ Max neighbor & & **75.41** & 75.18 & 74.89 \\ Min neighbor & 69.46 & 74.54 & 74.11 & 70.28 \\ Two max neighbors & & 74.14 & 75.15 & 74.36 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The effect of neighborhood size and neighbor selection criterion.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**PLR** & **DPA** & **1/4** \\ \hline & & 76.8 \\ ✓ & & 76.2 \\ & ✓ & 77.89 \\ ✓ & ✓ & **78.07** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study on the different components of S4MC on top of FixMatch. **PLR** is the pseudo-label refinement module and **DPA** is dynamic partition adjustment.
propagation, rendering the semi-supervised learning process lengthy and inefficient. We found that an \(\alpha_{0}\) of 40% yields the best performance.
## 5 Conclusion
In this paper, we introduce S4MC, a novel approach for incorporating spatial contextual information in semi-supervised segmentation. This strategy refines confidence levels and enables us to leverage more unlabeled data. S4MC outperforms existing approaches and achieves state-of-the-art results on multiple popular benchmarks, such as Cityscapes and PASCAL VOC 2012, under various data partition protocols. Despite its effectiveness in lowering the annotation requirement, there are several limitations to using S4MC. First, the event-union relaxation it relies on is applicable only in settings with spatial coherency. As a result, using our framework for other dense prediction tasks would require examining this relaxation's applicability. Furthermore, our method uses a fixed-shape neighborhood without considering the object's structure. It would be interesting to investigate the use of segmented regions to define new neighborhoods; this is a future direction we plan to explore.
### Acknowledgments and Disclosure of Funding
We sincerely thank Amlan Kar for his invaluable feedback and deeply impactful discussions during the entire research process. Evgenii Zheltonozhskii is supported by the Adams Fellowships Program of the Israel Academy of Sciences and Humanities. Or Litany is a Taub fellow and is supported by the Azrieli Foundation Early Career Faculty Fellowship. |
2306.05416 | Tracking Objects with 3D Representation from Videos | Data association is a knotty problem for 2D Multiple Object Tracking due to
the object occlusion. However, in 3D space, data association is not so hard.
Only with a 3D Kalman Filter, the online object tracker can associate the
detections from LiDAR. In this paper, we rethink the data association in 2D MOT
and utilize the 3D object representation to separate each object in the feature
space. Unlike the existing depth-based MOT methods, the 3D object
representation can be jointly learned with the object association module.
Besides, the object's 3D representation is learned from the video and
supervised by the 2D tracking labels without additional manual annotations from
LiDAR or pretrained depth estimator. With 3D object representation learning
from Pseudo 3D object labels in monocular videos, we propose a new 2D MOT
paradigm, called P3DTrack. Extensive experiments show the effectiveness of our
method. We achieve new state-of-the-art performance on the large-scale Waymo
Open Dataset. | Jiawei He, Lue Fan, Yuqi Wang, Yuntao Chen, Zehao Huang, Naiyan Wang, Zhaoxiang Zhang | 2023-06-08T17:58:45Z | http://arxiv.org/abs/2306.05416v1 | # Tracking Objects with 3D Representation from Videos
###### Abstract
Data association is a knotty problem for 2D Multiple Object Tracking due to the object occlusion. However, in 3D space, data association is not so hard. Only with a 3D Kalman Filter, the online object tracker can associate the detections from LiDAR. In this paper, we rethink the data association in 2D MOT and utilize the 3D object representation to separate each object in the feature space. Unlike the existing depth-based MOT methods, the 3D object representation can be jointly learned with the object association module. Besides, the object's 3D representation is learned from the video and supervised by the 2D tracking labels without additional manual annotations from LiDAR or pretrained depth estimator. With 3D object representation learning from Pseudo 3D object labels in monocular videos, we propose a new 2D MOT paradigm, called P3DTrack. Extensive experiments show the effectiveness of our method. We achieve new state-of-the-art performance on the large-scale Waymo Open Dataset.
## 1 Introduction
Multiple Object Tracking (MOT) is the core component of the perception system for many applications, such as autonomous driving and video surveillance. In the deep learning era, metric learning helps the network to learn better object affinity between frames for object association in MOT [56, 3, 16]. Another hot trend is jointly learning object detection and association, termed as end-to-end MOT [32, 59, 68]. In these methods, they use the shared query to generate the object's bounding box in each frame belonging to the same track. This kind of design makes the neural network jointly learn object representation and data association across frames. Previous attempts demonstrate that precise association is crucial in MOT. However, in 2D MOT, object association remains a significant challenge due to object occlusion. The presence of many partially visible objects in congested scenarios like shopping malls and traffic jams makes incorrect association nearly impossible to prevent. Several approaches have aided the data association module with complex appearance models and image-space motion models to address the challenging 2D data association. Despite the efficacy of these techniques, they do not target the main problem of object association, that is, trying to associate 3D objects in 2D image space.
Conversely, in 3D MOT, many works demonstrate that object association is nearly a trivial problem, even with a simple motion model. ImmortalTracker [51], in particular, reveals that when using a 3D Kalman Filter to model motion from the LiDAR 3D bounding boxes, a wrong association occurs only _once_ in the entire Waymo Open Dataset (WOD). This significant gap between 2D and 3D MOT reveals that association in a higher-dimensional space is much simpler than in a low-dimensional space. Therefore, inspired by this observation, this paper aims to address the 2D object association problem in a 3D space.
Recent works [23, 7] explore the most straightforward way to lift 2D association to 3D space, that is, utilizing an off-the-shelf depth model. However, such methods are not effective enough, for three reasons: (1) It is hard to estimate scene-level depth from a monocular image. (2) Cameras' intrinsic parameters differ, so a pretrained depth estimation model has limited generalization ability in the tracking scenes. (3) Association with explicit depth is sub-optimal, since the depth estimation and the association parts are isolated, without joint optimization. Moreover, without joint optimization, the association is also sensitive to noise in the explicit depth.
Distinct from these works, we want to learn a representation containing 3D position information for the objects, and the representation can be jointly learned with the association module, as shown in Fig. 1. Besides, due to the expensive cost and the additional sensors (e.g., LiDAR) needed to obtain 3D annotations, we want to mine the 3D object representation using only the 2D tracklet labels annotated in the videos. In this paper, we propose a new video-based 3D representation learning framework and the 2D Multiple
2305.13724 | ChatGPT-EDSS: Empathetic Dialogue Speech Synthesis Trained from
ChatGPT-derived Context Word Embeddings | We propose ChatGPT-EDSS, an empathetic dialogue speech synthesis (EDSS)
method using ChatGPT for extracting dialogue context. ChatGPT is a chatbot that
can deeply understand the content and purpose of an input prompt and
appropriately respond to the user's request. We focus on ChatGPT's reading
comprehension and introduce it to EDSS, a task of synthesizing speech that can
empathize with the interlocutor's emotion. Our method first gives chat history
to ChatGPT and asks it to generate three words representing the intention,
emotion, and speaking style for each line in the chat. Then, it trains an EDSS
model using the embeddings of ChatGPT-derived context words as the conditioning
features. The experimental results demonstrate that our method performs
comparably to ones using emotion labels or neural network-derived context
embeddings learned from chat histories. The collected ChatGPT-derived context
information is available at
https://sarulab-speech.github.io/demo_ChatGPT_EDSS/. | Yuki Saito, Shinnosuke Takamichi, Eiji Iimori, Kentaro Tachibana, Hiroshi Saruwatari | 2023-05-23T06:19:37Z | http://arxiv.org/abs/2305.13724v1 | ChatGPT-EDSS: Empathetic Dialogue Speech Synthesis Trained from ChatGPT-derived Context Word Embeddings
###### Abstract
We propose _ChatGPT-EDSS_, an empathetic dialogue speech synthesis (EDSS) method using ChatGPT for extracting dialogue context. ChatGPT is a chatbot that can deeply understand the content and purpose of an input prompt and appropriately respond to the user's request. We focus on ChatGPT's reading comprehension and introduce it to EDSS, a task of synthesizing speech that can empathize with the interlocutor's emotion. Our method first gives chat history to ChatGPT and asks it to generate three words representing the intention, emotion, and speaking style for each line in the chat. Then, it trains an EDSS model using the embeddings of ChatGPT-derived context words as the conditioning features. The experimental results demonstrate that our method performs comparably to ones using emotion labels or neural network-derived context embeddings learned from chat histories. The collected ChatGPT-derived context information is available at our project page.
**Index Terms**: text-to-speech, empathetic dialogue speech synthesis, dialogue context, ChatGPT, prompt engineering
## 1 Introduction
Dialogue speech synthesis (DSS) [1], i.e., text-to-speech (TTS) [2] for spoken dialogue systems, is a crucial technology to actualize natural speech communication between humans and robots. In contrast to TTS, which primarily aims to convey information written in the given input text correctly, DSS requires its speaking style to be more properly controlled in accordance with the dialogue situation (e.g., restaurant reservation [3] and persuasion [4]). Such control is achieved by estimating _context_, e.g., intention [5] and speakers' emotions [6], from dialogue history and conditioning a DSS model by the context [7].
Empathetic DSS (EDSS) [8] is an emerging technology for developing a friendly voice agent that can empathize with an interlocutor. As with empathetic dialogue generation [9], an EDSS model is trained to synthesize speech with an empathetic speaking style using the dialogue context. For instance, Saito et al.'s EDSS method [8] uses the speaker's emotion label as the context and improves the quality of synthetic speech. However, this method relies on the annotations of utterance-wise emotion labels for each speaker, which requires the annotators (i.e., _human dialogue advisers_) to deeply understand the empathetic dialogue lines. Although data-driven context embedding learning [7] can provide a way to control the expressive speaking style of synthetic speech from the chat history, the learned embedding vectors are often hard for humans to interpret.
In the text-based dialogue paradigm, ChatGPT (generative pretrained Transformer), a state-of-the-art artificial intelligence (AI) chatbot, has achieved meaningful breakthroughs in various creative applications, such as writing novels and song lyrics. It is based on GPT-3 [10], which has been fine-tuned using supervised learning and reinforcement learning for generating response texts preferred by humans [11]. This learning mechanism enables ChatGPT to deeply understand the content and purpose of input text prompts and appropriately respond to the user's requests. Although this superior reading comprehension has the potential to extract the dialogue context from the given chat history, the applicability of ChatGPT to spoken dialogue technologies has not yet been investigated.
To this end, we propose _ChatGPT-EDSS_, a ChatGPT-powered EDSS method using ChatGPT as _an AI dialogue adviser_, as shown in Fig. 1. Our method first gives dialogue history to ChatGPT as the prompt and asks it to generate three words related to the context: intention, emotion, and speaking style for each dialogue line. Then, it trains an EDSS model using embedding vectors of the three words as conditional features. We present our methodology to collect ChatGPT-derived context words, training method for ChatGPT-EDSS, and experimental evaluation using a Japanese speech corpus of empathetic dialogue. The contributions of this study are as follows:
* We investigate a way to introduce ChatGPT to spoken dialogue research, especially EDSS, which requires a deep understanding of the dialogue context to properly control the speaking style of synthetic speech.
* We present a prompt design for obtaining meaningful context words from ChatGPT and analyze the obtained words.
* We demonstrate that using ChatGPT-derived context word embeddings achieves naturalness and style similarity of synthetic speech comparable to those obtained with ground-truth emotion labels or deeply-learned context embeddings [7].
## 2 Related Work
### EDSS using explicit and implicit dialogue context
Saito et al. proposed a baseline EDSS method [8] using ground-truth emotion labels and embedding vectors of chat history as explicit and implicit dialogue context features, respectively. They introduced a conversational context encoder (CCE) [7] that extracts an embedding vector from lines of chat history as the implicit context feature. Their method outperformed a FastSpeech 2 (FS2)-based TTS model [12] regarding the reproducibility of speaking style in EDSS.
Figure 1: Conceptual dialogue of our ChatGPT-EDSS.
### Ability of ChatGPT
As of March 2023, many researchers have been exploring ChatGPT's ability in real-world situations (e.g., education, evaluation [13]) and theory of mind [14]. In addition, some have introduced ChatGPT into the assessment of human states via text, e.g., personalities [15] and sentiment [16]. These works motivate us to use text dialogue contexts obtained from ChatGPT to enhance DSS technologies.
### Media creation from prompt
With the advancement of deep generative modeling techniques such as denoising diffusion probabilistic models [17], media generation from a prompt has been widely studied. DALL-E [18] is one of the first models to generate a realistic image from an input prompt. GPT-3 [10] is an autoregressive large language model that can continue to generate successive sentences from a given initial text as the prompt. AudioGen [19] and MusicLM [20] are generative models for environmental sound and music from text descriptions, respectively. These technologies offer an intuitive way to control the outcomes of media creation by changing the natural language description in the prompt.
Compared with image or text generation, research on TTS control using text prompts is still developing. One primary reason is the difficulty of constructing a sufficiently large dataset that includes many triplets of text to be spoken, speech, and a natural language description explaining the speech. Guo et al. [21] dealt with this difficulty by asking experts to write prompts describing the given speech and diversifying the prompts using SimBERT [22]. Although their dataset contains more than 150,000 samples that can be used for training a text-prompt-aware TTS model, such a dataset construction method is very costly and hard to generalize.
## 3 ChatGPT-EDSS
As shown in Fig. 2, our ChatGPT-EDSS consists of two steps: 1) collection of dialogue context words using ChatGPT and 2) training of an EDSS model using the context words.
### Collection of dialogue context words
We ask ChatGPT to generate the dialogue context words from empathetic dialogue lines. As shown in the left part of Fig. 2, the text prompt consists of 1) a description of the dialogue setting, 2) the dialogue lines, and 3) a request for answering context words.
**Dialogue setting description** explains the roles of the speaker and listener as well as the dialogue situation to ChatGPT. We empirically found that presenting the dialogue situation improved the relevance of outcomes.
**Dialogue lines** describe the content of the conversation per turn. The format is a sequence of "[turn ID] [speaker's name] [content]" for each dialogue line. We limit the maximum turns in one prompt to 5 because when ChatGPT is asked to answer about long dialogues, it tends to hang in the middle of the answer. If one dialogue consists of more than five turns, we divide it into multiple queries overlapping two turns from the previous query. For example, a dialogue taking 10 turns is divided into prompts for 1-5, 3-7, 5-9, and 7-10 dialogue turns.
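The windowing scheme above can be written as a short sketch. Note that the step of two turns between successive window starts is inferred from the example split (1-5, 3-7, 5-9, 7-10) rather than stated explicitly, so it is an assumption of this sketch:

```python
def split_dialogue(turns, max_turns=5, step=2):
    """Split a dialogue into queries of up to `max_turns` turns.
    Successive windows start `step` turns after the previous one,
    and the final window is clipped to the end of the dialogue.
    `step=2` reproduces the paper's example: 10 turns ->
    turns 1-5, 3-7, 5-9, and 7-10."""
    if len(turns) <= max_turns:
        return [turns]
    windows, start = [], 0
    while start + max_turns < len(turns):
        windows.append(turns[start:start + max_turns])
        start += step
    windows.append(turns[start:])  # last (possibly shorter) window
    return windows
```

For a 10-turn dialogue this yields the four overlapping queries given in the text.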
**Request for answering context words** asks ChatGPT to generate words describing the dialogue context for each line. We consider three kinds of context words: 1) dialogue intention [5], 2) emotion [6], and 3) speaking style. The categories of answers for the emotion and speaking style are { neutral, joy, anticipation, anger, disgust, sadness, surprise, fear, trust } (i.e., neutral and eight emotions defined by Plutchik [23]) and { cute, cool, quiet, polite, intellectual, honest, clear, gentle, gravelly, vibrant }, respectively.
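A minimal sketch of how such a three-part prompt could be assembled. The exact wording, the English phrasing, and the bracketed field markers are illustrative assumptions; only the overall structure and the two category inventories follow the description above:

```python
EMOTIONS = ["neutral", "joy", "anticipation", "anger", "disgust",
            "sadness", "surprise", "fear", "trust"]
STYLES = ["cute", "cool", "quiet", "polite", "intellectual",
          "honest", "clear", "gentle", "gravelly", "vibrant"]

def build_prompt(setting, lines):
    """Assemble the three-part prompt: 1) dialogue-setting description,
    2) dialogue lines in the "[turn ID] [speaker's name] [content]"
    format, and 3) the request for context words."""
    dialogue = "\n".join(f"[{t}] [{s}] [{c}]" for t, s, c in lines)
    request = (
        "For each line, answer three context words: the dialogue "
        f"intention, an emotion from {{{', '.join(EMOTIONS)}}}, and "
        f"a speaking style from {{{', '.join(STYLES)}}}."
    )
    return f"{setting}\n\n{dialogue}\n\n{request}"
```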
### EDSS model training using context word embeddings
We extract embeddings of the collected words by using BERT [24] and condition an EDSS model by the embeddings. Specifically, we define a ChatGPT-derived dialogue context vector as the sum of the word embeddings and train the EDSS model to predict empathetic dialogue speech from an input text and the context vector. This method regards ChatGPT as an interactive context estimator and replaces the CCE used in Saito et al.'s baseline EDSS method [8] with ChatGPT.
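The conditioning vector construction can be sketched as an element-wise sum of the three word embeddings. Here `toy_embed` is a hypothetical, deterministic stand-in for BERT, used only to keep the example self-contained:

```python
import random
import zlib

def toy_embed(word, dim=8):
    """Hypothetical stand-in for a BERT word embedding: any fixed
    word-to-vector map suffices to illustrate the mechanism."""
    rng = random.Random(zlib.crc32(word.encode("utf-8")))
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def context_vector(embed, intention, emotion, style):
    """ChatGPT-derived dialogue context vector: the element-wise sum
    of the embeddings of the three context words."""
    vecs = [embed(w) for w in (intention, emotion, style)]
    return [sum(components) for components in zip(*vecs)]
```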
### Discussion
Our ChatGPT-EDSS relates to text-predicted global style tokens (TP-GSTs) [25], a TTS method that predicts an expressive speaking style from a text-derived prosody embedding. From this perspective, our method uses ChatGPT to extract style-related words from dialogue lines and predicts the prosody embedding from the extracted words. Although TP-GSTs improve the quality of synthetic speech over a Tacotron-based TTS model [26], they cannot consider the dialogue history during TTS model training, which is essential for reproducing an empathetic speaking style in EDSS [8]. However, one can introduce a similar idea that uses a text-derived prosody embedding to predict the weight for each GST (i.e., predicting the combination weights proposed in [25]) in ChatGPT-EDSS training.
From another perspective, one can regard our ChatGPT-EDSS as weakly supervised learning [27] of expressive TTS that uses the context words to condition the EDSS model instead of the ground-truth emotion label. We discuss the reliability of the collected context words later in Section 4.2 because ChatGPT may generate an improper answer from the given prompt.
Figure 2: _Overview of our ChatGPT-EDSS._
## 4 Experimental evaluation
We evaluated our ChatGPT-EDSS using the STUDIES corpus [8], which includes Japanese empathetic spoken dialogues. The dialogue domain was chit-chat between a female teacher (an empathetic listener) and students at a school.
### Experimental conditions
This section describes the conditions for our evaluation.
**Conditions for context word collection:** We collected the context words for the long (10-20 turns) and short (4 turns) dialogues included in the STUDIES corpus using ChatGPT. The numbers of dialogues were 150 and 720, respectively. We employed 31 workers who asked ChatGPT to generate the context words and annotated a reliability score for each answer as an integer from 1 ("very unreliable") to 5 ("very reliable"). The workers completed these procedures by 1) accessing Google Sheets prepared by us, 2) copying the text prompts contained in the first field, 3) pasting the prompts into the ChatGPT query field, 4) copying and pasting the ChatGPT answer to the second field, and 5) filling in the reliability score in the third field1. We asked workers to resend the query to ChatGPT when 1) it failed to generate the context words, 2) the answer included a sentence other than the context words (e.g., the speaker's name or the original dialogue line), and/or 3) the language of the answer was not Japanese (e.g., English or Chinese).
Footnote 1: We can automate this procedure excluding the reliability scoring because the ChatGPT API has become accessible since March 2, 2023.
**Conditions for EDSS:** We trained an EDSS model of the STUDIES teacher with the collected context words. Following Saito et al.'s study [8], we used 726, 72, and 72 dialogues for training, validation, and evaluation, respectively. We downsampled the speech data to 22,050 Hz. We used the validation data to choose hyperparameters for the following models, whose parameters were randomly initialized.
**Acoustic model for EDSS:** We used FS2 [12] as an acoustic model that predicted a mel-spectrogram from text with PyTorch implementation for Japanese TTS. We followed the settings of a neural network architecture and speech parameter extraction of this implementation. We used the WORLD vocoder [28, 29] to estimate \(F_{0}\). The optimizer was Adam [30] with an initial learning rate \(\eta\) of 0.0625, \(\beta_{1}\) of 0.9, and \(\beta_{2}\) of 0.98. We first pretrained FS2 using the JSUT corpus [31], a Japanese speech corpus including about 10 hours of a female speaker's speech, with 200K iterations. We then fine-tuned it by using the STUDIES training data with 100K iterations.
**Neural vocoder:** We used a HiFi-GAN vocoder [32] for speech waveform generation from a mel-spectrogram with PyTorch implementation provided by the first author of the HiFi-GAN paper. We trained HiFi-GAN by using the same training data as that for FS2 with 350K iterations. The optimizer was Adam with \(\eta\) of 0.0003, \(\beta_{1}\) of 0.8, and \(\beta_{2}\) of 0.99.
### Analysis of ChatGPT-derived context words
Table 1 lists the results of the context word collection summarized in accordance with the ground-truth emotion labels of the STUDIES teacher. First, we found that the averaged reliability scores were more than 3.6 for all emotion labels. Second, the most frequent intention word for "Angry" and "Sad" utterances was "**empathy**." Third, the most frequent emotion word corresponded to the ground-truth emotion label except for "Neutral" and "Angry." Finally, the most frequent style words consisted of "quiet," "gentle," and "polite." These results suggest that ChatGPT 1) actually understands the intention of "empathetic" dialogue, 2) provides reliable weak labels for expressive TTS to some extent, and 3) roughly estimates the STUDIES teacher's speaking style as moderate.
Table 2 lists the number of unique context words for each ground-truth emotion label of the STUDIES teacher's utterances. First, the intention words were very diverse and consisted of more than 100 unique words for "Neutral" utterances. However, we found that 79% of these words appeared only five times or less. We observed a similar tendency in the third and fourth columns of Table 2, despite the fact that we defined the categories of emotion and style in advance. We can confirm this diversity in the t-SNE visualization [33] of context word embeddings by BERT shown in Fig. 3, although the different categories tend to form roughly separate clusters. These results indicate that ChatGPT 1) can generate various candidate words for describing the context and 2) does not necessarily satisfy the pre-specified requirements for word generation.
### Subjective evaluations
We conducted subjective evaluations to investigate whether our ChatGPT-EDSS can reproduce the speaking style of empathetic dialogue speech without degrading naturalness.
Table 1: Results of the context word collection (average reliability score and the most frequent intention, emotion, and style words) for each ground-truth emotion label of the STUDIES teacher.
**Evaluation setup:** We conditioned the FS2-based EDSS model on the following factors:
* **Emo**: Emotion label annotated by the corpus developers
* **CCE**: Data-driven context embedding extracted from dialogue history [7]
* **IES (ours)**: Embeddings of intention, emotion, and style words generated from ChatGPT
The **CCE** extracted the context embedding from joint vectors of a one-hot encoding of speaker identity (3-dim.) and up to four sentence embeddings (one current sentence and up to three previous ones) obtained using BERT (768-dim.) pretrained on Japanese text data. The dimensionality of the context embedding was 256, and we prepared a linear layer to project the BERT-derived word embedding onto the 256-dimensional feature space.
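The joint-vector construction for the **CCE** input can be sketched as follows; the `history` format and the function name are illustrative assumptions, and the function is dimension-agnostic (768-dim BERT embeddings in the paper):

```python
def cce_input(history, n_speakers=3, max_hist=4):
    """Build the CCE input from dialogue history: for each of up to
    `max_hist` sentences (the current one plus up to three previous
    ones), concatenate a one-hot speaker code with that sentence's
    BERT embedding. `history` is a list of
    (speaker_index, sentence_embedding) pairs, oldest first."""
    joint = []
    for speaker_idx, emb in history[-max_hist:]:
        one_hot = [1.0 if i == speaker_idx else 0.0 for i in range(n_speakers)]
        joint.append(one_hot + list(emb))
    return joint
```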
As explained in Section 3.1, one sentence may have multiple context words due to the overlapping procedure. In that case, we aggregated the embeddings of the multiple words for each of intention, emotion, and style by simply taking the average of the embeddings. This aggregation may improve the robustness of our ChatGPT-EDSS against the large variation of context words described in Section 4.2.
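The aggregation step amounts to a per-category element-wise mean of the word embeddings, which can be sketched as:

```python
def aggregate(word_embs):
    """Aggregate the embeddings of multiple context words for one
    category (intention, emotion, or style) by element-wise mean."""
    n = len(word_embs)
    return [sum(components) / n for components in zip(*word_embs)]
```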
**Evaluation criteria:** We conducted a five-scale mean opinion score (MOS) test on the naturalness of synthetic speech. We presented 30 speech samples to listeners in random order, and the listeners rated the naturalness of each sample from 1 ("very unnatural") to 5 ("very natural"). We also conducted a five-scale MOS test on the speaking-style similarity of synthetic speech. Listeners first listened to reference speech reconstructed from a natural mel-spectrogram with HiFi-GAN and then scored the presented synthetic speech on the similarity of its speaking style from 1 ("very dissimilar") to 5 ("very similar"). For each test, we recruited 50 listeners using our crowdsourcing subjective evaluation system.
**Evaluation results:** Table 3 shows the evaluation results. We found that using **IES** as the conditional features for the EDSS model performed comparably to using **Emo** or **CCE**. This result demonstrates that we can use ChatGPT as the context embedding extractor for empathetic dialogue lines instead of the emotion label or a conventional data-driven context embedding vector. We also observed that the two EDSS models using **Emo** only and the combination of **Emo** and **CCE** slightly degraded the naturalness MOS, while the one using both **Emo** and **IES** scored higher MOS values for both naturalness and similarity. One reason is the fine-grained (and possibly unlimited) emotion categories represented by the emotion words shown in Table 2, which enhance the expression ability of the EDSS model compared with the limited number of emotion categories in the ground-truth labels (only four).
To further investigate the effects of introducing ChatGPT-derived context words in EDSS, we calculated the differences between the naturalness and similarity MOS of 1) **IES&Emo** and **Emo** and 2) **IES&CCE** and **CCE** with respect to the reliability score for each ground-truth emotion label. Figure 4 shows the results. From this figure, we observe that there is no correlation between the reliability score and the MOS improvement and that the improvement varies widely even when the reliability score is 5. This result suggests that, although the introduction of **IES** does not negatively affect the quality of synthetic speech, we still require countermeasures against the large diversity of ChatGPT responses discussed in Section 4.2.
## 5 Conclusion
We proposed ChatGPT-EDSS, a ChatGPT-powered empathetic dialogue speech synthesis (EDSS) method using word embeddings of ChatGPT answers as the dialogue context. Our method first gives text-based chat history to ChatGPT as a prompt and asks it to generate three words related to the dialogue context: intention, emotion, and speaking style for each dialogue line. Then, it trains an EDSS model using embedding vectors of the three words as conditional features. The evaluation results demonstrated that our ChatGPT-EDSS performed comparably to ones using emotion labels or deeply-learned context embeddings extracted from chat histories. Our future work is to investigate the effect of the dialogue domain in ChatGPT-EDSS and to examine whether ChatGPT's hallucination occurs in our method or not.
**Acknowledgements:** This research was conducted as joint research between LINE Corporation and Saruwatari-Takamichi Laboratory of The University of Tokyo, Japan.
---

2303.06919 | NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer | Kun Zhou, Wenbo Li, Yi Wang, Tao Hu, Nianjuan Jiang, Xiaoguang Han, Jiangbo Lu | 2023-03-13T08:36:30Z | http://arxiv.org/abs/2303.06919v2

# NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer
###### Abstract
Neural radiance fields (NeRF) show great success in novel view synthesis. However, in real-world scenes, recovering high-quality details from the source images is still challenging for the existing NeRF-based approaches, due to potentially imperfect calibration information and scene representation inaccuracy. Even with high-quality training frames, the synthetic novel views produced by NeRF models still suffer from notable rendering artifacts, such as noise and blur. To improve the synthesis quality of NeRF-based approaches, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer. Specifically, we design a NeRF-style degradation modeling approach and construct large-scale training data, enabling the possibility of effectively removing NeRF-native rendering artifacts with existing deep neural networks. Moreover, beyond degradation removal, we propose an inter-viewpoint aggregation framework that is able to fuse highly related high-quality training images, pushing the performance of cutting-edge NeRF models to entirely new levels and producing highly photo-realistic synthetic views. Our project page is available at [https://redrock303.github.io/nerflix/](https://redrock303.github.io/nerflix/).
## 1 Introduction
Neural radiance fields (NeRF) can generate photo-realistic images from new viewpoints, playing a central role in novel view synthesis. In light of NeRF's [38] success, numerous approaches [2, 9, 11, 19, 35, 36, 39, 41, 42, 48, 54, 55, 60, 62, 66] along these lines have been proposed, continually raising the performance to greater levels. In fact, one prerequisite of NeRF is precise camera parameters for the training photos [62, 22, 32]. However, accurately calibrating camera poses is exceedingly difficult in practice. Moreover, the shape-radiance co-adaptation issue [75] reveals that while the learned radiance fields can perfectly explain the training views even with inaccurate geometry, they generalize poorly to unseen views. On the other hand, the capacity to represent sophisticated geometry, lighting, object materials, and other factors is constrained by the simplified scene representation of NeRF [79, 19, 78]. Owing to these restrictions, advanced NeRF models may nonetheless produce _notable artifacts_ (such as blur, noise, and missing details), which we refer to as _NeRF-style degradations_ in this article and show in Fig. 1.
To address the aforementioned limitations, numerous works have been proposed. For example, some studies, including [60, 67, 71, 22], jointly optimize camera parameters and neural radiance fields to refine camera poses as precisely as possible, addressing the camera calibration issue. Another line of works [79, 74, 19, 78] presents physics-aware models that simultaneously take into account object materials and environment lighting, as opposed to using MLPs or neural voxels to implicitly encode both the geometry and appearance. To meet the demands of high-quality neural view synthesis, one has to carefully examine all of these elements when building complex inverse rendering systems. In addition to being challenging to optimize, such systems are not scalable for rapid deployment, requiring laborious re-configuration in new environments. Regardless of the intricate physics-aware rendering models, _is it possible to design a practical NeRF-agnostic restorer to directly enhance synthesized views from NeRFs?_
In low-level vision, it is critical to construct large-scale paired data to train a deep restorer for eliminating real-world artifacts [73, 57]. When it comes to NeRF-style degradations, there are two challenges: (1) sizable paired training data; (2) NeRF degradation analysis. First, it is impractical to gather _large-scale_ training pairs (more specifically, raw outputs from well-trained NeRFs and corresponding ground truths). Second, the modeling of NeRF-style degradation has received little attention. Unlike real-world images that generally suffer from JPEG compression, sensor noise, and motion blur, the NeRF-style artifacts are complex and differ from the existing ones. As far as we know, **no** previous studies have ever investigated NeRF-style degradation removal that could effectively leverage the ample research on image and video restoration.
In this work, we are motivated to conduct the _first_ study on the feasibility of simulating large-scale NeRF-style paired data, opening the possibility of training a NeRF-agnostic restorer to improve NeRF-rendered frames. To this end, we present a novel degradation simulator for typical NeRF-style artifacts (e.g., rendering noise and blur) that takes the NeRF mechanism into account. We review the overall NeRF rendering pipeline and discuss the typical NeRF-style degradation cases. Accordingly, we present three basic degradation types to simulate the real rendering artifacts of NeRF synthetic views and empirically evaluate the distribution similarity between real rendered photos and our simulated ones. Constructing a sizable dataset that covers a variety of NeRF-style degradations over different scenes makes it feasible to develop NeRF-agnostic restoration models.
Next, we show the necessity of our simulated dataset and demonstrate that existing state-of-the-art image restoration frameworks can be used to eliminate NeRF visual artifacts. Furthermore, we notice that, in a typical NeRF setup, neighboring high-quality views come for free, and they serve as potential reference bases for video-based restoration with a multi-frame aggregation and fusion module. However, this is not straightforward because NeRF input views are taken from a variety of very different angles and locations, making correspondence estimation quite challenging. To tackle this problem, we propose a degradation-driven inter-viewpoint "mixer" that progressively aligns image contents at the pixel and patch levels. To maximize efficiency and improve performance, we also propose a fast view selection technique that chooses only the most pertinent reference training views for aggregation, as opposed to using all NeRF input views.
In a nutshell, we present a NeRF-agnostic restorer (termed NeRFLiX) which learns a degradation-driven inter-viewpoint mixer. As illustrated in Fig. 1, given NeRF synthetic frames with various rendering degradations, NeRFLiX successfully restores high-quality results. Our contributions are summarized as
* **Universal enhancer for NeRF models.** NeRFLiX is powerful and adaptable, removing NeRF artifacts and restoring clear details, pushing the performance of cutting-edge NeRF models to entirely new levels.
* **NeRF rendering degradation simulator.** We develop a NeRF-style degradation simulator (NDS), constructing massive amounts of paired data and aiding the training of deep neural networks to improve the quality of NeRF-rendered images.
* **Inter-viewpoint mixer.** Based on our constructed NDS, we further propose an inter-viewpoint baseline that is able to _mix_ high-quality neighboring views for more effective restorations.
* **Training time acceleration.** We show how NeRFLiX makes it possible for NeRF models to produce even _better_ results with a _50%_ reduction in training time.
## 2 Related Works
**NeRF-based novel view synthesis.** NeRF-based novel view synthesis has received a lot of attention recently and has been thoroughly investigated. For the first time, Mildenhall _et al_. [38] propose the neural radiance field to implicitly represent static 3D scenes and synthesize novel views from multiple posed images. Inspired by their success, many NeRF-based models [2, 8, 11, 13, 19, 20, 21, 23, 26, 33, 35, 36, 39, 42, 43, 45, 48, 52, 55, 63, 66, 74, 77] have been proposed. For example, Point-NeRF [64] and DS-NeRF [14] incorporate sparse 3D point clouds and depth information to eliminate the geometry ambiguity of NeRFs, achieving more accurate and efficient 3D point sampling and better rendering quality. Plenoxels [16], TensoRF [7], DirectVoxGO [46], FastNeRF [17], PlenOctrees [68], KiloNeRF [44], and MobileNeRF [10] aim to use various advanced technologies to speed up the training or inference phases. Though these methods have achieved great progress, due to potentially inaccurate camera poses, the simplified pinhole camera model, and scene representation inaccuracy, they still suffer from rendering artifacts in the predicted novel views.
**Degradation simulation.** Since no existing works have explored NeRF-style degradation cases, we overview the real-world image restoration works most related to ours. Previous image/video super-resolution approaches [15, 27, 28, 31, 56, 58, 69, 80, 81] typically assume a fixed image degradation type (e.g., blur, bicubic/bilinear down-sampling). Due to the large domain shift between real-world and simulated degradations, the earlier image restoration methods [27, 29, 72, 80] generally fail to remove complex artifacts from real-world images. In contrast, BSRGAN [73] designs a practical degradation approach for real-world image super-resolution. In its degradation process, multiple degradations are considered and applied in random order, largely covering the diversity of real-world degradations. Compared with previous works, BSRGAN achieves much better results quantitatively and qualitatively. Real-ESRGAN [57] develops a second-order degradation process for real-world image super-resolution. In this work, we propose a NeRF-style degradation simulator and construct a large-scale training dataset for modeling NeRF rendering artifacts.
**Correspondence estimation.** In the existing literature, video restoration methods [3, 6, 49, 53, 70] aim to restore a high-quality frame from multiple low-quality frames. To achieve this goal, cross-frame correspondence estimation is essential for effectively aggregating informative temporal contents. Some works [5, 65, 70, 6] explore building pixel-level correspondences through optical-flow estimation and perform frame warping for multi-frame compensation. Another line of works [50, 56, 81] uses deformable convolution networks (DCNs [12]) for adaptive correspondence estimation and aggregation. More recently, transformer-based video restoration models [4, 30] implement spatial-temporal aggregation through an attention mechanism and achieve promising performance. However, it is still challenging to perform accurate correspondence estimation between frames captured from very distinct viewpoints.
## 3 Preliminaries
In this section, we review the general pipeline of NeRF-based novel view synthesis and discuss potential rendering artifacts. As shown in Fig. 2, three main steps are involved in the rendering: (1) Ray Shooting. To render the color of a target pixel in a particular view, NeRF utilizes the camera's calibrated parameters \(\pi\) to generate a ray \(\mathbf{r}(\mathbf{o},\mathbf{d})\) through this pixel, where \(\mathbf{o},\mathbf{d}\) are the camera center and the ray direction. (2) Ray Marching. A set of 3D points are sampled along the chosen ray as it moves across the 3D scene represented by neural radiance fields. The NeRF models encode a 3D scene and predict the colors and densities of these points. (3) Radiance Accumulation. The pixel color is extracted by integrating the predicted radiance features of the sampled 3D points.
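Step (3) follows the standard NeRF volume-rendering quadrature [38]. A minimal sketch for one ray, accumulating the predicted colors and densities of the sampled 3D points:

```python
import math

def accumulate(colors, sigmas, deltas):
    """Radiance accumulation along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where the transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
    `colors` holds RGB triples; `sigmas` and `deltas` are the
    predicted densities and the sampling intervals."""
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for c, s, d in zip(colors, sigmas, deltas):
        alpha = 1.0 - math.exp(-s * d)   # opacity of this sample
        weight = transmittance * alpha
        pixel = [p + weight * ci for p, ci in zip(pixel, c)]
        transmittance *= 1.0 - alpha     # light surviving past sample
    return pixel
```

A single fully opaque sample returns its own color, and a ray through empty space (zero density) returns black, matching the quadrature's limiting cases.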
**Discussion.** We can see that establishing a relationship between 2D photos and the 3D scene requires camera calibration. Unfortunately, it is very challenging to precisely calibrate the camera poses, leading to noisy 3D sampling. Meanwhile, some previous works [60, 71, 22] also raise other concerns, including the non-linear pinhole camera model [22] and shape-radiance ambiguity [75]. Because of these inherent limitations, as discussed in Section 1, NeRF models still synthesize unsatisfactory novel test views.
## 4 Methodology
**Overview.** In this work, we present NeRFLiX, a general NeRF-agnostic restorer which employs a degradation-driven inter-viewpoint mixer to enhance novel view images rendered by NeRF models. It is made up of two essential components: a NeRF-style degradation simulator (NDS) and an inter-viewpoint mixer (IVM). As seen in Fig. 3(a), during the training phase, we employ the proposed NDS
Figure 2: A general illustration of NeRF-based novel view synthesis pipeline. Three main steps are involved: (1) ray shooting, (2) ray marching, and (3) radiance accumulation.
to create large-scale paired training data, which are subsequently used to train an IVM for improving a NeRF-rendered view using two corresponding reference pictures (reference views). In the inference stage, as illustrated in Fig. 3(b), IVM is adopted to enhance a rendered view by fusing useful information from the selected most relevant reference views.
### NeRF-Style Degradation Simulator (NDS)
Due to the difficulties in gathering well-posed scenes under various environments and training NeRF models for each scene, it is infeasible to directly collect large amounts of _paired_ NeRF data for artifact removal. To address this challenge, motivated by BSRGAN [73], we design a general NeRF degradation simulator to produce a sizable training dataset that is visually and statistically comparable to NeRF-rendered images (views).
To begin with, we collect raw data from LLFF-T1 and Vimeo90K [65] where the adjacent frames are treated as raw sequences. Each raw sequence consists of three images \(\{I^{gt},I_{1}^{r},I_{2}^{r}\}\): a target view \(I^{gt}\) and its two reference views \(\{I_{1}^{r},I_{2}^{r}\}\). To construct the paired data from a raw sequence, we use the proposed NDS to degrade \(I^{gt}\) and obtain a simulated degraded view \(I\), as shown in Fig. 3(a).
Footnote 1: the training parts of LLFF [37].
The degradation pipeline is illustrated in Fig. 4. We design three types of degradation for a target view \(I^{gt}\): splatted Gaussian noise (SGN), re-positioning (Re-Pos.), and anisotropic blur (A-Blur). It should be noted that there _may be other models for such a simulation_, and we only utilize this route to evaluate and justify the feasibility of our idea.
**Splatted Gaussian noise.** Although additive Gaussian noise is frequently employed in image/video denoising, NeRF rendering noise clearly differs. Rays that hit a 3D point will be re-projected within a nearby 2D area because of noisy camera parameters. As a result, the NeRF-style noise is dispersed over a 2D space. This observation led us to present a splatted Gaussian noise, which is defined as
\[I^{D1}=I^{gt}+n\otimes g, \tag{1}\]
where \(\otimes\) denotes 2D convolution, \(n\) is a 2D Gaussian noise map with the same resolution as \(I^{gt}\), and \(g\) is an isotropic Gaussian blur kernel.
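The splatted-noise degradation above can be sketched in a few lines of NumPy. This is a minimal illustration assuming a single-channel float image; the kernel size, blur sigma, and noise level are illustrative choices, not the paper's actual settings:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Isotropic 2D Gaussian kernel (the `g` in Eq. 1), normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def splatted_gaussian_noise(img, noise_sigma=0.05, ksize=5, blur_sigma=1.0, rng=None):
    """I^{D1} = I^{gt} + (n convolved with g): per-pixel Gaussian noise is
    blurred by an isotropic kernel, so each noise sample is "splatted" over
    a 2D neighborhood instead of hitting a single pixel."""
    rng = np.random.default_rng(rng)
    n = rng.normal(0.0, noise_sigma, size=img.shape)
    g = gaussian_kernel(ksize, blur_sigma)
    pad = ksize // 2
    n_pad = np.pad(n, pad, mode="reflect")
    h, w = img.shape
    splat = np.zeros((h, w))
    for di in range(ksize):  # direct 2D convolution of the noise map with g
        for dj in range(ksize):
            splat += g[di, dj] * n_pad[di:di + h, dj:dj + w]
    return img + splat
```

Because the kernel is normalized, the splatted noise has lower per-pixel variance than the raw Gaussian noise but is spatially correlated, mimicking the re-projection spread described above.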
**Re-positioning.** We design a re-positioning degradation to simulate ray jittering. For a pixel at location \((i,j)\), we add a random 2D offset \(\delta_{i},\delta_{j}\in[-2,2]\) with probability 0.1:
\[I^{D2}(i,j)=\begin{cases}I^{D1}(i,j)&\text{if }p>0.1,\\ I^{D1}(i+\delta_{i},j+\delta_{j})&\text{if }p\leq 0.1,\end{cases} \tag{2}\]
where \(p\) is uniformly distributed in \([0,1]\).
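A minimal NumPy sketch of the re-positioning degradation in Eq. (2): with probability 0.1, a pixel is replaced by a neighbor at a random integer offset in \([-2,2]\). The vectorized gather and the border clipping are implementation choices of this sketch, not specified by the paper:

```python
import numpy as np

def reposition(img, prob=0.1, max_offset=2, rng=None):
    """Eq. 2: with probability `prob`, replace pixel (i, j) by the pixel at
    (i + di, j + dj), with random integer offsets in [-max_offset, max_offset]."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    p = rng.random((h, w))                                   # uniform in [0, 1)
    di = rng.integers(-max_offset, max_offset + 1, size=(h, w))
    dj = rng.integers(-max_offset, max_offset + 1, size=(h, w))
    ii = np.clip(np.arange(h)[:, None] + di, 0, h - 1)       # clip at borders
    jj = np.clip(np.arange(w)[None, :] + dj, 0, w - 1)
    jittered = img[ii, jj]
    out = img.copy()
    mask = p <= prob
    out[mask] = jittered[mask]
    return out
```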
**Anisotropic blur.** Additionally, from our observation, NeRF synthetic frames also contain blurry contents. To simulate blur patterns, we use anisotropic Gaussian kernels to blur the target frame.
**Region adaptive strategy.** Neural radiance fields are often supervised with unbalanced training views. As a result,
Figure 4: Overview of our NDS pipeline: using our proposed degradations, we process a target view \(I^{gt}\) to produce its simulated degraded view \(I\). “SGN”, “Re-Pos.” and “A-Blur” refer to the splatted Gaussian, re-positioning, anisotropic blur degradations, and “RA” is the region adaptive strategy.
Figure 5: A visual example of real and simulated rendered views.
Figure 3: Illustration of our proposed NeRFLiX. It consists of two essential modules: (1) NeRF degradation simulator that constructs paired training data \(\{I,I_{1}^{r},I_{2}^{r}|I^{gt}\}\) from a raw sequence \(\{I^{gt},I_{1}^{r},I_{2}^{r}\}\), (2) inter-viewpoint mixer trained on this simulated data is capable of restoring high-quality frames from NeRF rendered views.
given a novel view, the projected 2D areas have varying degradation levels. Thus, we carry out each of the employed degradations in a spatially variant manner. More specifically, we define a mask \(M\) as a two-dimensional oriented anisotropic Gaussian [18]
\[M(i,j)=G(i-c_{i},j-c_{j};\sigma_{i},\sigma_{j},A), \tag{3}\]
where \((c_{i},c_{j}),(\sigma_{i},\sigma_{j})\) are the means and standard deviations and \(A\) is an orientation angle. After that, we use the mask \(M\) to linearly blend the input and output of each degradation, finally achieving region-adaptive degradations. As shown in Fig. 5, our simulated rendered views visually match the real NeRF-rendered ones. All the detailed settings of NDS are elaborated in our supplementary materials.
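The region-adaptive strategy of Eq. (3) can be sketched as an oriented anisotropic Gaussian mask that linearly blends a degradation's input and output. The rotation parameterization of the oriented Gaussian and all parameter values below are assumptions for illustration:

```python
import numpy as np

def oriented_gaussian_mask(h, w, center, sigmas, angle):
    """Eq. 3: an oriented anisotropic 2D Gaussian mask M with values in (0, 1]."""
    ci, cj = center
    si, sj = sigmas
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # rotate the centered coordinates by `angle` (one way to orient the Gaussian)
    u = (ii - ci) * np.cos(angle) + (jj - cj) * np.sin(angle)
    v = -(ii - ci) * np.sin(angle) + (jj - cj) * np.cos(angle)
    return np.exp(-0.5 * ((u / si) ** 2 + (v / sj) ** 2))

def region_adaptive(degraded, clean, mask):
    """Linearly blend a degradation's output and input, making it spatially variant."""
    return mask * degraded + (1.0 - mask) * clean
```

Applying such a mask per degradation makes the simulated artifacts strong in some regions and nearly absent in others, matching the unbalanced supervision of real NeRF renderings.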
At last, with our NDS, we can obtain a great number of training pairs, where each pair consists of two high-quality reference views \(\{I_{1}^{r},I_{2}^{r}\}\), a simulated degraded view \(I\), and the corresponding target view \(I^{gt}\). Next, we show how the constructed paired data \(\{I,I_{1}^{r},I_{2}^{r}|I^{gt}\}\) can be used to train our IVM.
### Inter-viewpoint Mixer (IVM)
**Problem formulation.** Given a degraded view \(I\) produced by our NDS or NeRF models, we aim to extract useful information from its two high-quality reference views \(\{I_{1}^{r},I_{2}^{r}\}\) and restore an enhanced version \(\hat{I}\).
**IVM architecture.** For multi-frame processing, existing techniques either use optical flow [5, 70, 53] or deformable convolutions [56, 12, 30] to realize the correspondence estimation and aggregation for _consistent_ displacements. In contrast, NeRF rendered and input views come from very different angles and locations, making it challenging to perform precise inter-viewpoint aggregation.
To address this problem, we propose IVM, a hybrid recurrent inter-viewpoint "mixer" that progressively fuses pixel-wise and patch-wise contents from two high-quality reference views, achieving more effective inter-viewpoint aggregation. There are three modules i.e., feature extraction, hybrid inter-viewpoint aggregation and reconstruction, as shown in Fig. 6. Two convolutional encoders are used in the feature extraction stage to process the degraded view \(I\) and two high-quality reference views \(\{I_{1}^{r},I_{2}^{r}\}\), respectively. We then use inter-viewpoint window-based attention modules and deformable convolutions to achieve recurrent patch-wise and pixel-wise aggregation. Finally, the enhanced view \(\hat{I}\) is generated using the reconstruction module under the supervision
\[Loss=|\hat{I}-I^{gt}|,\text{ where }\hat{I}=f(I,I_{1}^{r},I_{2}^{r};\theta), \tag{4}\]
where \(\theta\) is the learnable parameters of IVM. The framework architecture is given in our supplementary materials.
### View Selection
In the inference stage, for a NeRF-rendered view \(I\), our IVM produces an enhanced version by aggregating contents from two neighboring high-quality views. However, many input views are available, and only some of them overlap substantially with \(I\). In general, only the most pertinent input views are useful for the inter-viewpoint aggregation.
To this end, we develop a view selection strategy to choose two reference views \(\{I_{1}^{r},I_{2}^{r}\}\) from the input views that are most overlapped with the rendered view \(I\). Specifically, we formulate the view selection problem based on the pinhole camera model. An arbitrary 3D scene can be roughly approximated as a bounding sphere in Fig. 7, and cameras are placed around it to take pictures. When camera-emitted rays hit the sphere, there are a set of intersections. We refer to the 3D point sets as \(\Phi_{i}=\{p_{0}^{i},p_{1}^{i},\cdots,p_{M_{i}}^{i}\}\) and \(\Phi_{j}=\{p_{0}^{j},p_{1}^{j},\cdots,p_{M_{j}}^{j}\}\) for the \(i\)-th and \(j\)-th cameras. For \(m_{i}\)-th intersection \(p_{m_{i}}^{i}\in\Phi_{i}\) of view \(i\), we search its nearest point in view \(j\) with the L2 distance
\[p_{m_{i}}^{i\to j}=\operatorname*{arg\,min}_{p\in\Phi_{j}}(||p-p_{m_{i}}^{i}|| _{2}^{2}). \tag{5}\]
Then the matching cost from the \(i\)-th view to the \(j\)-th view is calculated by
\[C_{i\to j}=\sum_{m_{i}=0}^{M_{i}}||p_{m_{i}}^{i}-p_{m_{i}}^{i\to j}||_{2}^{2}. \tag{6}\]
We finally obtain the mutual matching cost between views \(i\) and \(j\) as
\[C_{i\leftrightarrow j}=C_{i\to j}+C_{j\to i}. \tag{7}\]
In this regard, the two reference views \(\{I_{1}^{r},I_{2}^{r}\}\) with the smallest mutual matching costs are selected for enhancing the NeRF-rendered view \(I\). Note that we also adopt this strategy to decide the two reference views for the LLFF-T [37] data during the training phase.
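Eqs. (5)-(7) amount to a mutual (Chamfer-style) nearest-neighbor cost between the intersection point sets of two cameras. A brute-force NumPy sketch (for large point sets, a KD-tree would replace the pairwise distance matrix):

```python
import numpy as np

def matching_cost(phi_i, phi_j):
    """C_{i->j} (Eqs. 5-6): sum of squared L2 distances from each intersection
    point of view i to its nearest neighbor among the points of view j."""
    d2 = ((phi_i[:, None, :] - phi_j[None, :, :]) ** 2).sum(-1)  # (M_i, M_j)
    return float(d2.min(axis=1).sum())

def select_reference_views(phis, rendered_idx, k=2):
    """Pick the k input views with the smallest mutual cost
    C_{i<->j} = C_{i->j} + C_{j->i} (Eq. 7) w.r.t. the rendered view."""
    costs = []
    for j, phi_j in enumerate(phis):
        if j == rendered_idx:
            continue
        c = matching_cost(phis[rendered_idx], phi_j) + matching_cost(phi_j, phis[rendered_idx])
        costs.append((c, j))
    return [j for _, j in sorted(costs)[:k]]
```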
Figure 6: The framework of our inter-viewpoint mixer.
Figure 7: Illustration of our view selection strategy.
## 5 Experiments
### Implementation Details
We train the IVM for 300K iterations. The batch size is 16 and the patch size is 128. We adopt random cropping, vertical or horizontal flipping, and rotation augmentations. Apart from the inherent viewpoint changes over \(\{I,I^{r}_{1},I^{r}_{2}\}\), random offsets (\(\pm 5\) pixels) are globally applied to the two reference views (\(I^{r}_{1},I^{r}_{2}\)) to model more complex motion. We adopt an Adam [24] optimizer and a Cosine annealing strategy to decay the learning rate from \(5\times 10^{-4}\) to 0. We train a single IVM on the LLFF-T and Vimeo datasets and test it on all benchmarks (including user-captured scenes).
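The stated schedule (Adam with cosine annealing of the learning rate from \(5\times 10^{-4}\) to 0 over 300K iterations) has a simple closed form; a sketch:

```python
import math

def cosine_annealed_lr(step, total_steps=300_000, lr_max=5e-4, lr_min=0.0):
    """Cosine annealing: lr goes from lr_max at step 0 to lr_min at total_steps."""
    t = min(step, total_steps) / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```

In practice this would be set per-iteration on the optimizer (e.g., a PyTorch `CosineAnnealingLR` scheduler yields the same curve).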
### Datasets and Metrics
We conduct the experiments on three widely used datasets, including LLFF [37], Tanks and Temples [25], and Noisy LLFF Synthetic.
**LLFF [37].** LLFF is a real-world dataset, where 8 different scenes have 20 to 62 images. Following the commonly used protocols [1, 7, 60, 16, 40], we adopt \(1008\times 756\) resolution for _LLFF-P1_ and \(504\times 376\) resolution for _LLFF-P2_.
**Tanks and Temples [25].** It contains 5 scenes captured by inward-facing cameras. There are 152-384 images in the \(1920\times 1080\) resolution. It should be noted that the viewpoint changes between frames are significantly larger than in LLFF.
**Noisy LLFF Synthetic [38].** There are 8 virtual scenes, each of which has 400 images with a size of \(800\times 800\). To simulate noisy in-the-wild calibration, we apply random camera jittering (random rotation and translation) to the precise camera poses.
**Metrics.** Following previous NeRF methods, we adopt PSNR (\(\uparrow\))/SSIM [59] (\(\uparrow\))/LPIPS [76](\(\downarrow\)) for evaluation.
#### 5.5.1 NeRF-Style Degradation Simulator
**Simulation quality.** We first examine the simulation quality of the proposed NeRF-style degradation simulator. To this end, we analyze the distribution of our degraded images, BSR [73] degraded images and NeRF rendered images on LLFF [37]. We use t-SNE [51] to visualize deep image features (by Inception-v3 [47]) and results are shown in Fig. 9. Our simulated data is statistically much closer to the real rendered images than BSR. This conclusion is also supported by Table 3b, which demonstrates that our NDS significantly surpasses BSR and yields 0.6-1.0dB improvements when used for learning NeRF degradations.
**Degradation type.** We also evaluate the detailed contribution of each data degradation. We use simulated data to train our models by gradually including four types of degradation, as illustrated in Table 4. From quantitative comparisons on LLFF [37], we observe that all employed degradations are beneficial to our system.
Figure 8: Qualitative evaluation of the improvement over three SOTA NeRFs on LLFF, Noisy LLFF Synthetic, and Tanks and Temples.
Figure 9: Quantitative comparison between our NDS and BSR [73] over six LLFF scenes. We draw the normalized differences between the simulated images of the two degradation methods and the real NeRF-rendered images. The smaller the values, the better the results (best viewed in color).
#### 5.5.2 Inter-viewpoint Mixer
**View selection strategy.** We develop a view selection strategy to make full use of high-quality reference views. As shown in Fig. 10, our system can identify the most relevant views for quality enhancement when compared to random selection. Also, the quantitative results in Table 5 suggest that our view selection achieves significantly improved results, illustrating the usefulness of our method.
**Hybrid recurrent multi-view aggregation.** To handle large viewpoint differences between reference and rendered views, we develop a hybrid recurrent inter-viewpoint aggregation network. We train models using either pixel-wise or patch-wise aggregation and test different iterations to assess the proposed IVM. Models using a single aggregation approach, as illustrated in Table 6, perform worse than our full configuration. Additionally, by gradually increasing the iteration number from 1 to 3, we achieve improvements of 0.12dB and 0.06dB, albeit at an additional cost of 66 ms and 46 ms for aggregation. Last, compared with the existing state-of-the-art models in Table (a)a, thanks to the recurrent hybrid aggregation strategy, our IVM outperforms all of these models in terms of quantitative results, demonstrating the strength of our aggregation design.
**Limitation.** Though NeRFLiX achieves promising progress through its universal improvements over existing NeRF models, there are still some future directions that deserve further exploration: (1) our NDS is only one of many possible solutions for NeRF degradation simulation; (2) exploring real-time inter-viewpoint mixers is interesting and useful.
## 6 Conclusion
We presented NeRFLiX, a general NeRF-agnostic restoration paradigm for high-quality neural view synthesis. We systematically analyzed the NeRF rendering pipeline and introduced the concept of NeRF-style degradations. To eliminate NeRF-style artifacts, we presented a novel NeRF-style degradation simulator and constructed a large-scale simulated dataset. Benefiting from our simulated dataset, we demonstrated how SOTA deep neural networks could be trained for NeRF artifact removal. To further restore missing details of NeRF-rendered frames, we proposed an inter-viewpoint mixer that is capable of aggregating multi-view frames captured from free viewpoints. Additionally, we developed a view selection scheme for choosing the most pertinent reference frames, largely alleviating the computing burden while achieving superior results. Extensive experiments have verified the effectiveness of our NeRFLiX. Code will be made publicly available.
|
2304.04171 | Learning to Tokenize for Generative Retrieval | Weiwei Sun, Lingyong Yan, Zheng Chen, Shuaiqiang Wang, Haichao Zhu, Pengjie Ren, Zhumin Chen, Dawei Yin, Maarten de Rijke, Zhaochun Ren | 2023-04-09T06:18:19Z | http://arxiv.org/abs/2304.04171v1 | # Learning to Tokenize for Generative Retrieval
###### Abstract.
Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docid) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it is an open question how one should define the document identifiers. Current approaches to the task of defining document identifiers rely on fixed rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document.
We propose GenRet, a document tokenization learning method to address the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. Three components are included in GenRet: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document based on a docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a designated query. By using an auto-encoding framework, GenRet learns semantic docids in a fully end-to-end manner, where the produced docids can be reconstructed back to the original documents to ensure their semantics. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training.
We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes the new state-of-the-art on the NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet can achieve significant improvements on the unseen documents (e.g., at least +14% relative improvements in terms of R@1). Furthermore, GenRet can better represent and retrieve documents that have not been seen during the training phase compared to previous rule-based tokenization methods. GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability.1
Footnote 1: Preprint. Work in progress.
## 1. Introduction
Document retrieval plays an essential role in web search applications and various downstream knowledge-intensive tasks, such as question-answering and dialogue systems, as it aims to identify relevant documents that satisfy users' queries. Most traditional document retrieval approaches apply _sparse retrieval_ methods, which rely on building an inverted index with term matching metrics such as TF-IDF (Wang et al., 2017), query likelihood (Wang et al., 2018), or BM25 (Wang et al., 2018). The term matching metrics, however, often suffer from lexical mismatch (Wang et al., 2019).
Major progress has recently been made in _dense retrieval_ (DR) models due to advances in pre-trained language models (LMs) (Les and Ghahramani, 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). As illustrated in Figure 1 (a), DR methods learn dense representations of both queries and documents using dual encoders, and subsequently retrieve relevant documents using maximal inner product search (MIPS) (Wang et al., 2018; Wang et al., 2018). DR methods are able to address the lexical mismatch issue with state-of-the-art performance on various retrieval tasks (Wang et al., 2018; Wang et al., 2018).
Despite their success, DR approaches face two main limitations (Wang et al., 2018; Wang et al., 2018): (i) DR models employ an index-retrieval pipeline with a fixed search procedure (MIPS), making it difficult to jointly optimize all modules in an end-to-end way; and (ii) The learning strategies (e.g., contrastive learning (Wang et al., 2018)) are usually not consistent with the pre-training objectives, such as the next token prediction (Chen et al., 2020), which makes it hard to leverage knowledge in pre-trained LMs (Chen et al., 2020).
**Generative retrieval.** Recently, _generative retrieval_ has emerged as a new paradigm for document retrieval (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020). As illustrated in Figure 1 (b), generative retrieval models directly generate a ranked list of document identifiers (docids) for a given query using generative LMs. Specifically, there are two main steps involved in generative retrieval models: (i) _Document tokenization_, where each document in the corpus is tokenized as a sequence of discrete characters, i.e., docids, and (ii) _Generation as retrieval_, where the docids of relevant documents are output by autoregressively decoding for a given query. Unlike DR, the generative paradigm presents an end-to-end solution for document retrieval tasks (Wang et al., 2018). It also offers a promising approach to better exploit the capabilities of recent large LMs (Chen et al., 2020; Chen et al., 2020).
**Learning to tokenize documents.** Document tokenization plays a crucial role in generative retrieval, as it defines how the document is distributed in the semantic space (Wang et al., 2018). And it is still an open problem how to define the document identifiers. Most previous
Figure 1. Two types of document retrieval models: (a) _Dense retrieval_ encodes queries and documents to dense vectors and retrieves documents by MIPS; (b) _Generative retrieval_ tokenizes documents as docids and autoregressively generates docids as retrieval results.
generative methods tend to employ rule-based document tokenizers, such as generating titles or URLs (Dai et al., 2018; Li et al., 2019), or clustering results from off-the-shelf document embeddings (Zhu et al., 2019; Wang et al., 2019). However, such rule-based methods are usually ad-hoc and do not generalize well. In particular, the tokenization results potentially perform well on retrieving documents that have been seen during training, but generalize poorly to new or out-of-distribution documents (Zhu et al., 2019; Wang et al., 2019).
**The proposed method.** To address the above problem, this paper proposes GenRet, a document tokenization learning framework that learns to tokenize a document into semantic docids in a discrete auto-encoding scheme. Specifically, GenRet consists of a shared sequence-to-sequence-based document tokenization model, a retrieval model, and a document reconstruction model. In the proposed auto-encoding learning scheme, the tokenization model learns to convert documents to discrete docids, which are subsequently utilized by the reconstruction model to reconstruct the original document. The generative retrieval model is trained to generate docids in an autoregressive manner for a given query. The above three models are optimized in an end-to-end fashion to achieve seamless integration.
We further identify two challenges when using auto-encoding to optimize a generative retrieval model: (i) docids with an autoregressive nature, and (ii) docids with diversity. To address the first challenge and also to stabilize the training of GenRet, we devise a progressive training scheme. This training scheme allows for a stable training of the model by fixing optimized prefix docids \(z_{<t}\). To optimize the docids at each step, three proposed losses are utilized: (i) a reconstruction loss for predicting the document using the generated docid, (ii) a commitment loss for committing the docid and to avoid forgetting, and (iii) a retrieval loss for optimizing the retrieval performance end-to-end. To address the second challenge, we propose a parameter initialization strategy and a re-assignment of the docid based on a _diverse clustering_ technique to increase the diversity of the generated docids.
**Experiments.** We conduct experiments on three well-known document retrieval benchmark datasets: (i) NQ320K, with a subset of Wikipedia (Zhu et al., 2019; Li et al., 2019); (ii) MS MARCO, with web pages relevant to a set of search queries (Dai et al., 2018; Li et al., 2019); and (iii) BEIR, with heterogeneous retrieval tasks for out-of-distribution evaluation (Zhu et al., 2019). Our experimental results demonstrate that GenRet attains superior retrieval performance against state-of-the-art dense or generative retrieval models. Experiments on NQ320K show that GenRet establishes the new state-of-the-art on this dataset, achieving +14% relative improvements on the unseen test set compared to the best baseline method. Experiments on MS MARCO and six BEIR datasets also show that GenRet significantly outperforms existing generative methods and achieves competitive results compared to the best dense retrieval model. Experiments on retrieving new documents, analytical experiments, and efficiency analysis confirm the effectiveness of the proposed model.
**Contributions.** In this paper we make the following contributions: (i) We propose GenRet, a generative retrieval model that represents documents as semantic discrete docids. To the best of our knowledge, this is the first tokenization learning method for document retrieval. (ii) We propose an auto-encoding approach, where the docids generated by our tokenization model are reconstruct by a reconstruction model to ensure the docids capture the semantic information of the document. (iii) We devise a progressive training scheme to model the autoregressive nature of docids and stabilize the training process. (iv) Experimental results demonstrate that GenRet achieves significant improvements, especially on unseen documents, compared to generative retrieval baselines.
## 2. Preliminaries
The document retrieval task can be formalized as the process of retrieving a relevant document \(d\) for a search query \(q\) from a collection of documents \(\mathcal{D}\). Each document, \(d\in\mathcal{D}\), is a plain text document consisting of a sequence of tokens, denoted as \(d=\{d_{1},\ldots,d_{|d|}\}\), where \(|d|\) represents the total number of tokens in the document.
Unlike dense retrieval methods, which return the most relevant documents based on the relevance score of each document with respect to a given query \(q\), _generative retrieval_ models aim to directly generate documents for a given query \(q\) using a generative model.
**Document tokenization.** For _generative retrieval_ models, it is usually challenging and computationally inefficient to directly generate original documents of typically long length. Therefore, most existing approaches rely on the technique named _document tokenization_, which represents a document \(d=\{d_{1},\ldots,d_{|d|}\}\) as a shorter sequence of discrete tokens (docid) \(z=\{z_{1},\ldots,z_{t},\ldots,z_{M}\}\), where each token \(z_{t}\) is a \(K\)-way categorical variable, with \(z_{t}\in\{1,2,\ldots,K\}\).
As an alternative sequence of the original document, the tokenized \(z\) should satisfy the following two properties: (i) different documents have short but different docids; (ii) docids capture the semantics of their associated documents as much as possible (Zhu et al., 2019). Because \(z\) is a sequence of a fixed length and usually shorter than the original document \(d\), the model's training and inference can be simplified and more efficient.
As mentioned above, this paper employs a tokenization model \(Q\colon d\to z\) to map \(d\) to docid \(z\). More details about \(Q\) are provided in Section 3.1.
**Generation as retrieval.** After tokenizing each document to a docid \(z\), a generative retrieval model \(P\colon q\to z\) learns to retrieve relevant documents by autoregressively mapping a query \(q\) to a docid \(z\):
\[P(z\mid q)=\prod_{t=1}^{M}P(z_{t}\mid z_{<t},q), \tag{1}\]
where \(z_{<t}\) denotes the prefix of \(z\) up to time step \(t\). The model employs a _constrained decoding_ technique to ensure that the generated docid \(z\) exists in the corpus \(\mathcal{D}\)(Li et al., 2019). This is achieved by constructing a prefix tree based on the valid docids in \(\mathcal{D}\) and truncating the generation probability of invalid docids to 0.0 during the decoding process. The model retrieves multiple documents using beam search.
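The constrained decoding described above can be sketched with a prefix tree (trie) over the valid docids: at each decoding step, only tokens that extend some valid docid keep nonzero probability. Representing docids as tuples of integers is an illustrative choice of this sketch:

```python
def build_prefix_tree(docids):
    """Trie over valid docid sequences; each node maps token -> child node."""
    root = {}
    for z in docids:
        node = root
        for tok in z:
            node = node.setdefault(tok, {})
    return root

def allowed_tokens(root, prefix):
    """Tokens that keep the generated prefix on a path to a valid docid;
    during beam search, the probability of every other token is set to 0."""
    node = root
    for tok in prefix:
        node = node.get(tok)
        if node is None:
            return set()   # prefix already invalid
    return set(node.keys())
```

During beam search, the model masks its next-token distribution with `allowed_tokens(root, prefix)` at each step, so every completed hypothesis is guaranteed to be a docid that exists in \(\mathcal{D}\).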
## 3. Genret
Conventionally, document tokenization is done by a fixed preprocessing step, such as using the title of a document or the results of hierarchical clustering obtained from BERT (Li et al., 2019; Li et al., 2019). However, it has been observed that such ad-hoc document tokenization methods often fail to capture the complete semantics of a document. For
example, the title of a web page often does not exist or has low relevance to the content of the web page, and clustering-based docids arbitrarily define the document's position in the discrete space.
In this paper, we propose GenRet, a novel tokenization learning method based on discrete auto-encoding, to learn semantic docids in a fully end-to-end manner. Figure 2 gives an overview of the proposed method. GenRet comprises three main components: (i) a sequence-to-sequence based retrieval model (\(P(z\mid q)\)), (ii) a document tokenization model (\(Q(z\mid d)\)), and (iii) a reconstruction model (\(R(d\mid z)\)). The document tokenization model tokenizes a document \(d\) into unique discrete variables \(z\), and the retrieval model is trained to generate the latent variables \(z\) for a given query \(q\). In addition, the reconstruction model is used to re-generate the original document from \(z\) to ensure that \(z\) captures the semantics of the original document as much as possible.
We detail the model architecture of the document tokenization and document retrieval model in Section 3.1, the devised reconstruction model in Section 3.2, and the model optimization method in Section 3.3.
### Document tokenization and retrieval model
Since document tokenization and generative retrieval both aim to map input text to a sequence of discrete tokens, we use a shared T5 Transformer architecture for the document tokenization and generative retrieval models. Specifically, given an input text \(d\), the T5-based tokenization model encodes \(d\) and a prefix of the docid \(z_{<t}\), and produces a continuous latent representation \(\mathbf{d}_{t}\) of \(d\) at time step \(t\):
\[\mathbf{d}_{t}=\text{Decoder}(\text{Encoder}(d),z_{<t})\ \in\mathbb{R}^{D}, \tag{2}\]
where \(D\) denotes the hidden size of the model, \(\text{Encoder}(d)\) denotes the output of the Encoder.
Then, the tokenization model generates a token for each document based on \(\mathbf{d}_{t}\). At each timestep \(t\), we define an external embedding matrix named _codebook_\(\mathbf{E}_{t}\in\mathbb{R}^{K\times D}\), where \(K\) is the size of the discrete latent space. There are \(K\) embedding vectors \(\mathbf{e}_{t,j}\in\mathbb{R}^{D},j\in[K]\), and each vector \(\mathbf{e}_{t,j}\) can be regarded as the centroid of a segmentation.
Based on the _codebook_\(\mathbf{E}_{t}\), the discrete latent variable \(z_{t}\) at timestep \(t\) is calculated by a dot-product look-up using the codebook \(\mathbf{E}_{t}\):
\[Q(z_{t}=j\mid z_{<t},d)=\text{Softmax}_{j}(\mathbf{d}_{t}\cdot\mathbf{E}_{t} ^{\top}), \tag{3}\]
where \(Q(z_{t}=j\mid z_{<t},d)\) denotes the probability of tokenizing \(d\) to a particular value \(j\in[K]\) at timestep \(t\), \(\text{Softmax}_{j}\) is a softmax function to output the probability of axis \(j\).
Finally, the tokenization model selects the docid that achieves the maximum probability to define the docid \(z_{t}\):
\[z_{t}=\operatorname*{arg\,max}_{j}Q(z_{t}=j\mid z_{<t},d). \tag{4}\]
That is, the model selects the id \(j\) whose embedding vector \(\mathbf{e}_{t,j}\) has the maximum inner product with \(\mathbf{d}_{t}\) as the docid token \(z_{t}\) at timestep \(t\).
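Eqs. (3)-(4) reduce to a dot-product lookup of the decoder state against the step-\(t\) codebook, a softmax over the \(K\) entries, and an argmax to pick the token. A minimal NumPy sketch with toy codebook values:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def tokenize_step(d_t, codebook_t):
    """Eqs. 3-4: Q(z_t = j | z_<t, d) = Softmax_j(d_t . E_t^T) over the K
    codebook entries at step t; the argmax id is used as the docid token z_t."""
    logits = d_t @ codebook_t.T          # shape (K,)
    probs = softmax(logits)
    z_t = int(np.argmax(probs))
    return z_t, probs
```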
The generative retrieval model \(P(z\mid q)\) shares the same architecture as \(Q(z\mid d)\), while generating \(z\) using the input query \(q\), as formulated in Eq. 1.
### Document reconstruction model
The docid generated by the tokenization model \(Q\) is required to capture the semantic information of the document. To this end, we propose an auto-encoding training scheme, in which a reconstruction model \(R\colon z\to d\) that predicts \(d\) from \(z\) forces the tokenization model \(Q\colon d\to z\) to produce a docid \(z\) that can be reconstructed back to the original document.
The input of the reconstruction model is docid \(z\), and the output is its associated document \(d\). We first embed \(z\) into representation matrix \(\mathbf{z}=\{\mathbf{z}_{1},\dots,\mathbf{z}_{M}\}\in\mathbb{R}^{M\times D}\) using the codebook of the tokenization model:
\[\mathbf{z}=\{\mathbf{e}_{1,z_{1}},\mathbf{e}_{2,z_{2}},\dots,\mathbf{e}_{M,z_{M}}\}\in\mathbb{R}^{M\times D}, \tag{5}\]
where, for each \(t\in[M]\), \(\mathbf{z}_{t}=\mathbf{e}_{t,z_{t}}\in\mathbb{R}^{D}\) is the embedding vector of \(z_{t}\) in the \(t\)-th step codebook \(\mathbf{E}_{t}\).
We then devise a retrieval-based reconstruction model that predicts the target document \(d\) by retrieving it from document collection \(\mathcal{D}\), based on the inputs \(\mathbf{z}\). The relevance score between the input docid \(z\) and the target document \(d\) is defined as follows:
\[R(d\mid\mathbf{z})=\prod_{t=1}^{M}\frac{\exp(\mathbf{z}_{t}\cdot\text{sg}(\mathbf{d}_{t})^{\top})}{\sum_{d^{*}\in S(z_{<t})}\exp(\mathbf{z}_{t}\cdot\text{sg}(\mathbf{d}^{*}_{t})^{\top})}, \tag{6}\]
where \(S(z_{<t})\) is the sub-collection of \(\mathcal{D}\) consisting of documents whose docid prefix equals \(z_{<t}\), and \(d^{*}\in S(z_{<t})\) denotes a document from this sub-collection. \(\mathbf{d}_{t}\) and \(\mathbf{d}^{*}_{t}\) are the continuous representations of documents \(d\) and \(d^{*}\), respectively, as defined in Eq. 2.
Figure 2. An overview of the proposed method. The proposed method utilizes a document tokenization model to convert a given document into a sequence of discrete tokens, referred to as a docid. This tokenization process allows for the reconstruction of the original document through a reconstruction model. Subsequently, an autoregressive generation model is employed to retrieve documents through the generation of their respective docids.
The operator \(\text{sg}(\cdot)\) is the stop-gradient operator defined as follows:
\[\text{sg}(x)=\begin{cases}x,&\text{forward pass}\\ 0,&\text{backward pass}.\end{cases} \tag{7}\]
Intuitively, \(R(d\mid\mathbf{z})\) is designed to retrieve a specific document \(d\) from the set of documents \(S(z_{<t})\) at each timestep \(t\). The set \(S(z_{<t})\) only includes documents that are assigned the same docid prefix \(z_{<t}\) as the target document \(d\). With this loss function, at each step \(t\), the model is encouraged to learn the residual semantics of the documents not captured by the previous tokens \(z_{<t}\).
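A single factor of the product in Eq. 6 can be sketched as follows (illustrative NumPy code; the document representations and prefix sets here are toy stand-ins for the quantities defined in Eq. 2):

```python
import numpy as np

def reconstruction_step(z_emb, doc_reps, doc_prefixes, prefix):
    """One factor of Eq. 6: a softmax over only the documents whose
    docid prefix matches z_<t, i.e. the candidate set S(z_<t)."""
    idx = [i for i, p in enumerate(doc_prefixes) if p == prefix]
    scores = np.array([z_emb @ doc_reps[i] for i in idx])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return dict(zip(idx, probs))  # P(d | z) restricted to S(z_<t)

doc_reps = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
doc_prefixes = [(3,), (3,), (7,)]  # docs 0 and 1 share prefix z_<t = (3,)
probs = reconstruction_step(np.array([1.0, 0.0]), doc_reps, doc_prefixes, (3,))
# Only documents 0 and 1 compete; document 2 is excluded from the softmax.
```

This restriction to \(S(z_{<t})\) is what makes each step focus on the residual semantics not already resolved by the prefix.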
### Model optimization
For the document tokenization model \(Q(z\mid d)\), generative retrieval model \(P(z\mid q)\) and reconstruction model \(R(d\mid z)\), jointly optimizing these three models using auto-encoding is challenging for the following two reasons:
* **Learning docids in an autoregressive fashion**. That is: (i) The prediction of \(z_{t}\) at timestep \(t\) relies on the previously predicted tokens \(z_{<t}\), which are often under-optimized at the beginning of training and change rapidly during training, making convergence difficult. (ii) Simultaneously optimizing all tokens of \(z\) makes it challenging to guarantee a unique docid assignment. To stabilize the training of GenRet, we devise a _progressive training scheme_ (see Section 3.3.1).
* **Generating docids with diversity**. Optimizing the model with auto-encoding alone often leads to an unbalanced docid assignment: a few major docids are assigned to a large number of documents while most other docids are rarely assigned. Such a sub-optimal distribution of docids hurts the model's ability to distinguish documents, which in turn forces longer docids in order to disambiguate conflicting documents. We introduce two _diverse clustering_ techniques to ensure docid diversity (see Section 3.3.2).
#### 3.3.1. Progressive training scheme
To optimize each of the three models listed above in an autoregressive manner, we propose a progressive auto-encoding learning scheme, as illustrated in Figure 3. The scheme contains \(M\) learning steps, one for each of the \(M\) tokens of the final docid, and the docid token \(z_{T}\) at step \(T\in[M]\) is learned at the corresponding learning step. At each step \(T\in[M]\), the token \(z_{T}\) and the model parameters associated with generating \(z_{T}\) are updated, while the previously produced tokens \(z_{<T}\) and the other parameters are kept fixed. By progressively repeating this process, we optimize all three models.
At each optimization step, say the \(T\)-step, we devise the learning objective for document tokenization consisting of three loss functions detailed below.
**Reconstruction loss.** We utilize the reconstruction model \(R(d\mid z)\) as an auxiliary model to optimize docid generation, whose main goal is to capture as much document semantics in the docid as possible. Therefore, we define the reconstruction loss at step \(T\) as follows:
\[\begin{split}\mathcal{L}_{\text{Rec}}&=-\log R(d\mid\hat{\boldsymbol{z}}_{\leq T})\\ \hat{\boldsymbol{z}}_{\leq T}&=\{\text{sg}(\boldsymbol{z}_{1}),\ldots,\text{sg}(\boldsymbol{z}_{T-1}),\boldsymbol{z}_{T}\}\;\in\mathbb{R}^{T\times D}\\ \forall t\in[T]:\;\boldsymbol{z}_{t}&=\mathbf{e}_{t,j^{*}}\in\mathbb{R}^{D},\quad j^{*}=\operatorname*{arg\,max}_{j}Q(z_{t}=j\mid z_{<t},d),\end{split} \tag{8}\]
where \(\hat{\boldsymbol{z}}_{\leq T}\) contains the first \(T\) representations of \(z\), and only the variable \(\boldsymbol{z}_{T}\) is optimized at step \(T\). \(Q(z_{t}=j\mid z_{<t},d)\) is defined in Eq. 3, so the document tokenization model \(Q\) is optimized by minimizing \(\mathcal{L}_{\text{Rec}}\).
Of note, since the computation involves a non-differentiable operation, \(\operatorname*{arg\,max}(\cdot)\), we apply straight-through gradient estimation to back-propagate the gradient of the reconstruction loss (Srivastava et al., 2017; Goodfellow et al., 2016). Specifically, the gradient to the document representation \(\boldsymbol{d}_{T}\) is defined as \(\frac{\partial\mathcal{L}_{\text{Rec}}}{\partial\boldsymbol{d}_{T}}\coloneqq\frac{\partial\mathcal{L}_{\text{Rec}}}{\partial\boldsymbol{z}_{T}}\), and the gradient to the _codebook_ embedding \(\mathbf{e}_{T,j}\) is defined as \(\frac{\partial\mathcal{L}_{\text{Rec}}}{\partial\mathbf{e}_{T,j}}\coloneqq 1_{z_{T}=j}\,\frac{\partial\mathcal{L}_{\text{Rec}}}{\partial\boldsymbol{z}_{T}}\).
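The straight-through estimator can be illustrated with a manual forward/backward pass (a sketch in plain NumPy; in a framework with autograd, the same effect is commonly obtained by writing \(z = d + \text{sg}(e - d)\) with a detach/stop-gradient operator):

```python
import numpy as np

def quantize_forward(d_T, codebook):
    """Forward pass: replace d_T by the codebook row with the largest
    inner product with it (the non-differentiable arg-max look-up)."""
    j = int(np.argmax(codebook @ d_T))
    return codebook[j], j

def quantize_backward(grad_z):
    """Straight-through backward pass: the gradient arriving at the
    quantized vector z_T is copied unchanged to the continuous state
    d_T, i.e. dL/dd_T := dL/dz_T; the arg-max itself is skipped."""
    return grad_z.copy()

codebook = np.array([[1.0, 0.0], [0.0, 1.0]])
z_T, j = quantize_forward(np.array([0.9, 0.1]), codebook)
grad_d = quantize_backward(np.array([0.5, -0.3]))
# j == 0 and grad_d equals the incoming gradient [0.5, -0.3]
```
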
**Commitment loss.** In addition, to make sure the predicted docid commits to an embedding and to prevent the model from forgetting the previous docids \(z_{<t}\), we add a commitment loss as follows:
\[\mathcal{L}_{\text{Com}}=-\sum_{t=1}^{T}\log Q(\boldsymbol{z}_{t}\mid z_{<t},d). \tag{9}\]
**Retrieval loss.** For the generative retrieval model \(P\), we learn it jointly with the document tokenization model \(Q\), where \(P\) learns to generate the docids of relevant documents given a query \(q\). Specifically, suppose \((q,d)\) is a pair of a query and a relevant document; we define the learning objective of the retrieval model \(P\) as:
\[\mathcal{L}_{\text{Ret}}=-\log\frac{\exp(\mathbf{q}_{T}\cdot\mathbf{d}_{T})}{\sum_{d^{-}\in B}\exp(\mathbf{q}_{T}\cdot\mathbf{d}^{-}_{T})}-\sum_{t=1}^{T}\log P(z_{t}\mid z_{<t},q), \tag{10}\]
where the first term is a ranking-oriented loss enhancing the model using the \((q,d)\) pair; \(d^{-}\) is an in-batch negative document from the same training mini-batch \(B\); \(\mathbf{q}_{T}\) and \(\mathbf{d}_{T}\) denote the representations of \(q\) and \(d\) at timestep \(T\). The second term is the cross-entropy loss for generating the docid \(z\) from the query \(q\).
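The first (ranking-oriented) term of Eq. 10 with in-batch negatives can be sketched as follows (NumPy illustration; in practice the representations come from the T5 encoder-decoder):

```python
import numpy as np

def in_batch_ranking_loss(q_reps, d_reps):
    """Softmax cross-entropy where, for each query in the batch, its
    paired document is the positive and the other B-1 documents act
    as in-batch negatives d^- (first term of Eq. 10)."""
    scores = q_reps @ d_reps.T                    # (B, B) similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_prob = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

aligned = 10.0 * np.eye(3)  # each query matches its own document strongly
loss_aligned = in_batch_ranking_loss(aligned, aligned)
# The loss is near zero when every query is far closer to its own document.
```
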
The final loss we use at step-\(T\) is the sum of reconstruction loss, commitment loss, and retrieval loss:
\[\mathcal{L}=\mathcal{L}_{\text{Rec}}+\mathcal{L}_{\text{Com}}+\mathcal{L}_{ \text{Ret}}. \tag{11}\]
#### 3.3.2. Diverse clustering technique
To ensure the diversity of generated docids, we adopt two diverse clustering techniques at each progressive training step: codebook initialization, which aims to balance the segmentation of the semantic space, and docid re-assignment, which aims to balance the docid assignments.
**Codebook initialization.** To initialize the codebook, we first warm up the model by passing the continuous representation \(\mathbf{d}_{T}\) to the reconstruction model instead of the
Figure 3. Progressive training scheme. \(z_{t}\) (the docid token at timestep \(t\)) is optimized at the \(t\)-th training step, while \(z_{<t}\) (the docid tokens before timestep \(t\)) are kept fixed.
docid representation \(\mathbf{z}_{T}\) as defined in Eq. 5. During this warm-up phase, we optimize the model using the reconstruction loss \(\mathcal{L}_{\text{Rec}}\) and the commitment loss \(\mathcal{L}_{\text{Com}}\). Next, we collect the continuous representations \(\mathbf{d}_{T}\) of all documents in \(\mathcal{D}\) and cluster them into \(K\) groups. The centroids of these clusters are then used as the initialized codebook \(\mathbf{E}_{T}\). To balance the initialized docid distribution, we utilize a constrained clustering algorithm, _Constrained K-Means_, which modifies the cluster assignment step (the E-step in EM) by formulating it as a minimum cost flow (MCF) linear network optimization problem (Bengio et al., 2017).
**Docid re-assignment.** In order to assign docids to a batch of documents, we modify the dot-product look-up results in Eq. 3 so that the docids for different documents in the batch are distinct (Bengio et al., 2017; Wang et al., 2018). Specifically, let \(\mathbf{D}_{t}=\{\mathbf{d}_{t}^{(1)},\dots,\mathbf{d}_{t}^{(B)}\}\in\mathbb{R}^{B\times D}\) denote the continuous representations of a batch of documents with batch size \(B\). The dot-product results are represented by \(\mathbf{H}=\mathbf{D}_{t}\cdot\mathbf{E}_{t}^{\top}\in\mathbb{R}^{B\times K}\). To obtain distinct docids, we calculate an alternative \(\mathbf{H}^{*}=\text{Diag}(\mathbf{u})\exp(\frac{\mathbf{H}}{\epsilon})\,\text{Diag}(\mathbf{v})\), where \(\epsilon\) is a temperature hyper-parameter, and \(\mathbf{u}\) and \(\mathbf{v}\) are re-normalization vectors in \(\mathbb{R}^{B}\) and \(\mathbb{R}^{K}\), respectively, computed via the iterative Sinkhorn-Knopp algorithm (Bengio et al., 2017). Finally, \(\mathbf{H}^{*}\) is used instead of \(\mathbf{H}\) in the Softmax (Eq. 3) and \(\arg\max\) (Eq. 4) operations to obtain the docid \(z_{t}\).
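The re-assignment step can be sketched with a plain Sinkhorn-Knopp iteration (an illustrative NumPy implementation; the paper runs 100 iterations with \(\epsilon=1.0\), and the toy batch/codebook sizes below are arbitrary):

```python
import numpy as np

def sinkhorn_reassign(H, eps=1.0, n_iter=100):
    """Compute H* = Diag(u) exp(H / eps) Diag(v) by alternately
    normalizing columns (docids) and rows (documents), so that the
    per-row arg-max spreads the batch across distinct docids."""
    P = np.exp(H / eps)
    for _ in range(n_iter):
        P /= P.sum(axis=0, keepdims=True)  # give each docid equal mass
        P /= P.sum(axis=1, keepdims=True)  # make each document sum to one
    return P

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 4))  # B = 8 documents scored against K = 4 docids
P = sinkhorn_reassign(H)
# Rows sum to 1; column sums converge toward B / K = 2, i.e. a balanced
# assignment of documents to docids.
```

Taking the row-wise \(\arg\max\) of the balanced matrix in place of the raw scores is what discourages many documents from collapsing onto the same docid.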
## 4. Experimental Setup
### Datasets
We conduct experiments on three well-known document retrieval datasets: NQ (Krizhevsky et al., 2015), MS MARCO (Bengio et al., 2017), and BEIR (Wang et al., 2018).
**NQ320K.** NQ320K is a popular dataset for evaluating generative retrieval models (Wang et al., 2018; Wang et al., 2018). It is based on the Natural Questions (NQ) dataset proposed by Google (Krizhevsky et al., 2015). NQ320K consists of 320k query-document pairs, where the documents are gathered from Wikipedia pages and the queries are natural language questions. We follow the evaluation setup in NCI (Wang et al., 2018) and further split the test set into two subsets: _seen test_, in which the annotated target documents of the queries are included in the training set; and _unseen test_, in which no labeled document is included in the training set.
**MS MARCO.** MS MARCO is a collection of queries and web pages from Bing search. Akin to NQ320k and following (Krizhevsky et al., 2015), we sample a subset of documents from the labeled documents, and use their corresponding queries for training. We evaluate the models on the queries of the MS MARCO dev set and retrieval on the sampled document subset.
**BEIR.** BEIR is a collection of datasets for heterogeneous retrieval tasks. In this paper, we evaluate the models on 6 BEIR datasets, which include distinct retrieval tasks and document collections from NQ and MS MARCO: (i) BEIR-Arg retrieves a counterargument to an argument; (ii) BEIR-Covid retrieves scientific articles about the COVID-19 pandemic; (iii) BEIR-NFC retrieves medical documents from PubMed; (iv) BEIR-SciFact retrieves scientific papers for fact-checking; (v) BEIR-SciDocs retrieves citations for scientific papers; (vi) BEIR-FiQA retrieves financial documents.
We summarize the statistics of above datasets in Table 1.
### Evaluation metrics
On NQ320K, we use Recall@{1,10,100} and Mean Reciprocal Rank (MRR)@100 as evaluation metrics, following (Wang et al., 2018). On MS MARCO, we use Recall@{1, 10, 100} and MRR@10 as evaluation metrics, following (Krizhevsky et al., 2015). On BEIR, we use nDCG@10 as the main metrics and calculate the average nDCG@10 values across multiple downstream sub-datasets as overall metrics.
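For reference, Recall@k and MRR@k for a single query can be computed as follows (a simple sketch assuming binary relevance with one labeled document per query, as in NQ320K):

```python
def recall_at_k(ranked_docids, relevant_docid, k):
    """1 if the labeled document appears in the top-k results, else 0."""
    return 1.0 if relevant_docid in ranked_docids[:k] else 0.0

def mrr_at_k(ranked_docids, relevant_docid, k):
    """Reciprocal rank of the labeled document within the top-k."""
    for rank, docid in enumerate(ranked_docids[:k], start=1):
        if docid == relevant_docid:
            return 1.0 / rank
    return 0.0

ranking = ["d3", "d7", "d1", "d9"]  # hypothetical docids returned by beam search
# recall_at_k(ranking, "d7", 10) == 1.0 and mrr_at_k(ranking, "d7", 10) == 0.5
```

The reported numbers are these per-query values averaged over the test queries.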
### Baselines
We consider three types of baselines: sparse retrieval methods, dense retrieval methods, and generative retrieval methods.
The sparse retrieval baselines are as follows: \(\bullet\)**BM25**, uses the tf-idf feature to measure term weights; we use the implementation from [http://pyserini.io/](http://pyserini.io/). \(\bullet\)**DocT5Query**, expands a document with possible queries predicted by a finetuned T5 model that takes the document as input.
The dense retrieval baselines are as follows: \(\bullet\)**DPR**(Krizhevsky et al., 2015), a dual-encoder model using the representation of the [CLS] token of BERT. \(\bullet\)**ANCE**(Wang et al., 2018), an asynchronously updated ANN indexer is utilized to mine hard negatives for training a RoBERTa-based dual-encoder model. \(\bullet\)**Sentence-T5**(Ngu
of a pre-built FM-indexer. We refer to the results reported by Wang et al. (Wang et al., 2019). \(\bullet\)**CGR-Contra**(Wang et al., 2019), a title generation model with a contextualized vocabulary embedding and a contrastive learning loss. \(\bullet\)**DSI-QG**(Wang et al., 2019), uses a query generation model to augment the document collection. We reproduce the DSI-QG results using T5 and our dataset. \(\bullet\)**NCI**(Wang et al., 2019), uses a prefix-aware weight-adaptive decoder and various query generation strategies, including DocAsQuery and DocT5Query. In particular, NCI augments training data by generating 15 queries for each document. \(\bullet\)**Ultron**(Wang et al., 2019), uses a three-stage training pipeline and represents the document as three types of identifiers, including URL, PQ, and Atomic.
We highlight three of our reproduced baselines that constitute a fair comparison with the proposed method: all of them use the same T5 model and experimental setup, but they differ in their model outputs: (i) Sentence-T5 outputs continuous vectors, (ii) GENRE outputs document titles, (iii) DSI-QG outputs clustering ids, while GenRet outputs docids learned with the proposed tokenization method.
### Implementation details
**Hyper-parameters.** In our experiments, we utilize the T5-Base model (Wang et al., 2019) as the base Transformer and initialize a new codebook embedding \(\mathbf{E}_{t}\) for each time step. We set the number of clusters to be \(K=512\) for all datasets, with the length of the docid \(M\) being dependent on the number of documents present. For datasets containing a larger number of candidate documents, a larger value of \(M\) is set to ensure that all documents are assigned unique document ids. In the docid re-assignment, the hyper-parameter \(\epsilon\) is set to 1.0, and the Sinkhorn-Knopp algorithm is executed for 100 iterations.
**Indexing with query generation.** Following previous work (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), we use query generation models to generate synthetic (query, document) pairs for data augmentation. Specifically, we use the pre-trained query generation model from DocT5Query (Chen et al., 2019) to augment the NQ and MS MARCO datasets. In query generation, we use nucleus sampling with parameters \(p=0.8,t=0.8\) and generate five queries for each document in the collection. For the BEIR datasets, we use the queries generated by GPL (Wang et al., 2019), which can be downloaded from their website.3 GPL uses a DocT5Query (Chen et al., 2019) generator trained on MS MARCO to generate about 250K queries for each BEIR dataset.
Footnote 3: [https://public.ulp.informatik.tu-darmstadt.de/kwang/gpl/generated-data/beir/](https://public.ulp.informatik.tu-darmstadt.de/kwang/gpl/generated-data/beir/)
**Training and inference.** The proposed models and the reproduced baselines are implemented with PyTorch 1.7.1 and HuggingFace transformers 4.22.2. We optimize the model using AdamW and set the learning rate to \(5\times 10^{-4}\). The batch size is 256, and the model is optimized for up to 500k steps for each timestep. In progressive training, we first warm up the model for 5K steps and then initialize the codebook using the clustering centroids as mentioned in Section 3.3.1. We use constrained clustering4 to obtain diverse clustering results. During inference, we use beam search with constrained decoding (Chen et al., 2019) and a beam size of 100.
Footnote 4: [https://github.com/joshll/k-means-constrained](https://github.com/joshll/k-means-constrained)
## 5. Experimental Results
### Main results
**Results on NQ320K.** In Table 2, we list the results on NQ320K. GenRet outperforms both the strong pre-trained dense retrieval model, GTR, and the previous best generative retrieval method, NCI, thereby establishing a new state-of-the-art on the NQ320K dataset. Furthermore, our results reveal that existing generative retrieval methods perform well on the seen test but lag behind dense retrieval methods on the unseen test. For example, NCI obtains an MRR@100 of 76.8 on the seen test, which is higher than the MRR@100 of 65.3 obtained by GTR-Base. However, on unseen test data, NCI performs worse than GTR-Base. In contrast, GenRet performs well on both seen and unseen test data. This result highlights the ability of GenRet to combine the advantages of both dense and generative retrieval by learning discrete docids with semantics through end-to-end optimization.
**Results on MS MARCO.** Table 3 presents the results on the MS MARCO dataset. GenRet significantly outperforms previous generative retrieval methods and achieves comparable results with the state-of-the-art dense retrieval method GTR. Furthermore, previous generative retrieval methods (e.g., GENRE, Ultron) that utilize metadata such as the title and URL, while exhibiting decent performance on the NQ320K dataset, underperform the previous-best dense retrieval (GTR) and sparse retrieval (DocT5Query) methods on the MS MARCO dataset. This is likely because the NQ320K dataset retrieves Wikipedia documents, where metadata like the title effectively captures the semantics of the document. In the case of MS MARCO, a web search dataset, the metadata often does not adequately characterize the documents, resulting in a decline in the performance of generative retrieval models. In contrast, GenRet learns to generate semantic docids that effectively enhance the generative retrieval model.
**Results on BEIR.** Table 4 lists the results of the baselines and GenRet on six datasets of BEIR. These datasets represent a diverse range of information retrieval scenarios. On average, GenRet outperforms strong baselines including BM25 and GTR-Base, and achieves competitive results compared to state-of-the-art sparse and dense retrieval methods. In comparison to the ST5 GPL method, which utilizes the same training data and backbone T5 model, GenRet achieves better results. Additionally, GenRet demonstrates a significant improvement over the previous generative retrieval model GENRE, which utilizes titles as docids. Furthermore, GENRE performs poorly on some datasets, such as BEIR-Covid and BEIR-SciDocs. This may be because the titles of the documents in these datasets do not adequately capture their semantic content.
### Performance on retrieving new documents
In this experiment, we investigate the impact of various document tokenization techniques on the ability of generative retrieval models to retrieve new documents. The generative models with different tokenization methods are trained on NQ320K data, excluding unseen documents, and are evaluated on NQ320K Unseen test set and BEIR-{Arg, NFC, SciDocs} datasets. For the baseline methods, which use rule-based document tokenization methods, the docids are generated for the target document collection using their respective tokenization techniques. In contrast, our proposed method
uses a tokenization model to tokenize the documents in the target collection, producing the docids. However, our method may result in duplicate docids. In such cases, all corresponding documents are retrieved and shuffled in an arbitrary order. The results of this evaluation are summarized in Table 5.
Document tokenization methods that do not consider the semantic information of the documents, such as Naive String and Atomic, are ineffective in retrieving new documents without model updating. Methods that consider the semantic information of the documents, such as those based on titles or BERT clustering, show some improvement. Our proposed document tokenization method significantly improves over these existing rule-based document tokenization methods. For instance, when the model trained on NQ, a factoid QA dataset based on Wikipedia documents, is applied to a distinct retrieval task on a different document collection, BEIR-SciDocs, a citation retrieval task over a collection of scientific articles, our proposed document tokenization model still shows promising results, with an nDCG@10 of 12.3, which is comparable to that of models trained on the target document collection. This suggests
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Full test**} & \multicolumn{3}{c}{**Seen test**} & \multicolumn{3}{c}{**Unseen test**} \\ \cline{2-10} & R@1 & R@10 & R@100 & MRR@100 & R@1 & R@10 & R@100 & MRR@100 & R@1 & R@10 & R@100 & MRR@100 \\ \hline _Sparse retrieval_ & & & & & & & & & \\ BM25 [39] & 29.7 & 60.3 & 82.1 & 40.2 & 29.1 & 59.8 & 82.4 & 39.5 & 32.3 & 61.9 & 81.2 & 42.7 \\ DocT5Query [8] & 38.0 & 69.3 & 86.1 & 48.9 & 35.1 & 68.3 & 86.4 & 46.7 & 48.5 & 72.9 & 85.0 & 57.0 \\ \hline _Dense retrieval_ & & & & & & & & & \\ DPR [19] & 50.2 & 77.7 & 90.9 & 59.9 & 50.2 & 78.7 & 91.6 & 60.2 & 50.0 & 74.2 & 88.7 & 58.8 \\ ANCE [47] & 50.2 & 78.5 & 91.4 & 60.2 & 49.7 & 79.2 & 92.3 & 60.1 & 52.0 & 75.9 & 88.0 & 60.5 \\ Sentence-T5\({}^{\dagger}\)[30] & 53.6 & 83.0 & 93.8 & 64.1 & 53.4 & 83.9 & 94.7 & 63.8 & 56.5 & 79.5 & 90.7 & 64.9 \\ GTR-Base\({}^{\blacktriangle}\)[31] & 56.0 & 84.4 & 93.7 & 66.2 & 54.4 & 84.7 & 94.2 & 65.3 & 61.9 & 83.2 & 92.1 & 69.6 \\ \hline _Generative retrieval_ & & & & & & & & & & \\ GENBE\({}^{\dagger}\)[5] & 55.2 & 67.3 & 75.4 & 59.9 & 69.5 & 83.7 & 90.4 & 75.0 & 6.0 & 10.4 & 23.4 & 7.8 \\ DSI\({}^{\dagger}\)[42] & 55.2 & 67.4 & 78.0 & 59.6 & 69.7 & 83.6 & 90.5 & 74.7 & 1.3 & 7.2 & 31.5 & 3.5 \\ SEAL [1] & 59.9 & 81.2 & 90.9 & 67.7 & - & - & - & - & - & - & - \\ CGR-Contra [23] & 63.4 & 81.1 & - & - & - & - & - & - & - & - & - \\ DSI-QG\({}^{\dagger}\)[53] & 63.1 & 80.7 & 88.0 & 69.5 & 68.0 & 85.0 & 91.4 & 74.3 & 45.9 & 65.8 & 76.3 & 52.8 \\ NCI [46] & 66.4 & 85.7 & 92.4 & 73.6 & 69.8 & 88.5 & 94.6 & 76.8 & 54.5 & 75.9 & 84.8 & 62.4 \\
**Ours** & **68.1\({}^{\sharp}\)** & **88.8\({}^{\sharp\ddagger}\)** & **95.2\({}^{*}\)** & **75.9\({}^{\sharp\ddagger}\)** & **70.2\({}^{\sharp}\)** & **90.3\({}^{\sharp}\)** & **96.0\({}^{\natural}\)** & **77.7\({}^{\sharp}\)** & **62.5\({}^{**}\)** & **83.6\({}^{**}\)** & **92.5\({}^{**}\)** & **70.4\({}^{**}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results on Natural Questions (NQ320K). The results of the methods marked with \({}^{\dagger}\) are from our own re-implementation, others are from their official implementation. Methods with \({}^{\blacktriangle}\) use additional annotated document retrieval data during training. \({}^{*}\) and \({}^{**}\) indicate significant improvements over previous-best generative retrieval baselines with p-value \(<0.05\) and p-value \(<0.01\), respectively. \(\natural\) and \(\sharp\) indicate significant improvements over previous-best dense retrieval baselines with p-value \(<0.05\) and p-value \(<0.01\), respectively. The best results for each metric are indicated in boldface.**
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & R@1 & R@10 & R@100 & MRR@10 \\ \hline _Sparse retrieval_ & & & & & \\ BM25 [39] & 39.1 & 69.1 & 86.2 & 48.6 \\ DocT5Query [8] & 46.7 & 76.5 & 90.4 & 56.2 \\ \hline _Dense retrieval_ & & & & & \\ ANCE [47] & 45.6 & 75.7 & 89.6 & 55.6 \\ Sentence-T5\({}^{\dagger}\)[30] & 41.8 & 75.4 & 91.2 & 52.8 \\ GTR-Base\({}^{\blacktriangle}\)[31] & 46.2 & 79.3 & **93.8** & 57.6 \\ \hline _Generative retrieval_ & & & & & \\ GENRE\({}^{\dagger}\)[5] & 35.6 & 57.6 & 79.1 & 42.3 \\ Ultron-URL [52] & 29.6 & 67.8 & - & 40.0 \\ Ultron-PQ [52] & 31.6 & 73.1 & - & 45.4 \\ Ultron-Atomic [52] & 32.8 & 74.1 & - & 46.9 \\
**Ours** & **47.9\({}^{**}\)** & **79.8\({}^{**}\)** & 91.6\({}^{**}\) & **58.1\({}^{**}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results on MS MARCO. The results of the methods marked with \({}^{\dagger}\) are from our own re-implementation. Methods with \({}^{\blacktriangle}\) use additional annotated retrieval data for training. \({}^{*/**}\) indicates significant improvements over previous generative retrieval baselines with p-value \(<0.05/0.01\). The best results for each metric are indicated in boldface.**
that our proposed method effectively encodes the semantic information of documents in the docid and leads to a better fit between the docid and the generative retrieval model.
### Analytical experiments
We further conduct analytical experiments to study the effectiveness of the proposed method.
In Figure 4 (left), we plot the frequencies of docids at the first timestep for various learning methods. We label each method with a diversity metric \(d\), calculated as \(d=1-\frac{1}{2n}\sum_{j=1}^{K}\left|n_{j}-n_{u}\right|\), where \(\left|\cdot\right|\) denotes the absolute value, \(n\) is the total number of documents, \(n_{j}\) is the number of documents assigned docid \(j\), and \(n_{u}=\frac{n}{K}\) is the expected number of documents per docid under the uniform distribution.
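The diversity metric above can be computed directly from the docid assignments:

```python
def docid_diversity(assignments, K):
    """d = 1 - (1/2n) * sum_j |n_j - n/K|: equals 1.0 for a perfectly
    uniform assignment of n documents over K docids, and tends to 0
    as all documents collapse onto a single docid."""
    n = len(assignments)
    counts = [0] * K
    for z in assignments:
        counts[z] += 1
    n_u = n / K  # expected count per docid under uniformity
    return 1.0 - sum(abs(c - n_u) for c in counts) / (2 * n)

uniform = [0, 0, 1, 1, 2, 2, 3, 3]  # 8 documents spread evenly over K = 4
collapsed = [0] * 8                 # all 8 documents share docid 0
# docid_diversity(uniform, 4) == 1.0; docid_diversity(collapsed, 4) == 0.25
```
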
The results demonstrate the superiority of GenRet (represented by the yellow line) in terms of distribution uniformity: it uses all \(K=512\) potential docids and achieves the highest diversity metric, with a value of \(d=0.90\). The variant without docid re-assignment also yields a relatively balanced distribution, with a diversity metric of \(d=0.77\). However, the distribution of the variant without diverse codebook initialization is highly uneven, which can be attributed to the fact that most of the randomly initialized codebook embeddings are never selected by the model during the initial training phase, and hence are never updated or selected in subsequent training. Additionally, models without diverse clustering tend to converge to a trivial solution where all documents are assigned the same docid.
In Figure 4 (right), the results of two ablated variants are presented. First, GenRet w/o learning is a generative model trained directly on the final output docids from GenRet, without the proposed learning scheme. Its retrieval performance is comparable to that of GenRet on seen test data, but significantly lower on unseen test data: the proposed progressive auto-encoding scheme is crucial for the model to capture the semantic information of documents, rather than just fit the given discrete docids. Second, GenRet w/ T5-Small uses a smaller model, and its performance is inferior to that of GenRet with T5-Base. However, the gap between performance on seen and unseen test data is smaller, which could be attributed to the limited fitting capacity of the small model.
### Efficiency analysis
In Table 6, we compare GenRet with baseline models on MS MARCO (323,569 documents) in terms of memory footprint, offline indexing time (excluding the time for neural network training), and online retrieval latency for different Top-K values. We have three observations: (i) The memory footprint of generative retrieval models (GENRE, DSI-QG, and the proposed model) is smaller than that of dense and sparse retrieval methods. The memory footprint of generative retrieval models depends only on the model parameters, whereas dense and sparse retrieval methods require additional storage for document embeddings, which grows linearly with the size of the document collection. (ii) DSI and GenRet take longer for offline indexing, as DSI involves encoding and clustering documents using BERT, while GenRet requires tokenizing documents with the tokenization model. The offline cost of dense retrieval comes from document encoding; GENRE uses titles and hence requires no offline computation. (iii) The online retrieval latency of generative retrieval models depends on the beam size (i.e., Top-K) and the length of the docid. GenRet utilizes diverse clustering to generate shorter docids, resulting in faster online retrieval than DSI and GENRE.
### Case study
Table 7 shows an example of outputs of GENRE, NCI, and GenRet for the query "_what state courts can order a new trial_" and its corresponding document in NQ320K. The results show that GenRet, unlike the baselines, successfully returns the docid of the target
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & Memory & Time (Offline) & Top-\(K\) & Time (Online) \\ \hline ANCE & 1160MB & 145min & 100 & 0.69s \\ GTR-Base & 1430MB & 140min & 100 & 1.97s \\ \hline GENRE & **851MB** & **0min** & 100 & 1.41s \\ & & & 10 & 0.69s \\ \hline DSI & 851MB & 310min & 100 & 0.32s \\ & & & 10 & 0.21s \\ \hline
**Ours** & 860MB & 220min & 100 & **0.16s** \\ & & & 10 & **0.10s** \\ \hline \hline \end{tabular}
\end{table}
Table 6. Efficiency analysis.
Figure 4. Left: Docid distribution on NQ320K. The id are sorted by the assigned frequency. Right: Ablation study on NQ320K.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & \multicolumn{3}{c}{**NQ** (R@1)} & \multicolumn{2}{c}{**BEIR** (nDCG@10)} \\ Method & Docid & Unseen & Arg & NFC & SciDocs \\ \hline DSI-Naive\({}^{\ddagger}\)(Naj
document. We highlight words in the target document based on their attention activation in GenRet at different time steps \(t\). The yellow color indicates words that received higher attention at \(t=1\), while gray indicates words that received higher attention at \(t=2\). The example shows that the model focuses on different words at different time steps. GenRet gives more attention to words related to the topic, such as _Appellate_, in \(t=1\), and more attention to words related to the country, such as _United States_, in \(t=2\).
## 6. Related Work
**Sparse retrieval.** Traditional sparse retrieval calculates the document score using term matching metrics such as TF-IDF (Zhou et al., 2017), query likelihood (Zhou et al., 2017), or BM25 (Zhou et al., 2017). It is widely used in practice due to its outstanding trade-off between accuracy and efficiency. Some methods adaptively assign term importance using deep neural networks (Zhou et al., 2017; Li et al., 2017; Li et al., 2018). With the recent development of pre-trained LMs, DeepCT (Liu et al., 2018) and HDCT (Liu et al., 2018) calculate term importance using contextualized text representations from BERT. Doc2Query (Zhou et al., 2017) and DocT5Query (Chen et al., 2018) use a generative model like T5 to predict relevant queries that augment documents before building the BM25 index. Sparse retrieval often suffers from lexical mismatch (Zhou et al., 2017).
**Dense retrieval.** Dense retrieval (DR) represents queries and documents as dense vectors and models their similarity with the inner product or cosine similarity (Liu et al., 2018). Compared with sparse retrieval, dense retrieval relieves the lexical mismatch problem. Various techniques have been proposed to improve DR models, such as hard negative mining (Zhou et al., 2017; Li et al., 2018), late interaction (Liu et al., 2018; Li et al., 2018), and knowledge distillation (Li et al., 2018; Li et al., 2018). Recent studies have shown the effectiveness of pre-training DR models using contrastive learning on large-scale corpora (Liu et al., 2018; Li et al., 2018; Li et al., 2018). Despite their success, DR approaches have several limitations (Chen et al., 2018; Li et al., 2018): (i) DR models employ an index-retrieval pipeline with a fixed search procedure (MIPS), making it difficult to optimize the model end-to-end (Zhou et al., 2017; Li et al., 2018). (ii) Training DR models relies on contrastive learning (Liu et al., 2018) to distinguish positives from negatives, which is inconsistent with the training objectives of large LMs (Chen et al., 2018) and fails to fully utilize the capabilities of pre-trained LMs (Chen et al., 2018).
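The scoring core of dense retrieval can be sketched in a few lines of NumPy; the 4-dimensional embeddings below are toy values, and real systems obtain them from learned encoders and use approximate MIPS indexes instead of the brute-force scan shown here.

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    top = np.argsort(-scores)[:k]       # indices of the k best documents
    return top, scores[top]

docs = np.array([[0.9, 0.1, 0.0, 0.0],
                 [0.0, 1.0, 0.1, 0.0],
                 [0.1, 0.0, 0.9, 0.3]])
idx, scores = cosine_top_k(np.array([1.0, 0.0, 0.1, 0.0]), docs)
print(idx, scores)
```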
**Generative retrieval.** Generative retrieval is increasingly gaining attention. It retrieves documents by generating their docids using a generative model like T5. Generative retrieval presents an end-to-end solution for document retrieval tasks (Li et al., 2018; Li et al., 2018) and allows for better exploitation of the capabilities of large generative LMs (Chen et al., 2018). Cao et al. (Cai et al., 2018) first propose an autoregressive entity retrieval model to retrieve documents by generating titles. Tay et al. (Tay et al., 2018) propose a differentiable search index (DSI) and represent the document as an atomic id, naive string, or semantic string. Bevilacqua et al. (Bevilacqua et al., 2018) suggest using arbitrary spans of a document as docids. Additionally, multiple-stage pre-training (Chen et al., 2018; Li et al., 2018), query generation (Li et al., 2018; Li et al., 2018; Li et al., 2018), contextualized embedding (Li et al., 2018), and continual learning (Li et al., 2018) have been explored in recent studies. However, existing generative retrieval models have a limitation in that they rely on fixed document tokenization to produce docids, which often fails to capture the semantic information of a document (Tay et al., 2018). It remains an open question how one should define the docids. To further capture document semantics in docids, we propose document tokenization learning methods. The semantic docid is automatically generated by the proposed discrete auto-encoding learning scheme in an end-to-end manner.
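A common way to guarantee that a generative retriever emits only valid docids is prefix-constrained decoding over a trie of known docids. The following is a minimal greedy sketch with toy docids and a stand-in scoring function in place of a real language model.

```python
def build_trie(docids):
    """Map each docid prefix to the set of tokens that may follow it."""
    trie = {}
    for did in docids:
        for i in range(len(did)):
            trie.setdefault(tuple(did[:i]), set()).add(did[i])
    return trie

def constrained_greedy_decode(score_fn, trie, max_len):
    """Greedily pick the highest-scoring token among the valid continuations."""
    seq = []
    for _ in range(max_len):
        allowed = trie.get(tuple(seq))
        if not allowed:
            break  # reached a complete docid
        seq.append(max(allowed, key=lambda tok: score_fn(seq, tok)))
    return seq

docids = [[3, 1, 4], [3, 1, 5], [2, 7, 1]]
trie = build_trie(docids)
# toy scorer: prefers larger token ids (a real system would query the LM)
decoded = constrained_greedy_decode(lambda seq, tok: tok, trie, 3)
print(decoded)  # → [3, 1, 5]
```

Because every step only considers continuations present in the trie, the decoded sequence is always one of the indexed docids.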
**Discrete representation learning.** Learning discrete representations using neural networks is an important research area in machine learning. For images, Rolfe (Rolfe, 2018) proposes the discrete variational autoencoder, and VQ-VAE (Tay et al., 2018) learns quantized representations via vector quantization. DALL-E (Dai et al., 2018) uses an autoregressive model to generate discrete image representations for text-to-image generation. Recently, discrete representation learning has attracted considerable attention in NLP, for tasks such as machine translation (Tay et al., 2018), dialogue generation (Li et al., 2018), and text classification (Liu et al., 2018; Li et al., 2018). For document retrieval, RepCONC (Rolfe, 2018) uses a discrete representation learning method based on constrained clustering for vector compression. We propose a document tokenization learning method for generative retrieval, which captures the autoregressive nature of docids by progressive training and enhances the diversity of docids by diverse clustering techniques.
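The codebook-assignment step at the heart of VQ-VAE-style quantization can be sketched in NumPy; the 2-D codebook below is a toy example, and a real tokenizer would learn the codebook jointly with the encoder.

```python
import numpy as np

def quantize(vectors, codebook):
    """Assign each vector to its nearest codebook entry (squared-L2 distance)."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)        # discrete code id per vector
    return idx, codebook[idx]      # ids and the quantized vectors

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
vecs = np.array([[0.1, -0.1], [1.9, 0.2]])
idx, q = quantize(vecs, codebook)
print(idx)  # → [0 2]
```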
## 7. Conclusions
This paper has proposed a document tokenization learning method for generative retrieval, named GenRet. The proposed method learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach, which ensures the semantics of the generated docids. A progressive training method and two diverse clustering techniques have been proposed to enhance the training of the model. Empirical results on various document retrieval datasets have demonstrated the effectiveness of the proposed method. In particular, GenRet achieves superior performance on unseen documents and generalizes well to multiple retrieval tasks. In future work, we would like to extend the approach to large document collections. We also plan to explore generative pre-training for document tokenization using large-scale language models. Additionally, we intend to investigate the dynamic adaptation of docid prefixes for progressive training.
|
2303.08934 | PTMTorrent: A Dataset for Mining Open-source Pre-trained Model Packages | Due to the cost of developing and training deep learning models from scratch,
machine learning engineers have begun to reuse pre-trained models (PTMs) and
fine-tune them for downstream tasks. PTM registries known as "model hubs"
support engineers in distributing and reusing deep learning models. PTM
packages include pre-trained weights, documentation, model architectures,
datasets, and metadata. Mining the information in PTM packages will enable the
discovery of engineering phenomena and tools to support software engineers.
However, accessing this information is difficult - there are many PTM
registries, and both the registries and the individual packages may have rate
limiting for accessing the data. We present an open-source dataset, PTMTorrent,
to facilitate the evaluation and understanding of PTM packages. This paper
describes the creation, structure, usage, and limitations of the dataset. The
dataset includes a snapshot of 5 model hubs and a total of 15,913 PTM packages.
These packages are represented in a uniform data schema for cross-hub mining.
We describe prior uses of this data and suggest research opportunities for
mining using our dataset. The PTMTorrent dataset (v1) is available at:
https://app.globus.org/file-manager?origin_id=55e17a6e-9d8f-11ed-a2a2-8383522b48d9&origin_path=%2F~%2F.
Our dataset generation tools are available on GitHub:
https://doi.org/10.5281/zenodo.7570357. | Wenxin Jiang, Nicholas Synovic, Purvish Jajal, Taylor R. Schorlemmer, Arav Tewari, Bhavesh Pareek, George K. Thiruvathukal, James C. Davis | 2023-03-15T21:01:31Z | http://arxiv.org/abs/2303.08934v1 | # PTMTorrent: A Dataset for Mining Open-source Pre-trained Model Packages
###### Abstract
Due to the cost of developing and training deep learning models from scratch, machine learning engineers have begun to reuse pre-trained models (PTMs) and fine-tune them for downstream tasks. PTM registries known as "model hubs" support engineers in distributing and reusing deep learning models. PTM packages include pre-trained weights, documentation, model architectures, datasets, and metadata. Mining the information in PTM packages will enable the discovery of engineering phenomena and tools to support software engineers. However, accessing this information is difficult -- there are many PTM registries, and both the registries and the individual packages may have rate limiting for accessing the data.
We present an open-source dataset, PTMTorrent, to facilitate the evaluation and understanding of PTM packages. This paper describes the creation, structure, usage, and limitations of the dataset. The dataset includes a snapshot of 5 model hubs and a total of 15,913 PTM packages. These packages are represented in a uniform data schema for cross-hub mining. We describe prior uses of this data and suggest research opportunities for mining using our dataset.
**The PTMTorrent dataset (v1) is available at: [https://app.globus.org/file-manager?origin_id=55e17a6e-9d8f-11ed-a2a2-8383522b48d9&origin_path=%2F~%2F](https://app.globus.org/file-manager?origin_id=55e17a6e-9d8f-11ed-a2a2-8383522b48d9&origin_path=%2F~%2F). Our dataset generation tools are available on GitHub: [https://doi.org/10.5281/zenodo.7570357](https://doi.org/10.5281/zenodo.7570357)**
Open-Source Software, Data Mining, Machine learning, Empirical software engineering
## I Introduction
Modern software systems reuse Deep Neural Networks (DNNs) to build intelligent and adaptive systems [1, 2]. Engineering a DNN from scratch is challenging for many reasons, including the variation in deep learning libraries [3, 4] and the high expense of training models [5]. Organizations and developers can address some of these challenges and reduce the cost and effort associated with DNN development by reusing _pre-trained DNN models_ (PTMs) [6, 7]. PTMs are shared via _deep learning model registries_, which are modeled on traditional software package registries such as NPM [8]. These PTM packages include reusable components, such as model architectures, weights, licenses, and other metadata. Deep learning model registries enable engineers to develop their models with re-usability in mind [9, 10]. Although PTM reuse is still in its early stages, the most popular PTMs are downloaded millions of times each month [11, 12].
As PTM reuse becomes more widespread, the engineering community will benefit from research into PTM reuse practices, challenges, and tools [11, 12]. By analogy to traditional software, mining PTM software repositories can help us understand development trends [13, 14, 15] and usage patterns [16, 17]. However, mining the software repositories associated with PTM packages is difficult for three reasons related to _data availability_. First, researchers must look in many places -- PTM packages are distributed across many competing PTM registries [11]. Second, researchers must access the packages -- PTMs include complex DNN models and weights with sizes over 1 TB, and access to these packages may be hindered by throttling or rate limiting [18]. Third, for scientific replicability, this large-scale data needs to be hosted long-term.
To enable mining of PTM packages, we share _PTMTorrent_, the first many-hub dataset of PTM packages. PTMTorrent contains 15,913 PTMs from 5 different PTM registries identified in our prior work [11]: Hugging Face [19], Model Zoo [20], PyTorch Hub [21], ONNX Model Zoo [22], and Modelhub [23]. Our dataset is hosted on a high-performance storage system (HPSS) maintained by Purdue University's Research Computing center. The dataset includes the metadata of each PTM and the package histories for each GitHub repository. These packages are represented in a uniform data schema for cross-hub mining. Our dataset supports many directions for further research, including studies of the PTM supply chain, PTM package evolution, PTM mining tools, and DNN architectural trends.
## II The PTMTorrent Dataset
### _Data Source_
In prior work we mapped the major model hubs and indicated that there exist open, gated, and commercial hubs [11]. Open and gated hubs tend to be larger and
more widely used because they accept contributions from anyone, and can be accessed by anyone. Commercial hubs are offered by individual companies to share vetted models with their clients. Due to the limited access to commercial model hubs, we only provide a snapshot of the open hubs (Hugging Face) and some of the gated hubs (Model Zoo, PyTorch Hub, Modelhub, and ONNX Model Zoo).
The PTMTorrent dataset contains the repository histories of 15,913 PTM packages available as of January 2023. They are provided as complete git clones, resulting in a compressed footprint of ~61TB. Each PTM package was cloned at its most recent version, including the model card, architecture, weights, and other information provided by the maintainers (_e.g._, training configuration, hyper-parameters).
Figure 1 indicates the collection and preprocessing approaches of our dataset.
We collected PTM packages from all open and gated model hubs per Jiang _et al._[11], excluding TensorFlow Hub because it does not support version control features. We downloaded all PTM packages from Model Zoo, PyTorch Hub, ONNX Model Zoo, and Modelhub. Due to the size of Hugging Face, we downloaded only the top 10% most-downloaded PTMs.1 Overall, our dataset contains 15,913 packages from 5 PTM registries, distributed as described in Table I.
Footnote 1: Although we collected a small amount of the full Hugging Face registry, this “top 10%” snapshot includes all Hugging Face PTMs with over 30 downloads.
### _Data Schema_
Figure 2 shows the overview of the data schema we used to standardize the dataset. We extracted common entities into a general PTM schema. Each PTM registry has some custom features, so we customized the schema slightly for each model registry. The full data schema is encoded following the JSON Schema format,2 and is available in the GitHub repository associated with this project.
Footnote 2: See [https://json-schema.org/draft/2020-12/json-schema-core.html](https://json-schema.org/draft/2020-12/json-schema-core.html)
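As an illustration of how a uniform schema enables cross-hub mining, the following is a minimal pure-Python conformance check of a metadata record; the field names and types here are hypothetical examples, not the dataset's actual schema.

```python
def validate(record, required):
    """Return the required fields that are missing or have the wrong type."""
    return [field for field, ftype in required.items()
            if field not in record or not isinstance(record[field], ftype)]

# Illustrative required fields (NOT the real PTMTorrent schema)
REQUIRED = {"name": str, "hub": str, "downloads": int}

record = {"name": "bert-base-uncased", "hub": "Hugging Face", "downloads": 1000000}
print(validate(record, REQUIRED))  # → [] (record conforms)
```

In practice the project encodes its schema in the JSON Schema format referenced above, so an off-the-shelf validator can perform this check against the published schema files.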
### _Data Storage_
As shown in Table I, the entire PTMTorrent dataset (v1) needs ~61TB of storage space. A cost-effective storage system is required to serve this dataset. Commercial services are cost-prohibitive at this scale, _e.g._, we estimated a monthly cost of over $1000 to store and serve this dataset from Amazon Web Services. We opted instead for an internal resource available at Purdue University: the Purdue Fortress tape-based hierarchical storage system.3 To facilitate external distribution of our dataset, we offer a Globus share [25] named _PTMTorrent_.
Footnote 3: For more information about Fortress, see [https://www.rcac.purdue.edu/knowledge/fortress/overview](https://www.rcac.purdue.edu/knowledge/fortress/overview). Our GitHub repository includes a guide on how to access data stored in Globus.
### _Maintainability and Extensibility_
The sizes of PTM registries are increasing rapidly. For example, Hugging Face provided 63,182 public PTM packages in August 2022, and now it provides 124,427 packages. We believe the number of open-source PTM packages will increase in the foreseeable future. Therefore, maintainability and extensibility are two important properties of PTMTorrent.
The PTMTorrent dataset is designed to be maintainable: our scripts can be re-run to gather any changes made to the PTM registries since the last collection. We expect to provide biannual updates.
For extensibility, new model hubs can be incorporated into the dataset. We follow an open-source model
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Name** & **\# Models** & **Data Size** \\ \hline Hugging Face [24] & 12,401 & 61TB \\ Model Zoo [20] & 3,245 & 115GB \\ PyTorch Hub [21] & 49 & 1.5GB \\ ONNX Model Zoo [22] & 185 & 441MB \\ Modelhub [23] & 33 & 721MB \\ \hline
**PTMTorrent** & **15,913** & **\(\thicksim\)61TB** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Details about the PTMTorrent content for each of the 5 model registries we collected.
Fig. 1: Data collection and preprocessing workflow for PTMTorrent. We standardize the PTM metadata by using a data schema, collecting it from PTM packages and the corresponding GitHub repository.
and will review Issue and Pull Request contributions on GitHub. The PTMTorrent data schema captures most elements of a PTM package, though some specialization is needed. The downloaders for a new model hub can be developed based on the examples of the already-supported model hubs in our open-source data collection tools. An extender must provide 2-4 scripts following the pattern we used on the other hubs.
## III Originality and Relevance
Prior works have extracted information from open-source projects into datasets and provided them for future analysis, such as GHTorrent [26], TravisTorrent [27], and RTPTorrent [28]. These datasets can be used for further mining software repository research and help the community better understand open-source software projects [14, 15, 29, 30].
Similarly, our dataset captures the open-source PTM packages from many model hubs. The structure of our dataset imitates prior datasets that were focused on traditional open-source software [26, 28]. Compared to prior work, PTMTorrent focuses on PTM packages, including the metadata, architecture, dataset, and performance metrics. Our dataset provides a way for users to efficiently download and access large amounts of data on PTM packages and relevant repositories.
## IV Usage Examples
### _Prior Usage in the Literature_
In prior work, we used a part of PTMTorrent (the Hugging Face part) to measure potential risks in the Hugging Face model registry [12]. We measured the dependencies of model architecture and datasets, PTM documentation, and GPG commit signing in Hugging Face PTMs. Our analysis identified potential software supply chain concerns facing PTM reusers, including spoofing, tampering, and repudiation.
In prior work, we also used metadata from Hugging Face to measure model discrepancies and maintainers' reach [11]. Our analysis showed that existing defenses appear insufficient for ensuring the security of PTMs.
The PTMTorrent dataset provides more opportunities for mining PTM data by covering more PTM registries and providing greater structure. We believe that this large collection of PTM packages can be analyzed in ways similar to traditional packages [31, 32, 33].
### _Applying an Existing MSR Tool_
Since PTMTorrent consists of git repositories, it is possible to use existing software repository mining tools on the PTM packages. Our GitHub repository includes a demonstration of this. We used our PRIME tool [34] to analyze software process metrics on a subset of the dataset.
## V Limitations
PTMTorrent is incomplete. It is biased towards the top 10% most-downloaded PTMs in Hugging Face (though this is almost all PTMs with any downloads, cf. §II-A). There are other model hubs, such as Papers With Code [35], PINTO Model Zoo [36], and Jetson Zoo [37]. Beyond these, there are other deep learning-specific registries that lack versioning or packaging features. The initial PTMTorrent release provides PTMs from model hubs that are similar to traditional software packages, as defined by Jiang _et al._[11]. We leave their capture for future work.
Another limitation of our data is the non-standardized granularity. The current version of PTMTorrent lacks detailed metadata and does not provide uniform information, _e.g.,_ on datasets and model architectures. During data collection, we noticed that the information provided by PTM registries can be quite different, so we used customized data schemas for each PTM registry. As a result, it is difficult to analyze all the PTM packages under the same umbrella when using our dataset.
For example, Hugging Face provides detailed documentation and structured metadata, as well as relevant configuration files for each PTM, while ONNX Model Zoo provides PTM metadata through unstructured Markdown files, thereby making metadata extraction challenging. To mitigate this problem, we have a parent data schema for all the PTM registries and child schemas for each specific registry that represent their custom data.
## VI Future Work
In addition to the risk measurements presented by Jiang _et al._[11, 12], the PTMTorrent dataset can be used in different ways. We suggest three research directions: PTM supply chain analysis, tools for PTM reuse, and mining tool development.
### _Supporting Future PTM Supply Chain Analysis_
Prior work has focused on understanding the characteristics of package registries and their supply chains. Zimmermann _et al._ analyzed the metadata of NPM packages and identified potential threats to downstream users [31]. Ladisa _et al._ proposed an attack taxonomy on open-source supply chains, from code contributions to package distribution [38]. Similar studies are also important for the PTM supply chain, alongside studies focused on PTM-specific aspects. We propose that future studies can analyze the PTMTorrent dataset to understand the characteristics of the PTM supply chain, including dependency analysis [31], vulnerabilities [39], and code knowledge transfer [40].
Recent advances in AI, such as ChatGPT [41], that clearly build upon composing various PTMs strongly suggest that the ability to study how PTMs are composed to build more complex systems (a trait shared with traditional software) will become more important. We hope our dataset will aid in performing such analyses.
### _Expanding PTM Model Registry Analysis_
Researchers can extract more information from these model registries by reusing or developing software metrics for PTM packages, including provenance, reproducibility, and portability [12]. PTM registries can help us develop comprehensive attributes and provide these details in the PTM dashboard, similar to the measured attributes from NPM [42] and PyPi [43].
Our prior study has indicated that engineers can have trouble finding the best PTM that matches their requirements, and it can therefore be hard to identify the portability and reproducibility of open-source PTMs [12]. Montes _et al._ show that there exist notable discrepancies among different model zoos [44]. With more detailed and comprehensive metadata provided for each PTM and the corresponding usage patterns on downstream tasks, it will be possible to develop a recommender system to help engineers find the right set of PTMs for a given application and requirements [45]. PTM registry contributors can develop sophisticated visualization tools--with the aid of our dataset--that help PTM users understand the strengths and limitations of each model.
### _Furthering the State of Mining Tool Development_
Given the lack of standardization among different PTM registries (§V), it was challenging to standardize all the metadata. PTMTorrent may not have everything needed for every type of analysis. Researchers can augment the dataset during the data collection and processing stage for other subsequent mining needs. We have included the relevant GitHub pages of each PTM in our dataset, and therefore the extraction can be done either based on the provided documentation from PTM registries [46] or the source code of the underlying repositories [47].
## VII Acknowledgements
This work was supported by gifts from Google and Cisco and by NSF awards #2107230, #2229703, #2107020, and #2104319.
Fig. 2: An overview of PTMTorrent’s data schema. Each model hub shares a general schema (_grey boxes_), with hub-specific data stored in customized schema (_colored boxes_). The full schema is available in JSON in the dataset generation repository. |
2302.01387 | Object Dimension Extraction for Environment Mapping with Low Cost
Cameras Fused with Laser Ranging | It is essential to have a method to map an unknown terrain for various
applications. For places where human access is not possible, a method should be
proposed to identify the environment. Exploration, disaster relief,
transportation and many other purposes would be convenient if a map of the
environment is available. Replicating the human vision system using stereo
cameras would be an optimum solution. In this work, we have used laser ranging
based technique fused with stereo cameras to extract dimension of objects for
mapping. The distortions were calibrated using mathematical model of the
camera. By means of Semi Global Block Matching [1] disparity map was generated
and reduces the noise using novel noise reduction method of disparity map by
dilation. The Data from the Laser Range Finder (LRF) and noise reduced vision
data has been used to identify the object parameters. | E. M. S. P. Ekanayake, T. H. M. N. C. Thelasingha, U. V. B. L. Udugama, G. M. R. I. Godaliyadda, M. P. B. Ekanayake, B. G. L. T. Samaranayake, J. V. Wijayakulasooriya | 2023-02-01T04:35:16Z | http://arxiv.org/abs/2302.01387v1 | # Object Dimension Extraction for Environment Mapping with Low Cost Cameras Fused with Laser Ranging
###### Abstract
It is essential to have a method to map an unknown terrain for various applications. For places where human access is not possible, a method should be proposed to identify the environment. Exploration, disaster relief, transportation and many other purposes would be convenient if a map of the environment is available. Replicating the human vision system using stereo cameras would be an optimum solution. In this work, we have used laser ranging based technique fused with stereo cameras to extract dimension of objects for mapping. The distortions were calibrated using mathematical model of the camera. By means of Semi Global Block Matching [1] disparity map was generated and reduces the noise using novel noise reduction method of disparity map by dilation. The Data from the Laser Range Finder (LRF) and noise reduced vision data has been used to identify the object parameters
mapping, disparity, LIDAR, camera calibration, stereo vision, dimension extraction
## I Introduction
Mapping is, in general, a graphical representation of an environment with various objects and features. Mapping can be used to extract object information such as height, size, material types, object types, etc. Machine vision gives the capabilities of human vision to a machine using technology; human vision perceives the world as a colorful place in three dimensions.
Mapping plays a vital role since it allows us to respond to spatial geographic and social issues. Maps are useful for understanding and identifying spatial connections and for explaining concepts in a visual manner that can be easily understood.
Environment mapping has many applications, such as military operations, disaster relief, navigation, and exploration; it can also be used in many industrial security applications. Simply put, for places that humans cannot access, it is an ideal solution.
Exploring human-inaccessible areas and finding the dimensions of unknown terrain become possible by implementing the proposed method on a drone, which can then be used to construct 3D models of the terrain below. Further, this can be extended to identifying material types and objects according to the application. A number of sensors are available in the literature for detecting obstacles, including SONAR [2], RADAR, LIDAR [3], vision systems, and other proximity sensors [4].
However, many of them require high computational power and costly equipment. In comparison, stereo cameras are low cost, and mapping has previously been done through stereo vision. Path planning techniques for traversing given points with minimum travel cost have also been developed for some time [5]. In addition, the Laser Range Finder (LRF) is an accurate measurement device, and various related applications have been built with it.
Hence, in the proposed method we fuse LRF data and stereo cameras at low computational cost to extract the dimensions of objects: the LRF finds the planar X, Y dimensions and the stereo camera finds the \(Z\) dimension. Altogether, mapping of an environment can be achieved.
## II Proposed Solution
In the proposed solution, two low-cost off-the-shelf web cameras are used for stereo vision, together with a simple Neato XV-11 LRF salvaged from a Neato vacuum cleaner. Since camera calibration is a requirement, the cameras were calibrated properly with up to 14 distortion coefficients, without being limited to the usual method.
From the left and right images a disparity map was obtained, and the height of the object was found from it. For the length and the width, we use the LRF to find the plane of the objects and construct the 2D environment, as discussed in Section E. There are many researches [6] that have been done using stereo cameras, but taking multiple realizations is a method with much computational cost in terms of practical implementation on a suitable platform. Hence, what has been proposed is that from the stereo cameras we extract the object's height information, while the LRF is used to build the 2D environment; on top of that we construct the 3D object. For this purpose, only the left and right images with object height information and the LRF sweep information are needed. This is neither computationally costly nor unrealizable.
The cameras used have their own inherent distortions; radial and tangential distortions are the two prominent ones. Hence, it is necessary for them to be calibrated and undistorted first. Calibration, in the sense of identifying the mathematical model and parameters of the camera, is done in two parts: first, the two cameras were treated separately and calibrated, and the error parameters were found. Also, for optimum disparity map generation, rectification is necessary: once the distortions are corrected, the images are aligned along a common horizontal line so that stereo matching can be executed.
For height extraction, the maximum intensity value of the disparity image was considered. To remove the effect of ground-plane noise values, a constant distance was always maintained when taking images, so that by identifying the maximum intensity and using the pixel-to-cm relation, the height of the object was found. There is a dimension change when projecting from 3D to the 2D plane using stereo cameras; hence, we propose a method as discussed in Section F.
Instead of taking multiple realizations [6], which is inefficient for practical implementation, we introduce a novel approach to reduce noise in the disparity image by dilation.
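The dilation step can be sketched in pure NumPy as a sliding-window maximum over the disparity map; the 3×3 kernel below is an illustrative assumption, as the text does not specify a kernel size.

```python
import numpy as np

def dilate(img, ksize=3):
    """Grayscale dilation: each pixel becomes the max of its ksize x ksize window."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + ksize, j:j + ksize].max()
    return out

disparity = np.array([[0, 0, 0],
                      [0, 9, 0],
                      [0, 0, 0]], dtype=np.uint8)
print(dilate(disparity))  # the single bright pixel expands to fill the 3x3 patch
```

Dilation grows bright disparity regions outward, filling small dark holes (the speckle noise typical of block-matching disparity maps) with neighbouring valid values.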
Once the height is available and the plane of the object is obtained using the LRF, our focus is to implement the method on a mobile platform and complete the map reconstruction.
The dimension data obtained can be used to reconstruct the objects in virtual reality and to create an accurate 3D model of the environment. The identified dimensions can be used to compute volume and surface area, and for a thorough analysis of the objects to find density, mass, and many other material properties.
Today, many areas use depth extraction methods based on stereo cameras and image processing. The proposed method computes disparity using low-cost cameras, and a dilation method is used to remove noise in the disparity map.
### Camera model identification
The cameras have inherent distortions, and to find depth accurately through cameras, their intrinsic parameters should be identified and the distortions corrected. The simple web cameras that have been used can be conveniently modelled as pinhole cameras. In the pinhole camera model, a scene view is formed by projecting 3D points onto the image plane using a perspective transformation as,
\[sm^{\prime}=A(R|\mathbf{t})M^{\prime}\]
Or,
\[s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_{x}&0&c_{x}\\ 0&f_{y}&c_{y}\\ 0&0&1\end{bmatrix}\begin{bmatrix}r_{11}&r_{12}&r_{13}&t_{1}\\ r_{21}&r_{22}&r_{23}&t_{2}\\ r_{31}&r_{32}&r_{33}&t_{3}\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}\]
Where (X, Y, Z) are the coordinates of a 3D point in the world coordinate space, (u, v) are the coordinates of the projection point in pixels, \((c_{x},c_{y})\) is the principal point (usually at the image centre), \(A\) is the camera matrix, and \(f_{x}\), \(f_{y}\) are the focal lengths expressed in pixel units.
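The projection above can be checked numerically; the identity rotation, zero translation, and intrinsic values below are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def project(point_3d, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole model."""
    p_cam = R @ point_3d + t          # world frame -> camera frame
    x, y, z = p_cam
    u = K[0, 0] * (x / z) + K[0, 2]   # u = fx * x' + cx
    v = K[1, 1] * (y / z) + K[1, 2]   # v = fy * y' + cy
    return u, v

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
u, v = project(np.array([0.5, 0.25, 2.0]), K, np.eye(3), np.zeros(3))
print(u, v)  # → 520.0 340.0
```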
Real lenses usually have distortions; without modeling them it is impossible to calculate any accurate measurement from cameras. For the pinhole camera model, the following distortions are possible:

* Radial distortion (pincushion and barrel distortion)
* Tangential distortion
To identify and rectify these errors, a calibration process is needed. The above-mentioned distortions can be mathematically modeled as follows, where \(k_{1},k_{2},\ldots,k_{6}\) are radial distortion coefficients and \(P_{1}\), \(P_{2}\) are tangential distortion coefficients.
\[\begin{bmatrix}x\\ y_{2}\end{bmatrix}=R\begin{bmatrix}x\\ y_{2}\end{bmatrix}+t\] \[x^{\prime}=\frac{x}{z}\] \[y^{\prime}=\frac{y}{z}\]
\[x^{\prime\prime}=x^{\prime}\frac{1+k_{1}r^{2}+k_{2}r^{4}+k_{3}r^{6}}{1+k_{4}r^{2}+k_{5}r^{4}+k_{6}r^{6}}+2P_{1}x^{\prime}y^{\prime}+P_{2}(r^{2}+2{x^{\prime}}^{2})\]
\[y^{\prime\prime}=y^{\prime}\frac{1+k_{1}r^{2}+k_{2}r^{4}+k_{3}r^{6}}{1+k_{4}r^{2}+k_{5}r^{4}+k_{6}r^{6}}+2P_{2}x^{\prime}y^{\prime}+P_{1}(r^{2}+2{y^{\prime}}^{2})\]
Where,
\[r^{2}={x^{\prime}}^{2}+{y^{\prime}}^{2}\]
\[u=f_{x}\,x^{\prime\prime}+C_{x}\]
\[v=f_{y}\,y^{\prime\prime}+C_{y}\]
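The distortion model above can be sketched as a small function exercising the rational radial terms k\({}_{1}\)–k\({}_{6}\) and the tangential terms P\({}_{1}\), P\({}_{2}\); the coefficient values in the example are arbitrary illustrations, not calibrated values.

```python
def distort(xp, yp, k, p):
    """Apply the rational radial + tangential distortion model to
    normalized camera coordinates (x', y'); k = (k1..k6), p = (P1, P2)."""
    r2 = xp * xp + yp * yp
    radial = ((1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3)
              / (1 + k[3] * r2 + k[4] * r2**2 + k[5] * r2**3))
    xpp = xp * radial + 2 * p[0] * xp * yp + p[1] * (r2 + 2 * xp * xp)
    ypp = yp * radial + 2 * p[1] * xp * yp + p[0] * (r2 + 2 * yp * yp)
    return xpp, ypp

# with all coefficients zero the model reduces to the identity
x_u, y_u = distort(0.25, -0.1, (0, 0, 0, 0, 0, 0), (0, 0))
# mild barrel distortion (k1 < 0) pulls points toward the image centre
x_d, y_d = distort(0.25, -0.1, (-0.2, 0, 0, 0, 0, 0), (0, 0))
```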
### Camera calibration
Camera calibration was done in two parts. First, the individual cameras were calibrated to find the error parameters and camera parameters. Then both cameras were calibrated as a pair to find the stereo parameters of the setup. For the calibration process, a checkerboard with known dimensions was used, as shown in figure 1.
From the camera calibrations, the following parameters were found:
* Radial distortion: correction matrix for radial distortion
* Tangential distortion: correction matrix for tangential distortion
* Focal length: focal length of the camera
* Principal point: principal point of the image plane
Figure 1: Camera calibration checker board images (left & right)
From the stereo setup calibration, the following parameters were found:
* Rotation of camera 2: rotation of the image plane of camera 2 with respect to camera 1.
* Translation of camera 2: distance from the principal point of camera 1 to the principal point of camera 2.
* Fundamental matrix: a relationship between any two images of the same scene that constrains where the projection of points from the scene can occur in both images.
The rotation matrix of camera two with respect to camera one was found as,
\[R=\begin{bmatrix}1.0000&0.0027&-0.0036\\ -0.0027&1.0000&0.0016\\ 0.0036&-0.0016&1.0000\end{bmatrix}\]
and the translation of camera two with respect to camera one was found as,
\[t=\begin{bmatrix}-93.1032&1.2802&0.1104\end{bmatrix}\]
The fundamental matrix was found as,
\[F=\begin{bmatrix}-0.0000&-0.0000&0.0016\\ 0.0000&0.0000&0.1269\\ -0.0018&-0.1274&0.6250\end{bmatrix}\]
and the essential matrix as,
\[E=\begin{bmatrix}-0.0048&-0.4414&1.0334\\ 0.1125&0.1493&93.1061\\ -1.2801&-93.1021&0.1459\end{bmatrix}\]
The found parameters were used to correct the distortions in the images, and the images were aligned horizontally through the process of rectification.
### Disparity map generation
In figure 2, I represents the image plane and c denotes the principal point of the image plane. (X, Y, Z) are Cartesian coordinates, P denotes the real-world point, and p denotes the image of that point. Subscripts l and r denote left and right. From geometry, a relationship between real-world object points and image points can be derived as,
\[Z_{p}=f\frac{\tau}{d},\qquad X_{p}=x_{l}\frac{\tau}{d},\qquad Y_{p}=y_{l}\frac{\tau}{d},\]
Here \(\tau\) is the baseline distance between the two cameras and d stands for the disparity, which can be mathematically represented as,
\[d=x_{l}-x_{r},\]
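A minimal sketch of recovering \((X_{p},Y_{p},Z_{p})\) from a stereo correspondence using these relations. The focal length and the 0.0931 m baseline (taken from the magnitude of the calibrated translation vector, assuming it is expressed in millimetres) are illustrative assumptions.

```python
def triangulate(f_px, baseline, x_l, x_r, y_l):
    """Recover (X_p, Y_p, Z_p) from a stereo correspondence using
    Z = f*tau/d, X = x_l*tau/d, Y = y_l*tau/d, with disparity d = x_l - x_r."""
    d = x_l - x_r                      # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a valid match")
    Z = f_px * baseline / d
    X = x_l * baseline / d
    Y = y_l * baseline / d
    return X, Y, Z

# assumed values: f = 800 px, baseline ~0.0931 m, disparity of 100 px
X, Y, Z = triangulate(800.0, 0.0931, x_l=120.0, x_r=20.0, y_l=40.0)
# Z = 800 * 0.0931 / 100 = 0.7448 m
```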
The disparity at each pixel constitutes the disparity map, which is most often generated by Semi-Global Block Matching (SGBM) [1]. SGBM has been used as the basis to create a disparity map from the stereo images shown in figure 3.
Disparity depends on various parameters, and an experiment was conducted to find them as follows. Image pairs were captured while increasing the distance from the baseline up to 1 m, and the texture pattern of the surfaces was varied. The disparity was computed and tuned manually to an optimum state, and the parameter values were recorded. Through the experiment, the following was concluded:
* The number of disparities depends on the distance from the baseline; it should decrease when moving further away from the baseline.
* As the LRF is available to measure the actual distance from the baseline, it is sufficient to adjust the parameters such that the relative depth of the object is observable corresponding to its surfaces.
* The block size depends heavily on the texture of the object surface.
* Glare on the object surfaces strongly affects the disparity.
* Glare on the ground plane affects it as well.
* It is also hard to compute the disparity for unicolour objects.

Figure 2: Meaning of disparity
Hence it is recommended that the objects and the ground plane have matt surfaces with texture variation. The recommended distance from the baseline is around 0.75 m for the camera setup used in this work, though it may depend on the scale of the object.
With the results of disparity as shown in figure 4, the following adjustments were made. The test object was a cube (30 cm x 30 cm x 30 cm) placed with its nearest edge 0.70 m away from the camera, which was mounted at a height of 8.5 (matching the mobile platform height), since it was planned to mount the camera on a mobile robot.
Figure 4(a) shows the disparity map without a textured surface under good light conditions, figure 4(b) the disparity map with a textured surface under poor light conditions, figure 4(c) the disparity map without a textured surface under poor light conditions, and figure 4(d) the disparity map with a textured surface under good light conditions. From all these observations, it is evident that disparity depends on various factors.
### Disparity map smoothing
The disparity map was generated using SGBM, but due to various noise components it is not accurate. Hence, dilation was used to remove noise. Although dilation is a technique used to smooth raw camera images in simple image processing, it was applied to smooth the disparity image because the internal process of dilation fills out pixels with large intensity deviations from the surrounding pixels. As the noise found in the disparity images was of that nature, dilation was suitable. The dilation of A with a kernel B is defined by,
\[(A\bigoplus B)=\bigcup_{b\in B}A_{b}\]
\(A_{b}\) is the translation of \(A\) by \(b\).
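A minimal sketch of this smoothing step, implemented as grayscale dilation (a maximum filter over the kernel footprint); the 3x3 all-ones kernel and toy image are assumptions for illustration, not necessarily the kernel used in the experiments.

```python
def dilate(img, kernel):
    """Grayscale dilation: each output pixel becomes the maximum of the input
    over the kernel footprint, filling small dark (noisy) pixels from their
    brighter neighbours."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2          # half-sizes, kernel anchored at its centre
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[i + di - ph][j + dj - pw]
                    for di in range(kh) for dj in range(kw)
                    if kernel[di][dj]
                    and 0 <= i + di - ph < h and 0 <= j + dj - pw < w]
            out[i][j] = max(vals)
    return out

# a lone bright pixel grows to fill its 3x3 neighbourhood
disp = [[0] * 5 for _ in range(5)]
disp[2][2] = 255
smoothed = dilate(disp, [[1, 1, 1], [1, 1, 1], [1, 1, 1]])
```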
### The Laser Range Finder(LRF) for 2D map generation
The LRF is a Neato laser sensor driven by the XV-11 LIDAR controller. The sensor projects a pulsed laser stream around itself and measures the time the reflections take to return [7]. The distance to the reflection point can be accurately calculated (to 1 mm) from the time of flight. The error characteristics of the LRF are shown in figure 5. The sensor encodes the distance data into packets and forwards them to the LIDAR controller, where they are decoded and relayed through the serial port to a host device (a PC or a single-board computer such as a Raspberry Pi). As the percentage error is small at distances below 3 m, the LRF measurement can be used as an accurate estimate of distance.
To acquire the distance data from the LRF, a separate algorithm was developed. Because the LRF measures distance with lower variance in the range of 0-3 m, only distances up to 3 m are considered. Errors caused by reflectivity effects can largely be rectified by sampling over multiple realizations; about five realizations were used here, considering the speed of operation and power consumption. As shown in figure 6, an accurate 2D map can be generated using the LRF with multiple realizations.
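One simple way to realize this multiple-realization sampling is to aggregate the readings for each bearing with a median while discarding values outside the trusted 0-3 m range. The median aggregation and the toy scan values below are assumptions for illustration; the paper does not specify the exact aggregation.

```python
import statistics

def fuse_scans(scans, max_range=3.0):
    """Fuse several LRF realizations per bearing: discard readings outside
    the reliable 0-3 m range, then take the median of what remains."""
    fused = []
    for readings in zip(*scans):          # all realizations for one bearing
        valid = [r for r in readings if 0.0 < r <= max_range]
        fused.append(statistics.median(valid) if valid else None)
    return fused

# three realizations of a two-bearing scan; the 5.0 m reading is outside
# the trusted range and is dropped before fusing
scans = [[1.0, 2.9], [1.1, 5.0], [0.9, 3.0]]
ranges = fuse_scans(scans)
```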
Figure 5: The error characteristics of LRF
### Height extraction
From the disparity map obtained, we next focused on finding the maximum intensity, since it corresponds to the nearest edge of the object, as shown in figure 7. However, pixels on the floor with the same intensity values also interfere; hence the images were captured from a constant distance and the floor region was identified from the pixel values.
The Canny edge detection method [8] was used to draw the bounding rectangular contour and obtain the height of the object in pixels. Since the top-most row index of the pixels with maximum intensity was known, the constant offset in pixels was obtained from a simple calculation (constant = 53 pixels). However, the bounding rectangle found through the Canny detector varied across different images of the same object and scene. Hence, another method was proposed to find the height of the object through the disparity image.
## III Height extraction method using maximum intensity value.
First, the pixel intensity values were accessed and a threshold for filtering out the maximum intensity was identified. This threshold was used to filter out the maximum intensity value of the disparity map. The whole disparity image was then analysed row-wise until the first pixel with the highest intensity was found, and its row index was recorded. Since the total height of the image and the number of pixels from the bottom of the image to the lower end of the object are known, the height of the object can be easily calculated once the row index of the maximum intensity point is identified, as shown in figure 8.
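The row-wise scan described above can be sketched as follows. The 0.95 threshold and the toy floor offset are illustrative assumptions (the paper's measured constant was 53 pixels for its particular setup).

```python
def object_height_px(disparity, floor_offset_px, threshold=0.95):
    """Scan the disparity image row-wise (top to bottom) for the first pixel
    near the maximum intensity (the nearest object edge) and convert its row
    index to an object height in pixels."""
    peak = max(max(row) for row in disparity)
    cutoff = threshold * peak
    n_rows = len(disparity)
    for r, row in enumerate(disparity):
        if any(v >= cutoff for v in row):
            # rows between the object's top edge and the floor reference line
            return n_rows - floor_offset_px - r
    return 0

# toy 10-row disparity image whose brightest region starts at row 3
disp = [[0] * 4 for _ in range(10)]
disp[3][1] = 200
disp[5][2] = 198
height = object_height_px(disp, floor_offset_px=2)
# height = 10 - 2 - 3 = 5 pixels
```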
## IV Pixel-real distance relation
Since the same distance and the same camera height were used throughout, they are valid for this experiment as well. The Canny edge detector was used, as in figure 9, to identify the width of the cube (30 cm x 30 cm x 30 cm) from corner to corner in pixels.
## V Conclusions
In this paper, a method has been proposed for extracting object dimensions for environment mapping with low-cost cameras fused with laser ranging. Regarding camera calibration, the approach departed from traditional calibration methods and extended the distortion model up to 14 coefficients. Furthermore, the dilation method was introduced to remove the noise in the disparity map instead of using costly, processing-intensive methods that need multiple realizations. For the x-y detail extraction of objects, the LRF was introduced, and 2D map construction using the LRF was simulated and emulated as discussed in section \(E\). Using the height obtained from the proposed method, a 3D map can be generated on top of the 2D map, and in the end environment mapping can be achieved.
With the 3D map, object types and distances to objects can be identified, and the approach can be extended to identify the material of the objects as well. Instead of using high-cost equipment, the proposed method is economically efficient and easy to implement.
The ratio between dimensions in pixels and real dimensions depends on the distance from the baseline; hence a parametric relationship can be identified between them. This can be used to extend the proposed algorithm to identify the dimensions of objects from any measurable distance from the baseline.
Figure 8: Edge detection results using maximum intensity value of disparity map
Figure 7: Edge detection results using maximum intensity value of disparity map |
2307.04421 | Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference | Cardiac digital twins (CDTs) have the potential to offer individualized evaluation of cardiac function in a non-invasive manner, making them a promising approach for personalized diagnosis and treatment planning of myocardial infarction (MI). The inference of accurate myocardial tissue properties is crucial in creating a reliable CDT of MI. In this work, we investigate the feasibility of inferring myocardial tissue properties from the electrocardiogram (ECG) within a CDT platform. The platform integrates multi-modal data, such as cardiac MRI and ECG, to enhance the accuracy and reliability of the inferred tissue properties. We perform a sensitivity analysis based on computer simulations, systematically exploring the effects of infarct location, size, degree of transmurality, and electrical activity alteration on the simulated QRS complex of ECG, to establish the limits of the approach. We subsequently present a novel deep computational model, comprising a dual-branch variational autoencoder and an inference model, to infer infarct location and distribution from the simulated QRS. The proposed model achieves mean Dice scores of 0.457 \pm 0.317 and 0.302 \pm 0.273 for the inference of left ventricle scars and border zone, respectively. The sensitivity analysis enhances our understanding of the complex relationship between infarct characteristics and electrophysiological features. The in silico experimental results show that the model can effectively capture the relationship for the inverse inference, with promising potential for clinical application in the future. The code will be released publicly once the manuscript is accepted for publication. | Lei Li, Julia Camps, Zhinuo Wang, Abhirup Banerjee, Marcel Beetz, Blanca Rodriguez, Vicente Grau | 2023-07-10T08:54:12Z | http://arxiv.org/abs/2307.04421v3 | Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference
###### Abstract
Myocardial infarction (MI) demands precise and swift diagnosis. Cardiac digital twins (CDTs) have the potential to offer individualized evaluation of cardiac function in a non-invasive manner, making them a promising approach for personalized diagnosis and treatment planning of MI. The inference of accurate myocardial tissue properties is crucial in creating a reliable CDT platform, and particularly in the context of studying MI. In this work, we investigate the feasibility of inferring myocardial tissue properties from the electrocardiogram (ECG), focusing on the development of a comprehensive CDT platform specifically designed for MI. The platform integrates multi-modal data, such as cardiac MRI and ECG, to enhance the accuracy and reliability of the inferred tissue properties. We perform a sensitivity analysis based on computer simulations, systematically exploring the effects of infarct location, size, degree of transmurality, and electrical activity alteration on the simulated QRS complex of ECG, to establish the limits of the approach. We subsequently propose a deep computational model to infer infarct location and distribution from the simulated QRS. The _in silico_ experimental results show that our model can effectively capture the complex relationships between the QRS signals and the corresponding infarct regions, with promising potential for clinical application in the future. The code will be released publicly once the manuscript is accepted for publication.
Cardiac digital twins, myocardial infarction, inverse problem, cardiac MRI, QRS, multi-modal integration.
## I Introduction
Myocardial infarction (MI) is a major cause of mortality and disability worldwide [1]. Assessment of myocardial viability is essential in the diagnosis and treatment management for patients suffering from MI. In particular, the location and distribution of myocardial scars provide important information for patient selection and treatment planning. Late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) has been widely used to characterize myocardial scars [2]. However, the incorporation of LGE into MRI examination prolongs scan times and has potential side effects [3]. Recent studies have tried to delineate scars using non-enhanced cine MRI, with promising preliminary results [4, 5]. Alternatively, the electrocardiogram (ECG) can be used to reveal abnormalities related to electrophysiology post-MI [6]. For example, ST-segment elevation and T-wave inversion are commonly used indicators of cardiac remodeling associated with different stages of MI [7]. In contrast, QRS patterns have received less attention in the literature, though they also provide valuable information about the extent and location of myocardial damage following an MI [8]. It is still partly unclear how QRS abnormalities reflect MI characteristics, such as location, size, transmural extent, and cardiac electrical activity alterations. Therefore, a reliable technique to detect and delineate infarct regions combining non-enhanced imaging and QRS data is highly desirable.
Cardiac "digital twin" (CDT) technology can create virtual models of the heart combining cardiac images, ECG, and other subject-specific information [9]. It allows clinicians to visualize and analyze the structure, function, and electrical activity of the heart in real-time, providing valuable insights into the underlying mechanisms of MI [10]. As Fig. 1 shows, CDT workflows usually involve two stages, namely anatomical and functional twinnings, which present various challenges to overcome [11]. The anatomical twinning stage involves the segmentation of cardiac images, reconstruction of the 3D geometry of the heart, and the identification and extraction of relevant anatomical structures. It is complicated by the variability in the heart's anatomy across individuals, as well as by imaging artifacts and noise. At the functional twinning stage, the main challenge is to solve the inverse problem of electrocardiography, i.e. inferring electrophysiological properties in the myocardium from the ECG. This is complicated by the limitations of ECG recordings, which are sparse, noisy, and subject to substantial uncertainties. To solve the inverse problem, state-of-the-art approaches can be coarsely separated into two kinds: deterministic and probabilistic methods [12]. Deterministic approaches in cardiac electrophysiology involve minimizing a cost function that quantifies the discrepancy between the observed data and the model predictions. For robust inverse, spatial and/or temporal regularization [13, 14] and physics-informed regularization [15, 16] have been widely used. Probabilistic methods rely on Bayesian inference theory and numerical techniques to generate posterior distributions for the model parameters [17, 18]. They can incorporate prior knowledge into the parameter estimation with an uncertainty, which can be used to guide decision-making and assess the robustness of the results [19]. 
Nevertheless, conventional probabilistic methods are usually computationally expensive, as repeated numerical simulations are required to generate samples for the posterior distribution.
Recently, deep learning based probabilistic methods have emerged as an alternative to conventional methods for modeling complex dynamics of cardiac electrical activity. They can leverage deep neural networks to approximate the posterior distribution of the model parameters or latent variables, providing faster and more accurate approximations. For example, Ghimire _et al._[20] proposed a deep generative model to reconstruct cardiac transmembrane potential from ECG data. Li _et al._[21] designed a deep computational model for the inverse inference of ventricular activation properties in a non-invasive and efficient manner. Xie _et al._[22] employed a physics-constrained deep learning framework to inversely predict the heart-surface electrical signals from body surface potential maps. Sahli _et al._[23] developed physics-information neural networks for the reconstruction of activation maps in cardiac electrophysiology. Dhamala _et al._[24] proposed a generative variational autoencoder for parameter estimation of a personalized cardiac model. In addition to inferring the electrophysiological properties under sinus rhythm, several studies tried to investigate the propagation of cardiac electrical signals under
arrhythmias based on deep neural networks. For example, Meister _et al._[25] employed graph convolutional neural networks to estimate the depolarization patterns in the myocardium with scars. Bacoyannis _et al._[26] reconstructed activation patterns of the myocardium with various local wall thicknesses, as thin walls indicate infarct regions. However, with regards to different post-MI scenarios, the inverse inference of electrophysiological heterogeneity in the infarct regions has not been fully investigated.
In this work, we develop a deep computational model for the inverse inference of post-MI with different properties, varying the infarct location, size, and transmural extent. We first conduct a sensitivity analysis to investigate the relationship between QRS abnormalities and infarct characteristics in post-MI. This analysis provides insights into how variations in QRS signals are associated with specific infarct properties, informing the subsequent inverse inference process. The framework can efficiently combine the anatomical properties from cine MRI and electrophysiological information from QRS simulated via a biventricular electromechanical model of post-MI. This study provides an integrated and personalised perspective that incorporates the features from multi-modal data to predict tissue properties of post-MI, enabling the construction of a CDT platform. To the best of our knowledge, this is the first deep learning based computational model that addresses the inverse inference of MI with different characteristics.
## 2 Methodology
### Anatomical Twinning: Mesh Reconstruction
At the anatomical twinning stage, we reconstruct a subject-specific 3D torso-biventricular tetrahedral mesh from multi-view cardiac MRIs [27, 28]. Specifically, for the biventricular reconstruction, we first use a deep learning based ventricle segmentation from long- and short-axis cardiac MRIs and thus obtain sparse 3D contours. We then perform a misalignment correction based on the intensity and contour information coupled with a statistical shape model, followed by a surface mesh reconstruction and volumetric tetrahedral mesh generation. We utilize a two-step automated framework for the torso reconstruction, and the locations of the ECG electrodes (I, II, V1-V6, LA, RA, LL, RL) are measured from the personalized 3D torso mesh. To ensure a symmetric, consistent, and intuitive biventricular representation across various geometries, we project the biventricular mesh into a consistent biventricular coordinates (Cobiveco) system [29]. The Cobiveco system is defined by \((tm,ab,rt,tv)\), which correspond to transmural, apicobasal, rotational, and transventricular coordinates, respectively. The reader is referred to the anatomical twinning stage of Fig. 1 for the illustration of Cobiveco (\(tv\) is excluded there). We represent infarct areas in the myocardium as an ellipsoid with radii \(r_{tm}\), \(r_{ab}\), and \(r_{rt}\) as follows,
\[\frac{(tm_{i}-tm_{0})^{2}}{r_{tm}{}^{2}}+\frac{(ab_{i}-ab_{0})^{2}}{r_{ab}{}^{ 2}}+\frac{(rt_{i}-rt_{0})^{2}}{r_{rt}{}^{2}}\leq 1, \tag{1}\]
where \((tm_{0},ab_{0},rt_{0})\) is the center coordinate of the scar region.
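Selecting the infarcted mesh nodes with Eq. (1) amounts to a point-in-ellipsoid test over the Cobiveco coordinates; the centre and radii in the example below are illustrative values, not one of the paper's scenarios.

```python
def infarct_mask(nodes, center, radii):
    """Flag Cobiveco nodes (tm, ab, rt) lying inside the ellipsoid of Eq. (1)."""
    tm0, ab0, rt0 = center
    r_tm, r_ab, r_rt = radii
    return [((tm - tm0) ** 2 / r_tm ** 2
             + (ab - ab0) ** 2 / r_ab ** 2
             + (rt - rt0) ** 2 / r_rt ** 2) <= 1.0
            for tm, ab, rt in nodes]

# two nodes tested against an illustrative scar centred at (0.5, 0.5, 0.5);
# a large r_tm makes the region effectively transmural
mask = infarct_mask([(0.5, 0.52, 0.5), (0.5, 0.9, 0.5)],
                    center=(0.5, 0.5, 0.5), radii=(3.0, 0.1, 0.1))
```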
We consider different post-MI scenarios, including seven locations, two transmural extents, two different sizes, and two different cardiac electrical activity alterations. As Fig. 2 shows, one can define the infarct areas consistently in the 17-segment American Heart Association (AHA) map [30], enabling the study of the effects of MI properties at a population level. Note that in this study, we only consider the scars in the left ventricle (LV), as the majority of clinically significant myocardial scars present there [31]. The LV region is defined in Cobiveco as \(tv=0\vee(tv=1\wedge rt>\frac{2}{3})\) to include the whole septum. For the comparison of different infarct sizes and cardiac electrical activity alterations, we only report on lateral MI as an illustrative case. As Table 1 shows, we simulate infarct at seven different locations and one smaller size on lateral MI, each with two levels of transmural extent, and one scenario with a slower CV on transmural large lateral MI, resulting in a total of 17 post-MI scenarios for each subject. Figure 3 provides several examples of our experimental scenarios.
### Functional Twinning: Forward Electrophysiological Simulation
At the functional twinning stage, we simulate cardiac electrophysiology via an efficient orthotropic Eikonal model [18], which incorporates a human-based Purkinje system into the formulation of the activation times of root nodes (RN). The simulation is performed on the Cobiveco mesh, solving:
\[\begin{cases}\sqrt{\nabla^{T}t\mathcal{V}^{2}\nabla t}=1,\\ t(\Gamma_{0})=pk\left(\Gamma_{0}\right)-\min\left(pk\left(\Gamma_{0}\right) \right),\end{cases} \tag{2}\]
Figure 1: The cardiac digital twin (CDT) generation workflow combining cardiac magnetic resonance images (MRI) and electrocardiogram (ECG). Here, the anatomical twinning personalizes the geometrical model, while functional twinning personalizes the electrophysiological model. The anatomical and electrophysiological parameters include electrode positions, myocardial infarction (MI) distribution, ventricular muscle fiber orientation, Purkinje system, etc. Our goal is to solve the inverse problem for inferring the infarct location from simulated QRS.

where \(\mathcal{V}\) are the orthogonal conduction velocities (CVs) of fibre, sheet (transmural), and sheet-normal directions, \(t\) is the time at which the activation wavefront reaches each point in the mesh, \(\Gamma_{0}\) is the set of RN locations, and \(pk\) is a Purkinje-tree delay function from the His-bundle to every point. Therefore, the earliest activation time at the RNs is defined as their delay from the His-bundle through the Purkinje tree normalized by the earliest activation, such that the wavefront originates at \(t=0\) in one of the endocardial RNs. The QRS can be calculated from the activation time map (ATM) via a pseudo-ECG equation [33] for a 1D cable source with constant conductivity at a given electrode location \((x^{\prime},y^{\prime},z^{\prime})\), as
\[\phi_{e}\left(x^{\prime},y^{\prime},z^{\prime}\right)=\frac{a^{2}\sigma_{i}}{4 \sigma_{e}}\int-\nabla V_{m}\cdot\left[\nabla\frac{1}{r}\right]\,dx\,dy\,dz\, \tag{3}\]
where \(V_{m}\) is the transmembrane potential, \(\nabla V_{m}\) is its spatial gradient, \(r\) is the Euclidean distance from a given point \((x,y,z)\) to the electrode location, \(a\) is a constant that depends on the fiber radius, and \(\sigma_{i}\) and \(\sigma_{e}\) are the intracellular and extracellular conductivities, respectively. The pseudo-ECG method can efficiently generate normalized ECG signals without significant loss of morphological information compared to the bidomain simulation [34].
In modeling the effects of scars on the QRS, it is essential to consider the electrophysiological properties of the infarct regions, such as the slower CVs [35], which can lead to changes in the timing and amplitude of the ECG waveform and thus manifest as changes in QRS. Therefore, we vary the CVs of infarct and healthy myocardial areas during QRS simulation (see Sec. III-A1). As Fig. 4 shows, the ATM of MI patients presents slower electrical signal propagation compared to that of healthy ones, resulting in corresponding alteration in the simulated QRS morphology.
### Functional Twinning: Inverse Inference of Post-MI Properties
Figure 5 provides an overview of the proposed deep computational model, consisting of a dual-branch variational autoencoder (VAE) and an inference model. The VAE captures both anatomical and electrophysiological features, while the inference model uses the latent space representation to predict scar and border zone location. Fig. 6 depicts the network architecture.
For the geometry reconstruction, we reconstruct coarse and dense point clouds (PCs) to simultaneously learn global shape and local anatomy of the ventricles. Therefore, the PC reconstruction loss function is defined as follows,
\[\mathcal{L}_{\text{PC}}^{\text{rec}}=\sum_{i=1}^{K}\left(\mathcal{L}_{i,coarse}^{\text{chamfer}}+\alpha\mathcal{L}_{i,dense}^{\text{chamfer}}\right), \tag{4}\]
where \(K\) is the number of classes, \(\alpha\) is the weight term between the two PCs, and \(\mathcal{L}^{\text{chamfer}}\) is the chamfer distance between the input and reconstructed PCs. To improve the fidelity and resemblance of the reconstructed QRS to the original QRS, we minimize their mean-squared error (MSE) and dynamic time warping (DTW) distance [18],
\[\mathcal{L}_{\text{QRS}}^{\text{rec}}=\mathcal{L}_{\text{MSE}}(\text{QRS},\widehat{\text{QRS}})+\mathcal{L}_{\text{DTW}}(\text{QRS},\widehat{\text{QRS}}). \tag{5}\]
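The DTW term of Eq. (5) can be illustrated with the classic dynamic-programming recurrence; this plain sketch assumes an absolute-difference local cost, whereas the variant actually used for training [18] may differ (e.g. be differentiable).

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences: the minimal
    accumulated |a_i - b_j| cost over all monotone alignments."""
    n, m = len(a), len(b)
    INF = float('inf')
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# DTW tolerates temporal stretching: the repeated 0 aligns at no extra cost
d_same = dtw_distance([0, 1, 2], [0, 1, 2])      # identical sequences
d_warp = dtw_distance([0, 0, 1], [0, 1])         # warped but matching shapes
```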
Finally, the loss function for training the VAE is calculated as,
\[\mathcal{L}_{\text{VAE}}=\lambda_{\text{PC}}\mathcal{L}_{\text{PC}}^{\text{ rec}}+\lambda_{\text{QRS}}\mathcal{L}_{\text{QRS}}^{\text{rec}}+\lambda_{ \text{KL}}\mathcal{L}^{\text{KL}}, \tag{6}\]
where \(\lambda_{\text{PC}}\), \(\lambda_{\text{QRS}}\), and \(\lambda_{\text{KL}}\) are balancing parameters, and \(\mathcal{L}^{\text{KL}}\) is the Kullback-Leibler (KL) divergence loss to mitigate the distance between the prior and posterior distributions of the latent space.
For the inference, we predict the infarct location based on the low-dimensional features learned from the VAE. To alleviate the class-imbalance issue existed in the MI segmentation, we combine the cross-entropy (CE) loss and Dice score loss,
\[\mathcal{L}_{\text{seg}}=\mathcal{L}_{\text{CE}}+\lambda_{\text{Dice}} \mathcal{L}_{\text{Dice}}, \tag{7}\]
where \(\lambda_{\text{Dice}}\) is a balancing parameter. For realistic infarct shape, we further introduce a compactness loss,
\[\mathcal{L}_{\text{compact}}=\frac{1}{N^{pre}}\sum_{i=1}^{N^{pre}}\frac{d_{i}^ {pre}+d_{i}^{gal}}{d_{\text{max}}^{gal}}, \tag{8}\]
where \(N^{pre}\) is the total number of predicted MI points, \(d_{i}^{pre}\) and \(d_{i}^{gal}\) are the Euclidean distances from each predicted MI point \(i\) to the center of predicted and ground truth MI, respectively, and \(d_{\text{max}}^{gal}\) is the maximum Euclidean distance from ground truth MI points to their center. We introduce two further constraints, to control infarct size and prevent scar from appearing in the right ventricle (RV), through two additional loss functions:
\[\mathcal{L}_{\text{size}}=\frac{N^{pre}-N^{gd}}{N^{gd}}, \tag{9}\]
\[\mathcal{L}_{\text{spa}}=\frac{N^{pre}_{RV}}{N^{pre}}, \tag{10}\]
where \(N^{gd}\) is the total number of ground truth infarct points, while \(N^{pre}_{RV}\) is the number of predicted infarct points located in the RV, excluding the septum boundary. Hence, the final inference loss is defined as,
\[\mathcal{L}_{\text{inf}}=\mathcal{L}_{\text{seg}}+\lambda_{\text{compact}}\mathcal{L}_{\text{compact}}+\lambda_{\text{size}}\mathcal{L}_{\text{size}}+\lambda_{\text{spa}}\mathcal{L}_{\text{spa}}+\lambda_{\text{VAE}}\mathcal{L}_{\text{VAE}}, \tag{11}\]
where \(\lambda_{\text{compact}}\), \(\lambda_{\text{size}}\) and \(\lambda_{\text{spa}}\) are balancing parameters.
Fig. 2: The seven infarct locations defined on the 17-segment American Heart Association (AHA) model. The selection of the seven locations follows [32]. Ext: extensive; Lim: limited.
Fig. 3: Illustration of several post-MI scenarios, including different infarct locations, sizes, and transmural extents. Here, scars refer to the area of damaged or dead heart muscle tissue that has been replaced by non-functional fibrous tissue, while the border zone (BZ) is the area surrounding the scar tissue where there may be some remaining damaged heart muscle tissue that is not yet fully scarred.
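The compactness, size, and spatial constraints (Eqs. (8)-(10)) reduce to simple distance and point-count ratios; a minimal sketch with toy counts and points, not tied to the paper's data.

```python
import math

def compactness_loss(pred_pts, pred_center, gt_center, d_max_gt):
    """Eq. (8): mean distance of predicted points to the predicted and ground
    truth centres, normalized by the ground-truth maximum radius."""
    total = sum((math.dist(p, pred_center) + math.dist(p, gt_center)) / d_max_gt
                for p in pred_pts)
    return total / len(pred_pts)

def size_loss(n_pred, n_gt):
    """Eq. (9): relative deviation of the predicted infarct size from ground truth."""
    return (n_pred - n_gt) / n_gt

def spatial_loss(n_pred_rv, n_pred):
    """Eq. (10): fraction of predicted infarct points falling inside the RV."""
    return n_pred_rv / n_pred

s = size_loss(120, 100)    # prediction 20% too large
p = spatial_loss(0, 120)   # no predicted points leak into the RV
```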
## 3 Experiments and Results
### _Materials_
#### 3.1.1 Dataset and Simulation Setup
We collected 49 subjects with paired 12-lead ECGs and multi-view cardiac MRIs from the UK Biobank study [36]. The dataset was randomly divided into 34 training subjects, 5 validation subjects, and 10 test subjects, and each subject has 17 post-MI scenarios. The biventricular tetrahedral mesh for each subject was converted into PCs and then resampled into coarse and dense versions with 1,024 and 4,096 nodes, respectively. On these meshes, we imposed simulated infarcts with different locations, sizes, transmural extents, and CV alterations.
During the electrophysiology simulations, a fixed set of RN locations and CV values were utilized. Specifically, the RNs were placed at seven specific homologous locations based on Cobiveco: four in the LV and three in the RV. In the LV, they were situated in the mid-septum, basal-anterior paraseptal, and two mid-posterior locations, while in the RV, they were located in the mid-septum and two free wall regions [37]. Two sizes of lateral MI were achieved by halving \(r_{ab}\) and \(r_{rt}\) values for the small lateral MI compared to the large one. Two transmural extents were set by varying \(r_{tm}\), which was set as 3 and 0.5 for transmural and subendocardial scars, respectively. For baseline QRS simulation, the CV values for different directions were set as follows: 65 cm/s along the fiber direction, 48 cm/s along the sheet direction, 51 cm/s along the sheet-normal direction, and 100 cm/s and 150 cm/s for the sparse and dense endocardial directions, respectively [38]. These values were consistent with reported velocities for healthy human myocardium in previous studies [39, 40]. In the simulation of QRS for MI, the CVs in the areas of myocardial scarring and BZ were set to 10% and 50% (another slower CV configuration: 5% and 25%) of the respective values observed in healthy myocardium.
#### 3.1.2 Evaluation
For evaluation, we compared the predicted MI distribution of our proposed automatic method with the gold standard set in the simulation phase. To evaluate the segmentation accuracy, we calculated the Dice score, precision, and recall of the MI prediction, calculated on the PCs. Furthermore, we propose a novel evaluation metric called the AHA-loc-score, to assess the accuracy of MI localization using the 17-segment AHA map,
\[\mathrm{AHA\text{-}loc\text{-}score}=\beta_{\text{c-id}}\,\delta_{\text{c-pre,\,c-gd}}+\beta_{\text{ids}}\,\mathrm{IoU}_{\text{ids}}+\beta_{\text{c-d}}(1-\mathrm{d}_{\text{c}}), \tag{12}\]
where \(\delta_{\text{c-pre,\,c-gd}}\) indicates whether the AHA index of the predicted infarct center matches that of the ground truth, \(\mathrm{IoU}_{\text{ids}}\) is the intersection over union (IoU) of the AHA indices appearing in the predicted and ground truth MI regions, and \(\mathrm{d}_{\text{c}}\) is the normalized distance between the predicted and ground truth infarct centers. The weights \(\beta_{\text{c-id}}\), \(\beta_{\text{ids}}\), and \(\beta_{\text{c-d}}\) are set to 0.5, 0.2, and 0.3, respectively.
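The metric in Eq. (12) can be sketched as a small function. This is a hypothetical re-implementation based solely on the description above; the function name and argument conventions are assumptions, not the paper's code.

```python
def aha_loc_score(pred_center_idx, gt_center_idx,
                  pred_segments, gt_segments, center_dist_norm,
                  b_cid=0.5, b_ids=0.2, b_cd=0.3):
    """AHA-loc-score per Eq. (12).

    pred_segments / gt_segments: sets of AHA segment indices (1-17)
    covered by the predicted and ground-truth infarct regions.
    center_dist_norm: normalized distance between infarct centers, in [0, 1].
    """
    delta = 1.0 if pred_center_idx == gt_center_idx else 0.0
    union = len(pred_segments | gt_segments)
    iou = len(pred_segments & gt_segments) / union if union else 0.0
    return b_cid * delta + b_ids * iou + b_cd * (1.0 - center_dist_norm)

# e.g. matching centers, IoU = 2/3, centers 0.1 apart (normalized)
score = aha_loc_score(7, 7, {7, 8, 13}, {7, 13}, 0.1)
```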
### _Implementation_
The framework was implemented in PyTorch, running on a computer with a 3.50 GHz Intel(R) Xeon(R) E-2146G CPU and an NVIDIA GeForce RTX 3060. We used the Adam optimizer to update the network parameters (weight decay = 1e-3). The batch size was 4, and the initial learning rate was set to 1e-4 with a stepped decay rate of 0.5 every 6800 iterations. The balancing parameters of the loss functions were set as follows: \(\alpha=5\), \(\lambda_{\text{KL}}=0.01\), \(\lambda_{\text{compact}}=1\), \(\lambda_{\text{size}}=1\), \(\lambda_{\text{spa}}=1\), and \(\lambda_{\text{VAE}}=1\). The simulation of one QRS of MI took about 5 min. The training of the model took about 10 hours (300 epochs in total), while the inference of the networks required about 9 s to process one test case.
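The stepped learning-rate schedule described above (initial 1e-4, halved every 6800 iterations) can be written in closed form; the helper name is hypothetical.

```python
def lr_at_iteration(it, base_lr=1e-4, decay=0.5, step=6800):
    """Learning rate at a given iteration under a stepped decay:
    the base rate is multiplied by `decay` once per completed `step`."""
    return base_lr * decay ** (it // step)
```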
### _Sensitivity Analysis of QRS for Different Post-MI Characteristics_
We performed a sensitivity analysis in which we studied the effects of different infarct configurations on the QRS complex. The aim was
Figure 4: Illustration of regional alterations in ventricular activation when scars are present in the heart. Here, we employ the subject with transmural extensive anterior MI as an example to compare its activation time map (ATM) and QRS with those of a corresponding healthy subject. The arrows highlight the areas where the ATM differs between the MI and healthy cases.
Figure 5: Deep computational model for the inverse inference of post-MI location and distribution. Note that the reconstructed point clouds (PCs) include both dense and sparse PCs and the QRS includes 8 leads. For simplicity, the schematic of the sparse PC is omitted and only a single lead is presented here.
to find out which locations and sizes had a significant effect on the QRS, and thus to establish the feasibility of the inverse inference task. To quantify the discrepancy between QRS shapes, we employed a global measure, DTW, which compares signals of different lengths with an additional penalty for the difference in QRS duration between the two signals [18]. Furthermore, we introduced four QRS abnormalities reported in the literature, _i.e._, _QRS duration prolongation_ [41], _pathological Q-waves_ [42], _poor R wave progression (PRWP)_ [43], and _fragmented QRS (fQRS)_ [44]. The reader is referred to Fig. 7 for an illustration of each local QRS criterion of post-MI.
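The DTW measure can be sketched with the standard dynamic-programming recurrence. The exact form of the QRS-duration penalty is not specified in the text, so the weighted length-difference term below is an assumption, and the function name is hypothetical.

```python
def dtw(a, b, duration_penalty=0.0):
    """Dynamic time warping distance between two 1-D signals.

    Fills the classic DP table where cell (i, j) holds the minimal
    cumulative alignment cost of a[:i] and b[:j]; the optional
    duration-penalty term (weighted length difference) stands in for
    the QRS-duration penalty mentioned in the text.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m] + duration_penalty * abs(n - m)
```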
#### 3.2.1 Sensitivity Analysis: Global QRS Measure
To assess the impact of the 17 different MI scenarios on the QRS, we measured the dissimilarity between each of these and the baseline, as well as the dissimilarity among them. As Fig. 8 shows, the QRS complex showed morphological alterations in most post-MI scenarios when compared to the normal QRS complex. In particular, inferolateral, extensive anterior, and apical transmural MI presented more evident alterations than the others. One can see a significant decrease in QRS morphology alteration in small lateral MI compared to large lateral MI, especially for the subendocardial one. The degree of transmurality had a noticeable impact on QRS morphology at all infarct locations; namely, transmural scars generally caused more prominent changes in QRS morphology than subendocardial scars. Although the QRS dissimilarities between transmural and subendocardial septal scars were relatively small (DTW\({}^{\text{max}}=0.2\) and DTW\({}^{\text{avg}}=0.3\)), differences in QRS morphology can still be observed, as shown in Fig. 9. Despite the influence of transmurality on QRS morphology, the differences in QRS between various infarct locations seemed to be more pronounced than those caused by the extent of transmurality. This implies that the QRS has greater sensitivity for localizing MI than for predicting its transmural extent. The primary QRS morphological difference observed with varying degrees of CV reduction was the QRS duration: 99.5 ms vs. 113.8 ms on transmural large lateral MI. However, our initial tests presented unexpected QRS simulation results when we significantly reduced the CVs in the MI regions. This suggests that the personalized CV configuration of infarct areas during simulation requires further investigation in the future. Most infarct locations were represented on the QRS by leads I, V5, and V6, whereas septal MI was represented by leads V1-V4 and V3-V4 for subendocardial and transmural scars, respectively.
This result is in agreement with those reported in clinical practice [32]. Generally, larger scars tend to result in QRS changes appearing in more leads.
#### 3.2.2 Sensitivity Analysis: Local QRS Measure
The changes in QRS morphology for the 17 MI scenarios were reflected in multiple ways. Here, we introduced several QRS criteria and compared the contribution of each of them to infarct detection. We found that apical and inferolateral MI tended to present prolongation of the QRS
Figure 6: The network architecture of the proposed deep computational model. Here, \(n\) is the number of nodes of the input PC (\(n=4096\)), which includes three point coordinates (\(d=3\)) and four Cobiveco coordinates (\(c=4\)); \(s\) is the number of node categories, which can be healthy, scar, and BZ (\(s=3\)); \(n_{\text{coarse}}\) and \(n_{\text{dense}}\) are the numbers of nodes in the coarse and dense output PCs, respectively (\(n_{\text{coarse}}=1024\), \(n_{\text{dense}}=4096\)); \(N_{\text{Lead}}\) and \(L_{\text{QRS}}\) refer to the number of QRS leads and the unified length of the QRS signals, respectively (\(N_{\text{Lead}}=8\), \(L_{\text{QRS}}=512\)).
Figure 7: Sketch map of MI-related QRS abnormalities in the literature. (a) prolongation of the QRS duration; (b) pathologic Q waves, i.e., the presence of a Q wave with duration \(\geq\) 0.03 s and/or amplitude \(\geq\) 25% of the R-wave amplitude; (c) poor R wave progression, i.e., the absence of the normal increase in amplitude of the R wave in the precordial leads when advancing from lead V1 to V6; (d) fragmented QRS, i.e., the presence of multiple small spikes or notches within the QRS complex.
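The pathologic Q-wave criterion in panel (b) can be expressed as a simple predicate. This is a sketch with thresholds taken from the caption; the function name and amplitude conventions (absolute magnitudes) are assumptions.

```python
def is_pathologic_q(q_duration_s, q_amp, r_amp):
    """Pathologic Q wave per the Fig. 7(b) criterion:
    duration >= 0.03 s and/or amplitude >= 25% of the R-wave amplitude."""
    return q_duration_s >= 0.03 or abs(q_amp) >= 0.25 * abs(r_amp)
```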
duration: 124.1 ms and 107.7 ms (apical and inferolateral MI) vs. 90.4 ms (normal). PRWP mainly occurred in extensive anterior, septal, and apical MI, similar to what has been reported in the literature [43, 45]. Specifically, the R wave amplitude in septal MI was sometimes flattened, while the R wave of V6 tended to be larger than that of V5 in apical MI, as Fig. 9 shows. The prevalence of fQRS was more common in the inferior lead (lead II) compared with the anterior leads (leads V3 and V4) and the lateral leads (leads V5 and V6), similar to the results reported by Liu _et al._ [46]. The presence of fQRS in lead II and leads V3-V4 indicated inferolateral and extensive anterior MI, respectively. In contrast, the pathological Q wave failed to distinguish MI from healthy subjects in our simulation system.
### Inference Accuracy of Post-MI Properties
Table II presents the quantitative results of the proposed method, and Fig. 10 provides the boxplots of the Dice score. The proposed method obtained the best segmentation and localization performance on the transmural extensive anterior MI (Dice \(=0.934\pm 0.028\), AHA-loc-score \(=0.987\pm 0.007\)). Even for the scenarios without notable QRS morphology changes, such as MI in the septum and limited anterior areas, the model could still localize the corresponding infarct (DTW\({}^{\text{max}}=0.4\), AHA-loc-score \(\approx 0.7\)). Nevertheless, the model showed difficulties in detecting lateral MI (especially the subendocardial and small ones, with a Dice score of \(0.097\pm 0.112\)) and inferior MI, with Dice scores of \(0.228\pm 0.252\) and \(0.173\pm 0.288\) for the subendocardial and transmural ones, respectively. In general, the segmentation of transmural MI tended to be more accurate than that of subendocardial MI (Dice: \(0.518\pm 0.347\) vs. \(0.396\pm 0.271\)). This observation aligned with expectations, since transmural MI often exhibits more pronounced and distinct QRS abnormalities than subendocardial MI, as shown in the previous sensitivity analysis. As a result, our model can leverage these noticeable differences to identify and segment the affected region accurately. Nevertheless, the model's ability to precisely determine the location of the infarction within the myocardium did not vary significantly (AHA-loc score: \(0.610\pm 0.343\) vs. \(0.659\pm 0.339\)). This can be attributed to the fact that the localization of MI is not solely dependent on the depth or extent of the infarct. Furthermore, the accuracy of predicting scars was generally higher than that of predicting border zones
Fig. 8: Comparison of QRS dissimilarity using dynamic time warping (DTW) among different MI scenarios and the baseline. Note that here we excluded the scenario with the slower conduction velocity configuration. (a) QRS dissimilarity of each MI scenario to the baseline in each lead; (b) maximum and average QRS dissimilarity (DTW\({}^{\text{max}}\) and DTW\({}^{\text{avg}}\)) of each MI scenario across all leads, for transmural and subendocardial cases.
Fig. 10: Boxplots of Dice scores of MI inference by the proposed model.
Fig. 9: Comparison of QRS morphology between MI scenarios with different transmural extents and locations, along with poor R wave progression (highlighted with red dashed lines).
(BZs). This could be because the complex nature of BZs, where the myocardial tissue undergoes a transition from healthy to scarred, introduces additional variability and ambiguity in the QRS signals, leading to a lower prediction accuracy for BZs. The performance in terms of Dice coefficient, precision, recall and AHA-loc-score was generally consistent. However, in specific cases like apical, limited anterior, and inferolateral transmural MI, precision may exhibit a slight superiority over the Dice. Apical MI obtained the highest AHA-loc-score, indicating its accurate and reliable localization. This could be attributed to the uniqueness of the apical location, allowing for a more precise and unambiguous localization of MI due to the absence of significant interference from neighboring structures.
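The node-wise Dice, precision, and recall reported throughout this evaluation can be computed per class (e.g., scar or BZ) over the PC nodes. This is a generic sketch, not the paper's code; label encoding is an assumption.

```python
def segmentation_metrics(pred_labels, gt_labels, cls):
    """Dice, precision, and recall for one class over point-cloud nodes."""
    tp = sum(p == cls and g == cls for p, g in zip(pred_labels, gt_labels))
    fp = sum(p == cls and g != cls for p, g in zip(pred_labels, gt_labels))
    fn = sum(p != cls and g == cls for p, g in zip(pred_labels, gt_labels))
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, recall

# toy example: 4 PC nodes with classes 0 = healthy, 1 = scar, 2 = BZ
dice, prec, rec = segmentation_metrics([1, 1, 0, 2], [1, 0, 0, 2], cls=1)
```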
Figure 11 provides 3D results of a representative test subject for different scenarios. One can observe that the 3D visualization agrees well with the quantitative analysis results. Outliers appeared in the inferior area for lateral MI detection and vice versa, which suggests that the model had difficulty distinguishing between the lateral and inferior MI areas based on their QRS. Furthermore, even though extensive anterior and inferolateral MI both covered large areas, the detection of inferolateral MI tended to be more difficult than that of extensive anterior MI, which is further supported by the correlation study of MI volume presented in Fig. 12.
### Ablation Study
Accurate MI inference goes beyond merely identifying the location of the infarction; it also requires a comprehensive assessment of the extent of infarct tissue. Therefore, we introduced additional constraints, namely localization constraints (\(\mathcal{L}_{\text{spa}}\) and \(\mathcal{L}_{\text{comp}}\)) and an extent constraint (\(\mathcal{L}_{\text{size}}\)). To evaluate their effectiveness, we conducted an ablation study by selectively removing them from the proposed framework, as presented in Table III. One can see that in most scenarios the proposed method obtained the best performance compared to the others. For example, without the localization constraints, the model presented worse performance in identifying septal MI. Note that septal MI is normally difficult to detect, due to its unique position and the overlapping ECG effects from neighboring regions, such as the anterior and inferior walls. We observed that the absence of \(\mathcal{L}_{\text{comp}}\) led to improved Dice in cases of inferolateral and subendocardial limited anterior MI and decreased Dice in cases of extensive anterior MI. Nevertheless, the reduction in outliers observed in the visualization results suggests that \(\mathcal{L}_{\text{comp}}\) effectively minimizes the occurrence of outliers, leading to more reliable and accurate
Figure 11: 3D visualization of infarct detection results using the proposed method. For each scenario, the left side is the ground truth, and the right side is the prediction.
predictions. The extent constraint was also crucial, particularly in distinguishing between subendocardial and transmural MI, which present different sizes in the same anatomical position.
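The ablation above amounts to zeroing individual weights in a weighted sum of loss terms. A minimal sketch of that mechanism follows; the term names mirror the text, but the numeric values and the helper name are made up for illustration.

```python
def total_loss(terms, weights):
    """Weighted sum of loss terms. Dropping a key from `weights`
    (or setting it to 0) mimics removing that constraint in the ablation."""
    return sum(weights.get(name, 0.0) * value for name, value in terms.items())

terms = {"seg": 1.2, "compact": 0.4, "size": 0.9, "spa": 0.1}
full = total_loss(terms, {"seg": 1.0, "compact": 1.0, "size": 1.0, "spa": 1.0})
no_comp = total_loss(terms, {"seg": 1.0, "size": 1.0, "spa": 1.0})  # ablate L_comp
```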
### Extended Evaluation
#### 4.5.1 Exploring the Detection Limit of QRS for Small Infarct Areas
To investigate the smallest infarct area that can be detected from QRS complexes, we employed apical MI as an example, varied the infarct size, and retrained the model based on the pre-trained one. The idea behind this approach is to determine the sensitivity of QRS-based detection methods for small infarct areas, which may have important clinical implications for risk stratification and management of post-MI patients. Figures 13 (a) and (c) demonstrate that as the infarct size decreased, the QRS morphological changes also diminished. This is because a smaller infarct has a lesser impact on the overall electrical conduction and activation patterns of the heart. Consequently, the deviations in the QRS, which represents the depolarization of the ventricles, are less pronounced. Nevertheless, our method could still extract subtle features from the QRS complex that may be indicative of small infarct areas, as Fig. 13 (b) shows. This ability persisted until the Cobiveco apicobasal radius \(r_{ab}\) of the scar was reduced to 0.1 for apical MI.
#### 4.5.2 Correlation Analysis: Relationship between ECG/ PC Reconstruction and MI Inference Accuracy
To evaluate the robustness of the proposed inference scheme to the reconstruction error, we analyzed the relationship between the reconstruction and inference errors of the proposed method. The accuracy of PC and ECG reconstruction was calculated as 0.5\(\sigma_{\text{DC}}^{\text{RC}}\) with \(\alpha=1\) and \(\sigma_{\text{QRS}}^{\text{RC}}\), respectively. The \(r^{2}\) values of scar/ BZ for the PC- and ECG-MI inference correlations were 0.002/ 0.006 and 0.008/ 0.009, respectively, indicating no relationship between inference and reconstruction accuracy. This implies that the accuracy of MI inference using the proposed method was not significantly influenced by the quality of the reconstruction. This is reasonable, as the proposed method focuses on extracting relevant features from the input data rather than relying solely on accurate reconstruction for MI inference. Nevertheless, the reconstructions are still necessary, as they provide valuable information for the inference. To demonstrate this, we conducted a comparison by removing the reconstruction steps, and performance noticeably decreased (AHA-loc scores: \(0.610\pm 0.343\) vs. \(0.561\pm 0.338\) for subendocardial MI, and \(0.659\pm 0.339\) vs. \(0.585\pm 0.367\) for transmural MI), highlighting the significance of incorporating reconstruction in the inverse inference.
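The \(r^{2}\) values above are coefficients of determination for a least-squares line fit. A dependency-free sketch of how such a value can be computed (function name hypothetical):

```python
def r_squared(x, y):
    """Coefficient of determination of the least-squares line of y on x.

    For simple linear regression this equals the squared Pearson
    correlation: r^2 = S_xy^2 / (S_xx * S_yy).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0
```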
## 5 Discussion and Conclusion
In this paper, we have developed a deep computational model to tackle the inverse problem in cardiac electrophysiology, _i.e._, inferring MI distribution from QRS signals. Through the integration of anatomical and electrophysiological data, we achieve a comprehensive analysis that incorporates different infarct locations, sizes, transmural extents, and cardiac electrical activity alterations. By consistently representing the ventricular anatomy in a coordinate reference system,
Figure 12: The scatter plots and correlations between the volumes of the ground truth (\(x\)-axis) and predicted (\(y\)-axis) infarct regions. Here, the volume is defined as the number of nodes belonging to scars/ BZ in the resampled PCs. The coefficients of determination \(r^{2}\) for scars and BZ are presented in the upper and lower rows, respectively. The dashed diagonal line represents perfect correlation between the prediction and the ground truth.
we establish a robust sensitivity analysis framework for studying the association between infarct characteristics and QRS abnormalities. The sensitivity analysis results have demonstrated significant morphological alterations in the QRS complex for various post-MI scenarios, particularly inferolateral, extensive anterior, and apical MI. These findings suggest that the involvement of large areas of damaged heart muscle leads to pronounced changes in QRS morphology. Furthermore, the analysis emphasizes the impact of transmurality on QRS morphology, namely that transmural MI presents more prominent changes compared to subendocardial MI. However, the differences in QRS between various infarct locations can be more pronounced than those caused by the extent of transmurality, indicating the greater sensitivity of the QRS in localizing MI rather than predicting its transmural extent. The analysis further highlights the importance of lead selection in accurately detecting the location of infarction. Overall, the sensitivity analysis provides valuable insights into the relationship between infarct characteristics and QRS abnormalities, enhancing our understanding of the complex interplay between infarct characteristics and electrophysiological features.
The proposed method can effectively segment and localize MI, even in scenarios with limited QRS morphology changes, demonstrating the feasibility of developing CDTs for MI patients. The results of the ablation study emphasize the importance of the localization and extent constraints for accurate MI inference. The proposed method exhibits the ability to detect small infarct areas, although its sensitivity is limited, as shown in our extended study. The correlation analysis demonstrates that while incorporating reconstruction in the inference process is important, the accuracy of MI inference is not significantly dependent on the quality of the reconstruction. To conduct a sensitivity analysis of MI properties, we intentionally selected consistent infarct locations, sizes, and transmural extents for each subject. While this ensures a controlled comparison, it may have led to a limited evaluation of MI inference. We conducted a small test by randomly selecting an infarct for each subject and obtained reasonably good results in only a few cases. This outcome is expected, because randomly simulating a single scenario for each subject limits the ability of the proposed model to learn and generalize across different infarct characteristics. To improve performance, a more diverse and comprehensive dataset with a wider range of infarct scenarios should be used to train the model in the future.
Note that this work is an initial study, and there are several limitations that need to be acknowledged. Firstly, this study assumes a known set of RNs and fixed CVs for all subjects, which may not fully capture the complexity and heterogeneity present in real-world healthcare data. Therefore, further research is needed to personalize these activation properties based on individual patient characteristics and specific healthcare settings. Secondly, we only consider cardiac anatomical information and electrode nodes while disregarding the full torso geometry. The inclusion of torso geometry could provide valuable insights into its influence on QRS patterns. By incorporating the full torso geometry in our future work, we can gain a more comprehensive understanding of the factors influencing QRS patterns and improve the accuracy of our predictions and interpretations. Thirdly, this study focuses solely on the QRS complex, rather than considering the entire ECG signal. Applying the analysis to the whole ECG signal would provide a more comprehensive assessment but may require significant computational resources. To address this limitation, future research could explore a computationally efficient surrogate to replace the expensive simulation model. Finally, while the developed CDTs can provide valuable insights into the mechanisms of MI, they are based on simplified assumptions about the heart and may not capture all aspects of the complex interactions between cardiac structures and functions. Given the limitations, particularly in the simulated dataset used, this can only serve as a proof of concept until validation on clinical data can be performed.
2310.11385 | A voxel-level approach to brain age prediction: A method to assess regional brain aging | Neha Gianchandani, Mahsa Dibaji, Johanna Ospel, Fernando Vega, Mariana Bento, M. Ethan MacDonald, Roberto Souza | 2023-10-17T16:32:38Z | http://arxiv.org/abs/2310.11385v2 | # A voxel-level approach to brain age prediction: A method to assess regional brain aging
###### Abstract
Brain aging is a regional phenomenon, a facet that remains relatively under-explored within the realm of brain age prediction research using machine learning methods. Voxel-level predictions can provide localized brain age estimates that can provide granular insights into the regional aging processes. This is essential to understand the differences in aging trajectories in healthy versus diseased subjects. In this work, a deep learning-based multitask model is proposed for voxel-level brain age prediction from T1-weighted magnetic resonance images. The proposed model outperforms the models existing in the literature and yields valuable clinical insights when applied to both healthy and diseased populations. Regional analysis is performed on the voxel-level brain age predictions to understand aging trajectories of known anatomical regions in the brain and show that there exist disparities in regional aging trajectories of healthy subjects compared to ones with underlying neurological disorders such as Dementia and more specifically, Alzheimer's disease. Our code is available at [https://github.com/nehagianchandani/Voxel-level-brain-age-prediction](https://github.com/nehagianchandani/Voxel-level-brain-age-prediction).
Voxel-level brain age prediction, T1-weighted MRI, regional brain aging, deep learning
## 1 Introduction
As humans progress through life and age, the brain ages as well and it can be observed with neuroimaging (MacDonald and Pike, 2021). This concept, known as brain age, mirrors the chronological age but pertains specifically to the brain. It provides insights into the maturity level and developmental trajectory of an individual's brain which can sometimes be different
from the overall aging process of an individual. For brain age studies, it is assumed that for healthy subjects, brain age is representative of chronological age, indicating that the brain is aging at a similar rate as humans age. However, for subjects with underlying neurological disorders, there is often a deviation in the aging trajectory. An effective biomarker of neurological disorders is increased brain age (Cole et al., 2017, 2018; Huang et al., 2017).
Early works on brain age provide a global estimate, _i.e._, brain age is studied as a single global index for the entire brain. Global brain age has been demonstrated as an effective biomarker to study the brain aging process in the presence and absence of various neurological disorders (Cole, 2017; Franke and Gaser, 2019). However, due to its global nature, it does not provide spatial information on the brain aging process. Studies have shown that the aging process occurs at different rates and may be non-linear across different regions of the brain, highlighting region-specific response to the aging process (Hof et al., 1996; Raz et al., 2010). The global brain age index is not able to capture this regional information related to aging. The concept of voxel-level brain age can help bridge the gap, where a voxel represents a small unit of the brain volume. Brain age prediction at the level of each voxel can provide a fine-grained analysis of how different regions of the brain age in healthy compared to diseased brains assigning a distinct brain age to each voxel of the brain. Voxel-level predictions can be particularly useful for understanding how neurological disorders impact different regions of the brain. Most neurological disorders are often associated with specific regions of the brain, for example, Alzheimer's disease (AD) is associated with atrophy in the hippocampus and temporal regions of the brain (Rao et al., 2022; Pasquini et al., 2019), and Parkinson's is associated with basal ganglia (Blandini et al., 2000; Caligiore et al., 2016), and hence, these regions are expected to have an increased brain age as compared to other regions of the brain in the presence of corresponding disorders.
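The voxel-level idea can be made concrete with a toy sketch: given per-voxel age predictions and a subject's chronological age, compute the predicted age difference (PAD) per voxel and average it within anatomical regions. All names and numbers below are illustrative, not from the paper.

```python
def pad_map(voxel_pred_ages, chronological_age):
    """Per-voxel predicted age difference (PAD = predicted - chronological)."""
    return [p - chronological_age for p in voxel_pred_ages]

def regional_mean_pad(pad, voxel_regions, region):
    """Mean PAD over the voxels labeled with a given anatomical region."""
    vals = [d for d, r in zip(pad, voxel_regions) if r == region]
    return sum(vals) / len(vals)

# toy 4-voxel "brain" of a 65-year-old: a positive regional mean PAD
# flags accelerated aging of that region, a negative one slower aging
pad = pad_map([70.0, 72.0, 60.0, 61.0], 65)
regions = ["hippocampus", "hippocampus", "cortex", "cortex"]
```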
In this article, an extended analysis of our recently proposed deep learning (DL) model to predict voxel-level brain age using T1-weighted magnetic resonance (MR) images (Gianchandani et al., 2023) is presented. The initial work introduced a multitask architecture for voxel-level brain age prediction and evaluated that model on presumed healthy subjects. In this work, the analysis is extended by performing an ablation study to reflect on how the multi-task architecture is an improvement over a single-task deep learning model. Additionally, the results of the proposed model are inspected and evaluated on subjects with dementia and, more specifically, AD, and varying brain ages are reported for different anatomical regions of the brain. A voxel-level brain age prediction model can provide an enhanced understanding of the regional aging processes in the brain while allowing the quantification of the observed deviation in years. Incorporating a multi-task framework moves closer to enhancing the transparency and interpretability of the DL model, and this is substantiated by a comparison of the proposed methodology to existing interpretability methods implemented over a state-of-the-art global age prediction model. To summarize, the contributions are (refer to Figure 1):
1. Proposal of a multitask DL voxel-level brain age prediction model, building upon our prior work (Gianchandani et al., 2023), with an extended evaluation encompassing subjects with dementia.
2. An ablation study to show the importance of the different tasks in the multitask architecture.
3. Regional analysis of the brain aging process in presumed healthy and dementia subjects.
4. Comparison of the proposed model with existing interpretability methods implemented over a state-of-the-art global age prediction model.
## 2 Related Work
Brain age prediction is a well-researched domain; however, most studies focus on a global analysis of brain age. Initially, this was done with handcrafted features using traditional machine learning (ML) techniques like Support Vector Machines, Random Forests, and other tabular machine learning models (Valizadeh et al., 2017; Lemaitre et al., 2012; Beheshti et al., 2021). The approach with traditional ML models is generally considered easier to explain and interpret owing to the reliance on simpler algorithms, fewer parameters,
Figure 1: Overview of the contributions of this article. There are 4 major contributions: (i) Proposal of a voxel-level brain age prediction model with initial validation on healthy subjects, (ii) Ablation experiments were done to justify the use of a multitask architecture of the DL model with three-tasks over two-task, and one-task counterparts. (iii) Regional analysis of predicted brain age by clustering voxel-level brain age predictions into known anatomical regions of the brain and (iv) An interpretability analysis where the proposed voxel-level approach to understanding regional aging trajectories is compared to traditional interpretability methods like Grad-CAM, SmoothGrad, and Occlusion Sensitivity.
engineered features, and in-built feature importance scores; these approaches achieved brain age predictions with a mean absolute error (MAE) of \(\sim\) 4-8 years. The use of manually engineered features can aid in understanding the model, but can also be restrictive, as it can lead to the omission of crucial features during the feature engineering process. This limitation led to the shift towards the use of DL models for predicting brain age. Manual feature engineering can inadvertently simplify and distort complex data representations, leaving scope for future improvements. The transition to neural network models therefore made it possible to capture complex representations within the data that are integral to the brain age prediction task (Plis et al., 2014). DL models showed significant improvement on the brain age prediction task (with MAE as low as 2-4 years) (Ito et al., 2018; Kolbeinsson et al., 2020); however, due to their complexity and black-box nature, these DL models have limited interpretability.
Studies have attempted to explain DL models for brain age prediction with techniques like Grad-CAM (Bermudez et al., 2019), saliency map-based techniques (Yin et al., 2023), occlusion map-based techniques (Bintsi et al., 2021), layer-wise relevance propagation (Hofmann et al., 2022), and SHapley Additive exPlanations (SHAP) (Ball et al., 2021), among others, to better understand the regional contributions to brain age prediction models. However, one common limitation of existing interpretability techniques lies in the use of gradients to calculate feature importance and, consequently, the inability to compare relevance scores across samples. The explanations provided by existing interpretability methods are quantitative, but only at the sample level, as the relevance scores are based on the relative importance of different regions in the input image. Despite these flaws, the aforementioned methods have proven tremendously helpful in making black-box models more transparent and a step closer to understanding the decision-making process of complex neural network architectures. Achieving state-of-the-art results should not come at the cost of interpretability. The proposed approach to predicting voxel-level brain age produces brain predicted age difference (PAD) maps that reflect the regional aging processes and provide a way to quantify healthy versus diseased aging patterns of the brain that is comparable across samples. Additionally, the proposed modeling method ensures that structural features in the brain are used to predict brain age; this will be discussed in detail in sections 4 and 5.
To move towards a regional analysis of the brain aging process, studies (Beheshti et al., 2019; Bintsi et al., 2020) have attempted to predict brain age at a block or patch level (with an MAE in the range of \(\sim\) 1.5-2 years), where predictions are made for individual blocks of the brain. These blocks do not necessarily correspond to known anatomical regions of the brain but do provide a level of spatial information compared to global age prediction models, and the authors postulate taking this a step further with an analysis at a higher resolution. It is important to acknowledge that studies have explored regional aging trajectories in the brain using other techniques, such as regional volume changes (Raz et al., 2005) and functional changes (Davidson et al., 1999); however, for the scope of this article, we limit our focus to studies that utilize ML/DL techniques from a brain age prediction perspective. Finally, based on the current literature, voxel-level predictions have only been explored once before, by Popescu et al. (2021). Their method produces voxel-level age maps to understand the regional aging process in the brain, however at the cost of a high MAE of \(\sim\) 9 years. The authors utilize a
modified version of a U-Net architecture to predict brain age at a voxel-level and block-level. This method will be referred to as the baseline for the scope of this article.
## 3 Materials and Methods
### Data
T1-weighted MR imaging from publicly available datasets is utilized to encourage reproducibility. All training data correspond to presumed healthy controls from the Cambridge Centre for Ageing Neuroscience (Cam-CAN) dataset (Taylor et al., 2017). The dataset (n=651) is nearly uniformly distributed across the age range of 18-88 years, with a mean age of 54.24\(\pm\)18.56 years, and has a male:female ratio of 55%:45% to limit sex-related bias in the model.
An independent test set (n=359) corresponding to healthy controls for further validation of the model was sourced from the Calgary-Campinas-359 (CC359) dataset (Souza et al., 2018) (age range 36-69 years with a mean of 53.46\(\pm\)9.72 years) with a balanced sex-distribution of 49%:51%. The CC359 dataset contains data acquired on scanners from three different vendors (Philips, General Electric [GE], Siemens) and at two different magnetic field strengths (1.5 T, and 3 T) giving rise to 6 subsets within the dataset to assess the robustness of the proposed model across different data acquisition protocols.
To create the bias correction methodology (further discussed in Section 3.7), 48 healthy control subjects each from the Open Access Series of Imaging Studies (OASIS) (Marcus et al., 2007), Alzheimer's Disease Neuroimaging Initiative (ADNI) (Mueller et al., 2005a,b), and Cam-CAN datasets (unseen during training) were extracted, totalling 144 samples. The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The mean age of the bias correction data set was \(63.77\pm 22.88\) years with a male:female ratio of 25%:75%. The imbalance in the sex ratio is observed as an effect of the sex imbalance in the ADNI dataset.
For the evaluation of the proposed model on subjects with underlying neurological disorders, two open-source datasets were utilized. Twenty-eight dementia subjects were extracted from the OASIS dataset (LaMontagne et al., 2019) (mean age \(69.17\pm 5.13\) years) and twenty subjects with AD from the ADNI dataset (mean age \(64.8\pm 5.24\) years).
### Data preparation and pre-processing
To ensure that all MR images have the same orientation, FMRIB Software Library's (FSL) (Jenkinson et al., 2012) 'fslreorient2std' command was used. Brain extraction masks and tissue segmentation masks to segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) for the T1-weighted images from the Cam-CAN dataset were obtained using two U-Net models trained for the specific tasks on the CC359 dataset. The models were trained on the CC359 dataset due to the availability of the binary brain extraction masks and the tissue segmentation masks along with the publicly available T1-weighted images. The binary brain extraction masks are used to obtain brain-extracted input to the model and the tissue segmentation masks are used as ground truths for one of the output tasks in the methodology. All MR images have a voxel size of 1 mm\({}^{3}\).
### Proposed model architecture
In this work, a multitask U-Net architecture is proposed to predict voxel-level brain age along with two additional tasks: global brain age prediction and brain tissue segmentation into GM, WM, and CSF. A multitask architecture refers to the presence of multiple outputs that the model is trained for simultaneously. Multitask learning is known to improve the training process by giving the model multiple tasks on which to learn shared representations; this also helps avoid overfitting and leads to faster convergence (Crawshaw, 2020). In the proposed methodology, the main task is voxel-level brain age prediction; to complement it, a brain tissue segmentation task (GM, WM, and CSF) and a global brain age prediction task are included. Global brain age prediction can be considered a simpler version of the voxel-level task, while the segmentation task ensures that relevant structural features are learned from the MR data during training. The backbone of the proposed model is a simple U-Net architecture (Ronneberger et al., 2015) with an encoder and a decoder network forming a U-like shape. Batch-normalization layers are added after the convolution operations to ensure a smooth training process (Santurkar et al., 2018). The encoder and decoder are connected by skip connections that help recover spatial information lost during downsampling. A visualization of the model architecture can be found in this project's GitHub repository.
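The head layout described above can be sketched in PyTorch as follows. This is a minimal illustrative version with toy channel counts and a single downsampling level, not the paper's actual architecture (which is available in the project's repository): a shared U-Net body feeds a segmentation head and a voxel-age head, while the global-age head reads a pooled bottleneck feature.

```python
# Illustrative multitask 3D U-Net: segmentation, voxel-level age, global age.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D convolutions, each followed by batch norm and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(),
    )

class MultitaskUNet(nn.Module):
    def __init__(self, n_tissues=3):
        super().__init__()
        self.enc1 = conv_block(1, 8)
        self.down = nn.MaxPool3d(2)
        self.bottleneck = conv_block(8, 16)
        self.up = nn.ConvTranspose3d(16, 8, 2, stride=2)
        self.dec1 = conv_block(16, 8)              # 16 = 8 (skip) + 8 (upsampled)
        self.seg_head = nn.Conv3d(8, n_tissues, 1)  # GM/WM/CSF maps
        self.voxel_age_head = nn.Conv3d(8, 1, 1)    # per-voxel age
        self.global_age_head = nn.Linear(16, 1)     # pooled bottleneck -> age

    def forward(self, x):
        e1 = self.enc1(x)
        b = self.bottleneck(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(b), e1], dim=1))  # skip connection
        seg = self.seg_head(d1)
        voxel_age = self.voxel_age_head(d1).squeeze(1)
        global_age = self.global_age_head(b.mean(dim=(2, 3, 4))).squeeze(1)
        return seg, voxel_age, global_age
```

Sharing the decoder between the segmentation and voxel-age heads is what forces structural features into the voxel-age pathway.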
### Loss function
To accommodate the multitask modeling approach with three different outputs, a custom loss function is defined to ensure all tasks are given appropriate importance as the training progresses. The loss function for the proposed model is made up of three terms. \(\text{Dice}_{\text{loss}}\) accommodates the segmentation task and is computed from the Dice coefficient as in Eq. 1. The Dice coefficient is a measure of the overlap between the ground truth \(Y\) and the predicted segmentation \(\hat{Y}\). The Dice coefficient and \(\text{Dice}_{\text{loss}}\) are inversely related, so the model learns accurate segmentations as \(\text{Dice}_{\text{loss}}\) is minimized during training.
\[\text{Dice}_{\text{loss}}=1-\text{Dice}=1-\frac{1}{m}\sum_{i=1}^{m}\frac{2|Y \cap\hat{Y}|}{|Y|+|\hat{Y}|} \tag{1}\]
MAE is the most commonly used metric for the loss function in brain age prediction studies (Feng et al., 2020; Bermudez et al., 2019; He et al., 2021; Popescu et al., 2021); it is the absolute difference between the ground truth and the predicted age. The remaining two terms are two versions of MAE, accommodating age prediction at the voxel and global level. Eq. 2 is the voxel-level MAE, first averaged across all brain voxels in the input and then across the batch, where \(y_{i,j}\) is the voxel-level brain age and \(\hat{y}_{i,j}\) is the voxel-level predicted brain age for image \(i\) and voxel \(j\). Eq. 3 is the global-level MAE, averaged over the batch, where \(y_{i}\) is the global brain age and \(\hat{y}_{i}\) is the global predicted brain age for image \(i\). In Eqs. (1) to (3), \(m\) is the batch size and \(n\) is the total number of brain voxels in one sample.
\[\text{MAE}_{\text{voxel}}=\frac{1}{m}\sum_{i=1}^{m}\frac{1}{n}\sum_{j=1}^{n}|y_{ i,j}-\hat{y}_{i,j}| \tag{2}\]
\[\text{MAE}_{\text{global}}=\frac{1}{m}\sum_{i=1}^{m}|y_{i}-\hat{y}_{i}| \tag{3}\]
The weighted sum (Eq. 4) of the three terms is the loss function (\(\mathcal{L}\)) to be optimized during training. The weights \(w_{s}\), \(w_{g}\), and \(w_{v}\) were set empirically and changed as the training progressed. The weight for the segmentation output, \(w_{s}\), was initialized with the highest weight owing to the value being the smallest among the three loss terms (ranging between 0-1). The global age and voxel-wise age prediction weights, \(w_{g}\), and \(w_{v}\), respectively, were initialized with equal weights and updated as described in Table 1.
\[\mathcal{L}=w_{s}\,\text{Dice}_{\text{loss}}+w_{v}\,\text{MAE}_{\text{voxel}}+w_{g}\,\text{MAE}_{\text{global}} \tag{4}\]
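Eqs. (1) to (4) can be sketched in PyTorch as follows. This is a minimal illustrative version: the real implementation's brain-mask handling is omitted, and the tensor layouts (channel-first one-hot segmentations, per-voxel age maps) are assumptions.

```python
# Sketch of the combined loss in Eq. 4: soft Dice + voxel MAE + global MAE.
import torch

def dice_loss(y, y_hat, eps=1e-7):
    # Soft Dice averaged over the batch; y, y_hat: (m, C, D, H, W).
    inter = (y * y_hat).sum(dim=(1, 2, 3, 4))
    denom = y.sum(dim=(1, 2, 3, 4)) + y_hat.sum(dim=(1, 2, 3, 4))
    return 1.0 - (2.0 * inter / (denom + eps)).mean()

def combined_loss(seg, voxel_age, global_age,
                  seg_gt, voxel_gt, global_gt, w_s, w_v, w_g):
    # Eq. 2: mean over voxels, then over the batch.
    mae_voxel = (voxel_gt - voxel_age).abs().mean(dim=(1, 2, 3)).mean()
    # Eq. 3: mean over the batch.
    mae_global = (global_gt - global_age).abs().mean()
    # Eq. 4: weighted sum of the three terms.
    return w_s * dice_loss(seg_gt, seg) + w_v * mae_voxel + w_g * mae_global
```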
To ensure that the model does not learn a uniform prediction of brain age across all voxels, a subtle noise component is introduced into the loss calculation during the model's training. This noise, randomly selected from the range of -2 to +2, is added to the ground truth labels for each voxel. This strategic addition of noise encourages the model to learn the nuanced variations in the brain aging process across distinct regions and across subjects. We hypothesize that, when exposed to a combination of added noise and variations in underlying structural features, the model will develop accurate representations of these variations during training. By constraining the noise to a narrow range of -2 to +2, it is ensured that the effect on the training process is limited to an intentionally added randomization, without significantly impacting the model training process.
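The label-noise step described above can be sketched as follows; restricting the noise to brain-mask voxels and zeroing the background are assumptions made for illustration.

```python
# Sketch of the label-noise step: uniform noise in [-2, +2] years added to
# each voxel's ground-truth age before the voxel-level MAE is computed.
import numpy as np

def noisy_voxel_labels(age, brain_mask, rng=None):
    # age: scalar chronological age; brain_mask: boolean 3-D array.
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.uniform(-2.0, 2.0, size=brain_mask.shape)
    return np.where(brain_mask, float(age) + noise, 0.0)
```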
To evaluate the significance of incorporating noise into the ground truth labels, a variant of the proposed model without the inclusion of any additional noise was also trained. In this model, chronological age is assigned to each voxel in the ground truth labels for the voxel-level brain age prediction task based on the assumption discussed in Section 1. In the subsequent result section, a performance comparison of both models is described.
### Ablation study
An ablation study was performed to verify the choice of a multitask architecture. The objective is to demonstrate that both the global-brain age prediction task and the brain tissue segmentation task contribute to the model learning enhanced and accurate representations,
| **Weight** | **Epochs \(\in[0,50)\)** | **Epochs \(\in[50,130)\)** | **Epochs \(\in[130,300]\)** |
|---|---|---|---|
| \(w_{s}\) | 80 | 40 | 15 |
| \(w_{g}\) | 1 | 1 | 0.7 |
| \(w_{v}\) | 1 | 1 | 1.3 |

Table 1: Loss function weights as the training progresses.
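The schedule in Table 1 amounts to a simple lookup by epoch, returning \((w_{s}, w_{g}, w_{v})\):

```python
# Loss weights from Table 1 (training runs for 300 epochs, Section 3.6.1).
def loss_weights(epoch):
    if epoch < 50:
        return 80.0, 1.0, 1.0
    if epoch < 130:
        return 40.0, 1.0, 1.0
    return 15.0, 0.7, 1.3
```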
specifically geared towards improving performance in the primary task _i.e._ the voxel-level brain age prediction. Multiple models are trained, starting with a single output model that predicts voxel-level brain age, iteratively adding the other two tasks, one at a time, to analyze how models with different output tasks trained on the same dataset perform in comparison to one another. Thus, 4 different models were trained: 1) a one-task model to predict voxel-level brain age, 2) a two-task model to predict voxel-level brain age and segmentations of GM, WM and CSF, 3) a two-task model to predict voxel-level brain age and global-level brain age and 4) a three-task model that predicts voxel-level brain age, global-level brain age and segmentations of GM, WM and CSF (proposed model).
### Network training
#### 3.6.1 Proposed Model
The Cam-CAN dataset was used for training the proposed model. A train:validation:test split of 489:64:98 subjects was used. Patches of size \(128\times 128\times 128\) voxels were randomly cropped from the MR images on the fly and used as input to the model. Using patches is helpful in reducing the computational load during training allowing for the incorporation of a bigger batch size. Random cropping was done to ensure that a large majority of data samples in each batch had a significant part of the brain region, and randomizing the cropping process helps in exposing the model to brain regions from different perspectives, leading to accurate and robust features being learned. The model was trained for 300 epochs with a batch size of 2. The Adam Optimizer was used with an initial learning rate of 0.001, weight decay of 1e\(-\)5, and beta values set to (0.5, 0.999). The learning rate decreased every 70 epochs by a multiplicative factor of 0.6. The hyperparameters were empirically selected.
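The optimizer, learning-rate schedule, and random cropping described above can be sketched in PyTorch as follows; the `model` stand-in and the example volume dimensions are illustrative, while the hyperparameter values match the text.

```python
# Adam (lr 0.001, weight decay 1e-5, betas (0.5, 0.999)), lr x0.6 every
# 70 epochs, and random 128^3 patch cropping, as in Section 3.6.1.
import torch
import torch.nn as nn

model = nn.Conv3d(1, 1, 3, padding=1)  # stand-in for the multitask U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             weight_decay=1e-5, betas=(0.5, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=70, gamma=0.6)

def random_crop(volume, size=128):
    # volume: (D, H, W) tensor; returns a randomly positioned size^3 patch.
    d, h, w = volume.shape
    i = torch.randint(0, d - size + 1, (1,)).item()
    j = torch.randint(0, h - size + 1, (1,)).item()
    k = torch.randint(0, w - size + 1, (1,)).item()
    return volume[i:i + size, j:j + size, k:k + size]
```

In a training loop, `random_crop` would be applied per sample on the fly and `scheduler.step()` called once per epoch after the optimizer updates.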
#### 3.6.2 Ablation Study
The same train:validation:test split of the Cam-CAN dataset used for the proposed model was used to train the ablation study models described in Section 3.5. All ablation experiment models were tested on 50 test-set subjects from the Cam-CAN dataset and 359 subjects from the CC359 test set. The CC359 dataset was split into six subsets (as described in Section 3.1) based on the scanner used and the magnetic field strength at which the data were acquired. Metrics were obtained for each of the six subsets to compare performance across them.
The one-task model to predict voxel-wise brain age and the two two-task models (segmentation/global age + voxel-wise brain age) were all trained for 300 epochs. Various hyperparameters were experimented with, however, the most suitable ones were found to be similar to the ones used to train the proposed model with a slight difference in the beta values that were set to default (0.9, 0.999) for the Adam optimizer.
### Bias Correction
Bias correction is a post-processing step in brain age prediction pipelines. This step is essential to remove bias due to the mean age of the training set. Brain age prediction models have been observed to be biased around the mean age of the training dataset, leading to under-estimations of age for subjects older than the mean age and over-estimations for subjects younger than the mean age. The source of this bias is largely unknown but is
speculated to arise from factors including noisy data, heterogeneity in the training set, the data distribution, the availability of data across age ranges, and the modeling techniques used (Aycheh et al., 2018; Cole et al., 2017; Liang et al., 2019). A uniform dataset (Cam-CAN) was used during training, exposing the model to a balanced number of samples across all age ranges (and a balanced sex distribution), minimizing biased predictions. However, despite this theoretically uniform dataset, the number of samples at the extremities (20-30 years and 80-90 years) is comparatively lower than in the rest of the range.
The proposed methodology adapted the bias correction technique of the baseline model (Popescu et al., 2021), which, based on the current literature, is the only study to propose a bias correction for voxel-level brain age prediction algorithms. The goal is to train a model that learns age-specific structural features relevant to predicting age, such that the predictions have minimal bias. This can be confirmed by comparing results before and after bias correction: a small difference between the two indicates that bias correction does not impact the results significantly and hence that predictions are minimally biased.
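The exact correction adapted here is that of Popescu et al. (2021) and is not reproduced in this sketch. As an illustration only, a generic linear age-bias correction of the kind used in brain age studies fits the PAD-versus-age trend on a held-out healthy set and subtracts it from new predictions:

```python
# Illustrative linear age-bias correction (not the paper's exact procedure):
# fit PAD = a * age + b on held-out healthy subjects, then subtract the trend.
import numpy as np

def fit_bias(chron_age, predicted_age):
    pad = np.asarray(predicted_age) - np.asarray(chron_age)
    a, b = np.polyfit(chron_age, pad, deg=1)
    return a, b

def correct(predicted_age, chron_age, a, b):
    return predicted_age - (a * np.asarray(chron_age) + b)
```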
### Regional Analysis of PAD maps
Research in the field of brain aging studies the aging process at a regional level _i.e._ in the context of different regions of the brain. To better understand the PAD maps and to assess the clinical relevance, a regional analysis of the predicted age difference at the level of known regions of the brain was performed. The publicly available MNI structural atlas (Collins et al., 1995; Mazziotta et al., 2001) provided by the Research Imaging Center, University
Figure 2: (top) Cam-CAN training set follows a rough uniform data distribution, exposing the proposed model to samples of all ages. (bottom) This leads to bias-free predictions mostly, except for the extremities (ages 20-30 and ages 80-90). It can be observed that the predictions are closely aligned around the regression line for ages 30-80, with slight bias observed on the edges. A correction methodology can help correct the observed bias.
of Texas Health Science Center at San Antonio, Texas, USA that segments the brain into 9 anatomical regions namely Caudate, Cerebellum, Frontal Lobe, Insula, Occipital Lobe, Parietal Lobe, Putamen, Temporal Lobe and Thalamus is used. Voxel-level brain PAD values are aggregated within each of the 9 regions to compute the average brain PAD for each region in the healthy and diseased test sets.
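The regional aggregation can be sketched as follows, assuming the atlas has been brought into the PAD map's space with integer labels 1-9 for the nine regions and 0 for background:

```python
# Average voxel-level PAD within each atlas region (Section 3.8 sketch).
import numpy as np

def regional_pad(pad_map, atlas):
    # pad_map, atlas: 3-D arrays of equal shape; returns {label: mean PAD}.
    return {int(lab): float(pad_map[atlas == lab].mean())
            for lab in np.unique(atlas) if lab != 0}
```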
### Overview from an interpretability perspective
Previously, with the aim of understanding regional contributions to brain age and ensuring accurate features are learned during training, global age prediction models have been explained using traditional interpretability methods. In this contribution, insights obtained from the voxel-level PAD maps are compared to the 'traditional' way of understanding the models. To do so, a publicly available state-of-the-art Simple Fully Convolutional Neural network (SFCN) for global age prediction (Peng et al., 2021; Gong et al., 2021) was used and three interpretability methods were implemented on it: (i) Grad-CAM (Selvaraju et al., 2017), (ii) Occlusion Sensitivity maps (Zeiler and Fergus, 2014) and (iii) SmoothGrad (Smilkov et al., 2017). The heatmaps/saliency maps obtained were contrasted against voxel-level and regional-level PAD maps and observations were discussed.
The SFCN model was originally designed to approach brain age prediction as a soft classification task; for the implementation here, the output layers of the architecture are modified to a regression head, and the same feature extractor is utilized as in the original work. The Cam-CAN dataset was used to train the model, following the same train:test split as for the proposed model for fairness, with the difference lying in the preprocessing of the input MR images. As the original modeling process utilized linearly registered images, the training images were likewise linearly registered to the MNI template before being fed to the model. An important consideration here is that no registration is performed for the proposed model, so the PAD maps obtained are in native image space, whereas the interpretability heatmaps are in MNI space. Even though linear registration (6 degrees of freedom) does not alter the shape of the brain, as it only applies translations and rotations, we believe that the uniqueness of each brain's shape and structure contributes to the prediction of brain age; hence, it was decided against performing any registration (linear or non-linear) for the voxel-level brain age prediction model.
## 4 Results
For a fair comparison of model performance and as suggested in Popescu et al. (2021), all results are reported before bias correction. Bias-corrected results are only used for visualizations and analysis of diseased subjects where explicitly stated.
**Contribution 1: Proposal of a multitask DL voxel-level brain age prediction model:** The proposed model surpasses the baseline (refer to Table 2), demonstrating a 39.22% reduction in error on the internal Cam-CAN test set. The proposed model was also evaluated on a larger external test set (CC359), obtaining an error reduction of 58.88%, which reflects the model's performance on unseen data originating from a different data source. The 3-output variant of the proposed model trained without the added noise comes in second on the Cam-CAN evaluation and second to last on the CC359 test set.
For voxel-level predictions, since it is impossible to present prediction results at the level of each voxel (millions in each brain volume), the mean of the per-sample MAE (MAE\({}_{\text{voxel}}\)) is reported in Table 2. To visualize the voxel-level brain age predictions, predicted age difference (PAD) maps are used, which show the difference between the predicted brain age and the chronological age at the level of each voxel. PAD maps for the Cam-CAN test set samples can be observed in Figure 3, where blue color indicates brain regions that look younger than chronological age and red correlates to older-looking brain regions. The first
| **Model (output tasks)** | **D\({}_{\text{cm}}\) (n=50)** | **D\({}_{\text{cc}}\) (n=359)** |
|---|---|---|
| Global age (G) | 5.32\(\pm\)3.67 | 6.50\(\pm\)4.71 |
| Baseline (G+V) | 8.84\(\pm\)4.82 | 16.74\(\pm\)3.71 |
| 1 output model (V) | 10.11\(\pm\)5.68 | 7.63\(\pm\)4.53 |
| 2 output model (G+V) | 7.90\(\pm\)4.30 | 7.93\(\pm\)4.73 |
| 2 output model (S+V) | 6.75\(\pm\)3.94 | 7.83\(\pm\)4.74 |
| 3 output model (S+G+V), no noise | 6.14\(\pm\)3.32 | 8.32\(\pm\)5.84 |
| **Proposed model (S+G+V)** | **5.30\(\pm\)3.29**\({}^{*}\) | **6.92\(\pm\)4.28**\({}^{*}\) |

V - voxel-level brain age prediction task, S - segmentation task (GM, WM, CSF), G - global-level brain age prediction task, \({}^{*}\) - statistically significant (p\(<\)0.05)

Table 2: Model performance on an internal and external test set.
Figure 3: Row 1 - PAD maps based on the voxel-level difference between chronological and predicted age, Row 2 - adjusted PAD maps by subtracting the overall MAE of the brain volume from each voxel PAD value. Extended analysis of healthy PAD maps is described in Gianchandani et al. (2023).
row corresponds to the raw PAD maps, whereas the second row corresponds to adjusted PAD maps obtained by subtracting the overall MAE of the brain volume from each voxel's PAD value. These adjusted maps allow us to visualize the spatial variation in PAD values across different regions of the brain without the interference of the model error (MAE). The adjusted PAD maps are constructed purely for visualization purposes and are not used for any result comparisons with other models or the baseline. Similarly, the PAD maps corresponding to subjects with dementia can be observed in Figure 4. At a high level, it can be observed that the contrasts are sharper and more apparent in subjects with dementia than in healthy subjects, reflecting greater variation across the brain volume and in regional brain ages. Additionally, the PAD maps for subjects with dementia have PAD values spread across a wider range, which can be observed from the distribution shown alongside the color bar in Figure 4, as well as more red regions compared to healthy PAD maps. More analysis of healthy PAD maps is done in Gianchandani et al. (2023); diseased subjects are further discussed in the subsequent sections.

Figure 4: PAD maps corresponding to diseased subjects (OASIS dataset). Row 1 shows the raw PAD maps obtained from the voxel-level brain age prediction model. Row 2 shows the adjusted PAD maps (for improved visualization). Row 3 shows bias-corrected PAD maps using the correction methodology described in Section 3.7. More red regions are observed as compared to healthy PAD maps, as well as accelerated aging in the ventricles, which has often been associated with neurological disorders.
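The visualization adjustment used for these maps (subtracting the volume's overall MAE from every voxel's PAD) can be sketched as:

```python
# Adjusted PAD map: remove the volume-wide model error so only spatial
# variation remains; for visualization only, as noted in the text.
import numpy as np

def adjusted_pad(pad_map, brain_mask):
    overall_mae = np.abs(pad_map[brain_mask]).mean()
    return np.where(brain_mask, pad_map - overall_mae, 0.0)
```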
The Wilcoxon signed-rank test was performed to compare the proposed model against the other variations (1-output, 2-output) of the model and the baseline. \(\alpha\) was set to 0.05 and the Holm-Bonferroni correction was applied to account for multiple comparisons. All resulting p-values were below 0.05, indicating statistical significance.
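The statistical comparison can be sketched as follows (synthetic data; `holm_reject` implements the Holm-Bonferroni step-down procedure at \(\alpha=0.05\)):

```python
# Wilcoxon signed-rank tests on paired per-subject errors, followed by a
# Holm-Bonferroni correction across the family of comparisons.
import numpy as np
from scipy.stats import wilcoxon

def holm_reject(pvals, alpha=0.05):
    # Test sorted p-values against alpha/(m - rank); stop at first failure.
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break
    return reject
```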
**Contribution 2: Ablation study to show the importance of using a multitask architecture:** As stated in Section 3.5, the proposed three-task (multitask) model is expected to show superior performance compared to the one-task and two-task counterparts. An ablation study is performed by designing experiments with the same model architecture with different task combinations, and it can be observed in Table 2, that the 3-output
| **Output Tasks** | **Test Set (n=60)\({}^{\star}\)** | **MAE\(\pm\)S.D.** |
|---|---|---|
| 1 (Voxel-wise brain age) | Philips 1.5T | 7.22\(\pm\)3.13 |
| | Philips 3T | 8.02\(\pm\)5.29 |
| | Siemens 1.5T | 8.26\(\pm\)5.33 |
| | Siemens 3T | 9.18\(\pm\)5.29 |
| | GE 1.5T | 5.83\(\pm\)3.21 |
| | GE 3T | **7.26\(\pm\)3.46** |
| 2 (Segmentation + Voxel-wise brain age) | Philips 1.5T | 7.83\(\pm\)4.63 |
| | Philips 3T | 9.61\(\pm\)5.33 |
| | Siemens 1.5T | 8.64\(\pm\)5.60 |
| | Siemens 3T | 6.21\(\pm\)4.13 |
| | GE 1.5T | 7.17\(\pm\)3.51 |
| | GE 3T | 7.55\(\pm\)4.08 |
| 2 (Global brain age + Voxel-wise brain age) | Philips 1.5T | 9.20\(\pm\)5.36 |
| | Philips 3T | 9.54\(\pm\)5.99 |
| | Siemens 1.5T | 8.75\(\pm\)4.37 |
| | Siemens 3T | **5.84\(\pm\)4.10** |
| | GE 1.5T | **5.79\(\pm\)2.34** |
| | GE 3T | 8.49\(\pm\)3.73 |
| 3 (Segmentation + Global brain age + Voxel-wise brain age) | Philips 1.5T | **6.94\(\pm\)3.80** |
| | Philips 3T | **7.73\(\pm\)5.04** |
| | Siemens 1.5T | **6.68\(\pm\)4.80** |
| | Siemens 3T | 6.80\(\pm\)4.22 |
| | GE 1.5T | 5.98\(\pm\)2.52 |
| | GE 3T | 7.40\(\pm\)4.52 |

\({}^{\star}\)All test sets have n=60 samples, except Philips 1.5T with n=59 samples

Table 3: Ablation study results.
proposed model outperforms the 1-output and 2-output models with statistically significant results (p\(<\)0.05) on the internal Cam-CAN test set. To further validate the findings, all ablation study models were evaluated on the CC359 dataset. This dataset comprises data acquired on scanners from 3 distinct vendors, each at 2 different magnetic field strengths, and is consequently segregated into 6 subsets sharing similar acquisition protocols. The evaluation was conducted independently on each subset (refer to Table 3) for every ablation experiment model. The proposed model outperforms the 1-output and 2-output models on 3 of the 6 subsets (Philips 1.5T, Philips 3T, Siemens 1.5T), comes in a close second on 2 subsets (Siemens 3T and GE 3T), and takes third on the final subset (GE 1.5T). Closely inspecting the subsets where the proposed model did not take the lead: for both Siemens 3T and GE 3T, the proposed model ranked second with an average test-set MAE differing from the best model by no more than 1 year. Similarly, on the GE 1.5T subset, where the proposed model secured third position, the difference from the top-ranking model was approximately 0.2 years.
Overall, the proposed model outperformed the ablation experiment models on 50% of the subsets, while consistently performing well across all subsets, unlike the 1-output and 2-output models which obtained significantly higher errors (\(\sim\)9 years) on at least 1 or more of the subsets. The proposed model consistently achieved an average MAE in the range of 5.9 to 7.4 years across all subsets of CC359, whereas other ablation experiment models (1-output and 2-output) exhibited greater fluctuations in the inter-dataset performance. Evaluation on subsets acquired using different scanners, which in turn exhibit scanner-specific differences in the MR images, and at different magnetic field strengths reflects on the model's ability to be robust and generalizable across diverse datasets.
**Contribution 3: Regional analysis of the brain aging process in a healthy versus diseased brain:** The proposed model was tested on healthy subjects from the Cam-CAN dataset, which was used for the regional analysis. For the evaluation of diseased subjects, subjects with AD from the ADNI dataset (n=20) and subjects with dementia from the OASIS3 dataset (n=28) were utilized. It is essential to note that the majority of open-source MR images of subjects with neurological disorders (especially AD and dementia) correspond to older age ranges, usually 55 years and above, with more samples available at higher ages. To mitigate biased predictions, both the AD and dementia test sets were filtered to subjects with age \(\leq\) 70 years for the regional analysis, leaving n=32 subjects. This decision is further justified in the discussion section.
In Table 4, the regional PAD average and standard deviation (S.D.) values based on the MNI atlas (refer to section 3.8) are reported. The regional analysis on three test sets, one corresponding to healthy subjects (Cam-CAN) and two diseased test sets (AD and dementia) was performed. For each dataset, the average (Mean \(\pm\) S.D.) PAD values for each region across the test set samples were reported. Additionally, S.D. per region is described (Mean of S.D. \(\pm\) S.D. of S.D.) to observe the variability of PAD values within independent regions.
Figures 5, 6, and 7 show average atlases of regional PAD values for the healthy and diseased test sets, respectively. A clear distinction can be observed between the healthy and diseased atlases, with the healthy atlas having regional PAD values closer to 0, indicating only a slight deviation from the chronological age of the subjects. In
the atlases for diseased subjects (figs. 6 and 7), red colors are observed in most regions of the brain. Overall, the diseased atlases display an accelerated aging trajectory as compared to the atlas corresponding to healthy subjects.
**Contribution 4: Interpretability analysis and comparison with traditional interpretability methods:** PAD maps obtained from the voxel-level brain age prediction model are compared to the heatmaps obtained from 3 interpretability methods. It is imperative to note that for the scope of this article, the objective of this research is not to propose a state-of-the-art global age prediction model to obtain interpretability maps using traditional methods, however, the aim is to observe the difference in underlying properties and insights obtained from PAD maps versus traditional interpretability heatmaps.
In Figure 8, the first column shows Grad-CAM heatmaps that illustrate regions with relative contribution/importance to the brain age prediction. Grad-CAM is often visualized using red-yellow-blue heatmaps, with red regions the most important and blue the least. However, since Grad-CAM heatmaps are obtained from the later convolutional layers of a model, capturing the final learned features through the gradient with respect to the input, they are originally produced at a much smaller size than the input and have to be upsampled, which leads to interpolation errors and coarse maps. The second column shows occlusion sensitivity maps, where red regions make the model overestimate the brain age prediction, blue regions make it underestimate, and white regions contribute the least. SmoothGrad maps are similar to Grad-CAM heatmaps, except they are generated from multiple forward passes of noisy inputs through the model, producing more precise heatmaps that counteract the influence of noise. However, similar to Grad-CAM, they are based on the gradients with respect to an input and hence illustrate the relative importance of regions within one input; they are not comparable across samples.

Figure 5: Regional PAD atlas showing average PAD for different regions of the brain in a population of presumed healthy subjects. The atlas has been created using 98 subjects unseen during training. No bias correction is done for healthy subjects and hence the entire test set of 98 samples is used to account for subject-wise variation in aging trajectories. It can be observed that all regions of the brain show small negative PAD values, with the Temporal Lobe looking the youngest with a -1.29 years PAD.
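As an illustration of how such occlusion sensitivity maps are computed (a generic sketch, not the exact implementation used here): a baseline-valued patch is slid over the volume, and the change in the model's predicted age is recorded for each patch position.

```python
# Generic occlusion-sensitivity sketch for a global age predictor.
import numpy as np

def occlusion_map(predict_age, volume, patch=4, baseline=0.0):
    # predict_age: callable mapping a 3-D volume to a scalar age prediction.
    ref = predict_age(volume)
    D, H, W = volume.shape
    out = np.zeros((D // patch, H // patch, W // patch))
    for i in range(0, D, patch):
        for j in range(0, H, patch):
            for k in range(0, W, patch):
                occluded = volume.copy()
                occluded[i:i+patch, j:j+patch, k:k+patch] = baseline
                # Positive: occlusion raises the prediction; negative: lowers it.
                out[i//patch, j//patch, k//patch] = predict_age(occluded) - ref
    return out
```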
Figure 6: Regional PAD atlas showing average PAD for different regions of the brain in a population of subjects with dementia. The atlas has been created using 15 subjects with age \(\leq\) 70 years using the voxel-level bias-corrected PAD maps. A variation is observed in terms of PAD values across different regions with the Cerebellum, Occipital Lobe, Parietal Lobe, Temporal Lobe, and Thalamus showing an increased brain age.
Figure 7: (i) Regional PAD atlas showing average PAD for different regions of the brain in a population of subjects with AD. The atlas has been created using 17 subjects with age \(\leq\) 70 years using the voxel-level bias-corrected PAD maps. It can be observed that all regions in the atlas show an increased brain age.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{**Test sets**} \\ \cline{2-7}
**Regions** & \multicolumn{2}{c}{**Healthy**} & \multicolumn{2}{c}{**AD**} & \multicolumn{2}{c}{**Dementia**} \\ & Avg regional PAD & Regional S.D. & Avg regional PAD & Regional S.D. & Avg regional PAD & Regional S.D. \\ \hline Caudate & \(-0.70\pm 6.16\) & \(0.76\pm 0.28\) & \(3.76\pm 4.72\) & \(1.69\pm 0.58\) & \(-1.18\pm 11.28\) & \(1.41\pm 0.44\) \\ Cerebellum & \(-1.26\pm 7.05\) & \(1.58\pm 0.63\) & \(1.82\pm 5.34\) & \(3.55\pm 1.42\) & \(5.11\pm 10.86\) & \(3.41\pm 1.06\) \\ Frontal Lobe & \(-1.27\pm 6.20\) & \(1.71\pm 0.66\) & \(1.94\pm 4.19\) & \(3.11\pm 0.74\) & \(-1.67\pm 9.56\) & \(3.62\pm 1.45\) \\ Insula & \(-0.75\pm 6.15\) & \(1.12\pm 0.64\) & \(2.77\pm 4.14\) & \(1.57\pm 0.50\) & \(-0.81\pm 11.80\) & \(2.00\pm 0.83\) \\ Occipital Lobe & \(-0.75\pm 6.91\) & \(1.56\pm 0.59\) & \(1.17\pm 5.85\) & \(2.51\pm 0.89\) & \(4.00\pm 11.80\) & \(2.40\pm 0.90\) \\ Parietal Lobe & \(-0.53\pm 6.29\) & \(1.76\pm 0.80\) & \(1.82\pm 5.05\) & \(3.09\pm 0.74\) & \(2.91\pm 10.70\) & \(3.19\pm 1.36\) \\ Putamen & \(-0.74\pm 6.13\) & \(0.72\pm 0.38\) & \(2.68\pm 4.05\) & \(1.18\pm 0.38\) & \(-0.96\pm 11.26\) & \(1.19\pm 0.44\) \\ Temporal Lobe & \(-1.32\pm 6.66\) & \(1.90\pm 0.83\) & \(2.39\pm 2.86\) & \(3.06\pm 1.02\) & \(1.47\pm 9.97\) & \(3.76\pm 1.39\) \\ Thalamus & \(-0.63\pm 6.17\) & \(0.57\pm 0.22\) & \(2.54\pm 3.63\) & \(1.10\pm 0.55\) & \(0.89\pm 11.18\) & \(1.02\pm 0.32\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Regional PAD values. The analysis is done using bias-corrected voxel-level PAD maps for the two diseased test sets (AD and dementia).
Figure 8: Comparison of traditional interpretability heatmaps (left to right: Grad-CAM, Occlusion Sensitivity, and SmoothGrad) with PAD maps (left to right: voxel-level and regional) obtained from the proposed voxel-level brain age prediction model.
Voxel-level PAD maps show regions with an increased brain age in red and a decreased brain age in blue. The maps are obtained at the same resolution as the input image thanks to the upsampling path of the U-Net architecture, whose skip connections enable accurate upsampling at high resolution. The intensity values in the PAD maps are quantified in years by computing the difference between predicted and chronological age, and are therefore comparable across samples. The last column in the figure shows the regional PAD maps (PAD values averaged within known anatomical regions of the brain), which have essentially the same features and characteristics as the voxel-level PAD maps, differing only in the granularity of the PAD values. This representation is, however, better suited to analyzing the results of the voxel-level age prediction model from an aging perspective.
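The PAD computation itself is a per-voxel subtraction followed by within-region averaging; a minimal sketch, with plain nested lists standing in for image volumes and a hypothetical integer-coded region mask:

```python
def pad_map(predicted_age, chronological_age):
    """Voxel-level PAD in years: predicted minus chronological age.
    The scalar chronological age is broadcast over all voxels."""
    return [[v - chronological_age for v in row] for row in predicted_age]

def regional_pad(pad, region_mask, region_id):
    """Average PAD over all voxels labelled with one anatomical region id."""
    vals = [pad[i][j]
            for i in range(len(pad)) for j in range(len(pad[i]))
            if region_mask[i][j] == region_id]
    return sum(vals) / len(vals)
```

Because the map values are in years rather than unitless saliency, two subjects' maps can be compared directly, which is the property contrasted with gradient-based heatmaps above.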
## 5 Discussion
The proposed voxel-level brain age prediction model outperforms the baseline on two independent test sets while having a simple and straightforward preprocessing pipeline. Diverging from the baseline (Popescu et al., 2021), the proposed methodology introduces two significant modifications. First, the baseline uses non-linear registration as a pre-processing step, registering all T1-weighted images to the MNI atlas, an average atlas representative of a healthy brain. We hypothesized that each brain is unique in terms of shape, size, and structural features, and that this uniqueness is crucial for brain age estimation. Non-linear registration can alter the uniqueness that each brain volume holds, and information is lost in the process. Accordingly, non-registered images are used as input to the proposed model, which helps retain the original shape, size, and structural features in the truest form possible for predicting voxel-level brain age. Second, the baseline uses GM and WM masks obtained from the non-linearly registered images as input to the model, _i.e._, whole T1-weighted volumes are not fed into the network. Previous research has shown the relevance of CSF in the aging process (Houston, 2023; May et al., 1990), and hence the proposed methodology utilizes skull-stripped T1-weighted volumes (GM, WM, and CSF) as input to the model. Segmentation of GM, WM, and CSF is added as one of the output tasks of the proposed model, which also contributes to the interpretability analysis.
To ensure accurate feature representations, a subtle noise component is introduced into the ground truth labels (refer to Section 3.4). This strategic addition of noise helps the model discern variations in aging patterns across different brain regions. Since the noise is applied at the voxel level, it could theoretically yield drastic differences in PAD values between adjacent voxels, for instance a red voxel (increased brain age) directly adjacent to a contrasting blue voxel (reduced brain age), giving the PAD map a salt-and-pepper appearance. Despite this possibility, the PAD maps consistently show smooth transitions in brain PAD values, with clusters of voxels exhibiting similar patterns of aging. This aligns with the inherent nature of aging-related changes, which tend to present at a regional level. Given that the proposed model with intentionally introduced noise also performs better than the no-noise version in terms of MAE, this observation in the PAD maps confirms that the inclusion of noise does not pose a hindrance or concern in the proposed methodology.
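The label construction described above amounts to broadcasting the chronological age over the volume and perturbing each voxel slightly; a sketch under the assumption of Gaussian noise, with `sigma` as an illustrative placeholder for the actual scale specified in Section 3.4:

```python
import random

def noisy_age_labels(chronological_age, height, width, sigma=1.0, seed=None):
    """Per-voxel ground truth: chronological age plus small Gaussian noise,
    encouraging the model to express voxel-wise variation in brain age.
    sigma is an assumed value for illustration only."""
    rng = random.Random(seed)
    return [[chronological_age + rng.gauss(0.0, sigma) for _ in range(width)]
            for _ in range(height)]
```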
The proposed model produces voxel-level PAD maps, which are compared to the heatmaps obtained from traditional interpretability methods. An important feature of the proposed approach, which helps ensure that the model learns correct features from the input image, is the addition of brain tissue segmentation as one of the output tasks in the architecture. Owing to the multitask design, the model re-uses the same features for the segmentation and brain age prediction tasks. The segmentation performance of the proposed model reached a Dice score of 85%, indicating substantial overlap between predictions and ground truth segmentations. Strong performance on the segmentation task shows that the model learns the structural intricacies within the brain volume, which are then repurposed for the voxel-level brain age prediction task. This confirms that no background regions or extraneous noise in the input contribute to the output, and that it is indeed the structural features that drive the voxel-level brain age predictions.
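For reference, the Dice score used to assess the segmentation head reduces to twice the overlap divided by the total mask size; a sketch for a single flattened binary tissue mask:

```python
def dice_score(pred, truth):
    """Dice coefficient for flattened binary masks (1 = tissue, 0 = background)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom else 1.0
```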
Contrary to the heatmaps obtained from traditional interpretability methods based on gradients with respect to an input (Grad-CAM, SmoothGrad), the voxel-level PAD maps reflect differences between the predicted and chronological age in years, making them quantitative and comparable across samples. Occlusion sensitivity maps come closest to voxel-level PAD maps; however, they are generated by occluding a single region at a time and evaluating its impact on the global age prediction. In most machine learning models, multiple regions, which need not adhere to square or cuboid structures, collectively influence the final prediction; assessing these regions in isolation is informative but does not provide the most accurate insight into their collective contributions to brain age predictions. PAD maps, on the other hand, utilize structural features within the brain and reflect voxel-level brain age instead of a global brain age, and the results show that the observed spatial differences in the aging process make clinical sense when compared against the structural changes in corresponding T1-weighted images (Gianchandani et al., 2023).
The regional analysis of the PAD maps for presumed healthy subjects shows PAD values in the narrow range of -1.29 to -0.48 years, making all regions appear slightly younger than the expected chronological age; the difference is minimal and can be accounted for by modeling error. The values are closely aligned near 0 (brain age = chronological age), which is the ideal, theoretical scenario, although it does not capture the spatial variations observed in brain age across different regions and samples. It must also be kept in mind that this analysis is at the population level, encompassing subjects with a diverse age range and unique trajectories of brain aging.
For the analysis of diseased subjects, subjects with age \(\leq\) 70 years are filtered for the test set. There are two reasons for doing so: (i) The proposed model is trained on subjects up to 88 years of age, and to maintain the reliability of predictions, a deliberate choice was made to refrain from evaluating the model on subjects exceeding 88 years of age. Predictions in the peripheral regions of the data (ages 70 and above, as shown in Figure 2) often exhibit a bias, leading to under-prediction, _i.e._, younger-looking brains, for older age ranges. While the bias is addressed through a dedicated correction process, as explained in Section 3.7, the methodology used for this bias correction is built upon data from healthy subjects and is tailored to the patterns observed in their evaluation. It would be unfair to assume that the same bias correction methodology would suffice to mitigate the bias observed in diseased subjects. (ii) In the bias-correction methodology, a different correcting factor is used for each age range; if diseased subjects are expected to have an increased brain age relative to their chronological age, it would be unfair to use the correcting factor based on the chronological age, as the observed bias would be relative to an older age. Hence, to ensure that bias correction does not fail significantly and mitigates the bias to a reasonable extent, this precautionary filtering removes subjects with age \(\geq\) 71 years. Nonetheless, since most neurological disorders are observed in an older population, bias correction remains imperative for the regional analysis of the AD and dementia test sets, even though it might not mitigate the bias entirely.
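One way the age-range-dependent correction described here could be sketched is to bin healthy-population PADs by chronological age and subtract the per-bin mean; the 10-year bin width is an assumption for illustration, not the paper's exact procedure from Section 3.7:

```python
def fit_bin_bias(ages, pads, width=10):
    """One correcting factor per age range: the mean healthy-population PAD
    observed in each age bin (bin width is an illustrative assumption)."""
    bins = {}
    for age, pad in zip(ages, pads):
        bins.setdefault(int(age) // width, []).append(pad)
    return {b: sum(v) / len(v) for b, v in bins.items()}

def correct_pad(pad, age, bias, width=10):
    """Subtract the healthy-population bias for this subject's age range;
    unseen age ranges are left uncorrected."""
    return pad - bias.get(int(age) // width, 0.0)
```

The weakness discussed in the text is visible in the sketch: the bin is looked up by chronological age, so a diseased subject whose effective brain age falls in an older bin receives the wrong correcting factor.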
Another important consideration when analyzing the regional PAD values in Table 4 is that, for the healthy population, the age range of subjects is wide enough that the small observed bias goes in both directions, as over-predictions as well as under-predictions. Hence, at the population level, the over- and under-predictions tend to cancel each other out to an extent. This is not necessarily the case for diseased subjects, as most subjects in those test sets are above the average training set age, so the bias appears only as under-predictions (_i.e._, negative PAD). Since bias correction does not account for all of the bias in diseased subjects, and only under-predictions are observed, the PAD values in Table 4 and Figures 6 and 7 might still reflect a small degree of bias and be more negative than the actual values.
The regional PAD values, MNI atlases, and PAD maps corresponding to individual subjects were reviewed by a radiologist (JO) and some notable observations were made:
1. It can be observed that in subjects with dementia, ventricles tend to show an accelerated brain age as compared to the rest of the brain regions (refer to adjusted PAD maps in Figure 4). It is unclear whether this increased aging of the ventricles is mostly related to an increase in ventricle size, which is usually a sequela of generalized brain parenchymal volume loss, or due to differences in CSF composition. Both explanations seem plausible: large ventricle size is associated with the presence of neurodegenerative disorders, and even in healthy subjects, increased ventricle volume seems to indicate a greater risk of developing dementia in the future (Carmichael et al., 2007). Furthermore, cellular CSF composition is altered in subjects with neurodegenerative diseases, with a shift from central memory to effector T cells (Busse et al., 2021). Such changes do not affect MR image signal intensity in any noticeable way upon visual inspection by radiologists, but there may be subtle signal changes that may have been detected by the proposed model.
2. In AD subjects, PAD was particularly high in the Caudate nuclei (Figure 7). Previous studies have found lower Caudate nuclei volumes in AD compared to healthy control subjects (Madsen et al., 2010). Assuming that lower volumes indicate advanced brain age, these prior findings are in line with the results of the current study. Increased brain age (2.39 years) was also observed in the Temporal Lobe, an important region associated with AD, with a high regional standard deviation indicating a great degree of variation within the region.
3. In the group of dementia subjects, brain age was particularly advanced in the posterior brain regions, _i.e._, the Occipital and posterior Parietal lobes, and the Cerebellum (Figure 6). Atrophy predominantly affecting the posterior brain parenchyma is uncommon in dementia patients. It can sometimes be seen in AD patients (Crutch et al., 2012) and is a hallmark feature of Lewy body dementia, a rare neurodegenerative disease (Silva-Rodriguez et al., 2023). The exact underlying dementia etiologies are not known in the dementia subgroup of this study; it may well be that some of these patients were diagnosed with Lewy body dementia or posterior predominant AD. However, while previous studies mainly focused on brain parenchymal volume, the proposed model predicts brain age using a multidimensional approach. It is possible that characteristics other than volume, for example, changes in brain signal intensity or structure, occur in subjects with dementia that do not affect volume and are, therefore, not well known yet.
The findings from the proposed brain age prediction model are partially consistent with the known biomarkers of aging in subjects with dementia and, more specifically, AD. Some new potential biomarkers, such as increased brain age in posterior regions of the brain, have been identified by the proposed model and require further validation.
It is crucial to emphasize that while it is important to understand regional aging patterns in older subjects, in whom disorders are observed and have often progressed to a stage where the subject exhibits noticeable symptoms and is already part of a research study collecting data, another important aim of this research is to predict the early onset of neurological disorders before subjects start exhibiting symptoms and apparent cognitive decline. Evaluation on healthy subjects is therefore an important part of this work, as it can unveil potential indicators of the early onset of neurological disorders. A future direction to validate the proposed model would be to evaluate it on longitudinal data that includes subjects transitioning from an initially presumed healthy stage to some form of underlying neurological disorder.
## 6 Conclusion
In this study, a previous analysis of a voxel-level brain age prediction model is extended as a proof of concept. Through the experiments, the choice of a multitask architecture is validated, and it is shown that a voxel-level approach can achieve improved interpretability and a better understanding of regional aging trajectories. Evaluation of the model on healthy subjects, as well as on subjects with dementia and specifically AD, revealed findings on regional brain aging consistent with other aging studies, and also revealed new indicators that may be potential biomarkers of the presence of dementia. This research demonstrates the transition of brain age prediction models toward voxel-level predictions as a way to enhance the understanding of the degenerating brain, while showing an improvement over existing implementations.
## Acknowledgements

NG is supported by the Natural Sciences and Engineering Research Council (NSERC) BRAIN CREATE award and the Alberta Innovates Graduate Student Scholarship. RS thanks the NSERC (RGPIN/2021-02867) for ongoing operating support for this project. RS also thanks the Hotchkiss Brain Institute for financial support. MEM acknowledges support from startup funding at the University of Calgary and the NSERC Discovery Grant (RGPIN-03552) and Early Career Researcher Supplement (DGECR-00124). Data collection and sharing for this project was partly funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012).
## Ethical Standards
The work follows appropriate ethical standards in conducting research and writing the manuscript. All data used in this study was obtained from publicly available datasets and has been handled following the terms provided by the data sources. Data anonymity has been maintained and all data sources have been properly cited complying with ethical and privacy regulations.
## Conflicts of Interest
The authors have no competing interests to declare.
|
2307.14632 | Metric-Based In-context Learning: A Case Study in Text Simplification | In-context learning (ICL) for large language models has proven to be a
powerful approach for many natural language processing tasks. However,
determining the best method to select examples for ICL is nontrivial as the
results can vary greatly depending on the quality, quantity, and order of
examples used. In this paper, we conduct a case study on text simplification
(TS) to investigate how to select the best and most robust examples for ICL. We
propose Metric-Based in-context Learning (MBL) method that utilizes commonly
used TS metrics such as SARI, compression ratio, and BERT-Precision for
selection. Through an extensive set of experiments with various-sized GPT
models on standard TS benchmarks such as TurkCorpus and ASSET, we show that
examples selected by the top SARI scores perform the best on larger models such
as GPT-175B, while the compression ratio generally performs better on smaller
models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is
generally robust to example orderings and out-of-domain test sets, and
outperforms strong baselines and state-of-the-art finetuned language models.
Finally, we show that the behaviour of large GPT models can be implicitly
controlled by the chosen metric. Our research provides a new framework for
selecting examples in ICL, and demonstrates its effectiveness in text
simplification tasks, breaking new ground for more accurate and efficient NLG
systems. | Subha Vadlamannati, Gözde Gül Şahin | 2023-07-27T05:45:35Z | http://arxiv.org/abs/2307.14632v1 | # Metric-Based In-context Learning: A Case Study in Text Simplification
###### Abstract
In-context learning (ICL) for large language models has proven to be a powerful approach for many natural language processing tasks. However, determining the best method to select examples for ICL is nontrivial as the results can vary greatly depending on the quality, quantity, and order of examples used. In this paper, we conduct a case study on text simplification (TS) to investigate how to select the best and most robust examples for ICL. We propose **M**etric-**B**ased in-context **L**earning (MBL) method that utilizes commonly used TS metrics such as SARI, compression ratio, and BERT-Precision for selection. Through an extensive set of experiments with various-sized GPT models on standard TS benchmarks such as TurkCorpus and ASSET, we show that examples selected by the top SARI scores perform the best on larger models such as GPT-175B, while the compression ratio generally performs better on smaller models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is generally robust to example orderings and out-of-domain test sets, and outperforms strong baselines and state-of-the-art finetuned language models. Finally, we show that the behaviour of large GPT models can be _implicitly controlled_ by the chosen metric. Our research provides a new framework for selecting examples in ICL, and demonstrates its effectiveness in text simplification tasks, breaking new ground for more accurate and efficient NLG systems.
## 1 Introduction
Text simplification (TS) is a crucial task in natural language processing, with the goal of converting complex text into simpler, easier-to-understand text. This is particularly important for individuals who struggle to comprehend complex language, such as second language learners or individuals with cognitive impairments Stajner (2021) and disabilities like dyslexia Rello et al. (2013) and autism Barbu et al. (2015). For these reasons, the NLP community has shown great interest in the topic, introducing plenty of datasets (e.g., ASSET Alva-Manchego et al. (2020)), models, and evaluation metrics (e.g., SARI Xu et al. (2016)).
There have been numerous approaches to TS proposed in the literature, including non-neural or rule-based methods Nassar et al. (2019), machine translation approaches Xu et al. (2016), and finetuning of large language models Sheang and Saggion (2021) on downstream task data. Recently, it has been shown that large language models such as GPT-3 are capable of in-context learning (ICL) Brown et al. (2020)--an emerging ability to learn from in-context samples without modifying model parameters. 1 Despite its strong ability, ICL still mostly falls behind the performance of finetuning techniques Dong et al. (2023).
Footnote 1: We refer the readers to [http://ai.stanford.edu/blog/understanding-incontext/](http://ai.stanford.edu/blog/understanding-incontext/) for a summary of in-context learning inner mechanics.
Recent studies have shown that in-context learning is highly sensitive to a range of factors, such as the number of examples, the quality of examples, and even the order of examples Lu et al. (2022); Liu et al. (2022); Dong et al. (2023). To address these concerns, recent literature has proposed several techniques for selecting the most relevant examples for ICL Liu et al. (2022); Sorensen et al. (2022); Gonen et al. (2022); Rubin et al. (2022). The majority of them aim to _retrieve_ a set of samples from the validation set that most resembles the test set, either by training a separate retrieval model or by utilizing an existing encoder to calculate similarities between pairs of sentences. However, adopting these techniques for text-generation tasks with multiple references is nontrivial, and the need for access to the full test set from which to pick examples is undesirable and may not always be possible in real-life scenarios.
In order to address this problem, we propose a simple yet intuitive metric-based selection technique, which we refer to as **M**etric-**B**ased in-context **L**earning (MBL), to perform efficient and robust in-context learning with large language models for text generation tasks. Unlike previous ICL techniques, MBL only requires access to the development set and uses more informed measures, rather than generating sentence embeddings or training separate, specialized retrieval models. Furthermore, we perform an extensive set of experiments with GPT-3 models of various sizes (\(175\)B, \(13\)B, and \(6.7\)B) 2, specifically focusing on their performance for TS. We investigate utilizing commonly used TS metrics (e.g., SARI, compression ratio) for example selection and answer several research questions regarding their strengths and weaknesses on a variety of datasets and models. Through our experiments, we show that metric-based selection can significantly improve the performance of large language models on TS. We also demonstrate that these results are generally robust to various orderings and perform well in out-of-domain settings. This paper provides the following contributions:
Footnote 2: Throughout the paper, GPT-\(175\)B, GPT-\(13\)B, and GPT-\(6.7\)B will refer to the GPT-\(3\) model with \(175\)B, \(13\)B, and \(6.7\)B parameters respectively.
* We provide a naive yet effective and robust approach to selecting examples for in-context learning, a.k.a., _metric-based learning_ (MBL) 3, and show that it achieves state-of-the-art results on two well-known benchmark datasets (TurkCorpus and ASSET) when the optimal metric is used (see §5.1) 4. Footnote 3: We use metric-based selection and learning interchangeably.
* We demonstrate the robustness of MBL to example ordering (see §5.2) and, with some exceptions, to out-of-domain test sets (see §5.3), suggesting that the order of examples and the origin of the development data are not the most important factors for MBL.
* We show that MBL improves upon important baselines (e.g., zero-shot, random selection), state-of-the-art ICL selection (e.g., KATE-GPT (Liu et al., 2022)) and text simplification methods (Sheang and Saggion, 2021) (see §5.4).
* **Our results suggest that GPT-175B can be _implicitly controlled_** via optimal metric-based learning, i.e., BERTScore Precision-based learning optimizes BLEU, while SARI-based selection optimizes SARI scores.
We release all generation outputs, baseline models and evaluation scripts publicly with [https://github.com/NLP-KU/metric-based-in-context-learning/](https://github.com/NLP-KU/metric-based-in-context-learning/).
## 2 Related Work
**Text Simplification (TS) Methods.** Recently, LLMs have been applied to text simplification through transfer learning approaches. For instance, Qiang et al. (2020) fine-tuned a BERT model on a text simplification dataset, achieving strong results on multiple benchmarks. Similarly, Sheang and Saggion (2021) introduced a transfer learning approach for text simplification using the T5 model and achieved the current state-of-the-art results on standard TS benchmarks. Recent work in the TS domain has a particular focus on controllable text simplification, in which different "control tokens" are embedded in seq2seq models to control model outputs. This is seen in both Sheang and Saggion (2021) and Chamovitz and Abend (2022), where a large language model (BART, T5, etc.) is augmented with several control tokens, such as the number of words, Levenshtein similarity, and various text rewriting operations. Many earlier systems (e.g., Xu et al. (2016)) have formulated text simplification as a machine translation task and employed neural machine translation architectures.
**TS Evaluation.** Work on the suitability of various metrics for TS has also been an active area of discussion. While the most commonly used metric in TS is currently SARI (Xu et al., 2016), there is concern over which metric best correlates with human judgment. Alva-Manchego et al. (2021) conduct a detailed analysis of several commonly used metrics in the TS field and suggest BERTScore_Precision as a primary metric for reference-based evaluation. Following these results, we also use BERTScore_Precision as a metric to select examples. Recent studies (Sulem et al., 2018; Tanprasert and Kauchak, 2021) analyzing the suitability of the other two common metrics, namely BLEU (Papineni et al., 2002) and FKGL (Kincaid et al., 1975), strongly advise against these metrics for TS. For these reasons, we do not select examples based on either metric.
**Example Selection and Ordering in ICL.** While large language models like GPT-\(3\) perform exceptionally well on a variety of downstream tasks, selecting examples for in-context learning is non-trivial. Research on example selection is still in its early stages, and a unified approach to selecting examples for downstream tasks has not yet been proposed (Dong et al., 2023). Liu et al. (2022) propose selecting the \(k\) examples most similar to the test instance from the training/development set by measuring cosine distance in an embedding space (e.g., encodings from RoBERTa), and achieve strong results on various tasks like table-to-text generation. Along a similar line, Rubin et al. (2022) introduce a more sophisticated method, training a two-step retrieval model to select ICL examples. Another line of work focuses on optimizing prompts via mutual information (Sorensen et al., 2022) or perplexity (Gonen et al., 2022), which do not require labeled sets. We consider KATE-GPT (Liu et al., 2022) the closest work to ours, since both the intuition (i.e., choosing from a labeled validation set) and the approach (i.e., learning-free) are similar.
## 3 Metric-based In-Context Learning
Following the line of work on retrieving the best samples from the development set (Liu et al., 2022; Sorensen et al., 2022), we introduce a simple and intuitive technique based on employing standard evaluation metrics to select the examples.
**Task Setup.** Given the list of sentences \(l=[c,r_{1},r_{2},...,r_{n}]\), where \(c\) is the complex sentence and \(r_{i}\) is a simple reference sentence, our goal is to find the best \(k\) pairs, \([c,r_{i}]\), such that the final text simplification performance on the test set is maximized. To do so, we go through each \(l\) in the development set and score each \(c\)-\(r_{i}\) pair according to a metric, \(m\). Finally, we pick the top \(k\) pairs and fill the prompt template with the samples: "Complex sentence: {\(c\)}, Simple sentence: {\(r_{i}\)}".
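The selection and prompting steps above can be sketched directly; `metric` is any pair scorer returning higher-is-better values (the character-level compression ratio in the usage below is purely illustrative), and the template string follows the paper's description:

```python
def select_examples(dev_pairs, metric, k):
    """Rank all (complex, reference) pairs by the chosen metric, keep top k."""
    return sorted(dev_pairs, key=lambda pair: metric(*pair), reverse=True)[:k]

def build_prompt(examples, test_sentence):
    """Fill the prompt template with selected pairs, then the test input."""
    blocks = [f"Complex sentence: {c}\nSimple sentence: {r}"
              for c, r in examples]
    blocks.append(f"Complex sentence: {test_sentence}\nSimple sentence:")
    return "\n\n".join(blocks)
```

A usage example with a compression-ratio scorer: `select_examples(dev, lambda c, r: len(c) / len(r), k=5)`; swapping in a SARI or BERTScore scorer changes only the `metric` argument.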
We initially considered a long list of task-specific as well as general generation metrics, containing the standard evaluation metrics for TS, namely SARI, BLEU, and FKGL, as well as a simple analysis metric, Compression Ratio, and a more recent textual similarity metric, BERTScore, as suggested by Alva-Manchego et al. (2021). Following the criticisms by Sulem et al. (2018) and Tanprasert and Kauchak (2021), we removed BLEU and FKGL from the list of candidate metrics.
**Compression Ratio (CR).** It is calculated by dividing the number of characters in \(c\) by the number of characters in \(r_{i}\). We consider pairs with higher compression ratios as more preferable candidates for TS.
**BERTScore Precision (BP)** (Zhang et al., 2020). BERTScore computes the cosine similarity between each token in the candidate, \(y\), and reference, \(x\), sentences. Precision is calculated as:
\[\text{Prec}=\frac{1}{|y|}\sum_{y_{j}\in y}\max_{x_{i}\in x}x_{i}^{\top}y_{j} \tag{1}\]
We discard pairs with a score of 1 since they would simply be duplicates.
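With pre-normalised token embeddings, Equation (1) reduces to a max-over-reference dot product averaged over candidate tokens; a toy sketch with hand-made unit vectors (a real run would use contextual BERT embeddings):

```python
def bert_precision(cand_emb, ref_emb):
    """Eq. (1): average over candidate tokens y_j of max_i x_i . y_j
    against reference tokens x_i (unit-norm embeddings assumed, so the
    dot product equals cosine similarity)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return sum(max(dot(x, y) for x in ref_emb) for y in cand_emb) / len(cand_emb)
```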
**SARI** (Xu et al., 2016). It is the de facto standard evaluation metric for TS. In general terms, it compares the prediction against both the input and the reference sentences. It calculates a weighted average of F1 scores for three operations: addition, deletion, and keeping. Precision and recall for each operation are calculated based on n-gram overlaps between the prediction, input, and reference sentences. To calculate the SARI score for each \(c\)-\(r_{i}\) pair, we denote \(r_{i}\) as the prediction, \(c\) as the input, and \([r_{1},...,r_{i-1},r_{i+1},...,r_{n}]\) as the reference sentences. Hence, this measure can only be applied when there are multiple references.
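The leave-one-out scoring can be sketched as a loop over references, treating each in turn as the prediction; the word-overlap F1 below is only a self-contained stand-in for a real SARI implementation (e.g., the EASSE toolkit), which additionally uses the source sentence:

```python
def overlap_f1(pred, source, refs):
    """Toy scorer standing in for SARI: best word-overlap F1 against any
    reference. `source` is unused here but required by real SARI."""
    p = set(pred.split())
    best = 0.0
    for ref in refs:
        r = set(ref.split())
        inter = len(p & r)
        if inter:
            prec, rec = inter / len(p), inter / len(r)
            best = max(best, 2 * prec * rec / (prec + rec))
    return best

def leave_one_out_scores(source, references, scorer):
    """Score each (source, r_i) pair with r_i as the prediction and the
    remaining references as ground truth, as in SARI-based selection."""
    return [scorer(r, source, references[:i] + references[i + 1:])
            for i, r in enumerate(references)]
```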
## 4 Experimental Setup
To investigate the effects of metric-based selection techniques on TS, we perform a comprehensive set of experiments using various LLMs, sample sizes, and datasets, and compare against strong baseline and state-of-the-art models. Following the criticism (Sulem et al., 2018) of using BLEU (Papineni et al., 2002), we use SARI (Xu et al., 2016) as our main evaluation metric. However, we also report BLEU for two reasons: i) to be consistent with previous works (see §4.4) and ii) to gain more insight into how the metric chosen for MBL affects results measured with different metrics.
### Models
Due to its recent success in text generation and in-context learning for various downstream tasks, we experiment with the GPT-3 (Brown et al., 2020) model. We use three different versions with the following parameter sizes:
175B (a.k.a. da-vinci-003), 13B (a.k.a. curie), and 6.7B (a.k.a. babbage). We used the OpenAI API5 to generate responses with temperature=\(0.7\), max_tokens=\(256\), top_p=\(1\), frequency_penalty=\(0\), and presence_penalty=\(0\).
Footnote 5: [https://openai.com/](https://openai.com/)
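The decoding configuration above can be bundled into a request as sketched below. The `text-davinci-003` model identifier is our assumption (the paper abbreviates it as "da-vinci-003"), and the legacy Completions-style parameter names may differ in current versions of the API.

```python
def completion_params(prompt, model="text-davinci-003"):
    # Decoding settings reported in the paper; the model identifier is an
    # assumption, not confirmed by the source.
    return {
        "model": model,
        "prompt": prompt,
        "temperature": 0.7,
        "max_tokens": 256,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
    }
```

The resulting dictionary would be passed to the API client's completion endpoint.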
### Datasets
We perform our main experiments on the ASSET Alva-Manchego et al. (2020) and TurkCorpus Xu et al. (2016) datasets. To investigate the transferability of our models, we conduct additional experiments on an out-of-domain cognitive simplification dataset, FestAbility Chamovitz and Abend (2022).
**TurkCorpus** is a widely-used dataset with 2000 validation and 359 test sentences. It has 8 reference sentences for each original sentence in both the validation and test sets.
**ASSET** is another widely used TS dataset, created with the intention of improving upon TurkCorpus. It has the same 2000 validation and 359 original test sentences but introduces 10 new reference sentences for each original sentence. ASSET is deemed simpler by human evaluation in both fluency and simplicity (Alva-Manchego et al., 2020). ASSET improves upon TurkCorpus in that it allowed human reviewers to employ a wider variety of TS operations: lexical paraphrasing, compression, and sentence splitting. Because of this, we emphasize the experiments done on ASSET rather than TurkCorpus while interpreting the results and answering the research questions in §5.
**FestAbility** is a cognitive simplification dataset with 321 pairs of complex and simple sentences (i.e., only one reference sentence per complex sentence). Each pair is additionally annotated with rewriting operations such as <ADDITION> and <DELETION>. The sentences are taken from the transcript of a virtual accessibility conference, and the simplifications were produced with the Yalon Method (Chamovitz and Abend, 2022), a specialized method for simplifying text for individuals with cognitive impairments.
### Baselines
For comparison, we implement three baselines: i) random selection, ii) KATE-GPT (Liu et al., 2022), and iii) zero-shot. In the random setting, we randomly select \(c\) and \(r_{i}\) pairs from the validation sets. For KATE-GPT (Liu et al., 2022), we use the default setting, which employs RoBERTa-base for contextualized embeddings and cosine similarity as the distance metric. Since KATE-GPT calculates sentence-pair similarities between the development and test sets (unlike ours, which uses only the development set), we choose complex sentences as the representative. The zero-shot setting is simply conducted with the same instruction prompt without providing any examples.
### Text Simplification State-of-the-art
We compare our results across multiple state-of-the-art systems.
**MUSS (BART+ACCESS, Supervised).** Martin et al. (2022) fine-tune BART (Lewis et al., 2020) and add information from the four simplification tokens trained in ACCESS.
**Finetuned T5.** Sheang and Saggion (2021) fine-tune T5 by adding multiple control tokens (e.g., compression ratio, Levenshtein similarity ratio, word rank, and number of words), similar to ACCESS, which control the model's outputs. To the best of our knowledge, they achieve the current state of the art on both the TurkCorpus and ASSET datasets, with SARI scores of 43.31 and 45.04, respectively.
### Evaluation
Even though SARI is considered the standard evaluation metric in our experiments below, we evaluate the results with both SARI and BLEU to emphasize the behavior differences in metric-based selection. It should be noted that SARI compares the prediction against the input and reference(s), while BLEU compares the prediction only against the reference(s). We use the EASSE package (Alva-Manchego et al., 2019) with the default settings6 to generate reports for all of our experiments.
Footnote 6: We used the BLEU with \(n=5\) against all references provided. The implementation is available at [https://github.com/fervalvam/easse](https://github.com/fervalvam/easse).
## 5 Experiments and Results
We conduct a comprehensive set of experiments with the setup explained in §4. Following the work by Lu et al. (2022), we experiment with \(k\) values of \(1,2,4,6,8,10,15,20\) examples. We repeat the random baseline experiments three times for each \(k\). We aim to answer the following research questions (RQs):
**RQ1:** How do different metric-based selection techniques compare? (§5.1)

**RQ2:** Is metric-based sample selection robust to the order of the prompts? (§5.2)

**RQ3:** How does metric-based ICL compare to state-of-the-art text simplification methods? (§5.3)

**RQ4:** Does metric-based selection performance on one dataset transfer to other out-of-domain datasets? (§5.4)
### RQ1: Effect of Metrics
Our main results with our default settings (GPT-\(175\)B on ASSET) are shown in Fig. 1. First of all, we observe that the random baseline is quite strong on average, albeit with a **large variation** for most \(k\) values, while zero-shot results are quite weak for all datasets. Interestingly, SARI-based selection consistently leads to the highest SARI scores for \(k>1\), while BERTPrec-based selection consistently gives the highest BLEU and lowest SARI scores for each \(k\). KATE-GPT follows BERTPrec-based selection in BLEU score, while providing SARI results on par with or lower than the random baseline.
Next, we check whether our findings hold for smaller models. In Fig. 2, we show the results of our smallest model, GPT-6.7B, on the ASSET dataset. Since the zero-shot results were significantly lower than those for \(k=1\), we report them in Table 1 rather than plotting them. Not surprisingly, the highest SARI scores are achieved with the largest model; however, the opposite is not true for BLEU. The smallest model achieves the highest BLEU scores, which raises another warning flag for using BLEU for TS evaluation.
Similar to the larger model, BERTPrec-based selection achieves the highest BLEU and the lowest SARI scores. SARI-based selection provides considerably high SARI scores only for larger values of \(k\), suggesting the implicit controlling mechanism either does not exist or is only triggered with more samples. We also observe that CR performs relatively better on GPT-6.7B, which suggests compression provides a stronger signal (e.g., deletion, shorter tokens) that smaller models can better utilize for simplification.
Finally, we investigate how the quality of the dataset affects the metric-based selection techniques, i.e., whether they are robust to noise. Fig. 3 shows an overview of the SARI scores from all models on the noisier (i.e., TurkCorpus) and the cleaner (i.e., ASSET) dataset. Even though the general patterns are visible, the results on TurkCorpus are moderately less conclusive.
Across all model and dataset size settings, we observe that BLEU scores are consistently higher when the examples are selected via BERTScore_Precision. When we evaluate with the SARI score, the SARI metric behaves similarly for the GPT-175B model; however, the CR metric performs better for the smaller models. More evidence for the relation between BLEU and BERTScore_Precision can be found in Appendix B. This suggests that the behavior of large-enough GPT models can be _implicitly controlled_ via MBL, which paves the way for a new research direction and needs further investigation.
### RQ2: Effect of Order
Previous research (Lu et al., 2022) has shown that the order of the examples may have a significant impact on ICL performance. Commonly used orderings (Lu et al., 2022) include sorting from highest- to lowest-quality example, the reverse, and random ordering. Inspired by these findings, we investigate the robustness of our selection metrics across sample orders. To do so, we apply three different order arrangements to each metric, namely highest \(\rightarrow\) lowest, lowest \(\rightarrow\) highest, and random ordering. To have enough variation, we only experiment with \(k=6,8,10,15\). As the baseline, we randomly pick samples and arrange them in 3 different randomized orders.
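The three order arrangements can be sketched as below; `pairs_with_scores` pairs each selected example with its metric score, and the function name is illustrative rather than from the authors' code.

```python
import random

def arrange(pairs_with_scores, order):
    """pairs_with_scores: list of ((c, r), score) tuples. Returns the
    prompt examples in one of the three orders studied:
    'high_to_low', 'low_to_high', or 'random'."""
    if order == "random":
        pairs = [p for p, _ in pairs_with_scores]
        random.shuffle(pairs)
        return pairs
    reverse = order == "high_to_low"
    ranked = sorted(pairs_with_scores, key=lambda t: t[1], reverse=reverse)
    return [p for p, _ in ranked]
```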
In Fig. 4, we show how the performance of GPT-\(175\)B on ASSET varies when the samples that are picked i) randomly, ii) by SARI-based selection, and iii) by BERTPrec-based selection are reordered following the above setup. As can be seen, the best-performing metrics are also the most robust. To elaborate, SARI-based selection, which provided the highest SARI scores, has the least variation, i.e., is most robust to order, while BERTPrec-based selection provides the most stable BLEU scores along with the highest.
### RQ3: Comparison to State-of-the-art
Finally, we compare our best and average model settings to state-of-the-art fine-tuned models7. The results are given in Table 2. Here, the Random and SARI averages are calculated from the §5.1 results, averaged over all \(k\), with random selection additionally averaged over all three random runs. These averages are reported for GPT-\(175\)B, because it is generally the best model when considering averages across both datasets. As can be seen, the GPT-\(175\)B model with SARI-based selection outperforms existing results on all datasets, followed by Random-Best and SARI-Average. The exact settings (number of examples, model, and ordering) for SARI and Random Best can be found in Appendix A.
Footnote 7: The models which do not provide SOTA (e.g., KATE) are not included in the Table. The statistical significance cannot be provided since there is only one setting for the few-shot setting.
### RQ4: Task Transfer
In order to evaluate the suitability of our approach for unseen tasks and datasets, we experiment with choosing samples from a tune set and testing the performance on an unseen set. Here, we use ASSET and TurkCorpus as the tune sets and evaluate on all three datasets: ASSET, TurkCorpus, and FestAbility. To investigate different metrics and language models, we perform experiments with GPT-\(175\)B with SARI-based selection and GPT-\(13\)B with CR-based selection, as these metric selection techniques are generally best on those respective models. We
Figure 3: SARI scores for GPT-\(6.7\)B, GPT-\(13\)B and GPT-\(175\)B models on ASSET (top) and TurkCorpus (bottom) datasets. See App. B for BLEU scores.
use the best experimental settings from Table 2 in out-of-domain settings, comparing them with their in-domain counterparts. For example, the best setting for TurkCorpus is k=\(6\) with high to low ordering (see Appendix A for more details on optimal experiment settings), so we compare the results of the model when given this setting on both the TurkCorpus and ASSET datasets. State-of-the-art results in this table refer to the best setting for in-domain experiments (i.e. ASSET evaluated on ASSET or TurkCorpus evaluated on TurkCorpus).
Our results are given in Table 3. For easy comparison, the table also includes in-domain selection results as well as the current state-of-the-art scores taken from Table 2. In the first row, we observe that samples selected from TurkCorpus and tested on ASSET achieve significantly lower SARI scores than their in-domain variant for the GPT-\(175\)B setting, whereas the gap is smaller for GPT-\(13\)B. On the other hand, in the TurkCorpus test setting (second row), we see that the GPT-\(175\)B model prompted with the best ASSET examples achieves even better results than in the in-domain setting, suggesting a highly successful transfer. This ability cannot be observed for the GPT-\(13\)B model with CR-based selection. The final row shows the transfer results to another related but different task. It is apparent that both models prompted with ASSET examples achieve marginally higher scores than the TurkCorpus ones.
Taking a look at the BLEU scores, we see that out-of-domain configurations on the TurkCorpus and ASSET datasets generally tend to match or even exceed their in-domain counterparts, suggesting a successful transfer. However, on the FestAbility dataset, we observe notably low BLEU scores, which are in part due to the nature of FestAbility, in which sentences are often simplified in unconventional ways. Additionally, FestAbility sentences are extremely short, with only 1452 unique tokens in the original sentences and 996 unique tokens in the simplified sentences (Chamovitz and Abend, 2022), leading to unconventional results.
## 6 Qualitative Analysis
In this section, we perform a qualitative analysis of different model generated simplifications and
| **Model** | **ASSET** | **TurkCorpus** | **FestAbility** |
| --- | --- | --- | --- |
| Finetuned T5 | 45.04 | 43.31 | N/A |
| MUSS (BART + ACCESS) | 43.63 | 42.62 | N/A |
| BART-large-Classifier | 38.76 | N/A | 27.13 |
| (Ours) Random-Best | 46.93 | 43.14 | N/A |
| (Ours) Random-Average | 45.33 | 40.32 | N/A |
| (Ours) MBL-Best | **47.94** | **43.46** | **44.86** |
| (Ours) MBL-Average | 46.63 | 41.78 | 43.55 |

Table 2: Comparison to TS state-of-the-art models. Random- and MBL-best examples are selected from the top examples in all experiments run. The best results are shown in bold. For more information on the exact settings for MBL and Random Best, see Appendix A.
Figure 4: Boxplots for GPT-\(175\)B model performance on ASSET with sample (re)ordering via random, SARI-based, and BERTPrec-based selections. Performance shown in SARI (left) and BLEU (right).
| **Test Set** | **Model Setting** | **Tune Set** | **SARI** | **BLEU** |
| --- | --- | --- | --- | --- |
| ASSET | GPT-\(175\)B, SARI, high to low, k=6 | TurkCorpus | 43.46 | 79.83 |
| ASSET | GPT-\(175\)B, SARI, high to low, k=6 | ASSET | 46.93 | 75.67 |
| ASSET | GPT-\(13\)B, CR, high to low, k=15 | TurkCorpus | 41.73 | 74.57 |
| ASSET | GPT-\(13\)B, CR, high to low, k=15 | ASSET | 41.9 | 76.49 |
| ASSET | _State-of-the-art (MBL-Best)_ | | _47.94_ | _73.92_ |
| ASSET | _Zero-shot (GPT-\(175\)B)_ | | _38.49_ | _60.48_ |
| TurkCorpus | GPT-\(175\)B, SARI, random, k=15 | ASSET | 42.37 | 64.52 |
| TurkCorpus | GPT-\(175\)B, SARI, random, k=15 | TurkCorpus | 41.48 | 85.89 |
| TurkCorpus | GPT-\(13\)B, CR, high to low, k=15 | ASSET | 39.44 | 71.15 |
| TurkCorpus | GPT-\(13\)B, CR, high to low, k=15 | TurkCorpus | 40.37 | 73.83 |
| TurkCorpus | _State-of-the-art (MBL-Best)_ | | _43.46_ | _76.83_ |
| TurkCorpus | _Zero-shot (GPT-\(175\)B)_ | | _32.17_ | _42.34_ |
| FestAbility | GPT-\(175\)B, SARI, random, k=6 | TurkCorpus | 42.24 | 20.76 |
| FestAbility | GPT-\(175\)B, SARI, random, k=15 | ASSET | 44.86 | 17.08 |
| FestAbility | GPT-\(13\)B, CR, high to low, k=15 | TurkCorpus | 25.46 | 23.37 |
| FestAbility | GPT-\(13\)B, CR, high to low, k=15 | ASSET | 36.63 | 12.01 |
| FestAbility | _State-of-the-art (MBL-Best)_ | | _44.86_ | N/A |
| FestAbility | _Zero-shot (GPT-\(175\)B)_ | | _40.77_ | _6.9_ |

Table 3: ICL out-of-domain results for GPT-\(175\)B, SARI-based selection and GPT-\(13\)B, CR-based selection. Examples are picked from the _Tune Set_ and tested on the _Test Set_. Zero-shot results are from GPT-\(175B\) and given in the final rows for each dataset.
metric-based prompting examples in order to better understand how different settings affect model outputs.
### Explaining Performance as \(k\) Increases
We aim to understand why certain metrics (BERTPrec and KATE-GPT) tend to perform worse as \(k\) increases, while others (SARI) tend to perform better as \(k\) increases when evaluated on SARI scores (as seen in Fig. 3). In fact, this result is commonly seen in other papers (Zhao et al., 2021; Zhang et al., 2022), which describe that adding more training examples can sometimes hurt accuracy. By analyzing the output of metric-based selection on a fixed dataset and model (ASSET, GPT-\(175\)B), shown in Appendix D.3, we aim to understand the performance of different metrics as the number of examples, \(k\), increases. Our analysis focuses on three different metrics (KATE-GPT, BERTPrec, and SARI) and a particularly difficult example, owing to its unconventional subject matter, multiple abbreviations, unknown words, and confusing sentence structure. In general, we see from earlier trends that KATE-GPT- and BERTPrec-selected examples tend to get worse (w.r.t. SARI) as \(k\) increases (see Figure 1, top). We also observe this qualitatively: as \(k\) increases, KATE-GPT and BERTPrec examples become closer to the original sentence, with BERTPrec generations even matching the original sentence at \(k=15\). However, as \(k\) increases, SARI-selected examples show an improvement in quality. We observe that examples selected using the SARI score metric tend to: i) split sentences more frequently, and ii) decode potentially confusing abbreviations, such as "OEL".
**Sentence Splitting:** SARI-selected examples are more prone to splitting sentences (see \(k=2,15\)), which may be in part due to the style of the top SARI examples, which include sentence splitting; this is not present in any of the other metrics. See Appendix D.3 for examples. Sentence splitting is correlated with increased human comprehension of TS outputs (Williams et al., 2003). This is particularly interesting because it leads us to infer that models can potentially learn the "style" of the reference sentences.
**Abbreviations:** In all three cases (\(k=2,8,15\)), examples selected by SARI score remove the potentially confusing abbreviation "OEL" and instead replace it with either "original English-language" or "English-language", while KATE-GPT- and BERTPrec-selected examples only exhibit this behavior for \(k=2\) (see Appendix D.3).
| Metric | Complex Sentence | Simple Sentence |
| --- | --- | --- |
| Compression Ratio | They manifest with either neurological complications or with skin problems (or occasionally both). | They show either brain or skin problems (or both). |
| Compression Ratio | The psychological state of sympathy is closely linked with that of compassion, empathy and empathic concern. | Sympathy is closely linked with compassion and empathy. |
| BertScore Precision | Sthenurine forelimbs were long with two extra-long fingers and claws compared with the relatively small, stiff arms of modern macropods. | Sthenurine forelimbs were long with two extra-long fingers and claws compared with the small, stiff arms of modern macropods. |
| BertScore Precision | In 1828, Coenraad Johannes van Houten developed the first cocoa powder producing machine in the Netherlands. | In 1828, Coenraad Johannes van Houten created the first cocoa powder producing machine in the Netherlands. |
| SARI | The organic matter in soil derives from plants and animals. | The organic matter in soil comes from plants and animals. |
| SARI | Dennis Lee Hopper (born May 17, 1936) is an American actor, filmmaker and artist. | Dennis Lee Hopper was born on May 17, 1936. He is an American actor, filmmaker and artist. |

Table 4: Top 2 examples from each applicable selection metric (random and KATE-GPT selection were not applicable). All samples taken from the ASSET Validation dataset. We color rephrases first in blue and then in yellow, mark significant deletions in red, and underline sentence splits.
### Analyzing Model Size
Model size plays a significant role in the output sentences, with smaller models (especially GPT-\(6.7\)B) tending to change very little structurally from the original sentence, regardless of the metric used to select examples. See Appendix D for a complete list of model outputs on all metrics for the original sentence "OEL manga series Graystripe's Trilogy There is a three volume original English-language manga series following Graystripe, between the time that he was taken by Twolegs in Dawn until he returned to ThunderClan in The Sight". From these results, we conclude that GPT-\(6.7\)B tends to hardly change sentences at all, with both Random- and BERTPrec-selected examples yielding no change from the original sentence. SARI-selected examples add a comma, but CR selection prompts the model to rephrase key parts of the sentence. GPT-\(13\)B performs considerably better in a qualitative analysis, as all examples have removed "OEL manga series Graystripe's Trilogy" and restructured the sentence to be more concise, with SARI selection going as far as to remove the ambiguous abbreviation "OEL". These qualitative observations are consistent with our results from Figure 1.
### Analyzing Top Metric-Selected Examples
In this section, we analyze the top metric-selected examples for compression ratio, BERTPrec, and SARI. In Table 4 we include the top \(2\) examples for each metric from the ASSET validation dataset, and in Appendix C we include the remaining top \(8\) examples for SARI and BERTPrec selection.
Looking at the style of both BERTPrec and SARI score, both metrics' top examples barely change from the original sentences, often only changing one or two words (e.g., movie \(\rightarrow\) film) while leaving the rest unchanged, primarily using deletion or rewriting operations. In the CR top examples, however, we see extreme deletions from the original sentences and several rewriting operations (which is consistent with our understanding of the compression ratio). Notably, we also see that the top SARI examples are the only ones that use sentence splitting (see the \(2\)nd example under SARI in Table 4).
## 7 Conclusion and Future Work
In conclusion, we propose a novel and robust method for selecting examples in the TS domain, evaluating its effectiveness on multiple well-known TS datasets and even on downstream tasks like cognitive simplification. Our experiments demonstrate state-of-the-art results in the fields of TS and CS, reaching SARI scores of 44.86 on FestAbility, 47.94 on ASSET, and 43.46 on TurkCorpus. We hope that future work will generalize our findings to other text generation tasks and other domains.
### Limitations
Our approach is computationally and financially intensive, especially on the GPT-\(175\)B model, which limits its scalability to smaller, open-source models. While our approach has shown strong results in the TS domain, we are not yet sure whether domain-specific selection methods are widely applicable. Our approach is also not applicable in true few-shot settings, in which a large validation set is not available to select examples from. In addition, our approach's scalability to other downstream TS tasks outside of cognitive simplification is yet to be tested, especially in different domains. We tested on two well-known TS datasets (ASSET and TurkCorpus); we did not test on another known TS dataset, Newsela, due to its restrictive licensing. Additionally, we have tested our approach with up to \(15\) examples due to financial constraints; testing with more examples may yield additional insights, which we leave for future work.
## Ethics Statement
We acknowledge that while our approach reaches high scores on datasets aimed for individuals with disabilities, further research and evaluation from humans with specific disabilities listed in this paper is crucial to determine the true effectiveness of our approach in these scenarios.
## Acknowledgements
This work has been supported by the Scientific and Technological Research Council of Türkiye (TÜBİTAK) as part of the project "Automatic Learning of Procedural Language from Natural Language Instructions for Intelligent Assistance" with the number 121C132. We also gratefully acknowledge the Fatima Fellowship and KUIS AI Lab for providing support. We thank our anonymous reviewers and the members of GGLab who helped us improve this paper.
# Interactive Editing for Text Summarization

Yujia Xie, Xun Wang, Si-Qing Chen, Wayne Xiong, Pengcheng He

arXiv:2306.03067v1 (2023-06-05), [http://arxiv.org/abs/2306.03067v1](http://arxiv.org/abs/2306.03067v1)
###### Abstract
Summarizing lengthy documents is a common and essential task in our daily lives. Although recent advancements in neural summarization models can assist in crafting general-purpose summaries, human writers often have specific requirements that call for a more customized approach. To address this need, we introduce REVISE, an innovative framework designed to facilitate iterative editing and refinement of draft summaries by human writers. Within our framework, writers can effortlessly modify unsatisfactory segments at any location or length and provide optional starting phrases - our system will generate coherent alternatives that seamlessly integrate with the existing summary. At its core, REVISE incorporates a modified fill-in-the-middle model with the encoder-decoder architecture while developing novel evaluation metrics tailored for the summarization task. In essence, our framework empowers users to create high-quality, personalized summaries by effectively harnessing both human expertise and AI capabilities, ultimately transforming the summarization process into a truly collaborative and adaptive experience.
## 1 Introduction
Human intelligence has been significantly augmented by the rapid development of Artificial Intelligence (AI; Engelbart, 1962; Lee et al., 2022), particularly with the emergence of large language models (Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; OpenAI, 2023). AIs have shown great potential in various practical settings, helping users brainstorm ideas (e.g., Jasper, Copy.ai), paraphrase sentences (e.g., Wordtune, QuillBot), reformulate queries (Nogueira and Cho, 2017), autocomplete sentences (Chen et al., 2019), and write code (e.g., Copilot, TabNine). One specific area where AI can revolutionize our daily lives is document summarization, which is the focus of this paper.
In this work, we present REVISE (Refinement and Editing Via Iterative Summarization Enhancement), a novel framework that transforms the summarization process into an interactive experience for human writers. Instead of generating static summaries, our approach enables users to efficiently edit and improve draft summaries iteratively, tailoring them to their specific needs and preferences. This results in the creation of high-quality, customized summaries that cater to individual requirements and contexts, moving beyond the limitations of traditional, one-size-fits-all summarization models. Figure 1 shows an illustration.
Our framework primarily consists of two models: one providing the initial draft summary and the other supporting the writer in refining the summary through interactive suggestions. We build upon a pretrained encoder-decoder summarization model and enhance its ability to generate contextually relevant and coherent suggestions for human edits. Through extensive experimentation and evaluation, we demonstrate the superior performance of our proposed framework in terms of salience,
Figure 1: Illustration on how REVISE interacts with human writer.
coherence, and adaptability to different situations.
The key innovation in REVISE lies in fostering a seamless collaboration between human writers and AI models, creating a truly interactive summarization experience. This interactive approach not only enables users to harness the power of AI to extract key information but also preserves the creativity and adaptability offered by human input. By empowering users to edit the summary iteratively until they are satisfied, our framework ensures the delivery of personalized, high-quality summaries tailored to diverse requirements.
We perform extensive human evaluations, which suggest that the proposed framework not only improves the efficiency of human editing but also significantly enhances summary quality.
## 2 Related Works
**Interactive summarization**. Several works explore how to facilitate human summary writing in an iterative way. For example, Yan et al. (2011) generate new summaries after users click on sentences they want to know more about. In Avinesh and Meyer (2017) and Avinesh et al. (2018), users can indicate which bigrams of a candidate summary are relevant to their interests. The APRIL system (Gao et al., 2020) first lets users indicate a preference between candidate summaries, and then trains a summary-ranking model to select the next pair of candidate summaries. Bohn and Ling (2021) optimize document summaries for personal interest by collecting user feedback in the normal flow of reading, such as dwell time or gaze location. More recently, Shapira et al. (2021, 2022) allow users to interactively submit queries in order to expand on the information on a topic in the summary. In contrast, our framework is more versatile: users can either specify their intent via prompts or let the model provide a few alternatives.
**Text generation - interactive editing**. The task of humans editing text interactively with AI is widely explored in other text generation tasks (Cheng et al., 2022; Lee et al., 2022), e.g., machine translation (Barrachina et al., 2009). Many works are _prefix-based_, i.e., new completions can only be generated left-to-right (Gonzalez-Rubio et al., 2013; Peris and Casacuberta, 2018, 2019, 2020). Few works (Gonzalez-Rubio et al., 2016; Weng et al., 2019) allow edits at arbitrary positions; however, unlike in REVISE, the edits can only be words or sentences.

**Text infilling**. There are two approaches for imbuing models with infilling capabilities: first, through new architectures like SpanBERT (Joshi et al., 2020) and XLNet (Yang et al., 2019). To list a few examples, XLNet modifies the attention mask in a standard transformer to enable token generation in any user-specified order, while the Insertion Transformer (Stern et al., 2019), KERMIT (Chan et al., 2019), and InDIGO (Gu et al., 2019) allow the model to predict a location for the next token before predicting the token. Similarly, Blank Language Models (Shen et al., 2020) generate text by iteratively selecting a blank and replacing it with a token (and optionally more blanks).
Alternatively, Zhu et al. (2019), Donahue et al. (2020), GLM (Du et al., 2022), CM3 (Aghajanyan et al., 2022), InCoder (Fried et al., 2022), and Bavarian et al. (2022) utilize left-to-right autoregressive modeling by moving the infill regions to the end of the context, with regions separated by sentinels. Notably, Bavarian et al. (2022) show the computational efficiency and superior performance of training in this way at scale. Our work extends Bavarian et al. (2022) to encoder-decoder models and demonstrates the feasibility of this approach for summarization.
Text infilling can also be performed using a GAN-based method (Fedus et al., 2018), where REINFORCE is needed, or through gradient search (Liu et al., 2019).
## 3 Method
The key component of our framework is a Fill-In-the-Middle (FIM) model, which provides alternatives to the part the human marks as unsatisfactory. Specifically, the input to the model consists of the source document sequence, a prefix sequence containing the summary before the deleted part (optionally followed by a human-written start), and a suffix sequence containing the summary after the deleted part. The goal of the model is to fill in a sequence that not only contains the key information of the document, but also connects coherently with the prefix and the suffix. Following advanced neural models for abstractive summarization (Zhang et al., 2020; He et al., 2022), we adopt an encoder-decoder model architecture.
### Training for FIM
We start from a standard summarization training dataset \(\mathcal{D}=\{d_{i},t_{i}\}_{i=1}^{N}\), where \(d_{i}\) is the \(i\)-th document, \(t_{i}\) is the corresponding summary, and both are sequences of tokens. During training, we
randomly divide the summary \(t_{i}\) into three parts: the prefix \(p_{i}\), the middle \(m_{i}\), and the suffix \(s_{i}\). Then, we concatenate the prefix, the suffix, and the document, together with their sentinel tokens, as the input for the encoder,
\[\texttt{[PRE]}\circ p_{i}\circ\texttt{[SUF]}\circ s_{i}\circ\texttt{[CLS]} \circ d_{i},\]
where \(\circ\) is the concatenation of tokens. We input the middle together with its sentinel tokens into the decoder,
\[\texttt{[BOS]}\circ m_{i}\circ\texttt{[EOS]}.\]
Here, we follow Bavarian et al. (2022) and adopt an [EOS] token to signal a successful concatenation of the middle and the suffix. The model is then trained with a standard cross-entropy loss for sequence-to-sequence models.
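The construction above can be sketched as follows. This is an illustrative sketch, not the paper's exact tokenizer-level recipe: the sentinel strings and the uniform random choice of the two cut points are assumptions.

```python
import random

# Sentinel strings are placeholders; the actual special tokens are
# defined by the model's tokenizer.
PRE, SUF, CLS, BOS, EOS = "[PRE]", "[SUF]", "[CLS]", "[BOS]", "[EOS]"

def make_fim_example(document, summary, rng=random):
    """Randomly split a tokenized summary into (prefix, middle, suffix)
    and build the encoder input and decoder target sequences."""
    i, j = sorted(rng.sample(range(len(summary) + 1), 2))
    prefix, middle, suffix = summary[:i], summary[i:j], summary[j:]
    encoder_input = [PRE] + prefix + [SUF] + suffix + [CLS] + document
    decoder_target = [BOS] + middle + [EOS]
    return encoder_input, decoder_target
```

By construction, concatenating the prefix, the decoder target (without sentinels), and the suffix recovers the original summary.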
### Training for the Corner Cases
Human edits can appear anywhere - not only in the middle, but also at the beginning and the end. In our preliminary experiments, we find that if the model is trained only on middle spans, it cannot handle edits at the beginning or the end of summaries. In particular, if the edits are at the end, the generation usually fails to reach the [EOS] token, i.e., the generation does not terminate.
Therefore, we sample a proportion \(\gamma\) of the data for edits at the beginning and the end. Specifically, we randomly split the summary \(t_{i}\) into a prefix \(p_{i}\) and a suffix \(s_{i}\). On the one hand, to train generation at the end, we input the concatenation of the prefix and the document into the encoder,
\[\texttt{[PRE]}\circ p_{i}\circ\texttt{[CLS]}\circ d_{i},\]
and the suffix for the decoder,
\[\texttt{[BOS]}\circ s_{i}\circ\texttt{[EOS]}.\]
On the other hand, we exchange the positions of \(p_{i}\) and \(s_{i}\) to train generation at the beginning.
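A sketch of the corner-case sampling; in training, a proportion \(\gamma=0.5\) of examples would be drawn this way. The interpretation of "exchange the positions" for the begin-generation case (conditioning on the suffix behind a [SUF] sentinel to predict the prefix) is our reading and is not spelled out in the paper.

```python
import random

def make_corner_case_example(document, summary, rng=random):
    """Split the summary once into (prefix, suffix); train either
    end-generation (condition on prefix, predict suffix) or
    begin-generation (condition on suffix, predict prefix)."""
    k = rng.randrange(len(summary) + 1)
    prefix, suffix = summary[:k], summary[k:]
    if rng.random() < 0.5:
        # generation at the end:  [PRE] p [CLS] d  ->  [BOS] s [EOS]
        enc = ["[PRE]"] + prefix + ["[CLS]"] + document
        dec = ["[BOS]"] + suffix + ["[EOS]"]
    else:
        # generation at the beginning: prefix and suffix roles exchanged
        enc = ["[SUF]"] + suffix + ["[CLS]"] + document
        dec = ["[BOS]"] + prefix + ["[EOS]"]
    return enc, dec
```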
## 4 Experiments
### Settings
We use the CNN / Daily Mail (CNNDM) dataset (See et al., 2017) for training and evaluation. We adopt the pretrained Z-Code++ (He et al., 2022) as the backbone encoder-decoder model. The proportion \(\gamma\) of the training for the corner cases is \(0.5\). We train the model for \(10\) epochs with learning rate \(7\times 10^{-6}\).
### Evaluation Metrics
We evaluate the FIM model for three aspects:
1. Is the generation salient in the document?
2. Does the generation connect coherently with the rest of the summary?
3. Can the model handle any possible positions?
We propose to use three evaluation metrics for these three aspects, respectively.
**ROUGE score**. We split the test-set summaries of CNNDM into prefixes, middles, and suffixes. We feed the golden prefixes and suffixes to the model, and compute ROUGE scores (Lin, 2004) between the generated texts and the golden middles. In this way, we measure whether the generated texts capture the important information as the golden summaries do.
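As a rough illustration, ROUGE-n can be computed as an n-gram overlap F1. This is a simplified sketch, not the full Lin (2004) implementation (no stemming, no stopword handling, no ROUGE-L).

```python
from collections import Counter

def rouge_n(candidate, reference, n=2):
    """Simplified ROUGE-n F1 between two token lists."""
    def ngrams(toks):
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    c, r = ngrams(candidate), ngrams(reference)
    overlap = sum((c & r).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    prec = overlap / sum(c.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)
```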
**GPT Likelihood**. Large pretrained language models can be used to evaluate the coherence of text (Yuan et al., 2021). Here, we adopt GPT-3.5 (Brown et al., 2020). Specifically, given a sequence of tokens \(x=(x^{(1)},\cdots,x^{(j)},x^{(j+1)},\cdots,x^{(N)})\), we evaluate whether \((x^{(1)},\cdots,x^{(j)})\) connects locally coherently with \((x^{(j+1)},\cdots,x^{(N)})\) by computing the log-likelihood of generating
Figure 2: Illustration of how the FIM model is trained.
\(x^{(j+1)},x^{(j+2)},\cdots\), given the previous tokens,
\[\ell_{H}((x^{(1)},\cdots,x^{(j)}),(x^{(j+1)},\cdots,x^{(N)}))\] \[= \log p(x^{(j+1)},\cdots,x^{(j+H)}|x^{(1)},\cdots,x^{(j)})\] \[= \sum_{h=1}^{H}\log p_{\texttt{\tiny GPT}}(x^{(j+h)}|x^{(1)}, \cdots,x^{(j+h-1)})\]
where \(p_{\texttt{\tiny GPT}}(a^{(k)}|a^{(1)},\cdots,a^{(k-1)})\) is the probability, provided by the GPT model, that the next token is \(a^{(k)}\) given the previous token sequence \(a^{(1)},\cdots,a^{(k-1)}\). Here, we compute the likelihood over a fixed-length sequence \(x^{(j+1)},\cdots,x^{(j+H)}\) instead of the complete sequence \(x^{(j+1)},\cdots,x^{(N)}\) to alleviate the effect of length.
For summarization, we consider the connectivity of the prefix-middle and the middle-suffix,
\[\ell_{1} =\ell_{H}(p_{i},m_{i}),\] \[\ell_{2} =\ell_{H}(p_{i}\circ m_{i},s_{i}),\]
where \(p_{i},m_{i},s_{i}\) are the prefix, middle, and suffix from the split test-set summaries described above.
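The fixed-horizon likelihood \(\ell_{H}\) can be sketched generically. `logprob_fn` below stands in for the GPT scoring call; its signature is an interface assumption for illustration, not an actual API.

```python
import math

def ell_H(logprob_fn, context, continuation, H):
    """Fixed-horizon log-likelihood of the first H continuation tokens
    given the context; logprob_fn(prev_tokens, token) returns
    log p(token | prev_tokens) from a scoring language model."""
    total = 0.0
    prev = list(context)
    for tok in continuation[:H]:
        total += logprob_fn(prev, tok)
        prev.append(tok)
    return total
```

For example, \(\ell_{1}=\ell_{H}(p_{i},m_{i})\) corresponds to `ell_H(logprob_fn, p_i, m_i, H)`.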
**ROUGE score for corner cases**. Following the training recipe in Section 3.2, we split the test set summaries into the prefix and the suffix, ask the model to generate the prefix or the suffix accordingly, and compute the corresponding ROUGE scores.
### Empirical Results
**Different design choices for text infilling**. We include three different variants of the proposed model in Table 1. Our proposed training method can achieve the best performance in terms of salience and coherence. In Table 2 we also show how the design choices in the pretraining stage affect the performance.
**Comparing to standard summarization model**. The last column of Table 1 shows the results when we do not provide summary context for generation, i.e., we ask the model to generate a complete summary. In comparison, in He et al. (2022), a vanilla finetuned summarization model can achieve a ROUGE-2 of 22.2. Our corner case training can significantly improve the generation quality.
### Human Evaluation
To validate that our proposed pipeline can effectively help human writers make edits, we conduct a controlled experiment on whether using REVISE improves editing efficiency. Specifically, we collect 120 documents in three domains1 - news, conversations, and blogs - and ask human annotators to edit the draft summaries until they are satisfied with the summary. The draft summaries are generated by the standard summarization model. We compare the editing processes with and without interaction.
Footnote 1: We release the data together with the evaluation results at [https://github.com/microsoft/Interactive-Summarization](https://github.com/microsoft/Interactive-Summarization).
Figure 3 shows an illustration of the editing process with interaction. In practice, we prompt the annotators with 3 suggestions. We take the top-3 beams from beam-search decoding as suggestions, which guarantees that the suggestions differ by at least one token.
The experiment consists of two stages. In the first stage, we run the controlled comparison, collecting annotations with and without interaction. In the
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Model & R-FIM & R-begin & R-end & R-all \\ \hline Base model & 47.79/27.20 & 46.01/25.98 & 34.46/18.13 & 41.92/19.36 \\ - DA & 46.52/25.97 & 44.99/25.19 & 34.04/17.52 & 35.44/15.53 \\ - DA - RTD & 46.22/25.80 & 45.15/25.22 & 33.40/17.02 & 39.90/18.21 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Different pretraining settings.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Model & R-FIM & R-begin & R-end & \(\ell_{1}\) & \(\ell_{2}\) & R-all \\ \hline Proposed & 51.79/30.30 & 49.51/28.87 & 37.36/20.31 & -3.14 & -2.97 & 43.32/20.96 \\ Context in decoder & 45.72/25.62 & 44.25/24.72 & 36.32/19.41 & -3.14 & -3.84 & 43.37/20.94 \\ No corner case training & 51.86/30.24 & 46.67/27.11 & 26.97/12.67 & -4.01 & -3.04 & 10.67/2.42 \\ Base model & 47.79/27.20 & 46.01/25.98 & 34.46/18.13 & -4.32 & -3.19 & 41.92/19.36 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Different variants of the FIM model. Here, for R-FIM, R-begin, R-end, and R-all, the first number is ROUGE-1, the second number is ROUGE-2.
Figure 3: Illustration on the human editing process with interaction. The annotators will be prompted with multiple suggestions of summary completions, and they choose one from them.
second stage, we evaluate the annotations collected from the first stage, to see whether annotation quality and efficiency are improved. The experiment involves 3 annotators, where each annotator will
1. annotate 40 documents with interaction,
2. annotate 40 documents without interaction,
3. evaluate the summaries of 40 documents, where each document has 3 summaries, i.e., the draft summary and the human-annotated summaries produced with and without interaction.
The 40 documents in each of the above tasks are different for each annotator. In this way, we ensure that the annotation times and evaluation results are unbiased, as no annotator ever sees the same document across different tasks.
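One assignment scheme consistent with these constraints is a Latin-square rotation of three 40-document blocks over the three tasks. The paper does not state its exact assignment procedure, so this is an illustrative sketch.

```python
def assign_tasks(doc_ids, n_annotators=3):
    """Rotate document blocks over the three tasks so that no annotator
    sees the same document in two tasks, and each task's blocks
    partition the full document set across annotators."""
    assert len(doc_ids) % n_annotators == 0
    block = len(doc_ids) // n_annotators
    blocks = [doc_ids[i * block:(i + 1) * block] for i in range(n_annotators)]
    tasks = ["with_interaction", "without_interaction", "evaluate"]
    return {
        a: {tasks[t]: blocks[(a + t) % n_annotators] for t in range(len(tasks))}
        for a in range(n_annotators)
    }
```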
Table 3 shows that REVISE significantly saves editing time and improves annotation quality. In practice, annotators trigger 2.2 suggestions per document on average. Due to model inference latency and network latency, generating the suggestions takes 4.1 s on average, which is included in the annotation time reported in Table 3.
## 5 Conclusion
In this work, we propose REVISE, an interactive and iterative approach that lets human writers efficiently edit and refine summaries tailored to their specific needs. A fill-in-the-middle model built on an encoder-decoder architecture underlies the framework, enabling it to generate coherent and contextually relevant suggestions for human edits, validated by a set of novel evaluation metrics. We perform extensive human evaluation, suggesting the proposed framework can improve both editing efficiency and summary quality. We hope REVISE can herald a new era in the summarization experience, with potentially transformative implications for practical applications, and inspire further research in interactive AI systems.
# Toward improved quantum simulations and sensing with trapped two-dimensional ion crystals via parametric amplification

Matt Affolter, Wenchao Ge, Bryce Bullock, Shaun C. Burd, Kevin A. Gilmore, Jennifer F. Lilieholm, Allison L. Carter, John J. Bollinger

arXiv:2301.08195v3, 19 January 2023, http://arxiv.org/abs/2301.08195v3
###### Abstract
Improving coherence is a fundamental challenge in quantum simulation and sensing experiments with trapped ions. Here we discuss, experimentally demonstrate, and estimate the potential impacts of two different protocols that enhance, through motional parametric excitation, the coherent spin-motion coupling of ions obtained with a spin-dependent force. The experiments are performed on 2D crystal arrays of approximately 100 \({}^{9}\)Be\({}^{+}\) ions confined in a Penning trap. By modulating the trapping potential at close to twice the center-of-mass mode frequency, we squeeze the motional mode and enhance the spin-motion coupling while maintaining spin coherence. With a stroboscopic protocol, we measure \(5.4\pm 0.9\) dB of motional squeezing below the ground-state motion, from which theory predicts a 10-dB enhancement in the sensitivity for measuring small displacements using a recently demonstrated protocol [K. A. Gilmore _et al._, Science **373**, 673 (2021)]. With a continuous squeezing protocol, we measure and accurately calibrate the parametric coupling strength. Theory suggests this protocol can be used to improve quantum spin squeezing, limited in our system by off-resonant light scatter. We illustrate numerically the trade-offs between strong parametric amplification and motional dephasing in the form of center-of-mass frequency fluctuations for improving quantum spin squeezing in our set-up.
## I Introduction
Trapped-ion systems have demonstrated high-fidelity quantum logic gates [1; 2; 3; 4; 5], spin squeezing [6; 7; 8; 9; 10], the generation of many-ion entangled states [11; 12], and motional sensing below the ground-state motion of the ions [13; 14; 15; 16]. In these experiments, the interaction between qubits is engineered by coupling the spins to the collective motion of the ions using lasers or magnetic-field gradients. Depending on the source of decoherence, stronger spin-motion coupling can enable higher-fidelity gates and improved quantum sensing. However, the interaction strength is limited by the available laser power or current that can be driven through the trap electrodes. In the case of laser-based interactions, the dominant source of decoherence is typically spin decoherence from spontaneous emission, so increasing the laser power alone will not necessarily improve the fidelity.
A viable approach to stronger spin-motion coupling that overcomes some of these technical and fundamental challenges is parametric amplification [17; 18]. By modulating the trapping potential at twice the motional mode frequency, one quadrature of motion is amplified while the orthogonal quadrature is attenuated [19]. This leads to motional squeezing and an amplification of the spin-motion coupling strength. Improved displacement sensing of a single ion [16] and faster two-qubit ion gates have recently been achieved with this technique [20]. Here we experimentally characterize both a stroboscopic and a continuous squeezing protocol on two-dimensional (2D) crystal arrays of approximately 100 ions. Based on this characterization we estimate the potential improvements for displacement sensing and spin squeezing with large trapped-ion crystals.
In the stroboscopic protocol, parametric amplification is applied over a discrete interval to create a squeezed motional state. The size of this motional squeezing is experimentally characterized through a phase-coherent Ramsey sequence to measure the variance of the center-of-mass (c.m.) motion in the squeezed and unsqueezed directions. The c.m. motion is squeezed by \(5.4\pm 0.9\) dB (\(10.8\pm 1.8\) dB) below the ground-state motional uncertainty (variance). By applying this level of squeezing to amplify a spin-independent displacement, theory predicts a 10-dB improvement in the sensitivity for measuring small displacements, accounting for c.m. frequency fluctuations. This has the potential to improve the sensitivity for measuring small displacements obtained with the protocol demonstrated in Ref. [13] from 8.8 dB to nearly 19 dB below the standard quantum limit (SQL). Here the SQL is given by the ground-state c.m. mode zero-point fluctuations.
For faster higher-fidelity quantum simulations, we also investigate a continuous squeezing protocol [20]. In this experiment, the parametric drive is applied simultaneously
with the spin-dependent force to amplify the strength of the spin-motion coupling. Theory [17] shows that this protocol leads to an equivalent interaction Hamiltonian with a rescaled detuning and stronger interaction strength. By measuring this rescaled detuning, we experimentally determine the parametric coupling strength \(g\), which agrees well with predictions from a numerical model. As an application of this continuous protocol, we explore theoretically how this increased interaction strength will lead to an improvement in quantum spin squeezing [6]. Our model suggests that frequency fluctuations of the c.m. mode will be the primary limitation to the enhancement. With a reduction of the frequency fluctuations from our current 40 Hz to 10 Hz, theory predicts 14 dB of spin squeezing is possible for a 400-ion crystal.
The rest of the paper is structured as follows. In Sec. II we describe the Penning trap setup and the implementation of parametric amplification. In Sec. III the stroboscopic squeezing protocol is discussed. We describe how the motional squeezing is characterized and the predicted improvements in the sensitivity of detecting small displacements. Section IV focuses on the continuous squeezing protocol. Experimental data with and without continuous parametric amplification are presented, from which the parametric coupling strength is extracted without precise control of the phase of the parametric drive. Theory predicts that this continuous parametric amplification protocol will enable improved spin squeezing in the presence of decoherence due to off-resonant light scatter [6]. The amount of improvement will likely be limited by the frequency stability of the c.m. mode. A summary is given in Sec. V.
## II Experimental Apparatus
In our experimental apparatus [14], 2D crystal arrays of approximately 100 \({}^{9}\)Be\({}^{+}\) ions are confined in a Penning trap as shown in the simplified schematic of Fig. 1. The ions are confined axially by a quadratic potential formed by static voltages applied to a stack of cylindrical electrodes. Radial confinement results from \(\vec{E}\times\vec{B}\) induced rotation of the crystal. This rotation of the crystal through the \(\vec{B}=4.5\hat{z}\) T magnetic field (\({}^{9}\)Be\({}^{+}\) cyclotron frequency \(\Omega_{c}/2\pi=7.6\) MHz) produces a radially confining Lorentz force. In the rotating frame of the crystal, to a very good approximation the confining potential can be written as
\[q\phi_{\mathrm{trap}}=\frac{1}{2}M\omega_{z}^{2}\left(z^{2}+\beta_{r}\rho^{2} \right), \tag{1}\]
where \(M\) is the mass and \(q\) is the charge of \({}^{9}\)Be\({}^{+}\), \(z\) (\(\rho\)) is the axial distance (cylindrical radius) from the trap center, and the axial c.m. oscillation frequency \(\omega_{z}/2\pi=1.59\) MHz quantifies the strength of this confining potential. The magnetic-field strength and rotation frequency determine the relative strength of the radial confinement
\[\beta_{r}=\frac{\omega_{r}\left(\Omega_{c}-\omega_{r}\right)}{\omega_{z}^{2}} -\frac{1}{2}, \tag{2}\]
which we control by setting the rotation frequency \(\omega_{r}/2\pi=180\) kHz through the use of a weak dipole rotating wall potential [21] [neglected in Eq. (1)].
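Plugging the quoted operating parameters into Eq. (2) gives the relative radial confinement strength; this is a quick consistency check using the rounded values from the text.

```python
import math

two_pi = 2 * math.pi
omega_z = two_pi * 1.59e6   # axial c.m. frequency (1.59 MHz)
Omega_c = two_pi * 7.6e6    # 9Be+ cyclotron frequency (7.6 MHz)
omega_r = two_pi * 180e3    # crystal rotation frequency (180 kHz)

# Eq. (2): relative strength of the radial confinement
beta_r = omega_r * (Omega_c - omega_r) / omega_z**2 - 0.5
# beta_r is small and positive (~0.03): radial confinement much weaker
# than axial, consistent with the ions forming a single 2D plane
```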
To parametrically amplify the axial c.m. mode, the confining potential is modulated at near twice the mode frequency. This modulation is achieved by applying a sinusoidal voltage with an amplitude ranging from 1 V to 51 V at a frequency \(\omega_{p}\sim 2\omega_{z}\) in addition to the confining voltage \(V_{M}=-1.75\) kV (see Fig. 1). Ideally, this modulation parametrically amplifies the c.m. motion along one quadrature and attenuates the motion in the orthogonal quadrature, producing motional squeezing.
However, if the ion crystal is not at the null of the modulated potential, motion is also directly driven at \(\omega_{p}\). To minimize this motion, we modulate the confining potential at an off-resonant frequency \(\omega_{p}/2\pi\sim 1.7\) MHz,
Figure 1: Simplified cross-sectional schematic of the Penning trap. To generate the \(\omega_{z}/2\pi=1.59\) MHz axial confinement potential (blue solid curve), voltages \(V_{C}=-2\) kV and \(V_{M}=-1.75\) kV are applied to three of the ring electrodes (yellow) with radius \(R_{W}=1\) cm. This confining potential is modulated to induce parametric amplification by applying a sinusoidal voltage of zero-to-peak amplitude \(V_{p}\) on top of \(V_{M}\). A small tuning voltage \(V_{T}=-43\) V shifts the ion crystal (blue dots) to the null of this modulated potential. These electrodes are held in the room-temperature bore of a \(B=4.5\) T superconducting magnet to provide radial confinement of the ions. When cooled, the ions form a 2D crystal with an ion-ion spacing of approximately 15 \(\mu\)m, resulting in a crystal with a radius of approximately 100 \(\mu\)m. The two optical dipole force beams (green) intersect at a 20\({}^{\circ}\) angle at the ions to form the 1D traveling wave potential with a wavelength of 900 nm and a frequency \(\mu\) that is equal to the frequency difference between the two beams.
and detect this driven motion using techniques similar to those in Ref. [15]. A small tuning voltage \(V_{T}\) is then applied to one of the end cap electrodes, which shifts the crystal axially, and this experiment is repeated. Through this process, the axial position with the minimum driven motion (null of the modulated potential) is determined to within \(0.1\,\mu\)m.
For our experiments, the two qubit states are the \(\ket{\uparrow}\equiv\ket{S_{1/2},m_{s}=+1/2}\) and \(\ket{\downarrow}\equiv\ket{S_{1/2},m_{s}=-1/2}\) valence electron spin projections of the \({}^{2}S_{1/2}\) electronic ground state, which are separated by approximately \(124\,\)GHz. Global spin rotations are implemented with a microwave source with a \(\pi\)-pulse duration \(t_{\pi}\) of about \(50\,\mu\)s [22].
In this apparatus axial motion is coupled to the spins through the implementation of a spin-dependent optical dipole force (ODF). Two far-detuned approximately \(313\,\)nm beams are overlapped at a \(20^{\circ}\) angle at the ions, as shown in Fig. 1, to form a moving 1D optical lattice with an effective wavelength of \(900\,\)nm and a tunable beat frequency \(\mu\). As described in the Supplementary Information of Ref. [23], the frequency and polarizations of the ODF laser beams are adjusted to null the ac Stark shift of each beam and to produce a force on the \(\ket{\uparrow}\) state that is equal and opposite to the force on the \(\ket{\downarrow}\) state.
With the assumption that the ODF couples only to the c.m. mode, the interaction Hamiltonian describing this spin-motion coupling is [17; 18]
\[\hat{H}_{\mathrm{ODF}}=\frac{\hbar f}{2\sqrt{N}}\left(\hat{a}e^{-i\phi_{ \mathrm{ODF}}}+\hat{a}^{\dagger}e^{i\phi_{\mathrm{ODF}}}\right)\sum_{i}^{N} \hat{\sigma}_{i}^{z}-\hbar\delta\hat{a}^{\dagger}\hat{a}, \tag{3}\]
where \(f\) represents the strength of the ODF interaction, \(\hat{\sigma}_{i}^{z}\) is the Pauli \(z\) operator for spin \(i\), \(\delta=\mu-\omega_{z}\) is the detuning of the ODF beat note frequency \(\mu\) from the c.m. mode, \(\hat{a}^{\dagger}(\hat{a})\) is the raising (lowering) operator for the c.m. mode, and \(\phi_{\mathrm{ODF}}\) is the phase of the optical dipole force. When the ODF is applied for a duration \(\tau\) on-resonance \(\delta=0\), a spin-dependent displacement \(\hat{D}_{\mathrm{sd}}(\alpha)\equiv\exp[(\alpha\hat{a}^{\dagger}-\alpha^{*} \hat{a})\sum\hat{\sigma}_{i}^{z}]\) is created with \(\alpha=-ife^{i\phi_{\mathrm{ODF}}}\tau/(2\sqrt{N})\).
## III Stroboscopic protocol
### Characterizing Motional Squeezing
To characterize the motional squeezing from parametric amplification, we use a stroboscopic protocol as shown in Fig. 2(a). First, pulses of Doppler and electromagnetically induced transparency (EIT) cooling are sequentially applied to cool the c.m. mode to \(\bar{n}_{z}=0.38\pm 0.2\), and the other axial drumhead modes to near their motional ground state [24; 25]. A repump laser then initializes the spins in the state \(\ket{\uparrow}^{\otimes N}\). The c.m. motion is then squeezed, followed by an analysis of the motional state with a resonant (\(\mu=\omega_{z}\)) spin-dependent force [26].
For initial spin states that are product states and resonant applications of the spin-dependent force, the spins remain uncorrelated and the calculation of the expectation value of the composite spin of the system reduces to calculating the expectation value of an individual spin [27]. We also begin by treating the c.m. motional state as a Fock state and extend this treatment to a thermal state by averaging a mixture of Fock states. Therefore, after the spins are optically pumped, we assume the system is in the state \(\ket{\Psi_{0}}=\ket{\uparrow}\ket{n}\), where \(\ket{n}\) is the harmonic oscillator Fock state of the c.m. mode.
The c.m. mode is squeezed by briefly modulating the trap potential to induce parametric amplification, which is described by the interaction Hamiltonian [17; 18]
\[\hat{H}_{s}=i\hbar\frac{g}{2}(\hat{a}^{2}e^{-i\theta}-\hat{a}^{\dagger 2}e^{i \theta}), \tag{4}\]
where \(g\) and \(\theta\) are the parametric coupling strength and phase, respectively. The parametric coupling strength is dependent on the amplitude of the voltage modulation \(V_{p}\) of the quadratic trapping potential. By numerically modeling the trap geometry and applied voltages, we calculate the expected \(g\) for an applied modulation amplitude \(V_{p}\).
Application of this Hamiltonian for duration \(t_{s}\) implements the unitary squeezing operator
\[\hat{S}(\xi)\equiv\exp\left[\frac{1}{2}\left(\xi^{*}\hat{a}^{2}-\xi\hat{a}^{ \dagger 2}\right)\right], \tag{5}\]
where \(\xi(r,\theta)=r\exp(i\theta)\) and \(r=gt_{s}\). Along the squeezed axis the motional uncertainty is reduced by \(\exp(-r)\) and in the orthogonal direction the motional uncertainty is amplified by \(\exp(r)\), so the phase-space uncertainty area is preserved. For this sequence, the modulation at \(\omega_{p}=2\omega_{z}\sim 2\pi\times 3.18\,\)MHz is typically applied for a duration of about \(t_{s}=40\,\mu\)s, which squeezes the initial state into the squeezed motional state \(\ket{\Psi_{1}}=\ket{\uparrow}\hat{S}(\xi)\ket{n}\). A phase-space sketch of this squeezed state is shown in Fig. 2(a).
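As a numerical check of the quoted squeezing figures (assuming, as the paper's numbers imply, dB is computed as 10 log10 of the ratio to the ground-state value in each case): with \(r=gt_{s}\), the squeezed-quadrature uncertainty shrinks by \(e^{-r}\) and the variance by \(e^{-2r}\).

```python
import math

r = 1.25  # squeezing parameter r = g * t_s (value fitted later in this section)

db_uncertainty = 10 * math.log10(math.exp(r))     # ratio of uncertainties, ~5.4 dB
db_variance = 10 * math.log10(math.exp(2 * r))    # ratio of variances, ~10.9 dB
```

These reproduce the \(5.4\pm 0.9\) dB (uncertainty) and \(10.8\pm 1.8\) dB (variance) values reported for the measured squeezed state.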
We define the relative phase of the parametric drive to the spin-dependent displacement as
\[\Delta\phi=\theta-2\phi_{\mathrm{ODF}}-\pi. \tag{6}\]
The factor of \(2\) is due to the parametric drive occurring at twice the frequency of the ODF beatnote, and the additional \(\pi\) is to define in phase (\(\Delta\phi=0\)) as when the squeezed axis and spin-dependent displacements are aligned in the same direction. This relative phase \(\Delta\phi\) is actively stabilized and controlled [14].
The amount of motional squeezing is experimentally determined by measuring, through a Ramsey sequence, the coherence of the spins under the application of a spin-dependent ODF as a function of the relative phase \(\Delta\phi\)[26]. A first microwave \(\pi/2\)-pulse rotates the spins to create
\[\ket{\Psi_{2}}=\hat{R}\left(\frac{\pi}{2},0\right)\ket{\Psi_{1}}=\frac{\ket{ \uparrow}+\ket{\downarrow}}{\sqrt{2}}\hat{S}(\xi)\ket{n}, \tag{7}\]
where we define the following qubit rotation matrix:
\[\hat{R}\left(\theta_{r},\phi_{r}\right)=\begin{pmatrix}\cos(\frac{\theta_{r}}{2})& -e^{-i\phi_{r}}\sin(\frac{\theta_{r}}{2})\\ e^{i\phi_{r}}\sin(\frac{\theta_{r}}{2})&\cos(\frac{\theta_{r}}{2})\end{pmatrix}. \tag{8}\]
The spin and motion are then entangled through a spin-dependent displacement \(\hat{D}_{\rm sd}(\alpha)\) created by applying the ODF beams for a duration \(\tau\). This separates the spin states in phase space as shown in Fig. 2(a) to form the state
\[\ket{\Psi_{3}} =\hat{D}_{\rm sd}(\alpha)\hat{R}\left(\frac{\pi}{2},0\right)\ket{ \Psi_{1}} \tag{9}\] \[=\frac{\ket{\uparrow}\hat{D}(\alpha)+\ket{\downarrow}\hat{D}(- \alpha)}{\sqrt{2}}\hat{S}(\xi)\ket{n},\]
where \(\hat{D}(\alpha)\equiv\exp(\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a})\).
A final \(\pi/2\)-pulse then creates
\[\ket{\Psi_{f}} =\hat{R}\left(\frac{\pi}{2},0\right)\hat{D}_{\rm sd}(\alpha)\hat{ R}\left(\frac{\pi}{2},0\right)\ket{\Psi_{1}} \tag{10}\] \[=\frac{1}{2}\ket{\uparrow}\left[\hat{D}(\alpha)-\hat{D}(-\alpha )\right]\hat{S}(\xi)\ket{n}\] \[+\frac{1}{2}\ket{\downarrow}\left[\hat{D}(\alpha)+\hat{D}(-\alpha )\right]\hat{S}(\xi)\ket{n},\]
and maps this entangled state into a population imbalance in the \(\ket{\uparrow}\) and \(\ket{\downarrow}\) spin states. The probability of measuring \(\ket{\uparrow}\) for state \(\ket{\Psi_{f}}\) is dependent on the overlap between the displaced states
\[P_{\uparrow}^{(n)}=\frac{1}{2}+\frac{1}{2}\bra{\Psi_{f}}\hat{\sigma}^{z}\ket{ \Psi_{f}}e^{-\Gamma\tau}, \tag{11}\]
where we have included a spin decoherence rate \(\Gamma\) due to off-resonant light scatter from the ODF laser beams. Taking a Boltzmann-weighted thermal average over all Fock states, we obtain
\[P_{\uparrow}=\frac{1}{2}-\frac{1}{2}e^{-\Gamma\tau}\exp\left[-\frac{1}{4N}|f \tau|^{2}\left(2\bar{n}_{z}+1\right)\chi(r,\Delta\phi)\right], \tag{12}\]
where
\[\chi(r,\Delta\phi)=e^{2r}\left[1+\cos(\Delta\phi)\right]+e^{-2r}\left[1-\cos (\Delta\phi)\right] \tag{13}\]
contains the dependence on the parametric drive parameters. Equation 12 reduces to Eq. (10) of Ref. [27] when no parametric drive is applied (\(r=0\)). The population in \(\ket{\uparrow}\) is measured by pulsing on the parallel Doppler cooling beam and counting the resulting fluorescence. This is referred to as the bright fraction throughout this paper.
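Equations (12) and (13) can be evaluated directly; the parameter values below are illustrative, taken from the measurements quoted in this section (the ODF strength \(f\tau\) is a placeholder).

```python
import math

def chi(r, dphi):
    """Eq. (13): dependence of the contrast on the parametric drive."""
    return (math.exp(2 * r) * (1 + math.cos(dphi))
            + math.exp(-2 * r) * (1 - math.cos(dphi)))

def bright_fraction(r, dphi, f_tau, N, nbar, gamma_tau=0.0):
    """Eq. (12): probability of measuring |up> after the Ramsey sequence."""
    return 0.5 - 0.5 * math.exp(-gamma_tau) * math.exp(
        -abs(f_tau) ** 2 * (2 * nbar + 1) * chi(r, dphi) / (4 * N))
```

With \(r>0\) the bright fraction is maximal at \(\Delta\phi=0\) (displacement along the squeezed axis, small overlap of the displaced states) and minimal at \(\Delta\phi=\pi\), reproducing the shape of the fringe in Fig. 2(b).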
Figure 2(b) shows the resulting bright fraction versus the relative phase \(\Delta\phi\) for a crystal of \(N=86\pm 10\) ions. Each data point (black circles) is an average over 50 experimental trials. Using Eqs. (12) and (13) to fit these data, we extract \(r=1.25\pm 0.2\), where the uncertainty in \(r\) results from the scatter of the data and the uncertainty in the temperature \(\bar{n}_{z}=0.38\pm 0.2\). This agrees well with the predicted \(r=1.3\pm 0.2\) from the applied parametric drive voltage and duration. The green solid curve shows this theory fit, where the shaded region represents the uncertainty in the measured \(\bar{n}_{z}\). When the spin-dependent displacement is along the squeezed axis (\(\Delta\phi=0,2\pi\)), the overlap between the displaced states is small and the bright fraction is high. In contrast, when \(\hat{D}_{\rm sd}(\alpha)\) is orthogonal to the squeezed axis, there is strong overlap between the states, resulting in a low bright fraction. The motional uncertainty of the fitted squeezed state is attenuated by \(5.4\pm 0.9\) dB (10.8 dB in variance) below the ground-state motional uncertainty. Higher squeezing might be possible, as will be discussed in Sec. IV, but higher squeezing will also be more sensitive to noise in \(\Delta\phi\), providing a trade-off in the utility of employing higher squeezing.
Figure 2: (a) Motional squeezing resulting from parametric amplification is characterized by entangling the spin and motion of the ions through a spin-dependent displacement and then measuring the coherence of the spins, which is sensitive to the overlap of the displaced, squeezed states. The relative phase between the parametric drive (yellow block), which squeezes the motion, and the spin-dependent displacement (orange block) is controlled. To reduce spin decoherence from magnetic-field fluctuations, the spin-dependent displacement is performed in a spin-echo protocol where \(\phi_{\rm ODF}\) in the second arm is advanced by \(\pi\). This is equivalent to the Ramsey sequence shown in the block diagram. The phase-space diagram shows the c.m. motional mode in a frame rotating at \(\omega_{z}\). Axis labels represent c.m. momentum [\(p_{z}\propto\mathrm{Im}(\alpha)\)] and position [\(z\propto\mathrm{Re}(\alpha)\)] in the c.m. mode rotating frame. (b) The measured bright fraction (black points) depends on the relative phase \(\Delta\phi\) [see Eq. (6)], shown with a theoretical fit [green curve, Eq. (12)]. Confidence intervals (green shaded region) represent uncertainty in the fitted \(\bar{n}_{z}\). When the displacement is parallel (perpendicular) to the squeezed axis, corresponding to \(\Delta\phi=0\) (\(\Delta\phi=\pi\)), the overlap of the displaced motional states is small (large) and the resulting bright fraction is high (low). The fitted value for the reduced motional uncertainty of the squeezed state is \(5.4\pm 0.9\) dB below the motional uncertainty of the ground state. The blue line is the signal that would be obtained from the c.m. motional ground state without squeezing, calculated from Eq. (12) with \(r,\bar{n}_{z}=0\). Error bars represent the standard deviation in the bright fraction over the 50 experimental trials.
### Improved Displacement Sensitivity
One application of this stroboscopic squeezing protocol is improving the experimental sensitivity for detecting small displacements. Experiments conducted with a single ion [16] demonstrated an enhancement of \(17.2\pm 0.3\) dB to the displacement sensitivity with parametric amplification. On the large 2D crystals confined in this device, a displacement sensitivity of \(8.8\pm 0.4\) dB below the standard quantum limit has recently been achieved [13]. In this section, we theoretically explore improving this sensitivity by adding parametric amplification in a crystal of ions.
Figure 3(a) shows the proposed experimental sequence for improved displacement sensing. When no parametric amplification is applied, this protocol is identical to the sequence used in previous sensing work [13]. The c.m. motion is entangled with the collective spin via a many-body echo, enabling detection of a small displacement through the resulting spin rotation while avoiding quantum back-action and thermal noise. By squeezing the motional mode before and antisqueezing after the applied displacement as shown in the phase-space diagram of Fig. 3(b), the displacement is amplified, resulting in a larger geometric phase, and ideally improving the detection sensitivity. The amplification obtained with the squeeze-displace-antisqueeze sequence is given by the identity
\[\hat{D}_{\mathrm{sd}}(\beta_{f})\equiv\hat{S}^{\dagger}(\xi)\hat{D}_{\mathrm{ sd}}(\beta_{i})\hat{S}(\xi), \tag{14}\]
where the initial displacement \(\beta_{i}\) is amplified by \(G=\exp(r)\) to \(\beta_{f}=G\beta_{i}\) when the displacement is along the direction of maximum parametric amplification. Ideally, this would improve the displacement sensitivity \((\Delta\beta)^{2}\), the variance with which \(\beta_{i}\) can be determined in a single measurement, by \(\exp(2r)\).
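The operator identity in Eq. (14) can be checked directly in a truncated Fock basis. The sketch below is illustrative (not the paper's code): it builds the displacement and squeezing operators numerically and verifies that \(\hat{S}^{\dagger}(r)\hat{D}(\beta)\hat{S}(r)\) acts on the vacuum like \(\hat{D}(\beta e^{r})\) when \(\beta\) is real and lies along the amplified axis.

```python
import numpy as np
from scipy.linalg import expm

# Truncated Fock-space check of Eq. (14): S†(r) D(β) S(r) = D(β e^r)
# for a real squeezing parameter r and a real displacement β along the
# amplified axis.  Parameters below are illustrative, not the paper's.
DIM = 80                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, DIM)), 1)  # annihilation operator
ad = a.conj().T

def displace(beta):
    # D(β) = exp(β a† - β* a)
    return expm(beta * ad - np.conj(beta) * a)

def squeeze(r):
    # S(r) = exp[r (a² - a†²)/2] for real r (theta = 0)
    return expm(0.5 * r * (a @ a - ad @ ad))

r, beta = 0.5, 0.4
lhs = squeeze(r).conj().T @ displace(beta) @ squeeze(r)
rhs = displace(beta * np.exp(r))            # amplified displacement, G = e^r

vac = np.zeros(DIM)
vac[0] = 1.0
fidelity = abs(np.vdot(rhs @ vac, lhs @ vac))
print(fidelity)   # close to 1 up to truncation error
```

The agreement holds for any truncation large enough to contain the squeezed, displaced states involved.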
In Ref. [13], experimental measurements and theory showed that the displacement sensitivity of the protocol in Fig. 3 without parametric amplification was limited by \(\sigma/2\pi=40\) Hz frequency fluctuations of the c.m. mode. The analysis assumed the c.m. mode frequency was constant during a single experiment but exhibited Gaussian fluctuations with standard deviation \(\sigma\) from one experiment to the next. Expanding the theoretical treatment of Ref. [13] to include both c.m. frequency fluctuations and parametric amplification, we generalize Eq. (S48) of the Supplementary Information of Ref. [13] for the sensitivity \((\Delta\beta)^{2}\) of the protocol shown in Fig. 3(a). We obtain
\[\begin{split}\left(\Delta\beta\right)^{2}&\approx\frac{e^{2\Gamma\tau}}{4f^{2}\tau^{2}e^{2r}}\Bigg[1+\frac{\sigma^{2}\tau^{2}}{3}+\frac{\sigma^{2}}{g^{2}}\left(r-\frac{1-e^{-2r}}{2}\right)+\frac{\sigma^{2}\tau}{g}\frac{1-e^{-2r}}{2}\Bigg]\\ &+\frac{\sigma^{2}\tau^{2}}{2e^{2r}}\left(1+\frac{\sinh(r)e^{r}}{g\tau}\right)^{2}\left(\bar{n}_{z}+\frac{1}{2}\right)+\frac{f^{2}\sigma^{2}\tau^{4}}{9e^{2r}},\end{split} \tag{15}\]
Figure 3: (a) and (b) After Doppler cooling and initializing the ions in the spin up state, a many-body spin echo enables detection of small displacements of the ion crystal caused by weak electric fields resonant with the c.m. motion (green block and arrows). The addition of the squeezing and antisqueezing operations (yellow blocks and arrows) amplifies this spin-independent displacement \(\beta\) (red and green arrows) by \(e^{r}\) (effective displacement, blue arrows). The amplified effective displacement \(\beta e^{r}\) (blue arrows) encloses a larger phase-space area compared to no squeezing, as shown by the red area without squeezing and the additional enclosed blue area from the effective amplified displacement. This larger enclosed phase is predicted to improve the sensitivity to small displacements. (c) Previous experimental (black points) and theoretical (red curve) work [13] without parametric amplification showed the displacement sensitivity to be limited by 40 Hz frequency fluctuations of the c.m. mode. Confidence bands represent c.m. frequency fluctuations between 20 and 60 Hz. The addition of motional squeezing will amplify this frequency noise, but displacement sensitivity nearly 19 dB below the SQL (orange line) is still predicted (blue curve), assuming the duration of the squeezing and displacement are short compared to \(\tau\).
where the duration of the squeeze and excitation of the spin-independent displacement are assumed to be short compared to \(\tau\). Here \(\tau\) is the duration of an ODF pulse in a single arm of the spin-echo protocol of Fig. 3. When no squeezing is applied (\(r=0\)), Eq. (15) recovers the prior displacement sensitivity results. The \(\exp(2r)\) term in the denominator expresses the ideal enhancement to the sensitivity if the squeezing only amplified the displacement. However, the squeezing also increases the noise from frequency fluctuations, which reduces the overall gain. For a more detailed discussion and physical interpretation of the individual terms of Eq. (15), see the Supplementary Information of Ref. [13].
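As a numerical illustration, Eq. (15) can be coded up directly. The sketch below uses placeholder parameter values (not the values of Ref. [13]) and checks the limiting behavior described above: for \(\sigma=0\) the gain over the \(r=0\) case is exactly \(e^{2r}\), while finite \(\sigma\) modifies the noise terms.

```python
import numpy as np

def delta_beta_sq(r, f, tau, g, sigma, nbar, Gamma):
    """Displacement sensitivity (Δβ)² of Eq. (15).

    All rates (f, g, sigma, Gamma) are angular frequencies in rad/s and
    tau is in seconds.  Requires g > 0 (the g -> 0 limit is not handled).
    """
    e2r = np.exp(2 * r)
    bracket = (1 + sigma**2 * tau**2 / 3
               + (sigma**2 / g**2) * (r - (1 - np.exp(-2 * r)) / 2)
               + (sigma**2 * tau / g) * (1 - np.exp(-2 * r)) / 2)
    term1 = np.exp(2 * Gamma * tau) / (4 * f**2 * tau**2 * e2r) * bracket
    term2 = (sigma**2 * tau**2 / (2 * e2r)
             * (1 + np.sinh(r) * np.exp(r) / (g * tau))**2 * (nbar + 0.5))
    term3 = f**2 * sigma**2 * tau**4 / (9 * e2r)
    return term1 + term2 + term3

# Illustrative placeholder parameters only.
f, tau, g, nbar, Gamma = 2.0e3, 8.0e-3, 2 * np.pi * 500.0, 0.4, 60.0
r = 1.25
ideal = (delta_beta_sq(0, f, tau, g, 0.0, nbar, Gamma)
         / delta_beta_sq(r, f, tau, g, 0.0, nbar, Gamma))
noisy = (delta_beta_sq(0, f, tau, g, 2 * np.pi * 40, nbar, Gamma)
         / delta_beta_sq(r, f, tau, g, 2 * np.pi * 40, nbar, Gamma))
print(10 * np.log10(ideal), 10 * np.log10(noisy))
```

With \(\sigma=0\) the computed gain is \(e^{2r}\) exactly, as the text states; the actual gain for the experiment depends on the parameters of Ref. [13].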
In Fig. 3(c), we plot the prior experimental data (black circles) and theory (red curve) of the displacement sensitivity without parametric amplification [13]. Assuming the same experimental parameters as this previous experiment and the addition of the demonstrated \(r=1.25\) of motional squeezing, Eq. (15) predicts nearly 10-dB enhancement to our displacement sensitivity (blue curve in Fig. 3(c)). With the assumption that the duration of the squeezing and displacement are short compared to the duration of the spin-dependent force, the quantum enhancements from the many-body echo and the squeezing approximately add, resulting in a displacement sensitivity nearly 19 dB below the SQL. Further improvements would be possible by reducing the temperature of the ion crystal with EIT cooling, increased squeezing, and reduced frequency fluctuations of the c.m. mode.
## IV Continuous protocol
### Measuring \(g\) From Shifts in the Decoupling Points
Figure 4(a) shows the continuous squeezing protocol, where the spin-dependent force and the parametric amplification are applied simultaneously [16; 17]. In this protocol, the spin-dependent force is detuned from resonance with the c.m. motion by \(\delta\). The parametric amplification is applied at twice the frequency of the spin-dependent force. The total Hamiltonian of the system is now given by
\[\begin{split}\hat{H}_{\mathrm{T}}&=\frac{\hbar f}{ 2\sqrt{N}}\left(\hat{a}e^{-i\phi_{\mathrm{ODF}}}+\hat{a}^{\dagger}e^{i\phi_{ \mathrm{ODF}}}\right)\sum_{i}^{N}\hat{\sigma}_{i}^{z}\\ &-\hbar\delta\hat{a}^{\dagger}\hat{a}+i\hbar\frac{g}{2}(\hat{a}^{ 2}e^{-i\theta}-\hat{a}^{\dagger 2}e^{i\theta}).\end{split} \tag{16}\]
Under the condition \(0<g<\delta\), the total Hamiltonian can be written in a simple form as
\[\hat{H}_{\mathrm{T}}=\frac{\hbar}{2\sqrt{N}}\left(f^{\prime*}\hat{b}e^{-i\phi_ {\mathrm{ODF}}}+f^{\prime}\hat{b}^{\dagger}e^{i\phi_{\mathrm{ODF}}}\right) \sum_{i}^{N}\hat{\sigma}_{i}^{z}-\hbar\delta^{\prime}\hat{b}^{\dagger}\hat{b}, \tag{17}\]
by using a Bogoliubov transformation \(\hat{b}=\cosh r\hat{a}+ie^{i\theta}\sinh r\hat{a}^{\dagger}\) with \(r=\frac{1}{4}\ln(\frac{\delta+g}{\delta-g})\). Here the new effective detuning is \(\delta^{\prime}\equiv\sqrt{\delta^{2}-g^{2}}\) and the rescaled strength of the ODF interaction is \(f^{\prime}\equiv f\left(\cosh r+e^{i\Delta\phi_{c}}\sinh r\right)\). The relative phase between the parametric drive and the ODF in the continuous protocol is defined as \(\Delta\phi_{c}=\theta-2\phi_{\mathrm{ODF}}-\pi/2\), so \(\Delta\phi_{c}=0\) results in the largest amplified effective force \(f^{\prime}\). We note that this different definition gives rise to a \(\pi/2\) phase shift relative to the definition of \(\Delta\phi\) in Eq. (6) for the stroboscopic protocol. The state evolution under the total Hamiltonian Eq. (17) is provided in Appendix A.
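The Bogoliubov result can be checked numerically: the quadratic motional part of Eq. (16) has an equally spaced spectrum with level spacing \(\delta^{\prime}=\sqrt{\delta^{2}-g^{2}}\). A minimal sketch in a truncated Fock basis, with dimensionless illustrative parameters:

```python
import numpy as np

# Spectrum check of the Bogoliubov transformation below Eq. (17):
# H = -delta a†a + i(g/2)(a² - a†²)  (hbar = 1, theta = 0) has
# eigenvalues E_n = const - delta' n with delta' = sqrt(delta² - g²).
DIM = 150
a = np.diag(np.sqrt(np.arange(1, DIM)), 1)
ad = a.conj().T

delta, g = 1.0, 0.6                       # dimensionless, 0 < g < delta
H = -delta * (ad @ a) + 0.5j * g * (a @ a - ad @ ad)

evals = np.sort(np.linalg.eigvalsh(H))[::-1]   # descending: n = 0, 1, ...
spacings = -np.diff(evals[:6])                 # lowest b-excitations
delta_prime = np.sqrt(delta**2 - g**2)
print(spacings, delta_prime)                   # spacings all near 0.8
```

The lowest few levels (those far from the truncation edge) reproduce \(\delta^{\prime}\) to high precision; near \(g\to\delta\) the squeezing parameter \(r=\frac{1}{4}\ln(\frac{\delta+g}{\delta-g})\) diverges and a larger truncation would be needed.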
The continuous squeezing protocol provides an alternative method for measuring the parametric coupling strength \(g\) that is insensitive to shot-to-shot noise in \(\Delta\phi_{c}\). With no parametric amplification, circular trajectories in phase space are driven. These trajectories are closed when \(\delta\) is a multiple of \(2\pi/\tau\), which results in a decoupling of the spin and motion of the ions [Fig. 4(b)]. Here \(\tau\) is the duration of the ODF application in each arm of the spin-echo sequence. The area enclosed in the phase-space loop is equal to the acquired geometric phase. With the parametric drive applied at twice the frequency of the spin-dependent force \(\mu\), the circular trajectories are distorted into ellipses as the c.m. motion is squeezed and antisqueezed along the path. In addition, the frequency of the first decoupling point [17] shifts to
\[\delta=\sqrt{(2\pi/\tau)^{2}+g^{2}}. \tag{18}\]
By measuring this frequency shift when the parametric drive is applied, we can extract the parametric coupling strength. The frequency offset \(\delta\) required to drive a closed loop in phase space only depends on \(\tau\) and \(g\) and is independent of the phase of the spin-dependent force relative to the phase of the parametric drive. Therefore this method of measuring \(g\) is insensitive to shot-to-shot fluctuations in the relative phase.
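Extracting \(g\) from the measured decoupling frequency is a one-line inversion of Eq. (18). A minimal sketch with illustrative numbers:

```python
import numpy as np

def g_from_decoupling_shift(delta_dc, tau):
    """Invert Eq. (18): the first decoupling point shifts from 2*pi/tau
    (no parametric drive) to delta_dc = sqrt((2*pi/tau)**2 + g**2).

    delta_dc is the measured decoupling detuning (angular frequency)
    and tau the ODF arm duration; returns the angular coupling g."""
    return np.sqrt(delta_dc**2 - (2 * np.pi / tau) ** 2)

# Round trip with illustrative numbers: tau = 1 ms, g/2pi = 5 kHz.
tau = 1.0e-3
g = 2 * np.pi * 5.0e3
delta_dc = np.sqrt((2 * np.pi / tau) ** 2 + g**2)
print(g_from_decoupling_shift(delta_dc, tau) / (2 * np.pi))
```

The inversion is independent of the relative phase between the ODF and parametric drive, which is the point of this measurement scheme.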
When the spins and motion are decoupled (closed loop in phase space), the measured bright fraction at the end of the sequence given in Fig. 4(a) will be at a minimum. To better resolve these decoupling points, the c.m. motion is heated. This improves the resolution by increasing the spin-motion entanglement signal, which depends on the motional temperature [27; 28]. The heating pulse is applied directly after Doppler cooling. It is white noise with a 10-kHz bandwidth centered around \(\omega_{z}\). This noise heats the c.m. mode, while the other axial modes remain near the Doppler limit.
The measured bright fraction versus detuning away from the c.m. mode is shown in Fig. 4(c) when no parametric amplification is applied. As predicted for the ODF duration \(\tau=1.0\) ms, the minima in the measured bright fraction occur at multiples of \(1/\tau=1.0\) kHz. With only thermal motion, the bright fraction would saturate at 0.5. The higher observed bright fraction suggests the addition of a coherent displacement. This displacement arises because the heating burst is not purely random. The curve through the data of Fig. 4(c) is a fit used to extract \(\bar{n}_{z}=28.0\pm 13\) and a coherent displacement of amplitude \(\beta=13.0\pm 0.4\). The phase of the coherent displacement is assumed to be random when averaged over many experiments (see Appendix A).
Figures 4(d) and 4(e) show equivalent scans, but with the addition of different strengths of parametric amplification. As the parametric drive voltage is increased, the decoupling points shift to higher frequency as predicted. From this frequency shift, we extract the parametric coupling strength by solving for \(g\) in Eq. (18). Experimentally, we find that for \(\delta<g\) the ions are heated significantly, making recovery with Doppler cooling difficult.
The curves of Figs. 4(d) and 4(e) are theory assuming the \(\bar{n}_{z}\) and \(\beta\) obtained from the fit of Fig. 4(c) and \(g\) determined from the shift in the first decoupling frequency. The shaded region reflects the uncertainty in the relative phase between the ODF and parametric drive. For these experiments, we focus on the shift in the decoupling points, which is insensitive to the relative phase, so the relative phase at the ions is not measured prior to each experiment. Including this uncertainty, the theory confidence intervals largely encompass the measured values. The observed increase in the bright fraction at the first decoupling frequency results from spin squeezing (see Appendix A for a detailed theory) and a convolution of the narrow dip in the bright fraction at the decoupling point with the measured 40-Hz frequency fluctuations of the c.m. mode.
Figure 5 plots the measured \(g\) versus the applied parametric drive voltage \(V_{p}\) for three different values of \(\tau\). The coupling strength increases linearly with the voltage. We see no saturation in \(g/2\pi\) up to the highest measured value of \(15.5\,\mathrm{kHz}\) at \(51.0\,\mathrm{V}\), which suggests that even higher parametric coupling strengths may be achievable. The blue line of Fig. 5 is the predicted coupling strength from the applied voltage, which is in excellent agreement with the measurements.
### Spin Squeezing Limited by Frequency Fluctuations
As an application of the continuous squeezing protocol, we investigate how the increased spin-motion coupling will improve quantum spin squeezing in the presence of decoherence due to off-resonant light scatter. At the decoupling points, the spin-state evolution under the application of Eq. (3) simulates the Ising model,
\[\hat{H}_{\mathrm{I}}=\frac{1}{N}\sum_{i<j}J_{ij}\hat{\sigma}_{i}^{z}\hat{\sigma}_{j}^{z}, \tag{19}\]
where the spin-spin interaction \(J_{ij}\approx\bar{J}=f^{2}/2\delta\) because the c.m. mode is primarily driven in our experimental setup [23, 6]. This leads to one-axis twisting and quantum spin squeezing [29, 30]. Using the continuous parametric amplification protocol, Eq. (3) transforms into an identical Hamiltonian with rescaled detunings and modified interaction strengths [Eq. (17)]. Under optimal relative phase (\(\Delta\phi_{c}=0\)), the spin-dependent force is amplified, resulting in an increased spin-spin interaction \(\bar{J}=f^{2}/2(\delta-g)\). In Ref. [17] the predicted improvement in quantum spin
Figure 4: (a) Pulse sequence implemented with the continuous squeezing protocol to measure the parametric coupling strength \(g\). The c.m. mode is initially heated to increase the contrast due to spin-motion entanglement. A spin echo sequence is then used to couple the spin and c.m. motion of the ions with the ODF and parametric drive applied simultaneously in each arm for a duration \(\tau\). (b) Phase-space trajectories for the state \(\ket{\uparrow}^{\otimes N}\) during the first (red) and the second (green) arm of the pulse sequence. Without parametric amplification, the spin and motion are decoupled in each arm (closed loop in phase space) when \(\delta\) is an integer multiple of \(2\pi/\tau\). Parametric amplification squeezes and antisqueezes the motion along this trajectory (gray circles) and shifts the decoupling points to higher frequencies. A new effective detuning \(\delta^{\prime}=\sqrt{\delta^{2}-g^{2}}\) sets the condition for closing loops in phase space when \(\delta^{\prime}\) is an integer multiple of \(2\pi/\tau\). (c) Scan over several decoupling points without parametric amplification. The curve is a theory fit used to extract the elevated \(\bar{n}_{z}\) and coherent excitation \(\beta\) resulting from the heating pulse. (d) and (e) With parametric amplification the decoupling points (black, blue, and red arrows) are shifted to higher frequency as predicted by theory (curves). The shaded region reflects the uncertainty in \(\Delta\phi_{c}\) within \([0,2\pi]\). The theory curve then assumes an intermediate value of \(\Delta\phi_{c}=\pi/2\), and all curves are averaged over 40-Hz frequency fluctuations of the c.m. mode. Below \(\delta=g\) (dashed lines) is not scanned as the ions are rapidly heated.
squeezing from parametric amplification was explored. Here we extend this analysis to include frequency fluctuations of the c.m. mode, which will limit the potential enhancement.
Quantum spin squeezing is characterized by the Ramsey squeezing parameter \(\xi_{R}\), where \(\xi_{R}^{2}=1\) for coherent spin states and \(\xi_{R}^{2}<1\) for squeezed states [29] (see Appendix B). In previous experiments on 2D crystals of approximately 100 ions, \(4.0\pm 0.9\) dB of spin squeezing was measured, fundamentally limited by off-resonant light scatter from the optical dipole force laser beams [6]. In these experiments, \(\Gamma/\bar{J}\approx 0.05\) where \(\Gamma\) is the single spin decoherence rate due to off-resonant light scatter and \(\bar{J}\) is the spin-spin interaction strength obtained with a detuning of \(\delta/2\pi=1\) kHz. Parametric amplification will enhance the spin-spin interaction strength \(\bar{J}\) while keeping \(\Gamma\) fixed by the laser power, which will enable greater spin squeezing.
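The scaling argument above, that the amplified \(\bar{J}=f^{2}/2(\delta-g)\) reduces the effective \(\Gamma/\bar{J}\) at fixed laser power, can be sketched in a few lines. All numbers below are illustrative placeholders, with only the \(g=0\) ratio \(\Gamma/\bar{J}=0.05\) taken from the text.

```python
import numpy as np

# Parametric amplification at fixed laser power: J_bar = f²/(2(delta-g))
# grows as g -> delta while the off-resonant scattering rate Gamma is
# unchanged, so Gamma/J_bar shrinks by the factor (1 - g/delta).
f = 2.0e3                         # illustrative ODF strength (rad/s)
delta = 2 * np.pi * 0.83e3        # ODF detuning (rad/s)
ratio_at_g0 = 0.05                # Gamma/J_bar at g = 0, as in the text

def gamma_over_J(g):
    J0 = f**2 / (2 * delta)               # interaction strength at g = 0
    J = f**2 / (2 * (delta - g))          # amplified interaction strength
    return ratio_at_g0 * J0 / J           # = ratio_at_g0 * (1 - g/delta)

for g in (0.0, 0.5 * delta, 0.9 * delta):
    print(g / delta, gamma_over_J(g))
```

The monotonic decrease of \(\Gamma/\bar{J}\) with \(g\) is what enables the greater spin squeezing discussed below, until frequency fluctuations intervene.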
Figure 6(a) illustrates the potential enhancement in the spin squeezing with parametric amplification on a crystal containing 400 ions, cooled to \(\bar{n}_{z}=0.5\), and assuming no frequency fluctuations of the c.m. mode. Here \(\Gamma/\bar{J}=0.05\), where now \(\bar{J}\) is the strength of the spin-spin interaction with \(\delta/2\pi=0.83\) kHz and \(g=0\). When \(g=0\), the spin squeezing is limited by spin decoherence from off-resonant light scatter to 11.3 dB. We note that this larger spin squeezing compared to Ref. [6] is due in part to the larger ion number, as well as to technical experimental limitations present in Ref. [6]. As \(g\) is increased, the ratio \(\Gamma/\bar{J}\) is decreased, enabling greater spin squeezing. The optimal squeezing approaches the spin decoherence-free value of 16.4 dB for large \(g\).
However, frequency fluctuations of the c.m. mode will place a limit on the improvement to spin squeezing from parametric amplification. As the parametric coupling strength increases, the frequency separation between the ODF detuning \(\delta\) and \(g\) decreases as shown by Eq. (18) and Figs. 4(d) and 4(e). To realize the maximum
Figure 5: From the shift in the decoupling frequency, the parametric coupling strength \(g\) is calculated for a range of drive voltages and ODF arm durations. As predicted by theory (blue line), \(g\) increases linearly with the amplitude of the parametric drive, reaching \(g/2\pi=15.5\) kHz at 51 V. The 5% horizontal error bars of the experimental data represent the uncertainty in the applied voltage and the vertical error bars (smaller than points) reflect the frequency step size of the scans over the decoupling points. To calculate \(g\) from the applied voltages, the trap potential is numerically modeled. The shaded region is the 10% uncertainty in that model.
Figure 6: (a) Ramsey squeezing parameter \(\xi_{R}\) plotted versus the interaction duration \(\tau\) assuming 400 ions, no frequency fluctuations of the c.m. mode, and with a decoherence rate \(\Gamma\) due to off-resonant light scatter of \(\Gamma=0.05\bar{J}\) where \(\bar{J}\) is the strength of the spin-spin interaction [see Eq. (19)] with \(\delta/2\pi=0.83\) kHz and \(g=0\). Single-loop gates are assumed where for each interaction duration \(\tau\) a detuning \(\delta\) given by Eq. (18) is assumed. Larger values of \(g\) reduce the effective \(\Gamma/\bar{J}\) and produce large spin squeezing. (b) and (c) Predicted spin squeezing assuming the same parameters as in (a) but now including (b) 10-Hz and (c) 40-Hz frequency fluctuations of the c.m. mode. As the frequency fluctuations are increased, the gain in the quantum spin squeezing with parametric amplification is limited.
spin squeezing free from noise associated with the c.m. frequency fluctuations, we require \(\sigma\ll\delta-g\).
In Figs. 6(b) and 6(c) the predicted spin squeezing is plotted for the same parameters as in Fig. 6(a) but including 10-Hz and 40-Hz frequency fluctuations of the c.m. mode. Spin decoherence from off-resonant light scatter and frequency fluctuations limit the spin squeezing in the absence of parametric amplification. Figure 6(c) shows a 1.1-dB gain in spin squeezing with \(g/2\pi=4\,\)kHz and the current 40-Hz frequency fluctuations of the c.m. mode. However, at larger values of \(g\), the 40-Hz frequency fluctuations limit further improvements. In Fig. 6(b), we show that a 2.8-dB gain is possible if the frequency fluctuations are reduced to 10 Hz with \(g/2\pi=40\,\)kHz.
## V Conclusion
In summary, we have shown experimental results characterizing parametric amplification using two different protocols. With the stroboscopic protocol, motional squeezing of \(5.4\pm 0.9\,\)dB below the ground-state motion was demonstrated corresponding to a squeezing parameter \(r=1.25\pm 0.2\). Phase noise between the ODF and parametric drive prevented larger amounts of squeezing from being achieved. Theory predicts that this level of squeezing will improve the sensitivity of measuring small displacements of large trapped-ion crystals by nearly 10 dB. With the continuous protocol, stronger parametric coupling strengths were demonstrated, even in the presence of shot-to-shot phase noise, by measuring the shift in the frequency of the spin-motion decoupling points. A linear increase in the measured parametric coupling strength \(g\) with the applied parametric drive voltage up to \(g/2\pi=15.5\pm 0.1\,\)kHz was observed. Measured values of \(g\) agreed well with the predicted strength from numerical modeling of the trap potentials. Stronger parametric coupling strengths may be possible with larger applied voltages. An improvement in quantum spin squeezing in the presence of decoherence due to off-resonant light scatter is predicted with the continuous squeezing protocol. The amount of improvement will be limited by the current 40-Hz frequency fluctuations of the c.m. mode. With reduced frequency fluctuations, further improvements to both the displacement sensitivity and spin squeezing might be possible.
###### Acknowledgements.
We thank Hannah Knaack and Sean Muleady for useful comments and discussions about this manuscript, and David Allcock for the design of the parametric drive circuit. This work was supported by a collaboration between the U.S. DOE and other agencies. This material was based upon work supported by the U.S. Department of Energy, Office of Science, NQI Science Research Centers, Quantum Systems Accelerator. Additional support is acknowledged from AFOSR Grant No. FA9550-201-0019, the DARPA ONISQ program, and NIST.
## Appendix A Derivation of the Bright Fraction Under the Continuous Protocol
Here we describe the calculation used to generate the theoretical curves of Figs. 4(c)-4(e). Because the continuous protocol of Fig. 4 generates correlations between the ion spins, a single-spin calculation like that described in Sec. III is no longer adequate.
Assume that the spin of the ions is initially in \(\ket{\psi}_{s}=\ket{\uparrow}^{\otimes N}\) and that the c.m. motional mode of interest is a thermal coherent state described by a thermal occupation number \(\bar{n}_{z}\) and coherent amplitude \(\beta\). The initial motional state density matrix is then given by \(\hat{\rho}_{m}=\hat{D}(\beta)\sum_{n}p_{n}|n\rangle\langle n|\hat{D}^{\dagger}(\beta)\), where \(p_{n}=\frac{1}{1+\bar{n}_{z}}\left(\frac{\bar{n}_{z}}{1+\bar{n}_{z}}\right)^{n}\) and \(\hat{D}(\beta)=\exp\left(\beta\hat{a}^{\dagger}-\beta^{*}\hat{a}\right)\) is the displacement operator with amplitude \(\beta\). To derive the effect of the motional thermal coherent state on the bright fraction, it is most convenient to write [31]
\[\hat{\rho}_{m}=\frac{1}{\pi\bar{n}_{z}}\int e^{-\left|\gamma\right|^{2}/\bar{n}_{z}}\left|\beta+\gamma\right\rangle\left\langle\beta+\gamma\right|d^{2}\gamma. \tag{A1}\]
Because \(\hat{\rho}_{m}\) is a mixture of coherent states \(\ket{\tilde{\gamma}}\) with \(\tilde{\gamma}=\beta+\gamma\), we can take the initial state of the system to be
\[\ket{\psi_{0}}=\ket{\uparrow}^{\otimes N}\ket{\tilde{\gamma}}. \tag{A2}\]
The contributions from the different coherent components are then averaged at the end of the calculation.
Consider the protocol shown in Fig. 4(a). After the \(\frac{\pi}{2}\big{|}_{y}\) pulse, we have \(\ket{\psi_{1}}=\ket{+}^{\otimes N}\ket{\tilde{\gamma}}\), where \(\ket{+}=\frac{1}{\sqrt{2}}(\ket{\uparrow}+\ket{\downarrow})\). The spin-dependent ODF is then applied simultaneously with the parametric amplification for a duration \(\tau\). According to Refs. [17; 18], the effective interaction from Eq. (17) can be described by a product of two unitary operations of the spin-spin interaction and spin-motion coupling in the interaction picture of \(-\hbar\delta^{\prime}\hat{b}^{\dagger}\hat{b}\), which are
\[\hat{U}_{\rm ss}(t_{0},t_{1})=\exp\left(i\Phi(t_{0},t_{1})\sum_{i,j}\hat{\sigma}_{i}^{z}\hat{\sigma}_{j}^{z}\right) \tag{A3}\]
and
\[\hat{U}_{\rm sm}(t_{0},t_{1})=\hat{D}_{\rm sd}[\alpha(t_{0},t_{1})], \tag{A4}\]
where
\[\alpha(t_{0},t_{1})=\frac{e^{i\phi_{\rm ODF}}}{2\sqrt{N}}\left[\tilde{\alpha}(t_{0},t_{1})\cosh r+e^{i\Delta\phi_{c}}\tilde{\alpha}^{*}(t_{0},t_{1})\sinh r\right] \tag{A5}\]
and
\[\Phi(t_{0},t_{1})=-\frac{1}{4N}\left|\frac{f^{\prime}}{\delta^{\prime}}\right|^{2}\left[(t_{1}-t_{0})\delta^{\prime}-\sin\delta^{\prime}(t_{1}-t_{0})\right]. \tag{A6}\]
Here \(f^{\prime}=f\left(\cosh r+e^{i\Delta\phi_{c}}\sinh r\right)\) is the enhanced spin-dependent force with parametric drive from Eq. (17), \(\tilde{\alpha}(t_{0},t_{1})=(f^{\prime}/\delta^{\prime})\left(e^{-i\delta^{ \prime}t_{1}}-e^{-i\delta^{\prime}t_{0}}\right)\) is the displacement of the Bogoliubov mode \(\hat{b}\), and \(\alpha(t_{0},t_{1})\) is the displacement in the original motional mode \(\hat{a}\).
After a simultaneous parametric amplification and ODF pulse for a duration \(\tau\), the state is given by
\[\left|\psi_{2}\right\rangle=\hat{U}_{\text{ss}}(0,\tau)\hat{U}_{\text{sm}}(0,\tau)\left|\psi_{1}\right\rangle. \tag{A7}\]
The \(\pi\big{|}_{x}\) pulse for a duration \(t_{\pi}\) flips the state of the spins. After the second pulse of parametric amplification and ODF, the state is
\[\begin{split}\left|\psi_{3}\right\rangle&=\hat{U}_{\text{ss}}(\tau+t_{\pi},2\tau+t_{\pi})\hat{U}_{\text{sm}}(\tau+t_{\pi},2\tau+t_{\pi})\hat{R}_{X}(\pi)\left|\psi_{2}\right\rangle\\ &=\hat{R}_{X}(\pi)e^{i\Phi_{T}\sum_{i,j}\hat{\sigma}_{i}^{z}\hat{\sigma}_{j}^{z}}\hat{D}_{\text{sd}}(\alpha_{T})\left|\psi_{1}\right\rangle,\end{split} \tag{A8}\]
where \(\hat{R}_{X}(\pi)=\hat{R}(\pi,\pi/2)^{\otimes N}\) [see Eq. (8)] and the expression has been simplified by commuting \(\hat{R}_{X}(\pi)\) with the operators to its right, combining the spin-spin interaction terms by recognizing \(\hat{U}_{\text{ss}}(\tau+t_{\pi},2\tau+t_{\pi})=\hat{U}_{\text{ss}}(0,\tau)\), and collecting the spin-spin interaction and the spin-motion coupling terms, respectively. Here the total displacement is
\[\begin{split}\alpha_{T}&\equiv\alpha(0,\tau)-\alpha(\tau+t_{\pi},2\tau+t_{\pi})\\ &=\frac{e^{i\phi_{\text{ODF}}}}{2\delta^{\prime}\sqrt{N}}\Big[f^{\prime}(e^{-i\delta^{\prime}\tau}-1)(1-e^{-i\delta^{\prime}(\tau+t_{\pi})})\cosh r\\ &\quad+e^{i\Delta\phi_{c}}f^{\prime\ast}(e^{i\delta^{\prime}\tau}-1)(1-e^{i\delta^{\prime}(\tau+t_{\pi})})\sinh r\Big],\end{split} \tag{A9}\]
and the effective total geometric phase is
\[\begin{split}\Phi_{T}&\equiv 2\Phi(0,\tau)-\text{Im}[\alpha(\tau+t_{\pi},2\tau+t_{\pi})\alpha^{\ast}(0,\tau)]\\ &=\frac{|f^{\prime}|^{2}}{2\delta^{\prime 2}N}\Big\{\sin(\delta^{\prime}\tau)-\delta^{\prime}\tau+[1-\cos(\delta^{\prime}\tau)]\sin\delta^{\prime}(\tau+t_{\pi})\Big\},\end{split} \tag{A10}\]
which recovers the results in Ref. [24] when we take \(r=0\). To measure the bright fraction, we rotate \(\left|\psi_{3}\right\rangle\) by another \(\frac{\pi}{2}\big{|}_{y}\) pulse and detect \(\hat{\sigma}_{k}^{z}\), which gives
\[\begin{split}\langle\hat{\sigma}_{k}^{z}\rangle_{\tilde{\gamma}}&=\left\langle\psi_{3}\right|\hat{R}_{Y}(\pi/2)^{\dagger}\hat{\sigma}_{k}^{z}\hat{R}_{Y}(\pi/2)\left|\psi_{3}\right\rangle\\ &=-\frac{e^{-2\left|\alpha_{T}\right|^{2}}}{2}(e^{2\alpha_{T}\tilde{\gamma}^{\ast}-2\alpha_{T}^{\ast}\tilde{\gamma}}+\text{c.c.})\cos(4\Phi_{T})^{N-1},\end{split}\]
where \(\hat{R}_{Y}(\pi/2)=\hat{R}(\pi/2,0)^{\otimes N}\). Now we sum up the contributions from different components in the initial motional thermal coherent state via the integral
\[\langle\hat{\sigma}_{k}^{z}\rangle=\frac{1}{\pi\bar{n}_{z}}\int e^{-\left|\gamma\right|^{2}/\bar{n}_{z}}\left\langle\hat{\sigma}_{k}^{z}\right\rangle_{\tilde{\gamma}}d^{2}\gamma. \tag{A11}\]
Only the factor \(e^{2\alpha_{T}\tilde{\gamma}^{\ast}-2\alpha_{T}^{\ast}\tilde{\gamma}}+\text{c.c.}=e^{2\alpha_{T}\beta^{\ast}-2\alpha_{T}^{\ast}\beta}e^{2\alpha_{T}\gamma^{\ast}-2\alpha_{T}^{\ast}\gamma}+\text{c.c.}\) is affected by the integral. We note that \(\frac{1}{\pi\bar{n}_{z}}\int e^{-\left|\gamma\right|^{2}/\bar{n}_{z}}e^{2\alpha_{T}\gamma^{\ast}-2\alpha_{T}^{\ast}\gamma}d^{2}\gamma=e^{-4\left|\alpha_{T}\right|^{2}\bar{n}_{z}}\); therefore, we find
\[\langle\hat{\sigma}_{k}^{z}\rangle=-e^{-2\left|\alpha_{T}\right|^{2}(2\bar{n}_{z}+1)}\cos(4\theta_{\beta})\cos(4\Phi_{T})^{N-1}, \tag{A12}\]
where \(\theta_{\beta}=\text{Im}(\beta^{\ast}\alpha_{T})\). The coherent displacement \(\beta\) in this experiment is caused by the application of noise to heat the ions. We therefore assume that the phase of this displacement varies from shot to shot, so we average Eq. (A12) over a random phase \(\theta_{c}\), giving \(\frac{1}{2\pi}\int\cos(4|\alpha_{T}||\beta|\sin\theta_{c})d\theta_{c}=J_{0}(4|\alpha_{T}||\beta|)\), where \(J_{0}\) is the zeroth-order Bessel function of the first kind. The spin decoherence factor \(\exp(-2\Gamma\tau)\) can be included independently. The bright fraction of the trapped-ion spins at the end of the pulse sequence is then given by
\[P_{\uparrow} =\frac{1}{2}\left(1+\langle\hat{\sigma}_{k}^{z}\rangle\right)\] \[=\frac{1}{2}-\frac{1}{2}e^{-2\left|\alpha_{T}\right|^{2}(2\bar{n}_{ z}+1)}e^{-2\Gamma\tau}J_{0}(4|\alpha_{T}||\beta|)\cos(4\Phi_{T})^{N-1}.\]
This expression was used to model the experimental measurements in Fig. 4. To include 40-Hz frequency fluctuations of the c.m. mode, we averaged this expression assuming Gaussian frequency fluctuations of the mode.
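The bright-fraction model derived above lends itself to a compact numerical sketch. The code below implements the total displacement \(\alpha_{T}\), the geometric phase \(\Phi_{T}\), and the final expression for \(P_{\uparrow}\); all parameter values are illustrative placeholders, not the experiment's, and the check at the end uses the fact that \(\alpha_{T}\) vanishes at a decoupling point \(\delta^{\prime}\tau=2\pi\) (closed loop in phase space).

```python
import numpy as np
from scipy.special import j0

def alpha_T(fp, dp, tau, t_pi, r, phi_odf, dphi_c, N):
    """Total spin-motion displacement (fp = f', dp = delta')."""
    pref = np.exp(1j * phi_odf) / (2 * dp * np.sqrt(N))
    term_c = fp * (np.exp(-1j * dp * tau) - 1) \
        * (1 - np.exp(-1j * dp * (tau + t_pi))) * np.cosh(r)
    term_s = np.exp(1j * dphi_c) * np.conj(fp) * (np.exp(1j * dp * tau) - 1) \
        * (1 - np.exp(1j * dp * (tau + t_pi))) * np.sinh(r)
    return pref * (term_c + term_s)

def Phi_T(fp, dp, tau, t_pi, N):
    """Effective total geometric phase."""
    return abs(fp) ** 2 / (2 * dp**2 * N) * (
        np.sin(dp * tau) - dp * tau
        + (1 - np.cos(dp * tau)) * np.sin(dp * (tau + t_pi)))

def bright_fraction(aT, PhiT, nbar, beta, Gamma, tau, N):
    """Final bright-fraction expression including spin decoherence."""
    return 0.5 - 0.5 * np.exp(-2 * abs(aT) ** 2 * (2 * nbar + 1)) \
        * np.exp(-2 * Gamma * tau) * j0(4 * abs(aT) * abs(beta)) \
        * np.cos(4 * PhiT) ** (N - 1)

# Illustrative parameters; fp follows f' = f(cosh r + sinh r) for
# Delta*phi_c = 0.
N, r, tau, t_pi = 100, 0.3, 1.0e-3, 2.0e-5
dp = 2 * np.pi / tau                 # first decoupling point
fp = 500.0 * (np.cosh(r) + np.sinh(r))
aT = alpha_T(fp, dp, tau, t_pi, r, 0.0, 0.0, N)
P = bright_fraction(aT, Phi_T(fp, dp, tau, t_pi, N), 28.0, 13.0, 60.0, tau, N)
print(abs(aT), P)                    # |alpha_T| vanishes at decoupling
```

Averaging this expression over Gaussian c.m. frequency fluctuations, as described above, reproduces the theory curves of Fig. 4.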
## Appendix B Spin Squeezing with Spin Decoherence
This appendix discusses the calculation of quantum spin squeezing in the presence of decoherence due to off-resonant light scatter and due to frequency fluctuations of the c.m. mode. The calculation was used to generate the theory curves of Fig. 6.
Quantum spin squeezing in a direction rotated by \(\psi\) (from the \(+z\) direction) about the \(x\) axis is defined by \(\xi_{\psi}^{2}=N(\Delta\hat{S}_{\psi})^{2}/|\left\langle\mathbf{S}\right\rangle|^{2}\), where \(\hat{S}_{\psi}=\cos(\psi)\hat{S}_{z}-\sin(\psi)\hat{S}_{y}\), \((\Delta\hat{S}_{\psi})^{2}=\langle\hat{S}_{\psi}^{2}\rangle-\left\langle\hat{S}_{\psi}\right\rangle^{2}\), and \(\mathbf{S}=\frac{1}{2}\sum_{i}\left(\hat{\sigma}_{i}^{x},\hat{\sigma}_{i}^{y},\hat{\sigma}_{i}^{z}\right)\). The Ramsey spin squeezing is obtained by minimizing \((\Delta\hat{S}_{\psi})^{2}\) over the angle \(\psi\),
\[\begin{split}\xi_{R}^{2}&=\frac{N\min_{\psi}[(\Delta\hat{S}_{\psi})^{2}]}{|\left\langle\mathbf{S}\right\rangle|^{2}}\\ &=\frac{N}{2|\left\langle\mathbf{S}\right\rangle|^{2}}\Big\{(\Delta\hat{S}_{y})^{2}+(\Delta\hat{S}_{z})^{2}\\ &\quad-\sqrt{[(\Delta\hat{S}_{y})^{2}-(\Delta\hat{S}_{z})^{2}]^{2}+4\,\text{Cov}\big(\hat{S}_{y},\hat{S}_{z}\big)^{2}}\Big\},\end{split} \tag{B1}\]
where the optimal angle is
\[\psi_{\text{opt}}=\frac{1}{2}\arctan\{2\,\text{Cov}\big(\hat{S}_{y},\hat{S}_{z}\big)/[(\Delta\hat{S}_{y})^{2}-(\Delta\hat{S}_{z})^{2}]\} \tag{B2}\]
with
\[\text{Cov}\big{(}\hat{S}_{y},\hat{S}_{z}\big{)}\equiv\frac{1}{2}\left\langle \hat{S}_{y}\hat{S}_{z}+\hat{S}_{z}\hat{S}_{y}\right\rangle-\left\langle\hat{S}_{y} \right\rangle\left\langle\hat{S}_{z}\right\rangle. \tag{13}\]
In the presence of dephasing at rate \(\Gamma_{\rm el}\) and spontaneous spin flips from \(\ket{\uparrow}\) to \(\ket{\downarrow}\) (\(\ket{\downarrow}\) to \(\ket{\uparrow}\)) at rate \(\Gamma_{\rm ud}\) (\(\Gamma_{\rm du}\)), spin dynamics due to an Ising interaction in the \(\hat{z}\) basis can be modeled by a master equation in the Lindblad form, which can be solved exactly [32]. For completeness, we quote the spin correlation functions in the case of uniform coupling, i.e. \(J_{ij}=J\) for all \(i\) and \(j\),
\[\langle\hat{\sigma}_{i}^{+}\rangle=\frac{e^{-\Gamma t}}{2}\Phi^{N -1}(J,t)e^{-2|\alpha|^{2}(2\bar{n}_{z}+1)},\] \[\langle\hat{\sigma}_{i}^{a}\hat{\sigma}_{j}^{b}\rangle=\frac{e^{ -2\Gamma t}}{4}\Phi^{N-2}((a+b)J,t)e^{-2|\alpha|^{2}(a+b)^{2}(2\bar{n}_{z}+1)},\] \[\langle\hat{\sigma}_{i}^{a}\hat{\sigma}_{j}^{z}\rangle=\frac{e^{ -\Gamma t}}{2}\Psi(aJ,t)\Phi^{N-2}(aJ,t)e^{-2|\alpha|^{2}(2\bar{n}_{z}+1)}, \tag{30}\]
where \(a,b\in\{+,-\}\),
\[\Phi(J,t) =e^{-(\Gamma_{\rm ud}+\Gamma_{\rm du})t/2}\Big{\{}\cos\big{[}t \sqrt{(2i\gamma+2J/N)^{2}-\Gamma_{\rm ud}\Gamma_{\rm du}}\big{]}\] \[+t\frac{\Gamma_{\rm ud}+\Gamma_{\rm du}}{2}{\rm sinc}[t\sqrt{(2i \gamma+2J/N)^{2}-\Gamma_{\rm ud}\Gamma_{\rm du}}]\Big{\}},\] \[\Psi(J,t) =e^{-(\Gamma_{\rm ud}+\Gamma_{\rm du})t/2}t\left[i(2i\gamma+2J/N) -2\gamma\right]\] \[\times\mathrm{sinc}\big{[}t\sqrt{(2i\gamma+2J/N)^{2}-\Gamma_{\rm ud }\Gamma_{\rm du}}\big{]} \tag{31}\]
and
\[J =\frac{f^{\prime 2}}{\delta^{\prime}}\left(1-\frac{\sin\delta^{ \prime}t}{\delta^{\prime}t}\right),\] \[\alpha =\frac{1}{\sqrt{N}}\frac{f^{\prime}}{\delta^{\prime}}\left\{\left[ \cos(\delta^{\prime}t)-1\right]e^{r}-i\sin(\delta^{\prime}t)e^{-r}\right\}. \tag{32}\]
Here \(r\), \(f^{\prime}\), and \(\delta^{\prime}\) are the same as defined in Appendix A. The above expressions are obtained assuming the initial state is a product state with all spins pointed along the \(x\) direction and the motional state is a thermal state. The factor \(e^{-2|\alpha|^{2}(2\bar{n}_{z}+1)}\) describes the effect of spin-motion entanglement when \(\alpha\neq 0\). Here \(\gamma=\left(\Gamma_{\rm ud}-\Gamma_{\rm du}\right)/4\), \(\Gamma=\left(\Gamma_{\rm r}+\Gamma_{\rm el}\right)/2\), and \(\Gamma_{\rm r}=\Gamma_{\rm ud}+\Gamma_{\rm du}\). The numerical results of the plots in Fig. 6(a) are obtained using Eq. (11) together with Eqs. (30)-(32). The numerical results of the plots in Figs. 6(b) and 6(c) are obtained by averaging Eq. (30) over 4000 Gaussian-distributed random frequencies at the respective frequency fluctuations.
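As a unit check on Eq. (31) (function and parameter names below are ours), in the decoherence-free limit \(\Gamma_{\rm ud}=\Gamma_{\rm du}=0\) (so \(\gamma=0\)) the function \(\Phi\) reduces to \(\cos(2Jt/N)\):

```python
import numpy as np

def Phi(J, t, N, G_ud=0.0, G_du=0.0):
    """Phi(J, t) of Eq. (31), with gamma = (G_ud - G_du) / 4.
    The square root is taken as a complex number since its argument
    can be negative or complex for nonzero decay rates."""
    gamma = (G_ud - G_du) / 4.0
    root = np.sqrt(complex((2j * gamma + 2 * J / N) ** 2 - G_ud * G_du))
    sinc = lambda z: 1.0 if z == 0 else np.sin(z) / z   # unnormalized sinc
    return np.exp(-(G_ud + G_du) * t / 2) * (
        np.cos(root * t) + t * (G_ud + G_du) / 2 * sinc(root * t))
```

With both decay rates set to zero, `Phi(J, t, N)` agrees with `cos(2*J*t/N)` to machine precision.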
|
2307.12257 | Vectorial analogues of Cauchy's surface area formula | Cauchy's surface area formula says that for a convex body $K$ in
$n$-dimensional Euclidean space the mean value of the $(n-1)$-dimensional
volumes of the orthogonal projections of $K$ to hyperplanes is a constant
multiple of the surface area of $K$. We prove an analogous formula, with the
volumes of the projections replaced by their moment vectors. This requires the
introduction of a new vector-valued valuation on convex bodies. | Daniel Hug, Rolf Schneider | 2023-07-23T08:03:49Z | http://arxiv.org/abs/2307.12257v1 | # Vectorial analogues of Cauchy's surface area formula
###### Abstract
Cauchy's surface area formula says that for a convex body \(K\) in \(n\)-dimensional Euclidean space the mean value of the \((n-1)\)-dimensional volumes of the orthogonal projections of \(K\) to hyperplanes is a constant multiple of the surface area of \(K\). We prove an analogous formula, with the volumes of the projections replaced by their moment vectors. This requires the introduction of a new vector-valued valuation on convex bodies.
_Keywords: Cauchy's surface area formula, moment vector, tensor-valued valuation_
2020 Mathematics Subject Classification: Primary 52A20, Secondary 53C65
## 1 Introduction
Cauchy's surface area formula is probably the first result in a branch later named integral geometry. The \(n\)-dimensional version of Cauchy's formula says that
\[S(K)=\frac{1}{\kappa_{n-1}}\int_{S^{n-1}}V_{n-1}(K|u^{\perp})\,\mathrm{d}u\]
for each convex body \(K\) (a nonempty, compact, convex set) in Euclidean space \(\mathbb{R}^{n}\). Here \(K|u^{\perp}\) is the image of \(K\) under orthogonal projection to the linear subspace \(u^{\perp}\) orthogonal to the unit vector \(u\). By \(S(K)\) we denote the surface area of \(K\) and by \(V_{k}\) the \(k\)-dimensional volume. Further, \(\int_{S^{n-1}}f(u)\,\mathrm{d}u\) always denotes integration over the unit sphere \(S^{n-1}\) with respect to the spherical Lebesgue measure, and \(\kappa_{k}\) is the volume of the \(k\)-dimensional unit ball (\(\omega_{k}=k\kappa_{k}\) is its surface area). For proofs of more general projection formulas we refer to [14, p. 301] and [16, p. 222] or [6, Chap. 5].
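As a quick numerical illustration, Cauchy's formula can be checked by Monte Carlo integration over the sphere. The sketch below (our own setup, not from the paper) uses the unit cube in \(\mathbb{R}^{3}\), whose shadow on \(u^{\perp}\) has area \(|u_{1}|+|u_{2}|+|u_{3}|\), and recovers the surface area \(6\).

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(0)

def cauchy_surface_area_mc(shadow_area, n, samples=100_000):
    """Monte Carlo estimate of S(K) = (1/kappa_{n-1}) * int_{S^{n-1}}
    V_{n-1}(K|u^perp) du, where shadow_area(u) returns the (n-1)-volume
    of the projection of K onto the hyperplane u^perp."""
    u = rng.normal(size=(samples, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)          # uniform on S^{n-1}
    omega_n = 2 * pi ** (n / 2) / gamma(n / 2)             # area of S^{n-1}
    kappa_nm1 = pi ** ((n - 1) / 2) / gamma((n + 1) / 2)   # vol. of unit (n-1)-ball
    return omega_n * np.mean([shadow_area(v) for v in u]) / kappa_nm1

# Unit cube in R^3: its shadow on u^perp has area |u_1| + |u_2| + |u_3|.
estimate = cauchy_surface_area_mc(lambda u: np.abs(u).sum(), n=3)
```

With \(10^{5}\) samples the estimate agrees with \(S([0,1]^{3})=6\) to well within Monte Carlo error.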
Classical integral geometry, as it can be found, e.g., in the books by Hadwiger [3, Chap. 6] and Santalo [11] (see also [14, Sect. 4.4]), exhibits, for example, intersection theorems such as the kinematic formulas and the Crofton formulas, mainly for real-valued valuations on convex bodies. Integral geometric formulas for vector-valued functionals were investigated in [4, 10, 12]. An extensive theory of integral geometric intersection formulas for tensor valued valuations, also with values in spaces of measures, was developed in [2, 5, 7, 8, 9, 13, 15]. We also refer to the survey article [1].
While the classical integral geometry of real-valued functionals also considered projections, the mentioned investigations of tensor valuations are restricted to intersections. It seems that not even such a simple question as the following vector-valued analogue of Cauchy's surface area formula has been answered. First we recall (e.g., from [14, Sect. 5.4.1]) that for a convex body \(K\subset\mathbb{R}^{n}\) and for \(k\in\{\dim K,\dots,n\}\) the moment vector \(z_{k+1}(K)\) is defined by
\[z_{k+1}(K):=\int_{K}x\,\mathcal{H}^{k}(\mathrm{d}x),\]
where \(\mathcal{H}^{k}\) is the \(k\)-dimensional Hausdorff measure (the lower index indicates the degree of homogeneity). The question is then whether the integral
\[\int_{S^{n-1}}z_{n}(K|u^{\perp})\,\mathrm{d}u\]
can be expressed by familiar functionals of \(K\). It turns out that, compared to the cases dealing with intersections, additional tensor valuations are required. After introducing these, we shall give an answer to the question in Theorem 1.
## 2 Preliminaries, and a result
We work in \(n\)-dimensional Euclidean vector space \(\mathbb{R}^{n}\) (with origin \(o\)) and use its scalar product \(\langle\cdot\,,\cdot\rangle\) to identify the space with its dual space. Thus, we identify a vector \(x\in\mathbb{R}^{n}\) with the linear functional \(\langle x,\cdot\rangle\). By \(\mathbb{T}^{r}\) we denote the real vector space (with its usual topology) of symmetric \(r\)-tensors (tensors of rank \(r\in\mathbb{N}_{0}\)) on \(\mathbb{R}^{n}\). The elements of \(\mathbb{T}^{r}\) are symmetric \(r\)-linear functionals on \(\mathbb{R}^{n}\). The symmetric tensor product of \(a\in\mathbb{T}^{r}\) and \(b\in\mathbb{T}^{s}\) is denoted by \(a\cdot b=ab\) and is an element of \(\mathbb{T}^{r+s}\). In particular, for \(x\in\mathbb{R}^{n}\) we write \(x^{r}:=x\cdots x\) (\(r\) factors); we then have \(x^{r}(a_{1},\dots,a_{r})=\langle x,a_{1}\rangle\cdots\langle x,a_{r}\rangle\) for \(a_{1},\dots,a_{r}\in\mathbb{R}^{n}\).
Let \(K\in\mathcal{K}^{n}\), where \(\mathcal{K}^{n}\) denotes the set of convex bodies in \(\mathbb{R}^{n}\). Its volume can be written as
\[V_{n}(K)=\int_{K}\mathrm{d}x, \tag{1}\]
where \(\int f(x)\,\mathrm{d}x\) indicates integration with respect to the Lebesgue measure on \(\mathbb{R}^{n}\). This motivates one to define a symmetric \(r\)-tensor by
\[\Psi_{r}(K):=\frac{1}{r!}\int_{K}x^{r}\mathrm{d}x,\quad K\in\mathcal{K}^{n}.\]
The factor before the integral is for convenience, and we have \(\Psi_{1}(K)=z_{n+1}(K)\). For \(z_{n+1}\), we get the polynomial expansion
\[z_{n+1}(K+\lambda B^{n})=\sum_{j=0}^{n}\binom{n}{j}\lambda^{j}q_{j}(K)\]
(see [14, (5.95)]), where \(B^{n}\) is the unit ball of \(\mathbb{R}^{n}\) and \(\lambda\geq 0\). The vector-valued coefficients \(q_{j}\) can be represented by
\[q_{j}(K)=\frac{1}{n}\int_{\partial K}x\,C_{n-j}(K,\mathrm{d}x)\]
for \(j\geq 1\) (whereas \(q_{0}=z_{n+1}\)). See [14, (5.98)], and note that the curvature measure \(C_{m}(K,\cdot)\) is concentrated on the boundary of \(K\). In particular (by [14, (4.31)]), if \(K\) has nonempty interior, then
\[q_{1}(K)=\frac{1}{n}\int_{\partial K}x\,\mathcal{H}^{n-1}(\mathrm{d}x). \tag{2}\]
Let \(\mathcal{K}^{n}_{o}\) denote the set of convex bodies \(K\in\mathcal{K}^{n}\) with \(o\in\mathrm{int}\,K\). Alternatively to (1), for \(K\in\mathcal{K}^{n}_{o}\) the volume of \(K\) can also be written as the integral of the cone-volume measure of \(K\) over the unit sphere \(S^{n-1}\). The cone-volume measure of \(K\in\mathcal{K}^{n}_{o}\) is defined by
\[V_{K}(\omega):=\mathcal{H}^{n}\left(\bigcup_{x\in\tau(K,\omega)}[o,x]\right), \quad\omega\in\mathcal{B}(S^{n-1}).\]
Here \(\tau(K,\omega)\) is the set of boundary points of \(K\) at which there exists an outer unit normal vector of \(K\) belonging to \(\omega\), \([o,x]\) is the closed segment with endpoints \(o\) and \(x\), and \(\mathcal{B}(S^{n-1})\) denotes the \(\sigma\)-algebra of Borel sets in \(S^{n-1}\). It is known (see, e.g., [14, p. 501]) that
\[V_{K}(\omega) =\frac{1}{n}\int_{\omega}h_{K}(u)\,S_{n-1}(K,\mathrm{d}u)\] \[=\frac{1}{n}\int_{\mathbb{R}^{n}\times\omega}\langle x,u\rangle\, \Theta_{n-1}(K,\mathrm{d}(x,u)),\]
where \(h_{K}\) is the support function of \(K\), \(S_{n-1}(K,\cdot)\) is its surface area measure and \(\Theta_{n-1}(K,\cdot)\) is a support measure of \(K\) (see [14, Sect. 4.2]). For the last equality, we refer to [14, (4.11)] and the simple observation that \(h_{K}(u)=\langle x,u\rangle\) for \((x,u)\) in the support of the measure \(\Theta_{n-1}(K,\cdot)\). From these representations it is clear that \(V_{K}\) is a measure-valued valuation (see [14, Thm. 4.2.1]). Moreover, the integral representations are well defined for all convex bodies.
The preceding discussion suggests defining a symmetric \(r\)-tensor functional by
\[\Upsilon_{r}(K):=\int_{S^{n-1}}u^{r}\,V_{K}(\mathrm{d}u),\quad K\in\mathcal{K }^{n}_{o}.\]
For general convex bodies \(K\in\mathcal{K}^{n}\), it is consistent to define \(\Upsilon_{r}:\mathcal{K}^{n}\to\mathbb{T}^{r}\) by
\[\Upsilon_{r}(K): =\frac{1}{n}\int_{S^{n-1}}h_{K}(u)u^{r}\,S_{n-1}(K,\mathrm{d}u) \tag{3}\] \[=\frac{1}{n}\int_{\mathbb{R}^{n}\times S^{n-1}}\langle x,u\rangle u ^{r}\,\Theta_{n-1}(K,\mathrm{d}(x,u)).\]
Like the \(\Psi_{r}\), the \(\Upsilon_{r}\) are rotation covariant, continuous tensor-valued valuations. But the translation behavior is different. We know (see [14, (5.104)]) that, for \(t\in\mathbb{R}^{n}\),
\[\Psi_{r}(K+t)=\sum_{j=0}^{r}\frac{1}{j!}\Psi_{r-j}(K)t^{j},\]
where \(\Psi_{r-j}(K)t^{j}\) is a symmetric tensor product. On the other hand, if we define a translation invariant \(r\)-tensor by
\[\Xi_{r}(K):=\frac{1}{n}\int_{S^{n-1}}u^{r}\,S_{n-1}(K,\mathrm{d}u),\]
for \(K\in\mathcal{K}^{n}\), then
\[\Upsilon_{r}(K+t)=\Upsilon_{r}(K)+\Xi_{r+1}(K)(t),\]
where the symmetric \((r+1)\)-tensor \(\Xi_{r+1}(K)\) is applied to the vector \(t\), which results in a tensor of rank \(r\).
Considering that \(\Upsilon_{0}(K)\) is the volume of \(K\) and that
\[\frac{1}{n}\int_{S^{n-1}}h_{K_{1}}(u)\,S(K_{2},\ldots,K_{n},\mathrm{d}u)=V(K_ {1},K_{2},\ldots,K_{n})\]
(where \(S(K_{2},\ldots,K_{n},\cdot)\) is the mixed area measure of \(K_{2},\ldots,K_{n}\)) is the mixed volume of \(K_{1},\ldots,K_{n}\) and hence is symmetric in its arguments, one may ask whether also
\[\int_{S^{n-1}}h_{K_{1}}(u)u^{r}S(K_{2},\ldots,K_{n},\mathrm{d}u) \tag{4}\]
is symmetric in \(K_{1},\ldots,K_{n}\). That this is in general not the case can be seen from the fact that, for \(r=1\), the vector
\[\int_{S^{n-1}}h_{K}(u)u\,S(B^{n},\ldots,B^{n},\mathrm{d}u)\]
is proportional to the Steiner point of \(K\) (as follows from [14, (1.31)]), whereas
\[\int_{S^{n-1}}h_{B^{n}}(u)u\,S(K,B^{n},\ldots,B^{n},\mathrm{d}u)\]
is always equal to \(o\), by [14, (5.30)].
Since (4) is not symmetric in \(K_{1},\ldots,K_{n}\), we introduce a symmetric tensor functional \(\Upsilon^{(r)}:(\mathcal{K}^{n})^{n}\to\mathbb{T}^{r}\) by
\[\Upsilon^{(r)}(K_{1},\ldots,K_{n}):=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{n}\int_ {S^{n-1}}h_{K_{i}}(u)u^{r}\,S(K_{1},\ldots,\check{K_{i}},\ldots,K_{n},\mathrm{ d}u),\]
where \(\check{K_{i}}\) means that \(K_{i}\) is omitted.
We have mentioned tensor valuations of rank higher than necessary for the following, since we hope to come back to them later.
We can now state the vectorial counterpart to Cauchy's surface area formula.
**Theorem 1**.: _For \(K\in\mathcal{K}^{n}\),_
\[\int_{S^{n-1}}z_{n}(K|u^{\perp})\,\mathrm{d}u=\frac{n\kappa_{n-1}}{n+1}(nq_{1}(K) -\Upsilon_{1}(K)).\]
The functional \(q_{1}\) is a special mixed moment vector, that is, we have
\[\frac{n}{n+1}q_{1}(K)=z(K[n],B^{n}).\]
The mixed moment vectors \(z:(\mathcal{K}^{n})^{n+1}\to\mathbb{R}^{n}\) are symmetric and multilinear in each of their \(n+1\) arguments (see [14, Section 5.4.1]). The choice \(K=\lambda_{1}K_{1}+\cdots+\lambda_{n}K_{n}\) with \(\lambda_{1},\ldots,\lambda_{n}\geq 0\) in Theorem 1, polynomial expansion and comparison of the coefficients of \(\lambda_{1}\cdots\lambda_{n}\) yield the following consequence.
**Corollary 1**.: _Let \(K_{1},\ldots,K_{n}\in\mathcal{K}^{n}\). Then_
\[\int_{S^{n-1}}z_{n}(K_{1}|u^{\perp},\ldots,K_{n}|u^{\perp})\, \mathrm{d}u\] \[=n\kappa_{n-1}\left(z(K_{1},\ldots,K_{n},B^{n})-\frac{1}{n+1} \Upsilon^{(1)}(K_{1},\ldots,K_{n})\right).\]
## 3 Proof of Theorem 1
Both sides of the asserted equation depend continuously on \(K\). Hence we can assume that \(K\in\mathcal{K}^{n}\) has nonempty interior. Let \(\partial^{\prime}K\) denote the set of all \(x\in\partial K\) at which the outer unit normal vector \(\nu_{K}(x)\) of \(K\) at \(x\) is unique. Then \(\partial^{\prime}K\) is a Borel set, \(\nu_{K}\) is continuous on \(\partial^{\prime}K\) and \(\partial K\setminus\partial^{\prime}K\) has \(\mathcal{H}^{n-1}\)-measure zero. Moreover, for \(y\in\partial^{\prime}K\) and \(u\in S^{n-1}\) the projection map \(p_{u}:\partial K\to u^{\perp}\), \(y\mapsto y-\langle y,u\rangle u\), has the Jacobian \(Jp_{u}(y)=|\langle\nu_{K}(y),u\rangle|\). For \(x\in\mathrm{relint}\,(K|u^{\perp})\) we have \(p_{u}^{-1}(\{x\})=\{y_{u}^{-}(K,x),y_{u}^{+}(K,x)\}\) and \(p_{u}(y_{u}^{\pm}(K,x))=x\). Therefore, we can apply the coarea formula and then Fubini's theorem to obtain
\[2\int_{S^{n-1}}z_{n}(K|u^{\perp})\,\mathrm{d}u = 2\int_{S^{n-1}}\int_{K|u^{\perp}}x\,\mathcal{H}^{n-1}(\mathrm{d }x)\,\mathrm{d}u \tag{5}\] \[= \int_{S^{n-1}}\int_{K|u^{\perp}}\int_{p_{u}^{-1}(\{x\})}p_{u}(y) \,\mathcal{H}^{0}(\mathrm{d}y)\,\mathcal{H}^{n-1}(\mathrm{d}x)\,\mathrm{d}u\] \[= \int_{S^{n-1}}\int_{\partial K}|\langle\nu_{K}(y),u\rangle|(y- \langle y,u\rangle u)\,\mathcal{H}^{n-1}(\mathrm{d}y)\,\mathrm{d}u\] \[= \int_{\partial K}y\int_{S^{n-1}}|\langle\nu_{K}(y),u\rangle|\, \mathrm{d}u\,\mathcal{H}^{n-1}(\mathrm{d}y)\] \[-\int_{\partial K}\int_{S^{n-1}}|\langle\nu_{K}(y),u\rangle| \langle y,u\rangle u\,\mathrm{d}u\,\mathcal{H}^{n-1}(\mathrm{d}y).\]
For the first integral in (5) we can use that
\[\int_{S^{n-1}}|\langle v,u\rangle|\,\mathrm{d}u=2\kappa_{n-1}\]
for all \(v\in S^{n-1}\) and obtain
\[\int_{\partial K}y\int_{S^{n-1}}|\langle\nu_{K}(y),u\rangle|\,\mathrm{d}u\, \mathcal{H}^{n-1}(\mathrm{d}y)=2n\kappa_{n-1}q_{1}(K),\]
by (2). For the second integral, we need a lemma. For this, we denote by \(Q\) the metric \(2\)-tensor on \(\mathbb{R}^{n}\), that is, \(Q(x,y)=\langle x,y\rangle\) for \(x,y\in\mathbb{R}^{n}\).
**Lemma 1**.: _Let \(n\geq 2\) and \(v\in S^{n-1}\). Then_
\[\int_{S^{n-1}}|\langle v,u\rangle|u^{2}\,\mathrm{d}u=\frac{2\kappa_{n-1}}{n+1 }(v^{2}+Q).\]
Proof.: We use a decomposition of spherical Lebesgue measure to get
\[\int_{S^{n-1}}|\langle v,u\rangle|u^{2}\,\mathcal{H}^{n-1}( \mathrm{d}u)\] \[=\int_{-1}^{1}\int_{S^{n-1}\cap v^{\perp}}(1-\tau^{2})^{\frac{n-3 }{2}}\left|\left\langle v,\tau v+\sqrt{1-\tau^{2}}\,w\right\rangle\right| \left(\tau v+\sqrt{1-\tau^{2}}\,w\right)^{2}\mathcal{H}^{n-2}(\mathrm{d}w)\, \mathrm{d}\tau\] \[=\int_{-1}^{1}\int_{S^{n-1}\cap v^{\perp}}(1-\tau^{2})^{\frac{n-3 }{2}}|\tau|\left(\tau^{2}v^{2}+2\tau\sqrt{1-\tau^{2}}\,vw+(1-\tau^{2})w^{2} \right)\mathcal{H}^{n-2}(\mathrm{d}w)\,\mathrm{d}\tau\] \[=\int_{-1}^{1}|\tau|\tau^{2}(1-\tau^{2})^{\frac{n-3}{2}}\mathrm{d }\tau\,\omega_{n-1}v^{2}+\int_{-1}^{1}|\tau|(1-\tau^{2})^{\frac{n-1}{2}} \mathrm{d}\tau\int_{S^{n-1}\cap v^{\perp}}w^{2}\,\mathcal{H}^{n-2}(\mathrm{d}w), \tag{6}\]
since
\[\int_{S^{n-1}\cap v^{\perp}}w\,\mathcal{H}^{n-2}(\mathrm{d}w)=o.\]
Let \(B(\cdot\,,\cdot)\) denote the Beta function. Then
\[\int_{-1}^{1}|\tau|\tau^{2}(1-\tau^{2})^{\frac{n-3}{2}}\mathrm{d}\tau = 2\int_{0}^{1}\tau^{3}(1-\tau^{2})^{\frac{n-3}{2}}\mathrm{d}\tau= \int_{0}^{1}s(1-s)^{\frac{n-3}{2}}\mathrm{d}s \tag{7}\] \[= B\left(2,\frac{n-1}{2}\right)=\frac{\Gamma(2)\Gamma(\frac{n-1}{2 })}{\Gamma(\frac{n+3}{2})}=\frac{4}{n^{2}-1}\]
and
\[\int_{-1}^{1}|\tau|(1-\tau^{2})^{\frac{n-1}{2}}\mathrm{d}\tau=\int_{0}^{1}(1- s)^{\frac{n-1}{2}}\mathrm{d}s=\frac{2}{n+1}.\]
It is known from [15, (24)] that
\[\int_{S^{n-1}\cap v^{\perp}}w^{2}\,\mathcal{H}^{n-2}(\mathrm{d}w)=2\frac{ \omega_{n+1}}{\omega_{3}}Q_{v^{\perp}}=\kappa_{n-1}(Q-v^{2}), \tag{8}\]
where \(Q_{v^{\perp}}=Q-v^{2}\) is the metric tensor of \(v^{\perp}\). Combination of (6)-(8) yields the assertion.
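Lemma 1 also admits a quick Monte Carlo sanity check. The sketch below (our own verification) takes \(n=3\) and \(v=e_{1}\), for which the right-hand side is \(\frac{\pi}{2}(v\otimes v+I)=\mathrm{diag}(\pi,\pi/2,\pi/2)\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
v = np.array([1.0, 0.0, 0.0])

u = rng.normal(size=(400_000, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)    # uniform on S^{n-1}

# int_{S^{n-1}} f(u) du = omega_n * E[f(u)];  omega_3 = 4*pi.
weights = np.abs(u @ v)                           # |<v, u>|
lhs = 4 * np.pi * np.einsum('i,ij,ik->jk', weights, u, u) / len(u)

kappa_nm1 = np.pi                                 # kappa_2
rhs = 2 * kappa_nm1 / (n + 1) * (np.outer(v, v) + np.eye(n))
```

The sampled left-hand side matrix matches the closed form entrywise to within Monte Carlo error.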
Using Lemma 1, we can write the second integral in (5) as
\[\int_{\partial K}\int_{S^{n-1}}|\langle u,\nu_{K}(y)\rangle|\langle y,u\rangle u\,\mathrm{d}u\,\mathcal{H}^{n-1}(\mathrm{d}y)\] \[=\int_{\partial K}\int_{S^{n-1}}|\langle u,\nu_{K}(y)\rangle|u^{2 }(y)\,\mathrm{d}u\,\mathcal{H}^{n-1}(\mathrm{d}y)\] \[=\frac{2\kappa_{n-1}}{n+1}\int_{\partial K}(\nu_{K}(y)^{2}+Q)(y) \,\mathcal{H}^{n-1}(\mathrm{d}y)\] \[=\frac{2\kappa_{n-1}}{n+1}\int_{\partial K}\langle\nu_{K}(y),y \rangle\nu_{K}(y)\,\mathcal{H}^{n-1}(\mathrm{d}y)+\frac{2\kappa_{n-1}}{n+1} \int_{\partial K}y\,\mathcal{H}^{n-1}(\mathrm{d}y)\] \[=\frac{2\kappa_{n-1}}{n+1}\int_{S^{n-1}}h_{K}(u)u\,S_{n-1}(K, \mathrm{d}u)+\frac{2n\kappa_{n-1}}{n+1}q_{1}(K)\] \[=\frac{2n\kappa_{n-1}}{n+1}\left[\Upsilon_{1}(K)+q_{1}(K)\right].\]
We have used that the surface area measure \(S_{n-1}(K,\cdot)\) is the image measure of \(\mathcal{H}^{n-1}\) under the Gauss map, together with the transformation theorem for integrals, and we have again used (2), as well as (3). Taking both integrals of (5) together, we complete the proof of Theorem 1.
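As a consistency check on Theorem 1 (our own elementary computation, not part of the paper's argument), consider a translated unit ball \(K=B^{n}+t\). Then \(z_{n}(K|u^{\perp})=\kappa_{n-1}(t-\langle t,u\rangle u)\) (the moment vector of a unit \((n-1)\)-disk centred at the projection of \(t\)), \(q_{1}(K)=\kappa_{n}t\), and \(\Upsilon_{1}(K)=\kappa_{n}t/n\); both sides of the theorem then equal \(\kappa_{n-1}\kappa_{n}(n-1)\,t\).

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(2)
n = 3
t = np.array([0.3, -0.2, 0.5])                      # K = B^3 + t
kappa = lambda k: pi ** (k / 2) / gamma(k / 2 + 1)  # volume of the unit k-ball

# Left-hand side: z_n(K|u^perp) = kappa_{n-1} * (t - <t,u> u),
# integrated over the sphere by Monte Carlo.
u = rng.normal(size=(300_000, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)       # uniform on S^{n-1}
omega_n = n * kappa(n)                              # surface area of S^{n-1}
lhs = omega_n * kappa(n - 1) * (t - (u @ t)[:, None] * u).mean(axis=0)

# Right-hand side of Theorem 1 with q_1(K) = kappa_n t, Upsilon_1(K) = kappa_n t / n.
q1, upsilon1 = kappa(n) * t, kappa(n) * t / n
rhs = n * kappa(n - 1) / (n + 1) * (n * q1 - upsilon1)
```

For \(n=3\) both sides come out to \(\frac{8\pi^{2}}{3}t\), in agreement with the theorem.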
## 4 Another analogue of Cauchy's formula
We mention briefly another way of obtaining an analogue of Cauchy's surface area formula. For this, we consider a convex body \(K\in\mathcal{K}_{o}^{n}\) and a measurable, bounded function \(f:\partial K\to\mathbb{T}^{r}\). For \(x\in K|u^{\perp}\) the intersection \(K\cap(x+\mathbb{R}u)\) is a segment \([y_{u}^{-}(K,x),y_{u}^{+}(K,x)]\) with boundary points \(y_{u}^{-}(K,x),y_{u}^{+}(K,x)\in\partial K\) satisfying, say, \(\langle y_{u}^{-}(K,x),u\rangle\leq\langle y_{u}^{+}(K,x),u\rangle\). Defining the \(r\)-tensor
\[F_{f}(K,u):=\int_{K|u^{\perp}}f(y_{u}^{+}(K,x))\,\mathcal{H}^{n-1}(\mathrm{d}x),\]
we state that
\[\int_{S^{n-1}}F_{f}(K,u)\,\mathrm{d}u=\kappa_{n-1}\int_{\partial K}f(x)\, \mathcal{H}^{n-1}(\mathrm{d}x). \tag{9}\]
For \(r=0\) and \(f=1\) this is Cauchy's surface area formula. While (9) involves integration over projections, the relation is a statement about real-valued rather than tensor-valued functions.
For the proof of (9), we write
\[\partial_{u}^{\pm}K := \{y_{u}^{\pm}(K,x):x\in K|u^{\perp}\}.\]
By the coarea formula we then have
\[\int_{\partial_{u}^{\pm}K}f(x)|\langle\nu_{K}(x),u\rangle|\,\mathcal{H}^{n-1} (\mathrm{d}x)=\int_{K|u^{\perp}}f(y_{u}^{\pm}(K,x))\,\mathcal{H}^{n-1}( \mathrm{d}x). \tag{10}\]
Since \(\langle\nu_{K}(x),u\rangle=0\) for \(\mathcal{H}^{n-1}\)-almost all \(x\in K\cap(\operatorname{relbd}\,(K|u^{\perp})+\mathbb{R}u)\) (here relbd refers to the boundary relative to \(u^{\perp}\)), we have
\[\int_{K\cap(\operatorname{relbd}\,(K|u^{\perp})+\mathbb{R}u)}f(x)|\langle\nu_{K }(x),u\rangle|\,\mathcal{H}^{n-1}(\mathrm{d}x)=0. \tag{11}\]
It follows from \(y^{+}_{-u}(K,x)=y^{-}_{u}(K,x)\), (10) and (11) that
\[F_{f}(K,u)+F_{f}(K,-u)=\int_{\partial K}f(x)|\langle\nu_{K}(x),u\rangle|\, \mathcal{H}^{n-1}(\mathrm{d}x).\]
Fubini's theorem gives
\[\int_{S^{n-1}}F_{f}(K,u)\,\mathrm{d}u = \frac{1}{2}\int_{S^{n-1}}\int_{\partial K}f(x)|\langle\nu_{K}(x), u\rangle|\,\mathcal{H}^{n-1}(\mathrm{d}x)\,\mathrm{d}u\] \[= \frac{1}{2}\int_{\partial K}f(x)\int_{S^{n-1}}|\langle\nu_{K}(x ),u\rangle|\,\mathrm{d}u\,\mathcal{H}^{n-1}(\mathrm{d}x)\] \[= \kappa_{n-1}\int_{\partial K}f(x)\,\mathcal{H}^{n-1}(\mathrm{d}x).\]
This completes the proof of (9).
We add a final remark. Tsukerman and Veomett [17], in their Section 3, aim at an extension of Cauchy's formula to moment vectors, but in an irritating way. Their notion \(z_{n}(K_{u})\) has different meanings in the formulation of Theorem 3 and in its proof. According to the definitions given on page 927, \(z_{n}(K_{u})=\int_{K_{u}}x\,\mathcal{H}^{n-1}(\mathrm{d}x)\), where \(K_{u}\) is a part of the boundary of \(K\). But this contradicts the third displayed formula on page 928, which should read
\[D_{[o,u]}(z_{n+1})(K)=\lim_{\epsilon\to 0^{+}}\frac{z_{n+1}((K+\epsilon[o,u]) \setminus K)}{\epsilon}=(n+1)z(K,\ldots,K,[o,u]),\]
and this is different from \(z_{n}(K_{u})\). With the notations used above, we have
\[(n+1)z(K,\ldots,K,[o,u])=\int_{K|u^{\perp}}y^{+}_{u}(K,x)\,\mathcal{H}^{n-1}( \mathrm{d}x).\]
Therefore, with corrections, the argument on p. 928 of [17] can be interpreted to yield the special case \(f(x)=x\) of formula (9).
**Acknowledgements.** D. Hug was supported by DFG research grant HU 1874/5-1 (SPP 2265).
|
2310.06835 | Scalable Semantic Non-Markovian Simulation Proxy for Reinforcement
Learning | Recent advances in reinforcement learning (RL) have shown much promise across
a variety of applications. However, issues such as scalability, explainability,
and Markovian assumptions limit its applicability in certain domains. We
observe that many of these shortcomings emanate from the simulator as opposed
to the RL training algorithms themselves. As such, we propose a semantic proxy
for simulation based on a temporal extension to annotated logic. In comparison
with two high-fidelity simulators, we show up to three orders of magnitude
speed-up while preserving the quality of policy learned. In addition, we show
the ability to model and leverage non-Markovian dynamics and instantaneous
actions while providing an explainable trace describing the outcomes of the
agent actions. | Kaustuv Mukherji, Devendra Parkar, Lahari Pokala, Dyuman Aditya, Paulo Shakarian, Clark Dorman | 2023-10-10T17:59:26Z | http://arxiv.org/abs/2310.06835v2 | # Scalable Semantic Non-Markovian Simulation Proxy for Reinforcement Learning
###### Abstract
Recent advances in reinforcement learning (RL) have shown much promise across a variety of applications. However, issues such as scalability, explainability, and Markovian assumptions limit its applicability in certain domains. We observe that many of these shortcomings emanate from the simulator as opposed to the RL training algorithms themselves. As such, we propose a semantic proxy for simulation based on a temporal extension to annotated logic. In comparison with two high-fidelity simulators, we show up to three orders of magnitude speed-up while preserving the quality of policy learned. In addition, we show the ability to model and leverage non-Markovian dynamics and instantaneous actions while providing an explainable trace describing the outcomes of the agent actions.
Logic Programming, Neuro Symbolic Reasoning, Scalable Simulation, Reinforcement Learning, Non-Markovian Dynamics, AI Tools.
## I Introduction
Recent advances in reinforcement learning (RL) have yielded remarkable progress across various domains, including healthcare [1], autonomous driving [2], and gaming environments such as Atari games [3]. However, scalability concerns hinder RL's capacity to handle complex environments and interactions, while the lack of modularity and portability impedes its adaptability to diverse contexts. Additionally, issues related to explainability, the inherent drawbacks of the Markov assumption, and difficulty implementing safety constraints limit RL's broader applicability in domains demanding rigorous simulation fidelity and reliability. It is crucial to note that the majority of these drawbacks primarily originate from the limitations of the simulation environment employed to train RL agents, rather than intrinsic deficiencies in the underlying RL algorithms themselves. Addressing these challenges necessitates advancements in simulator fidelity and realism.
In this work, we propose a semantic proxy to replace the simulator based on formal logic. We show that this approach offers a three order of magnitude speedup over using the native simulation environment. Further, we train agents in the semantic proxy using standard Deep Q Learning and show that they attain comparable performance to two high-fidelity simulation environments in terms of win-rate and reward. We also demonstrate advanced capabilities of this framework such as non-Markovian reasoning (which can improve agent performance) as well as how our framework provides an explainable trace of the simulation that is amenable to further symbolic reasoning. The main contributions of this paper are as follows.
1. _The introduction of the use of open world temporal logic as a semantic proxy for a simulator._ We show that by using open world temporal logic programming we can successfully create proxies for game environments. We implemented our approach in PyReason [4] which allows us to leverage a temporal variant of annotated (first order) logic [5, 6]. The use of a logic program to model a simulation environment is inherently modular and allows direct support for the addition of constraints on agent behavior - without requiring modifications to the RL training regime or reward function. This allows PyReason to leverage abstraction layers (like ROS [7] in robotic applications) to enhance versatility. Similarly, we support adding logic shielding not just within the RL agent like [8, 9], but also directly within the simulator, detaching it altogether from the RL algorithm and preventing agents from ever executing an unsafe action in any given environment.
2. _We demonstrate a three order of magnitude improvement in runtime over simulation environments while maintaining agent performance._ The ability to scale while maintaining performance is paramount for accommodating the escalating computational demands of complex environments. PyReason shows up to three orders of magnitude speedup and significantly better memory efficiency over the widely popular simulators Starcraft II (SC2) [10] and AFSIM [11]. PyReason-trained policies consistently excel in single-agent and multi-agent scenarios, with less than 10% reward variance and less than 3% win rate variance in both SC2 and AFSIM.
3. _We demonstrate that our framework can model non-Markovian and instantaneous actions and that the RL training regime can leverage these capabilities for improved agent performance._ We show that by removing the Markov assumption and by introducing immediate rules in logic, we are able to capture similar environments to real world applications. We illustrate that employing a non-Markovian simulator for training a DQN in a basic wargame context results in a notable 26% improvement in the win rate compared to adhering to the
Markovian assumption.
4. _Our semantic proxy provides a symbolic explainable trace describing the simulation._ Explainability is essential when observing RL simulation outcomes to gain insights into agent decision-making processes and ensure their alignment with intended objectives. PyReason produces fully explainable traces of inference, which can be used in reward shaping and debugging.
The rest of the paper is outlined as follows. In Section II we review our open world temporal logic, which is based on annotated logic [5, 12] and implemented in PyReason [4] - this is the foundation of the semantic proxy. In Section III we describe how our semantic proxy replaces the simulator in an otherwise standard reinforcement learning pipeline and point out key extensions to PyReason introduced in this paper to enable this workflow. This is followed by a description of our experimental setup in Section IV to include details on the simulators we examined and the design of each experiment. Section V discusses the results for scalability, portability, non-Markovian dynamics, and explainability. This is followed by a section covering related work (VI) and thoughts on future work (VII).
**Codebase:**[https://github.com/lab-v2/pyreason-rl-sim](https://github.com/lab-v2/pyreason-rl-sim)
## II Background
**Open World Temporal Logic.** We now describe the open world temporal logic we use to build our semantic proxy. For this task, we leverage Generalized Annotated Logic programs (GAPs) with lower-lattice and temporal extensions from [5, 6, 12, 13]. The use of GAPs with a lower lattice enables the modeling of open-world scenarios, as it allows atoms to be associated with "true", "false", or "no knowledge", while the temporal extensions are required to model the simulation environments. Further, the key semantic structure and fixpoint semantics allow for an explainable description of the environment's dynamics.
**Syntax.** We consider a first order logical language with an infinite set \(\mathcal{C}\) of constant symbols, a finite set \(\mathcal{P}\) of predicate symbols, and an infinite set \(\mathcal{V}\) of variable symbols. Each predicate symbol \(pred\in\mathcal{P}\) has an arity. For this work, however, we shall assume that \(\mathcal{C}\), \(\mathcal{P}\), and \(\mathcal{V}\) are discrete and finite. In general, we shall use capital letters for variable symbols and lowercase letters for constants. Similar to previous work [14, 15], we assume that all elements of \(\mathcal{P}\) have an arity of either \(1\) or \(2\).
Atoms and ground atoms are formed in the normal way, e.g. for predicate \(pred\), constant \(c\in\mathcal{C}\), and variable \(V\in\mathcal{V}\), \(pred(c)\) is a ground atom while \(pred(V)\) is a non-ground atom.
Following [12], we define a lattice structure \(\mathcal{M}\) where elements consist of subsets of the unit interval, where \([0,1]\) (representing total uncertainty) is the lowest element of the lattice while the upper elements are all intervals \([l,u]\) where \(l=u\). The top elements of the lattice include \([1,1]\) (total truth) and \([0,0]\) (total falsehood). We depict such a lattice in Figure 1. In annotated logic, atoms are associated with elements of the lattice structure - which is how we enable open-world reasoning (i.e., an atom associated with the bottom element of the lattice carries no knowledge). However, as per [6, 13] we have to extend the definition of the atom to an _annotated atom_. Given a ground literal \(l\) and an element of the lattice \(\mu\), \(l:\mu\) is an annotated atom. Functions and variables are also permitted in the annotations (see [6, 13] for further details).
We propose a modified version of the GAP rule defined in [6]:
**Definition II.1** (GAP Rule): _If \(\ell_{0}:\mu_{0},\ell_{1}:\mu_{1},\dots,\ell_{m}:\mu_{m}\) are annotated literals (such that for all \(i,j\in 1,...,m\), \(\ell_{i}\not\equiv\ell_{j}\)), then_
\[r\equiv\ell_{0}:\mu_{0}\underset{\Delta t}{\longleftarrow}\ell_{1}:\mu_{1} \,\wedge\,\dots\wedge\,\ell_{m}:\mu_{m}\hskip 14.226378pt\Delta t\geq 0\]
_is called a GAP rule. We will use the notations \(head(r)\), \(delay(r)\) and \(body(r)\) to denote \(\ell_{0}\), \(\Delta t\) and \(\{\ell_{1},\dots,\ell_{m}\}\) respectively. When \(m=0\) (\(body(r)=\emptyset\)), the above GAP-rule is called a _fact_. A GAP-rule is _ground_ iff there are no occurrences of variables from \(\mathcal{V}\) in it. \(\Delta t\) is the temporal gap between when the rule is fired and when its effects are applied. If \(body(r)\) is satisfied at time \(t\), then the annotation of \(\ell_{0}\) changes to \(\mu_{0}\) at time \(t+\Delta t\). A temporal logic program \(\Pi\) is a finite set of GAP rules.
Our key intuition is that a program \(\Pi\) can be used to capture the dynamics of an environment. In practice, a program \(\Pi\) is comparable to code written in languages like PROLOG allowing for flexible environmental definitions that can align precisely with constructs in a simulation. We provide an example in Table II. We can think of a program as consisting of two subsets of rules: one dictating the dynamics of the environment and the other dictating agent actions. The former would be generated as part of the game design while the later can be the policy produced by a reinforcement learning algorithm.
**Semantic Interpretation.** An annotated logic program \(\Pi\) is associated with a semantic interpretation that maps literal-time point pairs to annotations. Our intuition is that this structure, which is produced as output of deductive inference can directly describe the change in the environment resulting from a set of rules and an agent's actions. Notably, the interpretation is entirely symbolic, hence fully explainable in terms of the logical language. We provide a formal definition of an interpretation and associated satisfaction relationship below.
Fig. 1: Example of a lower semi-lattice structure where the elements are intervals in \([0,1]\).
**Definition II.2** (Semantic Interpretation): _Let us assume a sequence of timepoints \(T=t_{1},...,t_{max}\). Then, an interpretation \(I\) is any mapping \(\mathcal{G}\times T\rightarrow\mathcal{M}\) such that for all literals \(l\), we have \(I(l,t)=\neg(I(\neg l,t))\). Here, \(\mathcal{G}\) is the set of all ground literals. The set \(\mathcal{I}\) of all interpretations can be partially ordered via the ordering: \(I_{1}\preceq I_{2}\) iff for all ground literals \(g\in\mathcal{G}\) and time t, \(I_{1}(g,t)\sqsubseteq I_{2}(g,t)\). \(\mathcal{I}\) forms a complete lattice under the \(\preceq\) ordering._
**Definition II.3** (Satisfaction for annotated ground literal): _An interpretation \(I\) at time \(t\) satisfies annotated ground literal \(g:\mu\), denoted \(I\models_{t}g:\mu\), iff \(\mu\sqsubseteq I(g,t)\)._
**Definition II.4** (Satisfaction of GAP rule): \(I\) _satisfies the ground GAP-rule_
\[r\equiv g_{0}:\mu_{0}\quad\underset{\Delta t}{\longleftarrow}\quad g_{1}: \mu_{1}\land\ldots\land g_{m}:\mu_{m}\]
_denoted \(I\models r\), iff for all \(t\leq t_{max}-\Delta t\): if \(I\models_{t}g_{i}:\mu_{i}\) for all \(g_{i}:\mu_{i}\in body(r)\), then \(I\models_{t+\Delta t}head(r)\). \(I\) satisfies a non-ground literal or rule iff \(I\) satisfies all ground instances of it._
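Under the lattice order of Figure 1, \(\mu\sqsubseteq I(g,t)\) amounts to interval containment: the interval assigned by \(I\) must lie inside \(\mu\). A minimal sketch (dictionary-based interpretation and function names are our own assumptions):

```python
# An interpretation maps (ground literal, time) pairs to intervals.
# I ⊨_t g:μ holds iff μ ⊑ I(g, t), i.e. the interval I(g, t) ⊆ μ.

def satisfies(I, g, t, mu, default=(0.0, 1.0)):
    lo, hi = I.get((g, t), default)   # unknown literals default to [0, 1]
    return mu[0] <= lo and hi <= mu[1]

I = {("atLoc(red,24)", 0): (1.0, 1.0)}
```

For example, `satisfies(I, "atLoc(red,24)", 0, (1.0, 1.0))` holds, while checking the same literal against \([0,0]\) fails; an unknown literal satisfies only the bottom annotation \([0,1]\).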
**Fixpoint-based Inference in Annotated Logic.** In [6, 13] the authors present a fixpoint operator for identifying the logical outcome of a logic program. Our intuition is that the fixpoint operator essentially performs a simulation, all the while recording the changes. We note that under the assumption of consistency, this operator produces an exact result in polynomial time (see Theorems 3.2 and 3.4 of [5]), and a recent implementation provides practical speed-ups and consistency checking while maintaining these guarantees [4]. We define it formally below:
**Definition II.5** (Fixpoint operator): _Suppose \(\Pi\) is any GAP and \(I\) an interpretation. The fixpoint operator \(\Gamma\) is a map from interpretations to interpretations and is defined as_
\[\Gamma(I)(g_{0},t)=\mathbf{sup}(annoSet_{\Pi,I}(g_{0},t)),\]
_where \(annoSet_{\Pi,I}(g_{0},t)=\{I(g_{0},t)\}\cup\{\mu_{0}\mid\text{there is a ground rule }r\in\Pi\text{ with }head(r)=g_{0}:\mu_{0},\ delay(r)\leq t,\text{ and }I\models_{t-delay(r)}g_{i}:\mu_{i}\text{ for all }g_{i}:\mu_{i}\in body(r)\}\). Here \(delay(r)\) is the delay associated with the specific rule \(r\)._
Given a natural number \(i>0\), an interpretation \(I\), and a program \(\Pi\), we define \(\Gamma^{i}(I)\), the \(i\)-fold application of \(\Gamma\), by \(\Gamma^{1}(I)=\Gamma(I)\) and \(\Gamma^{i}(I)=\Gamma(\Gamma^{i-1}(I))\) for \(i>1\).
We note that the fixpoint operator maps _all_ literal-time-point pairs at once, essentially revising the entire sequence of timepoints in a single step. This contrasts with approaches such as MDPs, which produce a new state at each time-point, and allows for direct modeling of non-Markovian dynamics in the framework.
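The iteration of \(\Gamma\) to a fixpoint can be sketched as follows. This is a toy illustration under assumptions of our own (ground rules only, annotations as interval tuples, `sup` realized as interval intersection), not the PyReason implementation:

```python
# Toy fixpoint sketch: a rule is (head_literal, head_annotation, body, delay).
# gamma revises the whole timeline at once, as in Definition II.5.

def sat(I, g, t, mu):
    lo, hi = I.get((g, t), (0.0, 1.0))
    return mu[0] <= lo and hi <= mu[1]

def gamma(rules, I, t_max):
    out = dict(I)
    for t in range(t_max + 1):
        for head_lit, head_mu, body, delay in rules:
            if delay > t:
                continue
            # Fire the rule if its body held delay(r) timepoints ago.
            if all(sat(I, g, t - delay, mu) for g, mu in body):
                cur = out.get((head_lit, t), (0.0, 1.0))
                # sup = intersection of current interval and head annotation
                out[(head_lit, t)] = (max(cur[0], head_mu[0]),
                                      min(cur[1], head_mu[1]))
    return out

def fixpoint(rules, I, t_max, max_iter=100):
    for _ in range(max_iter):
        nxt = gamma(rules, I, t_max)
        if nxt == I:
            return I
        I = nxt
    return I

# p is a fact (holds at every timepoint); q follows from p with delay 1.
rules = [("p", (1.0, 1.0), (), 0),
         ("q", (1.0, 1.0), (("p", (1.0, 1.0)),), 1)]
```

Running `fixpoint(rules, {}, 2)` derives `p` at every timepoint and `q` from timepoint 1 onward, never at timepoint 0: the delayed rule cannot fire before its body has had a chance to hold.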
## III Approach
In this section we detail our approach to using logic as a simulator and describe PyReason, our software implementation. Then we introduce new enhancements to PyReason, including the ability to interface with RL agents.
**Logic as a simulator for Reinforcement Learning.** Deep Reinforcement Learning (RL) algorithms typically require a simulator to learn an agent policy. However, traditional simulators have several drawbacks: limited speed and data efficiency, lack of explainability and modularity, and inextensibility without retraining. We propose annotated logic (implemented in PyReason) to address these issues and compare it with some well-established simulation environments.
**The PyReason software1 (Recap of prior work).** PyReason [4] offers a comprehensive and flexible framework for reasoning based on generalized annotated logic. It supports various extensions, including temporal, graphical, and uncertainty-related features, which enable the capture of a wide range of logics, such as fuzzy, real-valued, interval, and temporal logics.
Footnote 1: PyReason github: [https://github.com/lab-v2/pyreason](https://github.com/lab-v2/pyreason)
Built on modern Python, PyReason is specifically designed to handle graph-based data structures efficiently, making it compatible with data exported from popular graph databases like Neo4j and GraphML.
The core of PyReason lies in its rule-based reasoning, which enables handling uncertainty, open-world novelty, non-ground rules, quantification, and other diverse requirements seamlessly. The system remains agnostic to the selection of t-norm, providing flexibility in utilizing different logical connectives.
One of the key strengths of PyReason is its speed and machine-level optimized fixpoint-based deduction approach. This ensures efficient and scalable reasoning capabilities, even when dealing with large graphs with over 30 million edges. Consequently, PyReason facilitates explainable AI reasoning, providing valuable insights into the decision-making process and the logic behind reaching specific conclusions.
Our description of the world as a knowledge graph (KG) is notable as it adds support for applications where a policy must be learnt via reasoning over context-related KGs, such as [16]. Additionally, recent progress in developing knowledge graphs for probabilistic reasoning, as demonstrated by studies such as [17, 18, 19], highlights the potential role of our framework in a wide range of practical applications.
The logic-based approach used in PyReason is also inherently modular, allowing for independently trained or created components. Finally, the logic in PyReason can be extended simply by adding symbols to an existing logic program.
**Immediate Rules (New in this paper).** We introduce a feature called immediate rules. Immediate rules are applied immediately and make the program search for new applicable rules whose clauses might now be satisfied because of the immediate rule. Previously, it was impossible for two rules with the same \(\Delta t\) to influence each other. This is required when the shooting action (see Section IV) is brought into the picture, because multiple events occur with the same \(\Delta t\) but they are all interconnected. We note that this is possible without any extensions to annotated logic, as the temporal extensions we use (based on [4, 5, 13]) have
no requirement that two time units be uniformly separated in actual time.
**Implementation Improvements.** For this work, we also modified various aspects of PyReason to improve memory management particularly to better support the analysis of graphical structures representing geospatial areas as well as generally mature the software.
**Interfacing with an RL agent (New in this paper).** We introduce PyReason-gym2, an OpenAI Gym wrapper that allows easy interfacing with a grid world that uses PyReason as the simulation and dynamics engine. We use logical rules to dictate how the agents move through the grid world and how bullets and obstacles interact with the agents. RL agents can use our gym environment as a simulator: actions chosen by the agent's policy are processed, and the world state and rewards are returned by PyReason-gym. It can also output a trace of all events that happened and when they happened, as that is a core functionality of PyReason. PyReason-gym has several settings that allow it to be very efficient and consume a constant amount of memory.
Footnote 2: PyReason-gym github: [https://github.com/lab-v2/pyreason-gym](https://github.com/lab-v2/pyreason-gym)
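The interface can be pictured with a small gym-style sketch. The class name, action encoding, grid layout, and hand-coded dynamics below are our own illustration; the real PyReason-gym delegates the dynamics and shielding to PyReason's rules rather than the Python conditionals shown here.

```python
# Minimal gym-style sketch of a grid-war environment (illustrative only).

class GridWarEnv:
    # 0:up 1:down 2:left 3:right 4:stay
    ACTIONS = {0: (0, 1), 1: (0, -1), 2: (-1, 0), 3: (1, 0), 4: (0, 0)}

    def __init__(self, size=8, obstacles=frozenset()):
        self.size, self.obstacles = size, obstacles
        self.red_base, self.blue_base = (size - 1, 0), (0, size - 1)

    def reset(self):
        self.red, self.blue = self.red_base, self.blue_base
        return (self.red, self.blue)          # symbolic observation

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        nxt = (self.red[0] + dx, self.red[1] + dy)
        in_bounds = 0 <= nxt[0] < self.size and 0 <= nxt[1] < self.size
        if in_bounds and nxt not in self.obstacles:   # logical "shield"
            self.red = nxt
        won = self.red == self.blue_base
        reward = 250 if won else -2
        return (self.red, self.blue), reward, won, {}

env = GridWarEnv()
obs = env.reset()
```

Note how the shield is enforced inside `step`: an out-of-bounds or blocked move is simply not executed, mirroring the rule-based shielding discussed in Section IV.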
## IV Experimental Setup
In this section we introduce the two popular simulators we benchmark our approach against, outline the two game scenarios we use in our experiments, analyze the limitations of Markov assumptions and discuss the RL training methodology adopted.
**Popular Simulators.** To establish that PyReason is an appropriate simulator, we must first compare it to established simulators in the field. We choose two:
1. **Starcraft II** (SC2) is a popular real-time strategy (RTS) video game developed by Blizzard Entertainment and has a competitive multiplayer aspect that involves managing resources, building armies, and engaging in tactical battles. Due to its complex gameplay and emphasis on strategic decision-making, it has been considered as a potential tool for military simulations. We extended Deepmind's PySC2 [10] to utilize the Starcraft II environment in our experiments3.
Footnote 3: Extensions to PySC2: [https://github.com/lab-v2/pysc2-labv2](https://github.com/lab-v2/pysc2-labv2)
2. **Advanced Framework for Simulation, Integration, and Modeling software** (AFSIM) [11] is a powerful simulation tool used by the United States Department of Defense (DoD) for various purposes, including training, analysis, experimentation, and mission planning. AFSIM is developed by the Air Force Research Laboratory (AFRL) and is utilized primarily by the United States Air Force (USAF) as well as other branches of the military and defense organizations. AFSIM is a high-fidelity modeling and simulation software designed to provide realistic representations of aerial warfare scenarios and environments. It enables the USAF to assess and analyze the performance of various systems, strategies, and tactics in simulated combat situations.
In order to compare PyReason with SC2 and AFSIM, we design the scenarios and game dynamics in all three simulators.
**Game Setup.** We design a simple grid world war game as shown in Fig. 2. The basic scenario has two teams (red and blue) of one agent each. Each team has a base, and there are also a few obstacles (shown as mountains) in the environment which are impenetrable and impassable. For this base scenario, the objective of the game is to capture (reach) the rival base before the enemy can do the same. The red team follows our learnt RL policy (the agent(s)), whereas the blue team follows a pre-defined base policy (the opponent(s)) described later in this section. Later on we build upon this basic scenario by adding more agents and then extending the action and observation spaces.
**Comparison with baseline simulation environments.** Allowing the agents to take random actions in the grid world, we compare the scaling capability of our software against the other simulators by measuring runtime and memory utilization over a large number of actions for different numbers of agents per team.
Next, we wanted to verify whether a Reinforcement Learning (RL) agent trained in PyReason (PR) can provide performance comparable to AFSIM (AFS) and PySC2 (SC2). For this, we considered two cases: single agent and multiple (five) agents per team. At certain intervals during the training process, policies were extracted and used to play the base scenario described earlier 500 times in each of the three simulators (PyReason, AFSIM, and PySC2), and the outcomes were compared.
**Extending the action space with shooting in PyReason.** Some simulators (e.g., Starcraft II) do not separate movement and shooting (i.e., the agent always shoots when in line of sight with an enemy). This, however, is undesirable in any military simulation looking to emulate real battlefield scenarios. Strategies are often pragmatic, with shooting often limited and highly tactical; practical issues such as limited ammunition and avoiding exposure are important considerations here. Hence, we build upon the basic scenario by integrating shooting into PyReason, independent from movement actions - allowing RL agents to learn varied and in-depth strategies - and in the process ensuring our implementation fits our eventual goal of a faithful military simulation. For this advanced scenario, each agent is provided with three bullets, and at each timepoint it may choose to move, to shoot, or to take no action at all. Other than capturing the enemy base, a team can win by eliminating all enemy agents.
**Learning policies with RL.** Our approach is agnostic to any specific RL algorithm. Hence for this work we chose to use the widely popular and versatile Deep Q learning (DQN) algorithm [3] for all of our experiments. Based on a specific application or domain, a suitable algorithm can be seamlessly used in place of DQN. In our implementation, we combine a shallow Q-Net architecture with techniques discussed in [3] such as experience replay, stable learning and hard updates for target network. In our architecture we use one hidden layer between input and output layers; 64 state variables (one for each grid cell) and an action space of 5 (for base scenario) or
9 (for advanced scenario). Observation state space available to the agent was symbolic in nature and its size varied between experimental setups as follows:
(i) Four for a single agent in the base scenario: two each for the current positions of the agent and the opponent.
(ii) Seven for a single agent in the advanced scenario: one for the number of opponent bullets in the environment and two for the nearest bullet position, in addition to two each for the current positions of the agent and the opponent.
For multi-agent setups, the observation space is multiplied by the number of agents in each team. For the special non-Markovian setup described later, the observation space is doubled as observations from the previous timestep are also considered. For experiments in multi-agent environments we learn non-cooperative single-agent policies using multi-agent sampling. We use the widely adopted Smooth L1 loss function instead of the gradient clipping described in the seminal DQN work.
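The Smooth L1 choice can be made explicit. Below is the standard Huber-style definition written in plain Python for clarity (the `beta` threshold of 1.0 is the conventional default, an assumption on our part); deep-learning frameworks provide it as a built-in loss.

```python
# Smooth L1 (Huber) loss: quadratic near zero, linear in the tails,
# which bounds the gradient magnitude much like gradient clipping would.

def smooth_l1(pred, target, beta=1.0):
    diff = abs(pred - target)
    return 0.5 * diff ** 2 / beta if diff < beta else diff - 0.5 * beta

def batch_loss(preds, targets, beta=1.0):
    # Mean loss over a batch of (prediction, target) pairs.
    return sum(smooth_l1(p, t, beta) for p, t in zip(preds, targets)) / len(preds)
```

For example, a small error of 0.5 costs 0.125 (quadratic regime) while a large error of 3.0 costs only 2.5 (linear regime), so outlier TD errors do not dominate the update.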
We use the following reward function (rewards related to shooting actions only applicable for the advanced scenario):
(i) Terminal state rewards: +250 for a win, -250 for a loss, +400 for shooting an opponent, -200 for getting shot.
(ii) Non-terminal state rewards: -2 for a valid action, -200 for an unsafe or illegal action, -10 for an invalid action such as trying to shoot after exhausting ammunition.
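The reward schedule above can be collected into a single function; the event names and signature are our own labels for the cases listed in the text.

```python
# Reward function for the war game; shooting-related events apply only
# to the advanced scenario.

def reward(event):
    terminal = {"win": 250, "loss": -250,
                "shot_opponent": 400, "got_shot": -200}
    non_terminal = {"valid": -2,      # small step cost encourages short games
                    "unsafe": -200,   # e.g. moving into a mountain / off-map
                    "invalid": -10}   # e.g. shooting with no ammo left
    return terminal.get(event, non_terminal.get(event, 0))
```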
We define the behavior of the opponent using a stochastic base policy. At each timestep it tries to move closer to the enemy base by reducing the Manhattan distance with probability 0.7, or chooses a random action from the action space with probability 0.3. In the advanced scenario, shooting is prioritized over movement until ammo is exhausted.
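The base policy can be sketched as follows; the move encoding and helper names are ours, and the shooting-priority branch of the advanced scenario is omitted for brevity.

```python
# Stochastic base policy: greedy Manhattan step with prob. 0.7,
# uniformly random move with prob. 0.3.
import random

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def base_policy(pos, enemy_base, rng=random):
    if rng.random() < 0.7:
        # Greedy: pick the move minimizing distance to the enemy base.
        return min(MOVES, key=lambda m: manhattan(
            (pos[0] + MOVES[m][0], pos[1] + MOVES[m][1]), enemy_base))
    return rng.choice(list(MOVES))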
All RL policies described in this paper were learnt on a NVIDIA A100 GPU with 80GB memory and 40 cores of AMD EPYC 7413 with 378GB memory.
**Shielding in RL.** As discussed in Section I, we incorporate logic shielding within the reward function, as well as, the simulation environment itself. In the reward function, the agent is heavily penalized for taking an unsafe action, such as, trying to move through the mountains or choosing an action that takes it out of bounds of the map. While this approach encourages the agent to learn policies that avoid unsafe actions, it provides no guarantees. Adding shielding in the simulator itself ensures that even if the agent was to choose an unsafe action, our rule based environment dynamics can detect and stop the execution of such actions in runtime. Furthermore, we can leverage these dynamics to prevent illegal actions such as, shooting when ammo has already been exhausted.
**Exploring limitations of Markov assumption.** The Markov assumption in RL is the assumption that the future state of an agent only depends on its current state and action, and not on the history of states and actions that led to the current state. As this simplifies the problem and enables the use of techniques like Markov Decision Processes (MDPs) and the Bellman equation, many well established simulators make this assumption. However, many real-world environments are not truly Markovian. In some cases, the current state may not contain all the relevant information for decision-making. This is especially important for simulators replicating realistic military combat environments where various key factors like logistical support, conflict history, long-term intelligence data, patterns in surveillance reports - that go into tactical decision making are non-Markovian in nature.
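The observation doubling we use for the non-Markovian agent (described in the next paragraph) amounts to stacking the current symbolic state with the previous one; a minimal sketch, with zero-padding before the first step (the helper name is ours):

```python
# Concatenate the previous observation with the current one,
# doubling the observation space for the non-Markovian agent.

def stack_observations(history, obs_size):
    prev = history[-2] if len(history) >= 2 else [0] * obs_size
    return list(prev) + list(history[-1])
```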
PyReason does not make a Markov assumption, and we exhibit its capability by creating a simple experiment with non-Markovian dynamics. We consider a two-agents-per-team advanced scenario as described earlier. We modify one agent within each team, constraining its ability to execute actions to once every two timesteps, with the added stipulation that each of its movement actions requires two timesteps to complete. We learn to play the game in two different ways. In the first approach, the player adheres to a Markov assumption, utilizing solely the current state information. In the second approach, the player gains access not only to the present state data but also to observations from the preceding time step. We compare the success of the two methods by evaluating learnt policies over 500 games after every 32,000 training epochs.
## V Results
In this section, we present experimental results comparing our approach's scalability against Starcraft II and AFSIM, along with its ability to learn policies in PyReason and port them to other simulators. We then explore whether incorporating non-Markovian dynamics in the simulation can improve RL algorithms' ability to learn effective policies for complex games. Additionally, we demonstrate the explainability of our approach using a rule trace, highlighting its potential in reward shaping during training.
Fig. 2: Grid map for the scenario. Red (bottom-right) and Blue (top-left) squares are fixed base locations for each team. All agents start at their respective base locations. Obstacles (mountains) are shown with black triangles. The bottom-left quadrant of the grid map is marked with indices, to aid the understanding of the explainable trace in Table III.
**Scalability.** Figure 3 shows the scaling capability of the different simulators tested. The experiments were performed on an AWS EC2 container with 96 vCPUs (48 cores) and 384GB memory. We noted that, among the two established simulation environments, AFSIM generally performed better with 5 agents per team. With 20 agents per team, AFSIM is overtaken by SC2 as the actions per agent increase. This would be expected, as AFSIM is designed as a high-fidelity simulation environment, so we would expect greater computational cost in more complex situations. PyReason consistently outperformed SC2, achieving anywhere from one to nearly three orders of magnitude improvement. Though PyReason performs comparably to AFSIM for lower actions per agent (which are arguably the least important in practice), it achieved similar multiple-order-of-magnitude runtime improvements as the number of actions per agent increased. This suggests that PyReason will scale to large environments where the traditional use of simulators would otherwise prohibit model training.
Additionally, we examined memory consumption (Figure 3). PyReason uses considerably less memory than SC2 over all configurations while having sub-linear (\(R^{2}=.84\)) growth with action and agent space. AFSIM's strength as a large-scale military simulator is shown here, with little effect on memory consumption as agents or actions change; however, it has a large base memory cost which was still significantly higher than that of PyReason for the largest case considered (40,000 actions in total).
**Portability.** When policies learnt in PyReason played the base scenario, comparable numbers were observed for all three simulators as shown in Table I. Variance can be attributed to inherent randomness in learnt policies. These results suggest that the approach is generalizable as an agent trained in PyReason can be ported to various simulation environments and achieve comparable reward and win percentage.
**Non-Markovian Dynamics.** Evolution of the performance of policies learnt with and without the Markov assumption is shown in Fig. 4. Both agents underwent training for a duration of up to 1.6 million epochs, with policy evaluations conducted at intervals of 32,000 epochs. Each policy was used to play the advanced scenario 500 times to obtain a win percentage. Evaluations were carried out on 48 cores of AMD EPYC 7413 with 378GB memory. Markovian policies obtained a peak performance of 59%, significantly lower than 85% achieved by the non-Markovian policies. However we observe that policies learnt in Markovian setting attained decent performance with noticeably less training, which is unsurprising given the doubling of the observation space in the non-Markovian case. When examining the most effective policy within each category, the removal of the Markov assumption resulted in an increase in the average number of actions per agent required to secure a single victory, rising from 15.51 to 18.01. This observation suggests the acquisition of a policy characterized by greater complexity, yet one that exhibits enhanced reliability. Despite the relative simplicity of our experiment, a noteworthy performance enhancement was observed. This underscores the essentiality and significance of accommodating non-Markovian dynamics within simulation environments.
**Explainability.** The rule trace is a direct result of the semantic structure of logic. This makes our approach completely explainable, allows the user to understand system behavior, and helps in debugging errors.
Two examples of how we leveraged this to improve the reward function given in Section IV are:
(i) Initially we had set the penalty for getting shot at 400. However, from rule traces we observed that the agent was learning to prioritize hiding behind impenetrable mountains and take a safety-first approach, instead of trying to win the game. Halving the penalty to 200 produced a more balanced policy.
(ii) The penalty for trying to shoot after exhausting ammunition was set to a minor value of 10 after observing that higher values led to the agent avoiding shooting altogether.
An excerpt of a rule trace is shown in Table III. The excerpt begins at timestep 16 of one of our experiments. Initial conditions are as depicted in Fig. 2. 'R' and 'B' respectively show the locations of the red and blue agents at the beginning of this example. As the red agent moves downward from its starting location (from '24' to '0' through '16' and '8'), the blue agent decides to shoot to the left so as to intercept red (at '0'). However, red has seemingly learnt to predict the bullet path and evade it, so it backtracks (to '16').
Rule **m_Down_on** presented in Table II is fired at timestep 16 (and also, 17, 18) and is pictorially shown with a red arrow in Fig. 2 and in bold in Table III.
## VI Related Work
The lifelong learning AI suggested in [20] starts from a hand-crafted knowledge base in the form of symbolic rules and then employs deep learning techniques to grow its knowledge base through experience. Due to the cost, risk, reliability, and availability of real-life data, such experience is often gained using simulators. PyReason, designed to support logically defined environments, qualifies as an ideal candidate for emerging AI agents of this kind. It is to be noted that temporal logic programming is different from temporal logic. The main difference is that temporal logic (typically) relies on an MDP as the underlying structure, and the rules are just used
\begin{table}
\begin{tabular}{l l l} \hline \hline Rule Identifier & Rule & Natural Language \\ \hline m\_Down\_on & \(moveDown(A):[1,1]\leftarrow_{\Delta t=0}\;agent(A):[1,1]\land moveDir(A,down):[1,1]\land atLoc(A,X):[1,1]\land downLoc(X,Y):[1,1]\land blocked(Y):[0,0]\) & If \(A\) is an agent (annotated \([1,1]\)) at location \(X\) and chooses to move in the downward direction to \(Y\), which is not blocked, then the label \(moveDown(A)\) is updated to \([1,1]\). \\ s\_Left\_on & \(shootLeftB(A):[1,1]\leftarrow_{\Delta t=0}\;agent(A):[1,1]\land team(A,blue):[1,1]\land health(A):[0.1,1]\land ammo(A):[0.1,1]\land shootLeft(A):[1,1]\) & If \(A\) is an agent of the blue team and chooses to shoot left, then the label \(shootLeftB(A)\) is updated to \([1,1]\) iff \(A\) has non-zero health and remaining ammo. \\ \hline \hline \end{tabular}
\end{table} TABLE II: Example rules in first order logic and descriptions in natural language.
Fig. 4: Win percentage for policies learnt with Markovian and non-Markovian dynamics.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline t & Node/Edge & Label & Old Bound & New Bound & Rule fired \\ \hline
0 & 26 & blocked & [0.0,1.0] & [1.0,1.0] & - \\
0 & 27 & blocked & [0.0,1.0] & [1.0,1.0] & - \\ \hline
**16** & **red-agent-1** & **moveDown** & **[0.0,0.0]** & **[1.0,1.0]** & **m\_Down\_on** \\ \hline
17 & red-agent-1 & moveDown & [1.0,1.0] & [0.0,0.0] & m\_Down\_off \\
17 & (red-agent-1,16) & atLoc & [0.0,1.0] & [1.0,1.0] & m\_Set\_location \\
17 & (red-agent-1,24) & atLoc & [1.0,1.0] & [0.0,0.0] & m\_Rem\_location \\
17 & red-agent-1 & moveDown & [0.0,0.0] & [1.0,1.0] & m\_Down\_on \\ \hline
18 & red-agent-1 & moveDown & [1.0,1.0] & [0.0,0.0] & m\_Down\_off \\
18 & (red-agent-1,8) & atLoc & [0.0,1.0] & [1.0,1.0] & m\_Set\_location \\
18 & (red-agent-1,16) & atLoc & [1.0,1.0] & [0.0,0.0] & m\_Rem\_location \\
18 & blue-agent-1 & shootLeftB & [0.0,1.0] & [1.0,1.0] & s\_Left\_on \\
18 & (blue-bullet-1,3) & atLoc & [0.0,1.0] & [1.0,1.0] & s\_Set\_location \\
18 & (blue-bullet-1,left) & direction & [0.0,1.0] & [1.0,1.0] & s\_Set\_dir \\
18 & red-agent-1 & moveDown & [0.0,0.0] & [1.0,1.0] & m\_Down\_on \\ \hline
19 & red-agent-1 & moveDown & [1.0,1.0] & [0.0,0.0] & m\_Down\_off \\
19 & (red-agent-1,0) & atLoc & [0.0,1.0] & [1.0,1.0] & m\_Set\_location \\
19 & (red-agent-1,8) & atLoc & [1.0,1.0] & [0.0,0.0] & m\_Rem\_location \\
19 & blue-agent-1 & shootLeftB & [1.0,1.0] & [0.0,0.0] & s\_Left\_off \\
19 & (blue-bullet-1,3) & atLoc & [1.0,1.0] & [0.0,0.0] & s\_Rem\_location \\
19 & (blue-bullet-1,2) & atLoc & [0.0,1.0] & [1.0,1.0] & s\_Set\_location \\
19 & red-agent-1 & moveUp & [0.0,0.0] & [1.0,1.0] & m\_Up\_on \\ \hline
20 & red-agent-1 & moveUp & [1.0,1.0] & [0.0,0.0] & m\_Up\_off \\
20 & (red-agent-1,8) & atLoc & [0.0,0.0] & [1.0,1.0] & m\_Set\_location \\
20 & (red-agent-1,0) & atLoc & [1.0,1.0] & [0.0,0.0] & m\_Rem\_location \\
20 & (blue-bullet-1,2) & atLoc & [1.0,1.0] & [0.0,0.0] & s\_Rem\_location \\
20 & (blue-bullet-1,1) & atLoc & [0.0,1.0] & [1.0,1.0] & s\_Set\_location \\
20 & red-agent-1 & moveUp & [0.0,0.0] & [1.0,1.0] & m\_Up\_on \\ \hline
21 & red-agent-1 & moveUp & [1.0,1.0] & [0.0,0.0] & m\_Up\_off \\
21 & (red-agent-1,16) & atLoc & [0.0,0.0] & [1.0,1.0] & m\_Set\_location \\
21 & (red-agent-1,8) & atLoc & [1.0,1.0] & [0.0,0.0] & m\_Rem\_location \\
21 & (blue-bullet-1,1) & atLoc & [1.0,1.0] & [0.0,0.0] & s\_Rem\_location \\
21 & (blue-bullet-1,0) & atLoc & [0.0,1.0] & [1.0,1.0] & s\_Set\_location \\ \hline \hline \end{tabular}
\end{table} TABLE III: An extract of a rule trace produced by the PyReason software.
for specification checking (shielding can be viewed as an application of this). We use temporal logic programming [21], which is the notion of a collection of temporal logic rules, to specify the environmental dynamics. Another thing to note is that portability and transfer are different. Transfer learning in RL [22] involves leveraging knowledge gained from one task or environment to improve learning and performance on a related but different task or environment. What we show here is portability, whereby we leverage a fast, scalable simulation environment in PyReason to learn policies which are then used for an identical task in a slower simulation environment in which carrying out the same number of training epochs would have been prohibitively slow. Although the slower simulator models the same environment, it may lack several of PyReason's capabilities, such as explainability and logical shielding. Like our approach, hierarchical reinforcement learning (HRL) [23] also offers a semantic coarsening to improve agent performance. However, unlike our approach, HRL coarsens the action space by creating a hierarchy, whereas our approach coarsens the environment itself. As our approach is agnostic to the RL training regime, HRL and our approach are complementary and represent a promising avenue for future research.
## VII Conclusions and Future Work
In this paper we presented a logic-based semantic proxy for the simulator in an RL pipeline. We attained significant speedup while providing comparable agent performance. While the policy produced by our approach can be considered as a set of rules, the rule bodies consist of all ground atoms - hence we seek to leverage frameworks such as [25] to produce more compact policy rules. Another area of exploration is the use of this framework to identify issues relating to the sim-to-real gap. Finally, the description of the environment using natural language is also an area that can be explored due to recent advances in translating natural language to temporal logic formulas using LLMs [26].
## Acknowledgments
Some of the authors were funded by the Arizona New Economic Initiative MADE STC as well as funding from SSCI.
---

# Multivariate Singular Spectrum Analysis by Robust Diagonalwise Low-Rank Approximation

Fabio Centofanti, Mia Hubert, Biagio Palumbo, Peter J. Rousseeuw

Published 2023-10-02. [http://arxiv.org/abs/2310.01182v1](http://arxiv.org/abs/2310.01182v1)
###### Abstract
Multivariate Singular Spectrum Analysis (MSSA) is a powerful and widely used nonparametric method for multivariate time series, which allows the analysis of complex temporal data from diverse fields such as finance, healthcare, ecology, and engineering. However, MSSA lacks robustness against outliers because it relies on the singular value decomposition, which is very sensitive to the presence of anomalous values. MSSA can then give biased results and lead to erroneous conclusions. In this paper a new MSSA method is proposed, named _RObust Diagonalwise Estimation of SSA_ (RODESSA), which is robust against the presence of cellwise and casewise outliers. In particular, the decomposition step of MSSA is replaced by a new robust low-rank approximation of the trajectory matrix that takes its special structure into account. A fast algorithm is constructed, and it is proved that each iteration step decreases the objective function. In order to visualize different types of outliers, a new graphical display is introduced, called an enhanced time series plot. An extensive Monte Carlo simulation study is performed to compare RODESSA with competing approaches in the literature. A real data example about temperature analysis in passenger railway vehicles demonstrates the practical utility of the proposed approach.
_Keywords:_ Casewise outliers; Cellwise outliers; Iteratively reweighted least squares; Multivariate time series; Robust statistics.
Footnote 1: Department of Industrial Engineering, University of Naples Federico II, Naples, Italy, [email protected]

Footnote 2: Section of Statistics and Data Science, Department of Mathematics, KU Leuven, Belgium, {mia.hubert,peter.rousseeuw}@kuleuven.be
## 1 Introduction
Time series analysis plays a crucial role in understanding and predicting the behavior of sequential data across a wide range of disciplines. It has proven to be an invaluable tool in fields such as finance, healthcare, ecology, engineering, and more. Singular spectrum analysis (SSA) has emerged as a powerful nonparametric tool for extracting valuable insights from time-dependent data. A comprehensive overview of SSA can be found in the books Golyandina et al. (2001), Golyandina and Zhigljavsky (2013), and Golyandina et al. (2018). Numerous examples showcasing the success of SSA can be found in the literature. Multivariate singular spectrum analysis, referred to as MSSA (Broomhead and King, 1986), enables the simultaneous analysis and interpretation of multiple time series, by exploiting the dependencies between variables.
A crucial step of SSA is the low-rank approximation of the so-called trajectory matrix, which will be described in the next section. The most often used tool for this is the singular value decomposition (SVD), which is however sensitive to the presence of outliers in the data. Several authors have proposed outlier-robust SSA methods by replacing the SVD by more robust versions. However, none of these low-rank approximations took the special diagonal structure of the trajectory matrix into account. The main contribution of our work is a new robust low-rank approximation method tailored to this situation.
The paper is organized as follows. Section 2 briefly surveys existing work and introduces the new approach, called _RObust Diagonalwise Estimation of SSA_ (RODESSA), including its algorithm, implementation, and forecasting method. It is proved that each step of the algorithm reduces the objective function. Section 3 proposes an enhanced time series plot in which two types of outliers are represented by colors, in order to assist with outlier detection. In Section 4, the performance of RODESSA is assessed by an extensive Monte Carlo simulation study. Section 5 presents a real data example regarding temperature analysis in passenger railway vehicles. Section 6 concludes the paper.
## 2 Multivariate singular spectrum analysis
### 2.1 Classical multivariate SSA
Consider a \(p\)-variate time series \(\mathbb{X}=\left(\mathbb{X}^{(1)},\ldots,\mathbb{X}^{(p)}\right)\), i.e., a collection \(\left\{\mathbb{X}^{(j)}=(x_{i}^{(j)})_{i=1}^{N}\,,\,j=1,\ldots,p\right\}\) of \(p\) time series of length \(N\). Multivariate SSA then proceeds by the following four successive steps.
**1. Embedding.** In the embedding step, the multivariate time series \(\mathbb{X}\) is mapped into a big trajectory matrix \(\boldsymbol{X}\). Let \(L\) be an integer called _window length_, \(1<L<N\). For each time series \(\mathbb{X}^{(j)}\) we then form \(K_{u}=N-L+1\) lagged vectors \(X_{i}^{(j)}=\left(x_{i}^{(j)},\ldots,x_{i+L-1}^{(j)}\right)^{T}\) for \(1\leqslant i\leqslant K_{u}\). The trajectory matrix of the multivariate series \(\mathbb{X}\) is a matrix of size \(L\times K\) with \(K=pK_{u}\,,\) and has the form
\[\mathcal{T}_{\text{MSSA}}(\mathbb{X})=\boldsymbol{X} =\left[X_{1}^{(1)}:\ldots:X_{K_{u}}^{(1)}:\ldots:X_{1}^{(p)}: \ldots:X_{K_{u}}^{(p)}\right]=\left[\boldsymbol{X}^{(1)}:\ldots:\boldsymbol{ X}^{(p)}\right] \tag{1}\] \[=\begin{bmatrix}x_{1}^{(1)}&x_{2}^{(1)}&x_{3}^{(1)}&\ldots\\ x_{2}^{(1)}&x_{3}^{(1)}&x_{4}^{(1)}&\ldots\\ x_{3}^{(1)}&x_{4}^{(1)}&x_{5}^{(1)}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix}\begin{bmatrix}x_{1}^{(2)}&x_{2}^{(2) }&x_{3}^{(2)}&\ldots\\ x_{2}^{(2)}&x_{3}^{(2)}&x_{4}^{(2)}&\ldots\\ x_{3}^{(2)}&x_{4}^{(2)}&x_{5}^{(2)}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix}\begin{bmatrix}x_{1}^{(3)}&x_{2}^{(3) }&x_{3}^{(3)}&\ldots\\ x_{2}^{(3)}&x_{3}^{(3)}&x_{4}^{(3)}&\ldots\\ x_{3}^{(3)}&x_{4}^{(3)}&x_{5}^{(3)}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix}\]
where \(\boldsymbol{X}^{(j)}=\mathcal{T}_{\text{SSA}}\left(\mathbb{X}^{(j)}\right)= \left[X_{1}^{(j)}:\ldots:X_{K_{u}}^{(j)}\right]\) is the trajectory matrix of the one-dimensional series \(\mathbb{X}^{(j)}\). Note the (anti-)diagonal structure of each \(\boldsymbol{X}^{(j)}\), making it a so-called _Hankel matrix_. The entire matrix \(\boldsymbol{X}\) is thus a _stacked Hankel matrix_. The notations \(\mathcal{T}_{\text{SSA}}\,\) and \(\mathcal{T}_{\text{MSSA}}\,\) stand for the univariate and multivariate embedding operators that map \(\mathbb{X}^{(j)}\) and \(\mathbb{X}\) to the corresponding trajectory matrices.
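The embedding step can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the function names `hankel_trajectory` and `mssa_trajectory` are ours.

```python
import numpy as np

def hankel_trajectory(x, L):
    """Univariate embedding T_SSA: a series of length N becomes an
    L x K_u Hankel matrix with K_u = N - L + 1 lagged columns."""
    x = np.asarray(x, dtype=float)
    K_u = len(x) - L + 1
    # Column i is the lagged vector (x_i, ..., x_{i+L-1}).
    return np.column_stack([x[i:i + L] for i in range(K_u)])

def mssa_trajectory(X_series, L):
    """Multivariate embedding T_MSSA: stack the p univariate Hankel
    matrices side by side, giving the L x pK_u trajectory matrix of (1)."""
    return np.hstack([hankel_trajectory(x, L) for x in X_series])
```

The Hankel structure means every anti-diagonal of each block is constant, which is exactly the structure the later robust fitting step will exploit.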
**2. Decomposition.** This step performs the SVD of the trajectory matrix \(\boldsymbol{X}\), yielding
\[\boldsymbol{X}=\sum_{r=1}^{d}\beta_{r}\boldsymbol{u}_{r}\boldsymbol{v}_{r}^{T} \tag{2}\]
where \(d=\text{rank}(\boldsymbol{X})\), and \(\boldsymbol{u}_{1},\ldots,\boldsymbol{u}_{d}\) and \(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{d}\) are the left and right singular vectors. The ordered singular values are \(\beta_{1}\geqslant\cdots\geqslant\beta_{d}>0\), and the matrices \(\boldsymbol{X}_{r}=\beta_{r}\boldsymbol{u}_{r}\boldsymbol{v}_{r}^{T}\) all have rank 1. The triple \((\beta_{r},\boldsymbol{u}_{r},\boldsymbol{v}_{r})\) is called the \(r\)-th eigentriple of the matrix \(\boldsymbol{X}\).
**3. Grouping.** The grouping step corresponds to splitting the terms of (2) into several disjoint groups and summing the matrices within each group. In this paper we focus on the case where two main groups are created, and we write
\[\mathbf{X}=\widehat{\mathbf{X}_{q}}+\mathbf{R} \tag{3}\]
where the estimate of the signal \(\widehat{\mathbf{X}_{q}}\) is a sum like (2) but only for the first \(q\) eigentriples, whereas the residual matrix \(\mathbf{R}=\mathbf{X}-\widehat{\mathbf{X}_{q}}\) is associated with the noise. We can see \(\widehat{\mathbf{X}_{q}}\) as a low-rank approximation of the trajectory matrix.
**4. Reconstruction.** In this step, the fitted matrix \(\widehat{\mathbf{X}_{q}}\) is transformed back to the form of the input object \(\mathbb{X}\) in (1). In each submatrix \(\widehat{\mathbf{X}_{q}}^{(j)}\) we compute average entries as follows. Denote an anti-diagonal as \(A_{i}=\{(l,k):l+k=i+1,1\leqslant l\leqslant L,1\leqslant k\leqslant K_{u}\}\) with its cardinality \(n_{i}=|A_{i}|\). Each matrix \(\widehat{\mathbf{X}_{q}}^{(j)}=(\widehat{X}_{lk}^{(j)})_{lk}\) is turned into a new series \(\widehat{\mathbb{X}}^{(j)}=(\hat{x_{i}}^{(j)})_{i=1,\ldots,N}\) of length \(N\), with
\[\hat{x}_{i}^{(j)}:=\frac{1}{n_{i}}\sum_{(l,k)\in A_{i}}\widehat{X}_{lk}^{(j)}. \tag{4}\]
The reconstructed multivariate time series is then given by \(\widehat{\mathbb{X}}=\left(\widehat{\mathbb{X}}^{(1)},\ldots,\widehat{ \mathbb{X}}^{(p)}\right)\).
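Putting the four steps together, a minimal classical (nonrobust) MSSA pipeline might look as follows. This is a sketch under our own naming, with the SVD truncated at rank \(q\) as in (2)-(3) and the diagonal averaging of (4).

```python
import numpy as np

def diagonal_average(B):
    """Reconstruction step (4): average each anti-diagonal of an L x K_u
    block, returning a series of length N = L + K_u - 1."""
    L, K_u = B.shape
    out = np.zeros(L + K_u - 1)
    cnt = np.zeros(L + K_u - 1)
    for l in range(L):
        out[l:l + K_u] += B[l]   # row l contributes to anti-diagonals l..l+K_u-1
        cnt[l:l + K_u] += 1
    return out / cnt

def classical_mssa(X_series, L, q):
    """Steps 1-4 of classical MSSA: embed, truncate the SVD at rank q,
    and reconstruct each series by diagonal averaging."""
    p = len(X_series)
    blocks = [np.column_stack([np.asarray(x, float)[i:i + L]
                               for i in range(len(x) - L + 1)]) for x in X_series]
    X = np.hstack(blocks)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xq = (U[:, :q] * s[:q]) @ Vt[:q]      # low-rank approximation of (3)
    K_u = X.shape[1] // p
    return [diagonal_average(Xq[:, j * K_u:(j + 1) * K_u]) for j in range(p)]
```

For a multivariate series whose trajectory matrix is exactly of rank \(q\), this reconstruction returns the input unchanged.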
### 2.2 Outliers in multivariate time series
Multivariate time series may contain outliers, that is, observations that deviate from the expected patterns or trends. They can be caused by a variety of factors such as measurement errors, data entry mistakes, sensor malfunctions, or rare and unexpected events. Outliers can bias statistical measures, affect parameter estimation, and lead to inaccurate forecasting, resulting in erroneous conclusions and flawed decision-making. As in the multivariate setting (Alqallaf et al., 2009; Raymaekers and Rousseeuw, 2023), we distinguish between a _cellwise outlier_, which is an outlying value \(x_{i}^{(j)}\) in the \(j\)-th univariate time series only, and a _casewise outlier_, where at a time \(i\) several or all of the values \(x_{i}^{(j)}\) deviate simultaneously.
Figure 1 illustrates this for a 3-variate time series. The purple triangle is a cellwise outlier which affects only the first univariate time series at time \(i=3\), whereas the orange squares indicate a casewise outlier at time point \(i=5\). The effect of both types of outliers on the trajectory matrix (1) is shown in the bottom part of the figure. The cellwise outlier corresponds to an antidiagonal in the leftmost Hankel matrix only, whereas the casewise outlier affects all three stacked Hankel matrices.
### 2.3 Existing robust SSA methods
Classical MSSA, and its univariate version SSA, are sensitive to outliers because they are based on the SVD which is highly susceptible to outlying values. In order to remedy this, there has been research into the construction of outlier-robust SSA methods. In general, the purpose of robust methods is to limit the effect of outliers on the results, after which the outliers can be detected by their residuals from the robust fit, see e.g. Rousseeuw and Leroy (1987), Maronna et al. (2019).
When constructing a more robust version of SSA one needs to replace the classical SVD by something less sensitive to outliers. One approach is to start from an outlier-robust principal
Figure 1: The effect of a cellwise outlier (purple triangle) and a casewise outlier (orange squares) on the diagonals of the trajectory matrix \(\mathbf{X}\).
component analysis (PCA) method. A PCA fit of rank \(q\) is of the type
\[\widehat{\mathbf{X}}=\mathbf{1}_{n}\mathbf{\hat{\mu}}^{T}+\mathbf{U}\mathbf{V}^{T} \tag{5}\]
where \(\mathbf{\hat{\mu}}\) is the estimated center, and the matrix of scores \(\mathbf{U}\) as well as the loadings matrix \(\mathbf{V}\) have \(q\) columns. Many PCA methods have an option to set \(\mathbf{\hat{\mu}}=\mathbf{0}\), and then (5) yields an approximation of rank \(q\) to \(\mathbf{X}\) as in (3). Afterward one can carry out the reconstruction step of SSA. De Klerk (2015) applied the robust PCA method ROBPCA (Hubert et al., 2005) to the trajectory matrix, and then centered the original data \(\mathbf{X}\) as well as \(\widehat{\mathbf{X}}\) by subtracting \(\mathbf{1}_{n}\mathbf{\hat{\mu}}^{T}\). In this way he obtained a robust centered SSA method.
A limitation of this approach is that ROBPCA and most other robust PCA methods are built to withstand outlying rows of the data matrix, but not outlying cells. But we have seen in the bottom part of Figure 1 that a single outlying value in a time series can affect many cells of the trajectory matrix \(\mathbf{X}\), especially when the window length \(L\) is high. Therefore a relatively small number of outliers in the time series can affect over half the rows of \(\mathbf{X}\), which ROBPCA might not withstand. In such situations a cellwise robust PCA method like MacroPCA (Hubert et al., 2019) could be used instead.
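A generic rank-\(q\) PCA fit of the form (5) can be sketched as follows. For illustration we use the classical (nonrobust) SVD of the centered data; a robust method such as ROBPCA or MacroPCA would replace that decomposition, and the function name `pca_fit` is ours.

```python
import numpy as np

def pca_fit(X, q):
    """Rank-q PCA fit of the form (5): X_hat = 1 mu^T + U V^T,
    here with mu the column means and (U, V) from the classical SVD
    of the centered data matrix."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu + (U[:, :q] * s[:q]) @ Vt[:q]
```

Setting \(\hat{\boldsymbol{\mu}}=\mathbf{0}\) instead (i.e., skipping the centering) would yield a rank-\(q\) approximation of \(\boldsymbol{X}\) itself, as in (3).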
In the nonrobust setting, expression (3) can equivalently be seen as a problem of low-rank approximation of the trajectory matrix \(\mathbf{X}\) by a matrix \(\widehat{\mathbf{X}}_{L^{2}}\), which minimizes
\[||\mathbf{X}-\mathbf{S}||_{F}^{2}\]
over all \(L\times K\) matrices \(\mathbf{S}\) of rank \(q\). Writing the unknown \(\mathbf{S}\) as a product \(\mathbf{S}=\mathbf{U}\mathbf{V}^{T}\), where \(\mathbf{U}=[\mathbf{u}_{1},\ldots,\mathbf{u}_{q}]=\left(\left[\mathbf{u}^{1},\ldots,\mathbf{u}^{L }\right]\right)^{T}\) is \(L\times q\) with \(\mathbf{u}_{i}=\left(u_{i1},\ldots,u_{iL}\right)^{T}\), and \(\mathbf{V}=\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{q}\right]=\left(\left[\mathbf{v}^{1}, \ldots,\mathbf{v}^{K}\right]\right)^{T}\) is \(K\times q\) with \(\mathbf{v}_{i}=\left(v_{i1},\ldots,v_{iK}\right)^{T}\), this is equivalent to minimizing
\[||\mathbf{X}-\mathbf{U}\mathbf{V}^{T}||_{F}^{2}=\sum_{\ell=1}^{L}\sum_{k=1}^{K}\left(X_{ \ell k}-\sum_{r=1}^{q}u_{\ell r}v_{kr}\right)^{2}=\sum_{\ell=1}^{L}\sum_{k=1} ^{K}R_{\ell k}^{2} \tag{6}\]
with the residuals \(R_{\ell k}:=X_{\ell k}-\sum_{r=1}^{q}u_{\ell r}v_{kr}\). We then put \(\widehat{\mathbf{X}}_{L^{2}}:=\widehat{\mathbf{U}}_{L^{2}}\widehat{\mathbf{V}}_{L^{2}}^{T}\). The solution of the optimization (6) is easily obtained from the SVD \(\mathbf{X}=\widetilde{\mathbf{U}}\mathbf{D}\widetilde{\mathbf{V}}^{T}\) where \(\mathbf{D}\) is the diagonal matrix of singular values. Restricting this to the \(q\) leading singular values \(\beta_{1}\geqslant\ldots\geqslant\beta_{q}>0\) we obtain \(\widehat{\mathbf{X}}_{L^{2}}=\widetilde{\mathbf{U}}_{q}\mathbf{D}_{q}\widetilde{\mathbf{V}}_{q}^{T}\). We can then absorb the singular values by putting \(\widehat{\mathbf{U}}_{L^{2}}=\widetilde{\mathbf{U}}_{q}\mathbf{D}_{q}^{1/2}\)
and \(\widehat{\mathbf{V}}_{L^{2}}=\widetilde{\mathbf{V}}_{q}\mathbf{D}_{q}^{1/2}\), so that indeed \(\widehat{\mathbf{X}}_{L^{2}}=\widehat{\mathbf{U}}_{L^{2}}\widehat{\mathbf{V}}_{L^{2}}^{T}\). But the quadratic loss function in (6) makes this a least squares fit, which is very sensitive to outliers.
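The absorption of the singular values into the two factors is compact in code; this sketch uses our own function name `svd_factors`.

```python
import numpy as np

def svd_factors(X, q):
    """Least squares rank-q factors of (6): absorb the singular values
    symmetrically, U_L2 = U_q D_q^{1/2} and V_L2 = V_q D_q^{1/2},
    so that U_L2 @ V_L2.T is the truncated-SVD fit."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = np.sqrt(s[:q])
    return U[:, :q] * d, Vt[:q].T * d
```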
To remedy this, De la Torre and Black (2003) proposed a more robust low-rank approximation by replacing (6) by the minimization of the loss function
\[L_{\rho}\left(\mathbf{X}-\mathbf{U}\mathbf{V}^{T}\right):=\sum_{\ell=1}^{L}\sum_{k=1}^{K} \rho\left(\frac{X_{\ell k}-\sum_{r=1}^{q}u_{\ell r}v_{kr}}{\hat{\sigma}}\right) =\sum_{\ell=1}^{L}\sum_{k=1}^{K}\rho\left(\frac{R_{\ell k}}{\hat{ \sigma}}\right), \tag{7}\]
where \(\hat{\sigma}\) is a fixed scale estimate. The function \(\rho\) must be continuous, even, non-decreasing for positive arguments, and satisfy \(\rho(0)=0\). De la Torre and Black (2003) used
\[\rho(t)=\frac{t^{2}}{t^{2}+1}\]
that goes to \(1\) as \(t\to\infty\). Therefore an outlying \(t\) has much less effect on \(\rho(t)\) than with the \(\rho(t)=t^{2}\) in (6). They constructed an iterative algorithm for \(\widehat{\mathbf{U}}\widehat{\mathbf{V}}^{T}\). For \(\hat{\sigma}\) they took the median absolute deviation of the residuals from an initial estimate \(\mathbf{U}_{0}\mathbf{V}_{0}^{T}\).
Note that here and in the sequel the estimation target is not the pair \((\mathbf{U},\mathbf{V})\) because that is not uniquely defined. Indeed, if we take a nonsingular \(q\times q\) matrix \(\mathbf{A}\) we see that \((\mathbf{U}\mathbf{A},\mathbf{V}(\mathbf{A}^{-1})^{T})\) yields the same product \(\mathbf{U}\mathbf{A}\mathbf{A}^{-1}\mathbf{V}^{T}=\mathbf{U}\mathbf{V}^{T}\) as \((\mathbf{U},\mathbf{V})\), and hence the same objective (7). The actual estimation target is the product \(\mathbf{U}\mathbf{V}^{T}\).
Chen and Sacchi (2015) applied this general low-rank approximation method to the decomposition step of SSA. They replaced the function \(\rho\) by Tukey's biweight function
\[\rho_{c}(t)=\begin{cases}1-\left(1-\frac{t^{2}}{c^{2}}\right)^{3}&|t|\leqslant c \\ 1&|t|>c\end{cases} \tag{8}\]
with tuning constant \(c=4.685\), and used the method to filter seismic noise.
Rodrigues et al. (2018) also constructed a robust SSA from the low-rank approximation method of De la Torre and Black (2003), but replaced the function \(\rho\) by \(\rho(t)=|t|\) as in Croux et al. (2003), yielding an \(L^{1}\) objective. An advantage of the latter \(\rho\) function is that \(\hat{\sigma}\) can be moved out of (7), so one does not need to estimate \(\sigma\) in advance. In subsequent work, Rodrigues et al. (2020) carried out robust SSA based on the low-rank approximation of Zhang et al. (2013) which used the Huber function
\[\rho_{b}(t)=\frac{t^{2}}{2}I(|t|\leqslant b)+\left(b|t|-\frac{b^{2}}{2}\right) I(|t|>b) \tag{9}\]
with \(b=1.345\). Cheng et al. (2015) performed a robust SSA analysis by applying a different robust low-rank approximation method due to Candes et al. (2011).
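For reference, the two bounded loss functions (8) and (9) can be implemented directly, with the tuning constants quoted above as defaults. The function names are ours.

```python
import numpy as np

def tukey_biweight(t, c=4.685):
    """Tukey's biweight rho_c of (8): rises smoothly from 0 and is
    capped at 1 for |t| > c, so distant outliers have bounded influence."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= c, 1.0 - (1.0 - (t / c) ** 2) ** 3, 1.0)

def huber_rho(t, b=1.345):
    """Huber's rho_b of (9): quadratic for |t| <= b, linear beyond,
    so large residuals grow only linearly instead of quadratically."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= b, t ** 2 / 2, b * np.abs(t) - b ** 2 / 2)
```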
### 2.4 The RODESSA objective function
The existing robust low-rank approximation methods described above are less sensitive to outliers than the classical SVD. But none of them are tailored to the (possibly stacked) Hankel structure of the trajectory matrix \(\mathbf{X}\). As we saw in the bottom part of Figure 1, an outlier in one of the time series corresponds to an entire diagonal of the corresponding trajectory matrix. In the least squares low-rank approximation setting, algorithms have been developed that incorporate the Hankel structure of \(\mathbf{X}\)(Markovsky, 2008), but no such approach exists in the robust setting yet.
To fill this gap we propose a new robust method for MSSA, named RObust Diagonalwise Estimation of SSA (RODESSA), which explicitly takes into account the way outliers and their residuals occur in the stacked Hankel structure of the trajectory matrix. It is meaningful to talk about anomalous diagonals rather than anomalous rows of the trajectory matrix \(\mathbf{X}\). Therefore we propose to approximate the trajectory matrix \(\mathbf{X}\) by \(\widehat{\mathbf{X}}=\mathbf{U}\mathbf{V}^{T}\) obtained by minimizing
\[L_{\rho_{1},\rho_{2}}\left(\mathbf{X}-\mathbf{U}\mathbf{V}^{T}\right):=\sum_{i=1}^{N}pn_{i }\hat{\sigma}_{2}^{2}\rho_{2}\left(\frac{\sum_{j=1}^{p}n_{i}\hat{\sigma}_{1,j} ^{2}\rho_{1}\left(\sum_{a=1}^{n_{i}}(x_{i}^{(j)}-\hat{x}_{ia}^{(j)})^{2}/(n_{ i}\hat{\sigma}_{1,j}^{2})\right)}{pn_{i}\hat{\sigma}_{2}^{2}}\right) \tag{10}\]
over \((\mathbf{U},\mathbf{V})\) with \(\text{rank}(\mathbf{U}\mathbf{V}^{T})=q\). Here \(p\) is again the number of univariate time series and \(n_{i}\) is the length of the diagonal corresponding to \(x_{i}^{(j)}\) as in (4). The predicted cell \(\hat{x}_{ia}^{(j)}\) is given by \(\sum_{r=1}^{q}u_{i^{*}r}v_{a^{*}r}\) with \(i^{*}=\min\{L,i\}+1-a\) and \(a^{*}=\min\{K_{u},i\}-(n_{i}-a)+K_{u}(j-1)\). The fixed scales \(\hat{\sigma}_{1,j}^{2}\) standardize the squared norms of the diagonal residuals of series \(j\), given by
\[r_{i}^{(j)}:=\frac{1}{n_{i}}\sum_{a=1}^{n_{i}}(x_{i}^{(j)}-\hat{x}_{ia}^{(j)} )^{2}\;. \tag{11}\]
The overall scale \(\hat{\sigma}_{2}^{2}\) standardizes the quantities
\[r_{i}:=\frac{1}{p}\sum_{j=1}^{p}\hat{\sigma}_{1,j}^{2}\rho_{1}\left(\frac{r_{ i}^{(j)}}{\hat{\sigma}_{1,j}^{2}}\right) \tag{12}\]
based on all \(p\) coordinates. The functions \(\rho_{1}\) and \(\rho_{2}\) are defined for nonnegative arguments and must be continuous and non-decreasing. In our implementation both are of the form \(\rho(t):=\rho_{c}(\sqrt{t})\)
where \(\rho_{c}\) is Tukey's biweight (8). [For \(\rho_{1}(t)=\rho_{2}(t)=|t|\), (10) would reduce to (6).] The goal of the normalization by \(n_{i}\hat{\sigma}_{1j}^{2}\) inside \(\rho_{1}\) in (10) is to give the \(r_{i}^{(j)}\) similar average sizes. Indeed, if the \(x_{i}^{(j)}-\hat{x}_{ia}^{(j)}\) would be independent normal random variables with variance \(\hat{\sigma}_{1j}^{2}\), then \(n_{i}r_{i}^{(j)}\) would follow a Gamma distribution with parameters \(n_{i}/2\) and \(2\hat{\sigma}_{1j}^{2}\) and thus with mean equal to \(n_{i}\hat{\sigma}_{1j}^{2}\). Therefore the average of the \(r_{i}^{(j)}/\hat{\sigma}_{1j}^{2}\) would be close to 1. A similar argument applies for the normalization by \(pn_{i}\hat{\sigma}_{2}^{2}\) inside \(\rho_{2}\).
The functions \(\rho_{1}\) and \(\rho_{2}\) reduce the effect of cellwise and casewise outliers on the final estimates, because large values of \(r_{i}^{(j)}\) and/or \(r_{i}\) contribute less to the objective function. We can say that \(r_{i}^{(j)}\) measures how prone the \(i\)-th value \(x_{i}^{(j)}\) of time series \(j\) is to be a cellwise outlier. Analogously, \(r_{i}\) reflects how prone the multivariate \(\mathbf{x}_{i}=(x_{i}^{(1)},\ldots,x_{i}^{(p)})^{T}\) is to be a casewise outlier. Note that in the computation of \(r_{i}\) the effect of cellwise outliers is tempered by the presence of \(\rho_{1}\), to avoid that a single cellwise outlier would always result in a large casewise \(r_{i}\).
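The objective (10) can be evaluated directly from the fitted blocks. The sketch below uses our own naming and 0-based anti-diagonal indexing; when \(\rho_{1}\) and \(\rho_{2}\) are the identity, it reduces to the Frobenius loss (6), which the test checks.

```python
import numpy as np

def rodessa_objective(X_series, Xq_blocks, sigma1, sigma2, rho1, rho2):
    """Evaluate the RODESSA objective (10). X_series[j] is the j-th
    observed series, Xq_blocks[j] its L x K_u fitted block, sigma1 the
    vector of per-series scales sigma_{1,j}, sigma2 the overall scale."""
    p = len(X_series)
    L, K_u = Xq_blocks[0].shape
    N = L + K_u - 1
    total = 0.0
    for i in range(N):
        # anti-diagonal A_i: cells (l, k) with l + k = i (0-based), length n_i
        cells = [(l, i - l) for l in range(max(0, i - K_u + 1), min(L, i + 1))]
        n_i = len(cells)
        # diagonalwise squared norms r_i^(j) of (11)
        r_ij = np.array([sum((X_series[j][i] - Xq_blocks[j][l, k]) ** 2
                             for l, k in cells) / n_i for j in range(p)])
        # casewise quantity r_i of (12)
        r_i = np.mean(sigma1 ** 2 * rho1(r_ij / sigma1 ** 2))
        total += p * n_i * sigma2 ** 2 * rho2(r_i / sigma2 ** 2)
    return total
```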
### 2.5 The IRLS algorithm
We now address the optimization problem (10). The scale estimates \(\hat{\sigma}_{1,j}\) and \(\hat{\sigma}_{2}\) are constants, whose computation will be described in Section 2.6. Because \(L_{\rho_{1},\rho_{2}}\) is continuously differentiable, its solution must satisfy the first-order necessary conditions for optimality. They are obtained by setting the gradients of \(L_{\rho_{1},\rho_{2}}\) with respect to \(\mathbf{u}^{1},\ldots,\mathbf{u}^{L}\) and \(\mathbf{v}^{1},\ldots,\mathbf{v}^{K}\) to zero, yielding
\[\mathbf{V}^{T}\mathbf{W}^{\ell}(\mathbf{ V}\mathbf{u}^{\ell}-\mathbf{X}^{\ell}) =\mathbf{0},\quad\ell=1,\ldots,L,\] \[\mathbf{U}^{T}\mathbf{W}_{k}(\mathbf{ U}\mathbf{v}^{k}-\mathbf{X}_{k}) =\mathbf{0},\quad k=1,\ldots,K, \tag{13}\]
where \(\mathbf{X}^{1},\ldots,\mathbf{X}^{L}\) and \(\mathbf{X}_{1},\ldots,\mathbf{X}_{K}\) are the rows and columns of \(\mathbf{X}\). Here \(\mathbf{W}^{\ell}\) is a \(K\times K\) diagonal matrix, whose diagonal entries are equal to the \(\ell\)th row of the \(L\times K\) weight matrix
\[\mathbf{W}=\{w_{\ell k}\}=\widetilde{\mathbf{W}}_{c}\odot \widetilde{\mathbf{W}}_{r} \tag{14}\]
where the Hadamard product \(\odot\) multiplies matrices entry by entry. Analogously, \(\mathbf{W}_{k}\) is an \(L\times L\) diagonal matrix, whose diagonal entries are the \(k\)th column of the matrix \(\mathbf{W}\). The matrix \(\widetilde{\mathbf{W}}_{c}\) in (14) is given by \(\widetilde{\mathbf{W}}_{c}=\left[\widetilde{\mathbf{W}}_{c}^{(1)}:\ldots:\widetilde{\mathbf{W}}_{c}^{(p)}\right]\) with \(\widetilde{\mathbf{W}}_{c}^{(j)}=\mathcal{T}_{\rm SSA}\left(w_{c,1}^{(j)},\ldots,w_{c,N}^{(j)}\right)\), containing the _cellwise weights_
\[w_{c,i}^{(j)}=\rho_{1}^{\prime}\left(r_{i}^{(j)}/\hat{\sigma}_{1j}^{2}\right), \quad i=1,\ldots,N,\quad j=1\ldots,p, \tag{15}\]
in which \(\rho_{1}^{\prime}\) is the derivative of \(\rho_{1}\). The matrix \(\widetilde{\mathbf{W}}_{r}\) is given by \(\widetilde{\mathbf{W}}_{r}=\left[\widetilde{\mathbf{W}}_{r}^{(1)}:\ldots:\widetilde{\bm {W}}_{r}^{(p)}\right]\), with each \(\widetilde{\mathbf{W}}_{r}^{(j)}=\mathcal{T}_{\text{SSA}}\left(w_{r,1},\ldots,w_{r,N}\right)\) containing the same _casewise weights_
\[w_{r,i}=\rho_{2}^{\prime}\left(r_{i}/\hat{\sigma}_{2}^{2}\right),\quad i=1, \ldots,N\,. \tag{16}\]
Outlying cells \(x_{i}^{(j)}\) should get a small cellwise weight \(w_{c,i}^{(j)}\) and outlying cases \(\mathbf{x}_{i}\) should get a small casewise weight \(w_{r,i}\).
The system (13) is nonlinear because the weight matrices depend on the estimate, and the estimate depends on the weight matrices. In such a situation one can resort to an iteratively reweighted least squares (IRLS) algorithm. Note that for a fixed weight matrix \(\mathbf{W}\), the system (13) coincides with the first-order necessary condition of the weighted least squares problem of minimizing
\[\sum_{\ell=1}^{L}\sum_{k=1}^{K}w_{\ell k}\left(X_{\ell k}-\widehat{X}_{\ell k }\right)^{2} \tag{17}\]
where \(\widehat{X}_{\ell k}=\sum_{r=1}^{q}u_{\ell r}v_{kr}\). The optimization of (17) can be performed by alternating least squares (Gabriel, 1978). This minimizes (17) with respect to \(\mathbf{u}^{1},\ldots,\mathbf{u}^{L}\) while \(\mathbf{v}^{1},\ldots,\mathbf{v}^{K}\) and the weights are held fixed, and alternates this with minimizing (17) with respect to \(\mathbf{v}^{1},\ldots,\mathbf{v}^{K}\) while \(\mathbf{u}^{1},\ldots,\mathbf{u}^{L}\) and the weights are held fixed.
The overall algorithm starts from initial estimates \(\mathbf{U}_{0},\mathbf{V}_{0}\) of \(\mathbf{U},\mathbf{V}\) that will be described in Section 2.6. New matrices \(\mathbf{U}_{t+1}\), \(\mathbf{V}_{t+1}\) are obtained from \(\mathbf{U}_{t}\), \(\mathbf{V}_{t}\) by
\[\mathbf{v}_{t+1}^{k}=\left(\mathbf{U}_{t}^{T}\mathbf{W}_{k,t}\mathbf{U}_{t}\right) ^{-1}\mathbf{U}_{t}^{T}\mathbf{W}_{k,t}\mathbf{X}_{k}\quad\text{for}\quad k=1,\ldots,K,\] \[\mathbf{u}_{t+1}^{\ell}=\left(\mathbf{V}_{t+1}^{T}\mathbf{W}_{t}^{\ell}\mathbf{V} _{t+1}\right)^{-1}\mathbf{V}_{t+1}^{T}\mathbf{W}_{t}^{\ell}\mathbf{X}^{\ell}\quad\text{for} \quad\ell=1,\ldots,L. \tag{18}\]
Then the weight matrix is updated to \(\mathbf{W}_{t+1}\) as in (14). The iterative process continues until convergence, as summarized in Algorithm 1.
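For a fixed weight matrix \(\mathbf{W}\), one sweep of the updates (18) amounts to row- and column-wise weighted least squares. The following sketch (our own naming, assuming the weighted normal equations are nonsingular) performs one such sweep; note that in (18) the new \(\mathbf{V}_{t+1}\) depends only on \(\mathbf{U}_{t}\), and the new \(\mathbf{U}_{t+1}\) on \(\mathbf{V}_{t+1}\).

```python
import numpy as np

def irls_step(X, U, W):
    """One IRLS sweep of (18): update each row v^k of V by weighted LS
    given U, then each row u^l of U given the new V, with W held fixed."""
    L, K = X.shape
    q = U.shape[1]
    V_new = np.empty((K, q))
    for k in range(K):
        Wk = np.diag(W[:, k])           # L x L diagonal weight matrix W_k
        V_new[k] = np.linalg.solve(U.T @ Wk @ U, U.T @ Wk @ X[:, k])
    U_new = np.empty((L, q))
    for l in range(L):
        Wl = np.diag(W[l, :])           # K x K diagonal weight matrix W^l
        U_new[l] = np.linalg.solve(V_new.T @ Wl @ V_new, V_new.T @ Wl @ X[l, :])
    return U_new, V_new
```

With all weights equal to one and \(\mathbf{X}\) exactly of rank \(q\), a single sweep already reproduces \(\mathbf{X}\) exactly.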
**Proposition 1**.: _Each iteration of Algorithm 1 decreases the objective function, that is, \(L_{\rho_{1},\rho_{2}}(\mathbf{X}-\widehat{\mathbf{X}}_{t+1})\leqslant L_{\rho_{1},\rho_{ 2}}(\mathbf{X}-\widehat{\mathbf{X}}_{t}).\)_
The proof is given in Section A.1 of the Supplementary Material. Since the objective function is non-increasing and bounded below by zero, it must converge. Note that Proposition 1 is not restricted to the functions \(\rho_{1}\) and \(\rho_{2}\) used in RODESSA, which are of the type \(\rho(t)=\rho_{c}(\sqrt{t})\) where \(\rho_{c}\) is Tukey's biweight (8). All that is needed is that the function \(\rho(t)\) is differentiable and concave. For this purpose \(\rho_{c}\) could be replaced by Huber's \(\rho_{b}\) of (9), or the function \(\rho_{b,c}\) used in the wrapping transform (Raymaekers and Rousseeuw, 2021).
### 2.6 Matters of implementation
This section describes several implementation specifics: the initialization strategy to select \(\mathbf{U}_{0}\) and \(\mathbf{V}_{0}\) in the IRLS algorithm, the scale estimates \(\hat{\sigma}_{1,j}\) and \(\hat{\sigma}_{2}\), and how to select the loss function tuning constants, the rank \(q\), and the window length \(L\).
**Scale estimates.** Given initial estimates \(\mathbf{U}_{0}\) and \(\mathbf{V}_{0}\) (see below), the scale estimates \(\hat{\sigma}_{1,j}^{2}\) and \(\hat{\sigma}_{2}^{2}\) are computed as M-scales of the quantities \(r_{i}^{(j)}\) from (11) and the \(r_{i}\) of (12), all with respect to the fit \(\mathbf{U}_{0}\mathbf{V}_{0}^{T}\). A scale M-estimator of a univariate sample \((z_{1},\ldots,z_{n})\) is the solution \(\hat{\sigma}\) of the equation
\[\frac{1}{n}\sum_{i=1}^{n}\rho\left(\frac{z_{i}}{\sigma}\right)=\delta \tag{19}\]
where \(0<\delta<1\). In our implementation we chose \(\rho\) to be Tukey's biweight \(\rho_{c}\) of (8). We set \(\delta=0.5\), which ensures a \(50\%\) breakdown value, and \(c=1.548\), which is the solution of \(\text{E}[\rho_{c}(z)]=0.5\) for \(z\sim\text{N}(0,1)\), to obtain consistency at the normal model.
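The M-scale equation (19) can be solved by the standard fixed-point iteration \(\hat{\sigma}^{2}\leftarrow\hat{\sigma}^{2}\cdot\frac{1}{n\delta}\sum_{i}\rho(z_{i}/\hat{\sigma})\). The sketch below uses the biweight with the defaults above; the MAD-based starting value and the function name are our choices.

```python
import numpy as np

def m_scale(z, c=1.548, delta=0.5, n_iter=100):
    """M-estimator of scale (19): solve (1/n) sum rho_c(z_i / sigma) = delta
    by fixed-point iteration, with rho_c Tukey's biweight (8)."""
    z = np.asarray(z, dtype=float)

    def rho(t):
        t = np.minimum(np.abs(t) / c, 1.0)
        return 1.0 - (1.0 - t ** 2) ** 3

    s = np.median(np.abs(z)) / 0.6745   # MAD-based starting value (our choice)
    for _ in range(n_iter):
        # if the mean rho exceeds delta the scale grows, otherwise it shrinks
        s = s * np.sqrt(np.mean(rho(z / s)) / delta)
    return s
```

With \(c=1.548\) and \(\delta=0.5\) the estimate is consistent at the standard normal, so on clean N(0,1) data it is close to 1.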
**Initialization.** Since the objective function \(L_{\rho_{1},\rho_{2}}\) of (10) is not convex when \(\rho_{1}\) and \(\rho_{2}\) are biweight functions, \(\mathbf{U}_{0}\) and \(\mathbf{V}_{0}\) should be carefully selected to prevent the IRLS algorithm from ending up in a nonrobust solution. We compute several candidate initial fits, and select the best one among them. One candidate fit is given by the first \(q\) terms of the standard nonrobust SVD as in (6), yielding the rank-\(q\) matrix \(\widehat{\mathbf{X}}_{1}\). The second candidate \(\widehat{\mathbf{X}}_{2}\) is the fit obtained from (7) for \(\rho(t)=|t|\), and the third candidate \(\widehat{\mathbf{X}}_{3}\) is the fit of Candes et al. (2011). For each of these candidate solutions \(\widehat{\mathbf{X}}_{k}\) we apply the scale M-estimate (19) to all the values \(r_{i}^{(j)}\) given by (11). Then we select the \(\widehat{\mathbf{X}}_{k}\) with the lowest M-scale.
**Tuning constants.** The functions \(\rho_{1}\) and \(\rho_{2}\) in (10) use the biweight formula (8), by \(\rho_{1}(t)=\rho_{c_{1}}(\sqrt{t})\) and \(\rho_{2}(t)=\rho_{c_{2}}(\sqrt{t})\). Now we need to choose the tuning constants \(c_{1}\) and \(c_{2}\). Following Aeberhard et al. (2021) we set these tuning constants using a measure of downweighting at the reference model. The idea is to select tuning constants such that the average of the weights \(w_{c,i}^{(j)}\) of (15) matches a target value \(\delta_{c}\) and the average of the weights \(w_{r,i}\) of (16) matches a target value \(\delta_{r}\). For this computation the weights are standardized to range from 0 to 1. The target values determine how much \(r_{i}^{(j)}\) and \(r_{i}\) are downweighted on average, and by default we set \(\delta_{c}=\delta_{r}=0.9\). The corresponding values of \(c_{1}\) and \(c_{2}\) are obtained by Monte Carlo. This computation pretends that the data are clean, with i.i.d. errors following the standard normal distribution, and that the fitted \(\mathbf{U}\mathbf{V}^{T}\) equals the true value. We can then simulate the distribution of all the weights \(w_{c,i}^{(j)}\) and \(w_{r,i}\) for the given values of \(N\), \(p\), and \(L\), and choose \(c_{1}\) and \(c_{2}\).
**Selecting the rank \(q\).** In classical MSSA, a popular way to select the rank \(q\) is to make a plot of the unexplained variance. This is the expression \(||\mathbf{X}-\widehat{\mathbf{X}}_{r}||_{F}^{2}\) as in (6), where \(\widehat{\mathbf{X}}_{r}\) is the best approximation of \(\mathbf{X}\) of rank \(r\). The unexplained variance is monotone decreasing in \(r\), and one wants to select a value \(q\) where the curve has an 'elbow'. For the RODESSA method we inspect the analogous curve of the objective function (10) at the rank-\(r\) fit \(\widehat{\mathbf{X}}_{r}\), that is, we plot the curve of
\[L_{\rho_{1},\rho_{2}}\left(\mathbf{X}-\widehat{\mathbf{X}}_{r}\right)\]
versus \(r\).
**Selecting the window length.** The choice of the window length \(L\) in MSSA is a complex issue that has been addressed by Hassani and Mahmoudvand (2013) and Golyandina et al. (2018). In general, \(L\) should be selected to benefit either separability of the signal and the noise, or forecasting accuracy. Following Golyandina et al. (2018) we use \(L\simeq pN/(p+1)\) for the analysis of a small number \(p\) of time series, and \(L\simeq N/2\) otherwise.
### 2.7 Forecasting with RODESSA
The MSSA model assumes a linear recurrent relation (Golyandina et al., 2018). This implies that an observation can be predicted by a linear combination of the previous \(L-1\) observations. In classical MSSA the coefficients of this linear combination are derived from the unique SVD fit of rank \(q\) to the trajectory matrix \(\mathbf{X}\), see e.g. Danilov (1997). For RODESSA the rank-\(q\) fit \(\widehat{\mathbf{X}_{q}}\) can similarly be decomposed by the exact SVD, yielding the \(L\times q\) matrix \(\widetilde{\mathbf{U}}=[\mathbf{\tilde{u}}_{1},\ldots,\mathbf{\tilde{u}}_{q}]\) of left singular vectors and the \(K\times q\) matrix \(\widetilde{\mathbf{V}}=[\mathbf{\tilde{v}}_{1},\ldots,\mathbf{\tilde{v}}_{q}]\) of right singular vectors. Then the coefficient vector \(\hat{\mathbf{a}}=(\hat{a}_{L-1},\ldots,\hat{a}_{1})^{T}\) is obtained as
\[\hat{\mathbf{a}}=\frac{\sum_{r=1}^{q}\tilde{u}_{r,L}(\tilde{u}_{r,1},\ldots, \tilde{u}_{r,L-1})^{T}}{1-\sum_{r=1}^{q}\tilde{u}_{r,L}^{2}}\]
where \(\tilde{u}_{r,1},\ldots,\tilde{u}_{r,L}\) are the \(L\) components of \(\mathbf{\tilde{u}}_{r}\,\). Let \((\hat{x}_{i}^{(j)})_{i=1}^{N}\) be the reconstructed multivariate time series associated with the rank-\(q\) approximate trajectory matrix \(\widehat{\mathbf{X}_{q}}\). Then the \(h\)-step ahead recurrent MSSA forecasts \(\hat{x}_{N+1}^{(j)},\ldots,\hat{x}_{N+h}^{(j)}\) for \(j=1,\ldots,p\) are given by
\[\hat{x}_{i}^{(j)}=\sum_{l=1}^{L-1}\hat{a}_{l}\hat{x}_{i-l}^{(j)}\qquad\text{ for}\quad i=N+1,\ldots,N+h\,. \tag{20}\]
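The recursion can be sketched directly from the formula for \(\hat{\mathbf{a}}\) and (20). The function below (an illustrative name, operating on one reconstructed series) takes the \(L\times q\) matrix of left singular vectors of the rank-\(q\) fit; note that \(\hat{\mathbf{a}}\) is stored in the order \((\hat{a}_{L-1},\ldots,\hat{a}_{1})\), which aligns it with the last \(L-1\) values in chronological order.

```python
import numpy as np

def recurrent_forecast(x_rec, U_tilde, h):
    """h-step-ahead recurrent forecasts (20) for one reconstructed series,
    given the L x q matrix U_tilde of left singular vectors of the fit."""
    L = U_tilde.shape[0]
    pi = U_tilde[L - 1, :]                            # last components u_{r,L}
    a = (U_tilde[:L - 1, :] @ pi) / (1.0 - pi @ pi)   # (a_{L-1}, ..., a_1)
    x = [float(v) for v in x_rec]
    for _ in range(h):
        # x_i = sum_{l=1}^{L-1} a_l * x_{i-l}
        x.append(float(np.dot(a, x[-(L - 1):])))
    return np.array(x[-h:])
```

For a purely geometric series such as \(x_i = 2^i\), the rank-1 fit yields \(\hat{a}_1 = 2\) and the forecasts continue the doubling exactly.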
## 3 Outlier detection
RODESSA implicitly provides information about outliers in multivariate time series. The cellwise weight \(w_{c,i}^{(j)}\) given by (15) reflects how much faith the algorithm has in the reliability of \(x_{i}^{(j)}\), the value of the \(j\)-th time series at time \(i\). The casewise weight \(w_{r,i}\) given by (16) does the same for the entire case \(\mathbf{x}_{i}=(x_{i}^{(1)},\ldots,x_{i}^{(p)})^{T}\) at time \(i\). A small weight means that the corresponding cell or case was deemed less trustworthy, and was only allowed to have a small effect on the fit.
We propose a new graphical representation, called _enhanced time series plot_, which facilitates outlier detection by visualizing these weights in a single plot. Since the cellwise weights \(w_{c,i}^{(j)}\) are a monotone function of the cellwise squared norms \(r_{i}^{(j)}\) of (11), this can be seen as an extension of the cellmap for multivariate data (Rousseeuw and Van den Bossche, 2018; Hubert et al., 2019) to time series.
We illustrate the enhanced time series plot on the publicly available Electricity Load Diagrams 2011-2014 dataset (Trindade, 2015). It contains the electricity consumption of 370 clients from January 2011 to December 2014. We plot the data of 4 clients from November 27th to December 3rd 2014, with consumption observed every 2 hours. For the purpose of illustration we inserted some outliers.
Figure 2 shows its enhanced time series plot. The solid black lines correspond to the reconstructed multivariate time series. The original cells \(x_{i}^{(j)}\), connected by yellow solid lines, are represented by circles filled with a color. Those with cellwise weight \(w_{c,i}^{(j)}\) close to 1 are filled with white. Cells with a low weight and positive residual are shown in increasingly intense red, and those with a negative residual in increasingly intense blue. Moreover, \(x_{i}^{(j)}\) is flagged as a cellwise outlier when \(w_{c,i}^{(j)}<q_{c,\alpha}\) where \(q_{c,\alpha}\) is the \(\alpha\)-quantile of the simulated distribution of cellwise weights at the reference model, described in Section 2.6. Cells flagged as cellwise outliers are shown as solid squares, in red when the residual is positive and in blue when it is negative.
At the top of the plot we see circles reflecting the casewise weights \(w_{r,i}\,\) with weight 1 shown with a white interior and lower weights in increasingly darker grey. A case \(\mathbf{x}_{i}\) is flagged as a casewise outlier if \(w_{r,i}\) is below the \(\alpha\)-quantile \(q_{r,\alpha}\) of the simulated distribution of casewise weights at the reference model. Casewise outliers are indicated by a black solid circle and a vertical dashed grey line. The last one in the plot would not have been flagged in the cellwise way.
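The flagging rules above can be sketched as follows (a hypothetical helper, assuming the weights simulated at the reference model are available as plain arrays):

```python
import numpy as np

def flag_outliers(W_cell, w_case, ref_cell, ref_case, alpha=0.01):
    """Flag cells/cases whose weight falls below the alpha-quantile of
    the weights simulated at the reference (outlier-free) model.

    W_cell  : (N, p) cellwise weights w_{c,i}^{(j)}
    w_case  : (N,)  casewise weights w_{r,i}
    ref_*   : simulated reference weights (1-D arrays)
    Returns boolean masks (cell_flags, case_flags).
    """
    q_c = np.quantile(ref_cell, alpha)   # cutoff q_{c,alpha}
    q_r = np.quantile(ref_case, alpha)   # cutoff q_{r,alpha}
    return W_cell < q_c, w_case < q_r
```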
In this example we also want to plot the forecasts, which is not part of the default enhanced time series plot. The forecasts are shown as green triangles connected by green solid lines. The true values (which the algorithm did not know about) were added as black crosses connected by yellow solid lines, illustrating the forecasting performance.
## 4 Simulation study
In this section, the performance of the proposed RODESSA method in the presence of cellwise and casewise outliers is assessed through a Monte Carlo simulation study. The data generation process is inspired by the simulated example in Section 3.3 of Golyandina et al. (2018). Specifically, the multivariate time series is generated through the following signal plus noise model
\[x_{i}^{(j)}=s^{(j)}(i)+\varepsilon_{i}\qquad\text{for }i=1,\ldots,N,\quad j=1, \ldots,p\,,\]
with
\[s^{(j)}(i)=A^{(j)}\cos(2\pi i/10+C^{(j)})\,,\]
where \(\varepsilon_{i}\sim N(0,\sigma^{2})\), \(\sigma=20\), \(p=4\), and \(N=70\). The constants \(A^{(j)}\) and \(C^{(j)}\) are set according to the three scenarios in Table 1.
Figure 2: Enhanced time series plot obtained by applying RODESSA to the Electricity Load Diagrams data.
In each scenario we consider cellwise and casewise contamination settings, with the fraction of outliers \(\varepsilon\) equal to \(0.1\) and \(0.2\). In the cellwise contamination setting, outliers are introduced by adding the number \(\gamma\sigma\) to \(\varepsilon pN\) randomly chosen values \(x_{i}^{(j)}\). We let \(\gamma\) range over \(\gamma=0,1,\ldots,8\), so \(\gamma=0\) corresponds with the uncontaminated setting. In the casewise contamination setting, we add \(\gamma\sigma\) to all values of \(\varepsilon N\) randomly chosen \(\mathbf{x}_{i}=(x_{i}^{(1)},\ldots,x_{i}^{(4)})^{T}\).
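A sketch of this data-generating process (hypothetical function name; the noise is drawn independently per cell here, although the notation \(\varepsilon_{i}\) could also be read as shared across the \(p\) components):

```python
import numpy as np

def simulate(N=70, p=4, sigma=20.0, A=(20, 30, 40, 50), C=(0, 0, 0, 0),
             eps=0.0, gamma=0.0, mode="cell", seed=None):
    """Signal-plus-noise data of Section 4 with optional contamination:
    gamma*sigma is added to eps*p*N random cells ("cell" mode) or to all
    p cells of eps*N random cases ("case" mode)."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, N + 1)
    # signal s^{(j)}(i) = A^{(j)} cos(2*pi*i/10 + C^{(j)})
    X = np.column_stack([A[j] * np.cos(2 * np.pi * i / 10 + C[j])
                         for j in range(p)])
    X = X + rng.normal(0.0, sigma, size=(N, p))
    if mode == "cell":
        idx = rng.choice(N * p, size=int(eps * p * N), replace=False)
        flat = X.reshape(-1)          # view on X, so edits hit X
        flat[idx] += gamma * sigma
    else:
        rows = rng.choice(N, size=int(eps * N), replace=False)
        X[rows, :] += gamma * sigma
    return X
```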
Our proposed RODESSA method is compared with several competing approaches. We run the classical version of MSSA (labeled CMSSA) and several robust versions described in Section 2.3, which perform the decomposition step by (7) with \(\rho(t)=|t|\) as in Rodrigues et al. (2018) (labeled RLM), as well as the method of Cheng et al. (2015) (labeled CHENG), and that of Chen and Sacchi (2015) (labeled CS).
RODESSA is implemented as described in Section 2 with \(q=2\), which corresponds to the true rank since for any angle \(\varphi\) the signal \(A\cos(2\pi i/10+\varphi)\) equals the linear combination \(A\cos(\varphi)\cos(2\pi i/10)-A\sin(\varphi)\sin(2\pi i/10)\) of the functions \(\cos(2\pi i/10)\) and \(\sin(2\pi i/10)\). Figure 3 plots the objective function for increasing rank, which also indicates that most of the variability is explained by two components. We consider both \(L=56\simeq pN/(p+1)\) and \(L=35=N/2\), and choose the tuning constants by setting \(\delta_{c}=\delta_{r}=0.90\).
The competing approaches are implemented with the same values of \(q\) and \(L\). To guarantee a fair comparison between RODESSA and CS, the tuning parameter in the CS method is chosen by means of the procedure in Section 2.6. For each combination of scenario, contamination setting,
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Scenario 1} & \multicolumn{2}{c}{Scenario 2} & \multicolumn{2}{c}{Scenario 3} \\ \cline{2-7} \(j\) & \(A^{(j)}\) & \(C^{(j)}\) & \(A^{(j)}\) & \(C^{(j)}\) & \(A^{(j)}\) & \(C^{(j)}\) \\ \hline
1 & 20 & 0 & 35 & 0 & 20 & 0 \\
2 & 30 & 0 & 35 & \(\pi/5\) & 30 & \(\pi/5\) \\
3 & 40 & 0 & 35 & 0 & 40 & 0 \\
4 & 50 & 0 & 35 & \(\pi/5\) & 50 & \(\pi/5\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters \(A^{(j)}\) and \(C^{(j)}\) for scenarios 1, 2 and 3.
magnitude \(\gamma\) and outlier fraction, 2000 replications are carried out. The performance is assessed by means of the _reconstruction error_ (RE) and the 20-step ahead _forecasting error_ (FE) that are defined as
\[\text{RE}=\frac{1}{pN}\sum_{j=1}^{p}\sum_{i=1}^{N}\left(\hat{x}_{i}^{(j)}-s^{(j)}(i)\right)^{2}\qquad\qquad\text{FE}=\frac{1}{20p}\sum_{j=1}^{p}\sum_{i=N+1}^{N+20}\left(\hat{x}_{i}^{(j)}-s^{(j)}(i)\right)^{2}\;. \tag{21}\]
In the first formula the \(\hat{x}_{i}^{(j)}\) are the reconstructions (\(i\leqslant N\)). In the second formula they are the forecasts (\(i>N\)) as defined in (20). The RE and FE are then averaged over the replications.
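For concreteness, a small sketch of eq. (21) (hypothetical helper; the paper fixes the forecast horizon at 20 steps, generalized here to any \(h\)):

```python
import numpy as np

def re_fe(X_hat, F_hat, S):
    """RE and FE of eq. (21).

    X_hat : (N, p) reconstructions
    F_hat : (h, p) forecasts
    S     : (N+h, p) noise-free signal s^{(j)}(i)
    """
    N = X_hat.shape[0]
    h = F_hat.shape[0]
    RE = float(np.mean((X_hat - S[:N]) ** 2))       # 1/(pN) double sum
    FE = float(np.mean((F_hat - S[N:N + h]) ** 2))  # 1/(hp) double sum
    return RE, FE
```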
We report the results for the most challenging Scenario 3, with \(L=N/2\). The supplementary material contains the simulation results for the other settings, which yield qualitatively similar conclusions.
For cellwise contamination, Figure 4 shows the average RE and FE as a function of the contamination magnitude \(\gamma\) for both outlier fractions. Without contamination (\(\gamma=0\)), CMSSA is the best method in terms of average RE and FE, as expected in this situation. But the errors of RODESSA and CS are similarly small, whereas those of CHENG and RLM are larger.
When looking at increasing \(\gamma>0\) it appears that far outlying cells have an unbounded effect on CMSSA, a bounded effect on CHENG and RLM, and a small effect on RODESSA and CS, due to their bounded biweight \(\rho\) function. In that sense RODESSA and CS are similar. But RODESSA does better than CS, due to the fact that its decomposition step takes the diagonal structure of the Hankel matrix into account. The performance difference is largest for the higher outlier fraction \(\varepsilon=0.2\).
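The bounded biweight \(\rho\) mentioned above is, in its standard form, the Tukey bisquare loss; the sketch below assumes this classical form (the exact scaling used by RODESSA and CS enters through the constant \(c\), which is tuned as described in the paper):

```python
import numpy as np

def rho_biweight(t, c=1.0):
    """Tukey biweight loss: roughly quadratic near zero and constant for
    |t| >= c, so far outliers contribute only a bounded amount."""
    u = np.minimum(np.abs(np.asarray(t, dtype=float)) / c, 1.0)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - u ** 2) ** 3)
```

The constant plateau beyond \(c\) is what caps the effect of far outlying cells on the fit.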
Figure 3: Graph of the objective function for increasing values of the rank \(r\).
Figure 5 compares the same methods in the presence of casewise outliers. Here the differences are larger, with RODESSA outperforming the other methods more strongly. This is due to the fact that RODESSA can share information about diagonals across the stacked structure. This helps because we saw in Figure 1 that casewise outliers affect several diagonals in the trajectory matrix simultaneously. Unlike the other methods, RODESSA combines information across such diagonals through the loss function \(\rho_{2}\) in (10), which makes it more robust to casewise outliers.
Figure 4: Cellwise outliers. Average RE (top) and FE (bottom) attained by CMSSA, CHENG, RLM, CS, and RODESSA for Scenario 3, contamination probability \(\varepsilon=0.1\) (left) and \(\varepsilon=0.2\) (right), as a function of \(\gamma\).
## 5 A real data example: temperature analysis in passenger railway vehicles
In this section, a real data example from the railway industry illustrates the applicability and potential of RODESSA.
Figure 5: Casewise outliers. Average RE (top) and FE (bottom) attained by CMSSA, CHENG, RLM, CS, and RODESSA for Scenario 3, contamination probability \(\varepsilon=0.1\) (left) and \(\varepsilon=0.2\) (right), as a function of \(\gamma\).
In recent years, railway transportation in Europe has emerged as a viable alternative to other modes of transport, leading to intense competition between operators to enhance passenger satisfaction (Kallas, 2011). A particular challenge in this context is ensuring optimal thermal comfort inside passenger rail coaches (Ye et al., 2004). To address this challenge, new European standards, such as UNI EN 14750 (British Standards Institution, 2006), have been developed. These standards focus on regulating air temperature, relative humidity, air speed, and overall comfort and air quality in passenger rail coaches, taking into account the diverse operating requirements of trains. Consequently, railway companies are proactively installing sensors to gather and analyze data from on-board heating, ventilation, and air conditioning (HVAC) systems (Homod, 2013). HVAC systems play a crucial role in maintaining passenger thermal comfort, and their performance is being improved based on the insights obtained from the collected data.
We analyze operational data from HVAC systems installed on a 6-coach passenger train operating during the summer season (Lepore et al., 2022). The data are publicly available at [https://github.com/unina-sfere/NN4MSP](https://github.com/unina-sfere/NN4MSP). Specifically, the inside temperature of each coach was recorded about every four minutes from 10:00 to 22:00, yielding \(N=176\) observations of a multivariate time series with \(p=6\) components. We applied RODESSA in its default form as described in Section 2.6, with \(L=151\simeq pN/(p+1)\). We selected \(q=7\) based on the values of the objective function. This yields the enhanced time series plot shown in Figure 6.
The vertical dashed lines in Figure 6 indicate casewise outliers. It seems that the temperature inside the six coaches is significantly higher than expected in an almost periodic fashion. This behavior is particularly severe for the group of casewise outliers between 17:00 and 18:00. Discussions with domain experts revealed that those measurements were acquired during train stops at terminal stations. In this situation the coach doors were left open, and due to the summer season this increased the temperature inside the coaches. From this we can conclude that those measurements are not representative of the operating conditions of the train and are not useful to characterize temperature dynamics. Moreover, the reconstructed time series shows that the temperature inside coaches does increase and decrease periodically, which is in accordance with the on/off control of the HVAC system. Note that not all casewise outliers are labeled as cellwise outliers. For instance, the casewise outlier shortly after 16:00 has no component that is flagged as a cellwise outlier, but the temperatures were relatively low in all six coaches simultaneously. The ability to detect such effects is a feature of the proposed method.
To assess the forecasting performance of RODESSA relative to the competing methods described in Section 4, we used subsequences of the multivariate time series. Specifically, starting from the first 120 observations, \(h\)-step ahead forecasts of all methods are computed for \(h=5,10,20\). Similarly, forecasts are obtained by considering the first 121, 122,... observations and so on. For
each subsequence, the \(h\)-step ahead forecasts are compared to the observed values of the time series, by computing the median of their absolute differences, denoted as mFE. Note that here we do not use the forecasting error formula in (21) because the data we are predicting contains outliers as well.
Moreover, to assess the effect of the window length \(L\) on the forecasting performance we compare the two lengths given in Section 2.6, namely \(1/2\) and \(6/7\) of the subsequence length \(N_{sub}\). Figure 7 shows boxplots of mFE for \(h=5,10,20\) and \(L=N_{sub}/2\) (top) as well as \(L=6N_{sub}/7\) (bottom). The RODESSA method outperforms its competitors overall, which is in line with the simulation study.
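The rolling-origin evaluation described above can be sketched as follows (hypothetical helper; `forecaster` stands for any of the compared methods):

```python
import numpy as np

def rolling_mfe(series, forecaster, n0=120, h=5):
    """Median absolute h-step forecasting error over expanding
    subsequences: fit on the first n observations (n = n0, n0+1, ...)
    and compare the h-step forecasts with the observed values.

    forecaster(train, h) must return an (h, p) array of forecasts.
    """
    N = series.shape[0]
    errs = []
    for n in range(n0, N - h + 1):
        fc = forecaster(series[:n], h)
        errs.append(float(np.median(np.abs(fc - series[n:n + h]))))
    return np.array(errs)
```

With a naive "repeat the last value" forecaster this reduces to the median absolute difference between the last training value and the next \(h\) observations.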
Figure 6: Enhanced time series plot from applying RODESSA to the multivariate time series of temperatures.
## 6 Conclusions
Multivariate singular spectrum analysis (MSSA) is a technique for fitting vector time series and making forecasts. It starts by constructing a matrix with a special structure: the matrix consists of several submatrices stacked side by side, one for each component of the multivariate time series. A cell of the time series, that is, the value of one of its coordinates at a given time point, corresponds to a diagonal in the corresponding submatrix. A case of the time series, that is, all of the coordinates at a given time point, corresponds to diagonals in all of the submatrices.
Sometimes cells are outlying, and sometimes entire cases. But the classical MSSA method can be strongly affected by outliers, because it is based on a low-rank singular value decomposition, which is a least squares fit. Several more robust methods have been proposed in the literature, based on versions of the SVD that are less sensitive to outliers. However, these versions do not take the diagonal structure into account, and even a small number of outliers can create a large number of outlying diagonal entries that affect many rows and columns of the matrix, thereby overwhelming the fit. To resolve this issue we propose the RODESSA method, which explicitly takes the diagonal structure into account when decomposing the matrix. This makes it more robust than its predecessors, as illustrated in the extensive simulation study reported in Section 4 and the Supplementary Material. Moreover, it loses little efficiency when there are no outliers.
Figure 7: Boxplots of mFE for \(h=5,\,10,\,20\) and \(L=N_{sub}/2\) (top) as well as \(L=6N_{sub}/7\) (bottom).
The RODESSA method is performed by a fast algorithm based on iteratively reweighted least squares. In Section 2 we prove a proposition stating that each step of the algorithm decreases the objective function. We also propose a new graphical display, called the enhanced time series plot, which visualizes the weights of the cells as well as the cases, making outliers stand out. We have applied the RODESSA method to a real multivariate time series about temperatures in passenger railway vehicles. This illustrates its good forecasting performance, as well as the convenient outlier detection by the enhanced time series plot.
Software availability. The R code and example scripts, and the data of the example in Section 5, are publicly available on the webpage [https://wis.kuleuven.be/statdatascience/robust](https://wis.kuleuven.be/statdatascience/robust).
Funding Details. This work was supported by the Flanders Research Foundation (FWO) under Grant for a scientific stay in Flanders V505623N; and by Piano Nazionale di Ripresa e Resilienza (PNRR) - Missione 4 Componente 2, Investimento 1.3-D.D. 1551.11-10-2022, PE00000004 within the Extended Partnership MICS (Made in Italy - Circular and Sustainable).
# Discovering interpretable elastoplasticity models via the neural polynomial method enabled symbolic regressions

Bahador Bahmani, Hyoung Suk Suh, WaiChing Sun

arXiv:2307.13149v4 (2023-07-24), [http://arxiv.org/abs/2307.13149v4](http://arxiv.org/abs/2307.13149v4)
###### Abstract
Conventional neural network elastoplasticity models are often perceived as lacking interpretability. This paper introduces a two-step machine-learning approach that returns mathematical models interpretable by human experts. In particular, we introduce a surrogate model where yield surfaces are expressed in terms of a set of single-variable feature mappings obtained from supervised learning. A post-processing step is then used to re-interpret the set of single-variable neural network mapping functions into mathematical form through symbolic regression. This divide-and-conquer approach provides several important advantages. First, it enables us to overcome the scaling issue of symbolic regression algorithms. From a practical perspective, it enhances the portability of learned models for partial differential equation solvers written in different programming languages. Finally, it enables us to have a concrete understanding of the attributes of the materials, such as convexity and symmetries of models, through automated derivations and reasoning. Numerical examples have been provided, along with an open-source code to enable third-party validation.
Keywords: quadratic neural model; neural additive model; symbolic regression; level-set model; computational plasticity
## 1 Introduction
In the last decade, the number of machine learning constitutive models has increased significantly (Ghaboussi et al., 1991; Pernot and Lamarque, 1999; Mozaffar et al., 2019; Liu et al., 2021). Among those machine learning models, neural networks trained with experimental or simulation data have been one of the most popular choices (Li et al., 2019; Wang and Sun, 2018; Vlassis et al., 2020; Vlassis and Sun, 2021; Flaschel et al., 2022). Despite the recent popularity of these neural network models, their adoption in high-consequence engineering applications has been slow. A potential issue could be attributed to the inherent lack of reproducibility (Suh et al., 2023) and of interpretability/explainability of the neural network models. In the former case, reproducibility of the performance can be achieved by open-sourcing the training codes and systematic benchmarking (e.g. Wen et al. (2020)). In the latter case, the complexity and nonlinearity of the black-box deep neural network models make the input-output relationships difficult to understand, rendering the models uninterpretable.
To address this black-box issue, Vlassis and Sun (2022) adopt a component-based design where the ingredients of elastoplasticity models are represented by individual neural networks to distinguish the properties of the elasticity models from the plastic responses, a departure from the recurrent neural network approach or multi-step feed-forward neural network approaches in Ghaboussi et al. (1991); Mozaffar et al. (2019). Coincidentally, this approach has also been used in Fuhg et al. (2023) and reintroduced as modular machine learning elastoplasticity. Other related efforts to introduce more interpretable models include the
incorporation of knowledge graphs (Wang and Sun, 2019; He and Chen, 2022) and causal discovery for constitutive responses (Sun et al., 2022). These approaches may provide relational and structural knowledge about the learned material models and facilitate model validation against physical constraints, such as thermodynamic consistency (Vlassis and Sun, 2021). However, the lack of an analytical expression makes it difficult to provide a definite proof of basic properties such as convexity and material symmetry.
Another approach that has been attempted in the literature involves performing symbolic regression directly to learn a portion or all of the plasticity models (Versino et al., 2017; Wang et al., 2022; Bomarito et al., 2021). The advantage of this approach is that it may lead to a mathematical expression of the learned function that is much shorter than the neural network counterparts and hence suitable for analysis. However, since symbolic regression requires solving a combinatorial optimization problem to find the optimal equation expressed as a tree, the number of possible symbolic expressions grows rapidly with the dimensionality of the input and output; it is an NP-hard problem (cf. Mundhenk et al. (2021)).
On a related note, Linka and Kuhl (2023), Linka et al. (2023), and Tao et al. (2023) apply a different symbolic regression approach in which a set of prior hyperelasticity models are chosen as the basis functions for biological tissues, and an optimization problem is solved to determine the coefficients of the learned models. In principle, this interpolation technique can also be used for learning yield functions or hardening laws. However, since the modeling space is spanned by the chosen basis models, the selection of the basis models and how well they correspond to the mechanical behaviors are crucial factors that may determine the success of this approach.
### Interpretability vs. expressivity
Multilayer perceptrons (MLP) have been demonstrated to be good candidates for supervised regression tasks where a single fully-connected deep neural network is utilized as the model class (Szegedy et al., 2013). MLPs are also robust in learning nonlinear models that can distinguish data that is not linearly separable. Due to the high expressivity of deep neural networks and the inherent nonlinear interactions among the input features, consistently high accuracy has been reported for successfully trained MLP models. However, it can be challenging to interpret or extract the input-output relationships of neural network models with multivariate inputs because they may combine the input features in a highly nonlinear manner; this entanglement makes it difficult to isolate the effect of each feature on the output (Peng et al., 2019).
Figure 1: The trade-off between accuracy and interpretability in various machine learning models. Our Quadratic Neural Model (QNM, see Fig. 2) enhances the expressivity of the Neural Additive Model (NAM) at the expense of reducing interpretability. However, by obtaining an analytical expression of the feature space mapping, we can achieve a better trade-off between expressivity and interpretability (see Fig. 5).
This black-box issue is not only a technical barrier for computational mechanics, but is also critical for many other disciplines, such as drug delivery, where the interpretability of the solution is essential. Doran et al. (2017), for instance, define interpretable systems as: _"A system where a user cannot only see, but also study and understand how inputs are mathematically mapped to outputs."_ Gilpin et al. (2018) define intelligibility as a combination of explainability (being able to provide a rationale for the results, sometimes through post hoc analysis) and interpretability (the logic that delivers the learned results can be comprehended by humans). In many cases, explainability can be achieved by model-agnostic methods developed to explain the predictions of black-box models via feature importance and local approximation, as pointed out by Xu et al. (2022). Meanwhile, models that are inherently interpretable, such as decision-tree-based models, often introduce mechanisms (e.g., hierarchical decisions or rules) such that the rationale of the trained model can be understood.
Nevertheless, as pointed out by Agarwal et al. (2021), machine learning techniques that exhibit high interpretability often lack the level of expressivity (the ability to express an arbitrary function - a necessary but not sufficient condition for accuracy) needed to yield accurate predictions for complex tasks. Fig. 1 (modified from Agarwal et al. (2021)) illustrates the trade-off between interpretability and expressivity for a variety of common machine learning techniques, including deep neural nets, boosted trees, and random forests. As supported by the universal approximation theorem (Hornik et al., 1989), the deep neural network is often considered a machine learning tool with high expressivity that is nonetheless difficult to interpret. Meanwhile, linear regression is easy to interpret but often lacks the expressivity for more complex tasks.
Interestingly, a successful symbolic regression may achieve both a desirable level of expressivity and interpretability. However, the NP-hard nature of the symbolic regression problem for multi-dimensional data makes it difficult to predict the probability of a successful training. Petersen et al. (2019), for instance, benchmark six state-of-the-art symbolic regression software packages and found that none of them succeeded in recovering the following equation,
\[f(x,y)=x^{4}-x^{3}+\frac{1}{2}y^{2}-y \tag{1}\]
### Neural Additive Models: Trade-off for Interpretability and expressivity
Agarwal et al. (2021) propose the Neural Additive Model (NAM) in which a set of independent neural networks are co-trained to generate a set of nonlinear scalar features \(f_{i}(x_{i})\), one for each input \(x_{i}\) where \(i=1,2,...,D\) and \(D\) is the number of input dimensions. The model structure is the linear combination of these scalar features, i.e.,
\[\tilde{\phi}(\mathbf{x};\mathbf{\beta},\mathbf{w})=\sum_{i=1}^{D}w_{i}f_{i}(x_{i};\mathbf{\beta}_{i}), \tag{2}\]
where each feature function \(f_{i}\) is parameterized by a multilayer perceptron (MLP) with parameters \(\mathbf{\beta}_{i}\in\mathbb{R}^{M_{i}}\), and \(M_{i}\) is the total number of trainable parameters of the \(i\)-th MLP. These single-variable MLP functions are referred to as **shape functions**. The contribution of each shape function is controlled by the trainable parameter \(w_{i}\in\mathbb{R}\). The concatenations of the neural network parameters and the shape function weights are denoted by \(\mathbf{\beta}=\{\mathbf{\beta}_{i}\}_{i=1}^{D}\) and \(\mathbf{w}\in\mathbb{R}^{D}\), respectively.
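A minimal sketch of the additive structure in eq. (2), with plain Python callables standing in for the MLP shape functions (`nam_forward` is a hypothetical name):

```python
import numpy as np

def nam_forward(x, shape_funcs, w):
    """Neural additive model of eq. (2): phi(x) = sum_i w_i * f_i(x_i).
    shape_funcs holds D univariate callables standing in for the MLPs."""
    return float(sum(w[i] * shape_funcs[i](x[i]) for i in range(len(w))))
```

Because each \(f_{i}\) sees only \(x_{i}\), the contribution of every input can be read off one term at a time, which is the source of NAM's interpretability.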
Agarwal et al. (2021) argue that this approach is interpretable in the sense that the importance of each feature can be ranked by examining the feature coefficients \(w_{i}\). In other words, the NAM approach maintains the interpretability of linear regression (in the feature space) with an enhanced expressivity afforded by the neural networks. However, Agarwal et al. (2021) also point out that NAM exhibits less expressivity than the fully connected neural network, especially when expressing the ground-truth function requires bases independent of the feature basis functions.
### Proposed strategy for interpretable model recovery
Given that constitutive laws are often used in high-consequence engineering applications, making machine-learning-generated constitutive laws interpretable is necessary, though not sufficient, for making those models trustworthy and hence capable of impacting engineering applications.
The purpose of this article is to propose a two-step machine learning approach that strikes a balance between expressivity and interpretability (see Fig. 1). We first propose a modification of the neural additive model in which we continue to use neural networks to parametrize a set of scalar-valued shape functions. However, instead of directly using these shape functions as the bases for the yield function, we introduce additional quadratic terms of the feature shape functions to further enhance the expressivity and hence the accuracy of the machine learning model (see Section 2.2). To further enhance the interpretability, we design a new symbolic regression step introduced after the neural network training. Here we leverage the special architecture of the quadratic neural model, in which the neural networks that generate the feature space are all univariate scalar-valued functions. This setting enables us to break down the NP-hard high-dimensional symbolic regression problem into a series of separate one-dimensional symbolic regressions (see Section 2.3), which have consistently been successful in discovering yield surfaces of 3 to 5 dimensions in our numerical experiments (see Section 3).
## 2 Method
We begin by introducing the general problem statement of finding a yield surface as a supervised regression problem in Section 2.1. We then introduce the quadratic neural model (QNM) in Section 2.2 and explain our choice of neural network architecture in Section 2.2.1. Additionally, in Section 2.2.2, we specify a sparsity-promoting constraint used during the training process to enhance the simplicity and interpretability of the model. For completeness, we provide details of the genetic programming algorithm that conducts the symbolic regression for the feature bases in Section 2.3.
### Problem statement
Our learning task is to find a mapping function \(\phi(\mathbf{x}):\mathbb{R}^{D}\rightarrow\mathbb{R}\) from the \(D\)-dimensional Euclidean space onto the real numbers, where \(\mathbf{x}\) denotes the state variables of the yield function, including the Cauchy stress and internal variables, \(D\) is the dimension of the inputs, and \(\phi(\mathbf{x})\) is the yield function. Given \(N\) data points stored as a point cloud \(\mathcal{C}=\{\mathbf{x}^{l},\phi^{l}\}_{l=1}^{N}\), we approximate such a function by the parametric function \(\bar{\phi}(\mathbf{x};\mathbf{\beta},\mathbf{w})\), whose unknown parameters \(\mathbf{\beta}\) and \(\mathbf{w}\) are found, in the least-squares sense, via
\[\mathbf{\beta},\mathbf{w}=\underset{\mathbf{\beta},\mathbf{w}}{\arg\min}\frac{1}{N}\sum_{l=1} ^{N}(\bar{\phi}(\mathbf{x}^{l};\mathbf{\beta},\mathbf{w})-\phi^{l})^{2}+\mathcal{L}_{ \text{sparsity}}(\mathbf{w}), \tag{3}\]
where \(\mathcal{L}_{\text{sparsity}}\) is a regularization term for sparsity control (see Section 2.2.2). In the conventional setting, one may use a multivariate fully connected neural network as the model class. A major departure of our approach is that we will instead postulate the existence of a feature space spanned by basis functions \(f_{i}\) learned by univariate neural networks (see Section 2.2). This configuration enables us to obtain sufficiently expressive models \(\bar{\phi}(\mathbf{x}^{l})\) while interpretability is guaranteed via symbolic regressions that replace the trained neural networks that parametrize basis functions of the feature space.
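As a concrete reading of the objective in eq. (3), a minimal sketch is given below; the L1 penalty on \(\mathbf{w}\) is an assumption standing in for \(\mathcal{L}_{\text{sparsity}}\) (whose exact form is specified in Section 2.2.2), and the function and argument names are hypothetical:

```python
import numpy as np

def fit_objective(model, beta, w, X, phi_target, lam=1e-3):
    """Mean-squared misfit of eq. (3) plus a sparsity term; an L1
    penalty on the weights w is assumed here as a stand-in for the
    paper's L_sparsity."""
    pred = np.array([model(x, beta, w) for x in X])
    mse = float(np.mean((pred - phi_target) ** 2))
    return mse + lam * float(np.sum(np.abs(w)))
```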
### Quadratic Neural Model for enhanced expressivity
As mentioned in Section 1.2, the original neural additive model enhances the interpretability of the learned model through the generation of feature bases. The resultant model then becomes linear in the feature space. This enhanced interpretability is, nevertheless, achieved at the expense of expressivity. To circumvent this limitation, we generalize the formulation of the neural additive model by introducing additional quadratic terms (see Fig. 2). As such, we refer to this revised approach as the Quadratic Neural Model (QNM).
The resultant learned yield function expressed with the additional enhancement bases reads:
\[\tilde{\phi}(\mathbf{x};\mathbf{\beta},\mathbf{w})=\sum_{i=1}^{D}w_{i}f_{i}(x_{i};\mathbf{\beta}_{i})+\sum_{i=1}^{D}\sum_{j=i}^{D}w_{ij}f_{i}(x_{i};\mathbf{\beta}_{i})f_{j}(x_{j};\mathbf{\beta}_{j}), \tag{4}\]
where \(w_{ij}\) are additional trainable parameters that control the contributions of the second-order interactions between the \(i\)-th and \(j\)-th shape functions. Since the double sum runs over all pairs with \(j\geqslant i\), \(\mathbf{w}\in\mathbb{R}^{D+D(D+1)/2}\) denotes the concatenation of all parameters \(\mathbf{w}=\{w_{i},w_{ij}\}\). Notice that the same shape functions as those used in the first-order terms are utilized for the second-order terms; in total, there are \(D\) different shape functions to be learned simultaneously.
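The forward pass of eq. (4) can be sketched as follows (hypothetical helper; plain callables stand in for the trained shape functions, and the \(w_{ij}\) are stored in the upper triangle of a matrix for convenience):

```python
import numpy as np

def qnm_forward(x, shape_funcs, w_lin, W_quad):
    """Quadratic neural model of eq. (4): the NAM linear terms plus all
    second-order products f_i(x_i) * f_j(x_j) with j >= i.

    w_lin  : (D,) weights w_i
    W_quad : (D, D) with w_ij in the upper triangle (diagonal included);
             the lower triangle is ignored.
    """
    f = np.array([shape_funcs[i](x[i]) for i in range(len(x))])
    lin = w_lin @ f
    quad = np.sum(np.triu(W_quad) * np.outer(f, f))
    return float(lin + quad)
```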
#### 2.2.1 Neural network architecture for shape functions
The functional forms in Eqs. 2 and 4 are limited in terms of the level of interaction among different input features. Therefore, the choice of parametric function for each shape function becomes crucial to avoid any failure due to a lack of expressivity or flexibility. Previous studies have shown that neural networks with low-dimensional input spaces are "lazy learners" (Tancik et al., 2020; Rahaman et al., 2019), meaning their convergence to capture high-frequency content becomes slower or even impossible.
As our learning task involves a univariate neural network, we must ensure that each shape function can handle such scenarios. In the NAM work by Agarwal et al. (2021), the authors propose a new activation function to address this issue. However, in our work, we build upon recent developments in enriching classical neural networks with Fourier layers, as described in (Rahimi and Recht, 2007; Tancik et al., 2020). This approach can better capture high-frequency content and improve the overall performance of our model.
Each shape function \(f_{i}(x_{i})\), shown in Fig. 3, is parameterized by an MLP enriched with a Fourier layer as follows:
\[f_{i}(x_{i};\mathbf{\beta}_{i})=h(\mathbf{W}_{L}^{i}\cdots g(\mathbf{W}_{2}^{i}g(\mathbf{W}_{1} ^{i}\mathbf{\gamma}(x_{i})+\mathbf{b}_{1}^{i})+\mathbf{b}_{2}^{i})\cdots+\mathbf{b}_{L}^{i}), \tag{5}\]
Figure 2: The quadratic neural model (QNM) for enhanced expressivity. Instead of using a fully connected neural network with a multi-dimensional input layer, the proposed univariate neural networks are trained to create the feature space, forming the basis to express the yield function analytically. To enhance expressivity, additional bases formed by the product of features are introduced.
where \(\mathbf{W}_{k}^{i}\) and \(\mathbf{b}_{k}^{i}\) are the weight and bias of the \(k\)-th hidden layer and \(g(\cdot)\) and \(h(\cdot)\) are the hidden and output activation functions, respectively. In this MLP, the input layer \(\mathbf{\gamma}(x_{i})\) is the Fourier mapping of the input feature \(x_{i}\) with random frequency vector \(\mathbf{v}\in\mathbb{R}^{M}\) as follows:
\[\mathbf{\gamma}(x_{i})=\sin(\mathbf{v}x_{i})\oplus\cos(\mathbf{v}x_{i})=[\sin(\mathbf{v}x_{i})^{ T},\cos(\mathbf{v}x_{i})^{T}]^{T}\in\mathbb{R}^{2M}, \tag{6}\]
here \(\oplus\) indicates the vector concatenation operation. Components of the random vector \(\mathbf{v}\) are sampled from a zero-mean normal distribution with standard deviation \(\sigma_{v}\), i.e., \(v_{m}\sim\mathcal{N}(0,\sigma_{v})\). In this work, we keep the random Fourier features fixed during training, although they can be treated as trainable parameters; previous research has demonstrated that optimizing them may not improve the approximation power while increasing the computational cost. All trainable parameters associated with the \(i\)-th shape function are denoted by \(\mathbf{\beta}_{i}=\{\mathbf{W}_{l}^{i},\mathbf{b}_{l}^{i}\}_{l=1}^{L}\).
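A minimal NumPy sketch of Eqs. 5 and 6 follows; the depth, layer widths, \(\sigma_{v}\) value, and tanh hidden activation are illustrative assumptions of ours, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, v):
    """gamma(x) = [sin(v x), cos(v x)] in R^{2M}, Eq. 6 (v stays fixed during training)."""
    return np.concatenate([np.sin(v * x), np.cos(v * x)])

def shape_fn(x, v, weights, biases):
    """One shape function f_i, Eq. 5: an MLP on Fourier features with tanh output in [-1, 1]."""
    h = fourier_features(x, v)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(W @ h + b)                        # hidden activation g
    return np.tanh(weights[-1] @ h + biases[-1])      # output activation keeps all f_i on the same scale

M = 8
v = rng.normal(0.0, 2.0, size=M)                      # v_m ~ N(0, sigma_v), sigma_v = 2 (illustrative)
W1 = rng.normal(size=(16, 2 * M)); b1 = np.zeros(16)  # one hidden layer of width 16 (illustrative)
W2 = rng.normal(size=(1, 16));     b2 = np.zeros(1)
y = shape_fn(0.3, v, [W1, W2], [b1, b2])
```

The tanh output layer mirrors the scale restriction discussed below Eq. 6, so that the learned weights \(w_i\) remain comparable across shape functions.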
To demonstrate the effectiveness of our architecture choice, we conducted an educational example following Agarwal et al. (2021). We generated a training dataset with high-frequency content using purely random noise and then empirically examined whether our architecture choice could overfit the training data.
The results, shown in Fig. 4, confirm that the neural network enriched with the spectral layer is capable of overfitting the data. We note that overfitting is generally not desirable, but in this particular example, our goal was to measure the expressivity and flexibility of the architecture by its ability to memorize the entire training dataset.
In this framework, the contribution of each shape function \(f_{i}\) in Eq. 4 is balanced by the associated weight \(w_{i}\). For example, \(w_{i}\gg w_{j}\) indicates that the \(i\)-th shape function is much more important than the \(j\)-th shape function in the final prediction. The necessary condition for this argument to be meaningful is that the shape functions share the same scale. To achieve this, we apply the tanh activation function in the last layer of the neural network architecture, which restricts each output to the range between \(-1\) and \(1\).
#### 2.2.2 Regularization of polynomial function in feature space
In terms of interpretability, one may prefer a purely linear combination of shape functions over an approximation involving higher-order terms. Moreover, one may prefer to activate the fewest possible shape functions, which is equivalent to a feature extraction task, i.e., removing spurious input features that do not contribute significantly to the approximation.
Figure 3: Neural network architecture for each shape function. A Fourier layer is utilized to improve the training of the classical MLP.
To incorporate such biases towards interpretability and simplicity, we include a regularization term in the optimization statement of Eq. 3 to promote sparsity of the low-order and high-order terms. The so-called \(L_0\)-norm, which simply counts the total number of nonzero elements of a vector, is known as one of the best sparsity measures (Natarajan, 1995; Gale et al., 2019). However, \(L_0\)-norm minimization is an NP-hard problem and renders the proposed loss function in Eq. 3 non-differentiable (Natarajan, 1995), which is not preferred. As a result, we use the \(L_1\)-norm as a differentiable surrogate of the \(L_0\)-norm as follows (Tibshirani, 1996; Brunton et al., 2016),
\[\mathcal{L}_{\text{sparsity}}(\mathbf{w})=\alpha_{\text{lo}}\sum_{i}|w_{i}|+\alpha_{ \text{ho}}\sum_{i,j}|w_{ij}|, \tag{7}\]
where \(\alpha_{\text{lo}}\) and \(\alpha_{\text{ho}}\) are hyperparameters that control how strongly the low-order and higher-order terms are sparsified; a higher value penalizes more.
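Eq. 7 is a one-liner in code; the sketch below uses illustrative hyperparameter values of our own choosing.

```python
import numpy as np

def sparsity_loss(w_lin, w_quad, alpha_lo=1e-3, alpha_ho=1e-2):
    """L1 surrogate for the L0 sparsity penalty, Eq. 7 (alpha values are illustrative)."""
    return alpha_lo * np.abs(w_lin).sum() + alpha_ho * np.abs(w_quad).sum()
```

For instance, with \(w_i=(1,-2)\) and a single \(w_{ij}=0.5\), the penalty is \(10^{-3}\cdot 3 + 10^{-2}\cdot 0.5 = 0.008\).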
### Symbolic Regression of feature space for enhanced interpretability
Symbolic regression (SR) seeks to discover a mathematical expression that best fits a given dataset without specifying the form of the expression in advance. Not specifying a form affords more flexibility in fitting the data. However, symbolic regression, particularly for multi-dimensional vector-valued or tensor-valued functions, is significantly more difficult due to the combinatorial nature of the optimization problem required to search for the mathematical expression (Icke and Bongard, 2013; de Franca, 2018). The existence of the polynomial feature space spanned by the basis \(\{1,f_{i},f_{i}f_{j}\}\), however, offers us an opportunity to break down the multi-dimensional symbolic regression problem into multiple one-dimensional problems, one for each shape function \(f_{i}\). The final learned function is then expressed as a polynomial in the feature space \(\text{span}(\{1,f_{i},f_{i}f_{j}\})\) (see Fig. 5). This setting may greatly reduce the difficulty of the symbolic regression problem at the expense of injecting the additional assumption that the learned function can be expressed in the aforementioned way.
The space of possible expressions is commonly defined by specifying the set of mathematical operators, functions, variables, and constants that can be used to construct expressions, which are efficiently represented as binary trees (see Fig. 6). Genetic programming is one of the most popular stochastic optimization methods for searching the combinatorial space of all possible mathematical expressions (Koza, 1994; Schmidt and Lipson, 2009; Wang et al., 2019). Recently, methods based on deep reinforcement learning have also been developed as alternative ways of conducting an efficient discrete search in the space of tree data structures (Petersen et al., 2019; Landajuela et al., 2021).
Figure 4: Network expressivity: (a) vanilla single-layer MLP with 80 hidden neurons and (b) a single spectral layer with 40 hidden neurons. Both models are trained for 10,000 epochs with the ADAM optimizer. In this demonstration example, the data is intentionally over-fitted to test the expressive power of the spectral layer for complicated mathematical expressions.
In genetic programming, the space of possible expressions is represented as a _population_ of candidate solutions, which are randomly generated at the start of the algorithm. Each _individual_ candidate solution is represented as an expression binary tree (shown in Figure 6), where the leaves of the tree represent the input variables or constants, and the internal nodes represent the mathematical operations or functions. The genetic programming algorithm then evaluates the _fitness_ of each candidate solution by comparing its output to the target output values. Fitness measures how well the candidate solution approximates the data; mean square error is a commonly used fitness function.
The genetic programming algorithm then iteratively evolves the population of candidate solutions through _selection_, _crossover_, and _mutation_ operators, in a process similar to natural selection. Selection involves choosing the fittest individuals from the current population based on their fitness scores. Crossover, as shown in Figure 7(a), combines the genetic information of two individuals to create offspring with characteristics from both parents. Mutation involves randomly changing some of the genetic material of an individual to introduce new variations in the population; see Figure 7(b).
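The crossover and mutation operators above can be illustrated on toy expression trees; the nested-tuple encoding and helper names below are our own illustrative choices, not those of any particular GP library.

```python
import random

# Toy expression trees as nested tuples: ("op", child, child); leaves are "x" or floats.
def subtrees(t, path=()):
    """Yield (path, subtree) pairs for every node of the tree."""
    yield path, t
    if isinstance(t, tuple):
        for k, child in enumerate(t[1:], start=1):
            yield from subtrees(child, path + (k,))

def graft(t, path, new):
    """Return a copy of t with the subtree at `path` replaced by `new`."""
    if not path:
        return new
    k = path[0]
    return t[:k] + (graft(t[k], path[1:], new),) + t[k + 1:]

def crossover(a, b, rng):
    """Graft a random subtree of parent b onto a random site of parent a (one offspring)."""
    pa, _ = rng.choice(list(subtrees(a)))
    _, sb = rng.choice(list(subtrees(b)))
    return graft(a, pa, sb)

def mutate(t, rng):
    """Replace a random subtree with a fresh random leaf."""
    p, _ = rng.choice(list(subtrees(t)))
    return graft(t, p, rng.choice(["x", round(rng.random(), 2)]))

rng = random.Random(0)
parent1 = ("+", ("*", "x", 2.0), 1.0)   # 2x + 1
parent2 = ("-", "x", 0.5)               # x - 0.5
child = crossover(parent1, parent2, rng)
mutant = mutate(parent1, rng)
```

Selection then keeps the fittest trees, so fitter building blocks propagate across generations.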
Figure 5: The divide-and-conquer symbolic regression for enhanced interpretability. A series of 1D symbolic regressions are trained to replace the 1D neural network basis function to form an analytical yield function.
Figure 6: Equation representation as an expression binary tree. This expression tree with depth 4 and size (total number of nodes) 12 represents the expression \(\sin(\frac{1}{y})-0.2x^{2}+0.1\). Expression trees are the main building blocks of modern symbolic regression algorithms.
Through these operations, the genetic programming algorithm creates a new generation of candidate solutions with higher fitness than the previous generation. The process is repeated until a satisfactory mathematical expression is found that fits the data well. Once found, the expression can be used to predict output values for new inputs that were not part of the training dataset. At inference time, symbolic equations are more lightweight than other standard machine learning models, such as neural networks that must store a large number of parameters, and are hence more transportable. Equations discovered by SR have also been shown to generalize well outside the training data support (Kim et al., 2020), provided training is successful.
Unlike multi-dimensional symbolic regression, our proposed framework extracts a symbolic equation for each shape function, i.e., from single-variable to single-variable data. This structure lends itself to parallel computing, as the SR algorithm can be executed simultaneously for each shape function. In this work, we conduct symbolic regression with the PySR open-source package (Cranmer et al., 2020; Cranmer, 2023), which is built on evolutionary genetic programming. In this approach, a list of unary and binary operators can be specified to restrict the search space of tree structures, and the number of adjustable constants for these operators can be specified as well; a gradient-based optimizer updates these constants after each iteration. There is a tradeoff between model complexity and expressivity during the SR optimization: one needs to balance the complexity versus expressivity of the found analytical expression to increase interpretability and reduce overfitting. Users can then choose the best equation based on the desired accuracy and simplicity.
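A sketch of this embarrassingly parallel fitting loop follows; here a quadratic `np.polyfit` stands in for the actual per-feature PySR search, and `fit_one`, the thread pool, and the toy data are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fit_one(args):
    """Stand-in for a 1D symbolic regression (e.g. PySR) on one shape function's data."""
    x, y = args
    return np.polyfit(x, y, deg=2)     # placeholder: polynomial fit instead of a tree search

x = np.linspace(0.0, 1.0, 50)
shape_data = [(x, 2.0 * x + 1.0),      # one (x, f_i(x)) dataset per shape function
              (x, x ** 2)]

with ThreadPoolExecutor() as pool:     # each 1D fit is independent, so they run concurrently
    coeffs = list(pool.map(fit_one, shape_data))
```

Since every 1D problem is independent, the same pattern applies unchanged when `fit_one` launches a full symbolic regression per shape function.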
Definition 1 (Complexity score in symbolic regression): The complexity of a symbolic equation is often assessed qualitatively rather than quantitatively, as there is no universally agreed-upon definition. In this context, we adopt the complexity measure defined in Cranmer (2023), which utilizes the number of nodes in an expression tree as the complexity score. While it is possible to assign different weights to each node type, such as considering the exponential operator \(\exp(\cdot)\) as more complex than the addition operator \(+\), we do not incorporate such weightings in our analysis.
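As a small sketch of this node-count measure (the nested-tuple tree encoding is an assumption of ours, not PySR's internal representation), the expression of Fig. 6 can be counted as follows:

```python
def complexity(t):
    """Complexity score = number of nodes in the expression tree (cf. Definition 1)."""
    if not isinstance(t, tuple):
        return 1                                    # leaf: a variable or a constant
    return 1 + sum(complexity(c) for c in t[1:])    # operator node + its children

# One possible tree for sin(1/y) - 0.2*x^2 + 0.1 (cf. Fig. 6): 12 nodes in total.
expr = ("+", ("-", ("sin", ("/", 1.0, "y")), ("*", 0.2, ("^", "x", 2))), 0.1)
```

Weighted variants (e.g., pricing \(\exp(\cdot)\) above \(+\)) would replace the constant 1 per node with an operator-dependent cost.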
Remark 1 (Related literature on plasticity): Versino et al. (2017) introduced the application of SR for learning flow stress models from data. They incorporated domain knowledge (e.g., via augmenting the data, constraining the functional form, etc.) to enhance the generalization accuracy of the SR. Bomarito et al. (2021) showed the application of SR in discovering plastic yield surface equations. Our divide-and-conquer approach offers mechanical analysts an additional level of flexibility, allowing them to visually inspect the required equation type for SR before running a computationally expensive symbolic regression. Additionally, our approach may exhibit superior performance in higher dimensions compared to direct SR (Cranmer et al., 2020). Furthermore, physics constraints such as convexity can be addressed during the differentiable QNM training in the first step, potentially reducing computational costs compared to approaches that incorporate physics constraints during the discrete search of SR.
Figure 7: (a) crossover and (b) mutation operations in an evolutionary-based symbolic regression algorithm.
Remark 2 (Related literature on hybrid SR): Other research efforts have also utilized scalable models (e.g., neural networks and decision trees) in combination with SR algorithms to handle the curse of dimensionality while maintaining interpretability (Icke and Bongard, 2013; Cranmer et al., 2020; Wadekar et al., 2020). However, like NAM, their functional form assumption does not account for potential interactions among features, which may be necessary in certain applications. The proposed QNM approach considers such interactions in the constructed quadratic space of learned shape functions.
### Implementation for third-party validation in open-source finite element models
For completeness and to ensure third-party reproducibility, we outline the steps taken to implement the generated yield surface model in a user-defined material subroutine (UMAT) for finite element simulations. Unlike the previous approach in Suh et al. (2023), where yield functions are parametrized via a neural network, the implementation of the analytical model is considerably simpler.
#### 2.4.1 Level-set plasticity modeling framework
By limiting our attention to the case where material behavior is perfectly plastic, the yield function \(f_{y}\), which can be expressed in terms of the Cauchy stress \(\mathbf{\sigma}\in\mathsf{S}\), defines an elastic domain \(\mathbbm{E}\) as the region where \(f_{y}(\mathbf{\sigma})<0\) and a plastic domain \(\partial\mathbbm{E}\) where \(f_{y}(\mathbf{\sigma})=0\), such that all the admissible stresses belong to its closure \(\overline{\mathbbm{E}}\):
\[\overline{\mathbbm{E}}=\mathbbm{E}\cup\partial\mathbbm{E}=\{\mathbf{\sigma}\in \mathsf{S}|f_{y}(\mathbf{\sigma})\leq 0\}. \tag{8}\]
Based on the theory of elastoplasticity, this means that a dataset of stress points at the plastic regime measured either from the experiments or from the sub-scale simulations always reside on the yield surface \(f_{y}(\mathbf{\sigma})=0\). Since the data lacks the information inside and outside the yield surface, training a data-driven model directly from the collected stress states is not an easy task since the learned function may not be capable of returning positive values if the given stress is inadmissible and negative values at the elastic regime. Hence, this study adopts the concept of level-set modeling framework proposed by Vlassis and Sun (2021, 2022) that regularizes the yield function \(f_{y}(\mathbf{\sigma})\) into an implicit signed distance function \(\phi(\mathbf{\sigma})\) that is well-defined anywhere in the space of second-order symmetric tensors \(\mathsf{S}\):
\[\phi(\hat{\mathbf{x}})=\begin{cases}d(\hat{\mathbf{x}})&\text{if }f_{y}(\hat{\mathbf{x}})>0, \\ 0&\text{if }f_{y}(\hat{\mathbf{x}})=0,\\ -d(\hat{\mathbf{x}})&\text{if }f_{y}(\hat{\mathbf{x}})<0,\end{cases} \tag{9}\]
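To make Eq. 9 concrete, the sketch below computes the signed distance for a toy circular yield surface sampled discretely; the circle, its radius, and the sampling density are our own illustrative choices.

```python
import numpy as np

def yield_fn(x, R=1.0):
    """Toy yield function: a circle of radius R (f_y < 0 inside, = 0 on the surface)."""
    return np.linalg.norm(x) - R

def signed_distance(x, surface_pts):
    """phi per Eq. 9: +/- minimum Euclidean distance to sampled yield-surface points."""
    d = np.min(np.linalg.norm(surface_pts - x, axis=1))
    if np.isclose(yield_fn(x), 0.0):
        return 0.0
    return d if yield_fn(x) > 0 else -d

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
surface = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # sampled points with f_y = 0
```

A point at radius 2 gets \(\phi=+1\), a point at radius 0.5 gets \(\phi=-0.5\), matching the sign convention of Eq. 9.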
where \(\hat{\mathbf{x}}\) is an arbitrary stress point represented in a proper parametric space, while \(d(\hat{\mathbf{x}})\) represents the minimum Euclidean distance between \(\hat{\mathbf{x}}\) and the yield surface in the principal stress space. It should be noted that the choice of parametric space for representing the stress state \(\hat{\mathbf{x}}\) greatly affects the performance of the data-driven model, as pointed out in Kuhn et al. (2013). Although its effect will be further discussed in Section 3.1, this section focuses on a cylindrical coordinate system for the \(\pi\)-plane orthogonal to the hydrostatic axis, i.e., \(\hat{\mathbf{x}}=\hat{\mathbf{x}}(p,\rho,\theta)\), rather than directly adopting the Cartesian coordinates (\(\sigma_{1},\sigma_{2},\sigma_{3}\)).
Since the yield surface cross-section perpendicular to the hydrostatic axis forms a closed loop, one possible way to construct the signed distance field \(\phi\) is to solve the Eikonal equation, i.e.,
\[\|\nabla_{\hat{\mathbf{x}}}\phi\|=1, \tag{10}\]
while imposing a homogeneous Dirichlet boundary condition at the stresses that belong to \(\partial\mathbbm{E}\), since they exhibit zero minimum Euclidean distance from the yield surface.
Based on the obtained signed distance field, we augment the original set of stress points that satisfy \(f_{y}(\mathbf{\sigma})=0\) with \(N\) sets of points that are not necessarily located on the yield surface. These auxiliary data points are added because the yield function is an implicit function defined everywhere in the parametric space, and they help impose the intended inductive bias with more support. The unit stress gradient also helps the learned model produce a unit plastic flow, enabling the plastic multiplier to reflect the magnitude of the plastic strain for a given plastic flow direction.
#### 2.4.2 Implicit integration of interpretable-ML-based constitutive relation
This section presents an implicit return mapping algorithm for the interpretable-ML-based constitutive equation that computes the stress tensor \(\mathbf{\sigma}_{\text{n+1}}\) at loading step \(\text{n}+1\) for a given strain increment \(\Delta\mathbf{e}\) and the previous stress state \(\mathbf{\sigma}_{\text{n}}\). Similar to previous studies, e.g., (Wilkins, 1963; Hughes, 1984; Borja, 2013), the stress integration consists of an elastic predictor that computes the trial stress \(\mathbf{\sigma}_{\text{n+1}}^{\text{tr}}\), followed by a plastic correction scheme; the only difference is that we replace the mathematical expression of the yield criterion with the trained model \(\tilde{\phi}\) (i.e., either the NAM, QNM, or symbolic model). In this case, by restricting the formulation to the infinitesimal range and assuming that the elasticity tensor \(\mathbf{C}^{e}\) is given, the rate form of the constitutive equation based on an associative flow rule can be expressed as,

\[\dot{\mathbf{\sigma}}=\mathbf{C}^{e}:\left(\dot{\mathbf{\varepsilon}}-\dot{\lambda}\,\frac{\partial\tilde{\phi}}{\partial\mathbf{\sigma}}\right),\]

where \(\dot{\mathbf{\varepsilon}}\) is the total strain rate and \(\dot{\lambda}\geq 0\) is the plastic multiplier. In the plastic corrector, the discretized equations are solved in the principal stress space via Newton-Raphson iterations until the local
residual vector reaches an acceptable value near zero. Specifically, we construct the local residual vector \(\tilde{\mathbf{r}}(\tilde{\mathbf{x}})\) and the unknown vector \(\tilde{\mathbf{x}}\) as follows:
\[\tilde{\mathbf{r}}(\tilde{\mathbf{x}})=\begin{bmatrix}\varepsilon_{1}^{e}- \varepsilon_{1}^{e,\text{tr}}+\Delta\lambda\frac{\partial\tilde{\phi}}{ \partial\sigma_{1}}\\ \varepsilon_{2}^{e}-\varepsilon_{2}^{e,\text{tr}}+\Delta\lambda\frac{ \partial\tilde{\phi}}{\partial\sigma_{2}}\\ \varepsilon_{3}^{e}-\varepsilon_{3}^{e,\text{tr}}+\Delta\lambda\frac{ \partial\tilde{\phi}}{\partial\sigma_{3}}\\ \tilde{\phi}(\hat{\mathbf{x}})\end{bmatrix}\ ;\ \tilde{\mathbf{x}}=\begin{bmatrix} \varepsilon_{1}^{e}\\ \varepsilon_{2}^{e}\\ \varepsilon_{3}^{e}\\ \Delta\lambda\end{bmatrix}, \tag{17}\]
such that the admissible Cauchy stress tensor at loading step \(\text{n}+1\) can be recovered once we obtain the converged set of solutions \(\tilde{\mathbf{x}}\), e.g.,
\[\mathbf{\sigma}_{\text{n}+1}=\mathbf{C}^{e}:\left[\sum_{A=1}^{3} \varepsilon_{A}^{e}(\mathbf{n}_{A}\otimes\mathbf{n}_{A})\right], \tag{18}\]
where \(\mathbf{n}_{A}\) (\(A=\{1,2,3\}\)) indicates the principal direction.
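A minimal numerical sketch of the plastic corrector of Eqs. 17 and 18 for a perfectly plastic von Mises material follows; the Lamé constants, yield stress, and the finite-difference Newton solver are illustrative assumptions of ours, not the paper's implementation.

```python
import numpy as np

lam, mu, sig_y = 1.0, 1.0, 1.0         # Lamé constants and yield stress (illustrative units)

def stress(eps):
    """Principal stresses from principal elastic strains (isotropic elasticity)."""
    return lam * eps.sum() + 2.0 * mu * eps

def phi(sig):
    """Stand-in yield function (von Mises here; the paper would use the trained model)."""
    s = sig - sig.mean()
    return np.sqrt(1.5) * np.linalg.norm(s) - sig_y

def residual(x, eps_tr):
    """Local residual of Eq. 17: x = [eps_1, eps_2, eps_3, d_lambda]."""
    eps, dlam = x[:3], x[3]
    sig = stress(eps)
    dphi = np.array([(phi(sig + h * e) - phi(sig - h * e)) / (2 * h)
                     for e in np.eye(3)])            # numeric d(phi)/d(sigma_A)
    return np.concatenate([eps - eps_tr + dlam * dphi, [phi(sig)]])

h = 1e-6
eps_tr = np.array([0.6, 0.0, 0.0])     # trial elastic strain chosen so phi > 0 (plastic step)
x = np.concatenate([eps_tr, [0.0]])
for _ in range(25):                    # Newton-Raphson with a finite-difference Jacobian
    r = residual(x, eps_tr)
    J = np.array([(residual(x + h * e, eps_tr) - r) / h for e in np.eye(4)]).T
    x = x - np.linalg.solve(J, r)
sig_new = stress(x[:3])                # corrected principal stresses, cf. Eq. 18
```

At convergence the residual of Eq. 17 vanishes, so the returned stress satisfies \(\tilde{\phi}(\hat{\mathbf{x}})\approx 0\) and \(\Delta\lambda>0\).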
## 3 Results
We will discuss and validate different aspects of our introduced interpretable framework for the data-driven discovery of plastic yield surfaces. The first example in Section 3.1 focuses on a pressure-insensitive dataset and examines how the proposed method can discover the symbolic yield surface. We also examine the method's extrapolation capability compared to previous methods in the literature. In the second example, Section 3.2, we study the efficacy of the QNM-based symbolic model in dealing with higher-dimensional data of metal plasticity in a five-dimensional space. We also discuss the enforcement of simplicity through sparsity control. The last example in Section 3.3 illustrates how the proposed QNM-based model can discover the correct symbolic equation for pressure-sensitive data. We demonstrate that the found symbolic equations can be readily used in classical FEM codes without significant changes compared to neural network-based plasticity models.
Remark: In Appendix 4.2, we compare, in a more general supervised learning setup, different modeling assumptions for solving a regression task with sparse data, considering the problem dimensionality. The limitations of the proposed scheme are discussed in Appendix A.
Remark: In this work, symbolic regression tasks are carried out using the PySR package (Cranmer et al., 2020). Therefore, readers should note that the level of symbolic regression accuracy obtained using other packages or methods may differ.
### Pressure-insensitive elastoplasticity for benchmark performance
In this example, we use pressure-insensitive data in the three-dimensional stress space; due to pressure insensitivity, only two dimensions are required to describe the yield surface. We examine whether our modeling framework can detect this independence. We also investigate the impact of the spectral layer and the data parameterization on training performance. Furthermore, we demonstrate that correct assumptions and inductive biases can improve generalization by conducting stress point integration via the return mapping algorithm.
The benchmark function from which we generate a set of synthetic stress points resembles the von Mises yield criterion. While it manifests a cylindrical shape along the hydrostatic axis, the benchmark yield surface also depends on the Lode's angle \(\theta\), such that it exhibits a flower-shaped cross-section:
\[f=\sqrt{\frac{3}{2}}\rho\left[1+A_{p}\sin(k_{p}\theta)\right]- \sigma_{y}, \tag{19}\]
where the parameters \(k_{p}\) and \(A_{p}\) control the number and the size of the petals, respectively, while \(\sigma_{y}\) is the yield stress. From Eq. (19), we choose the parameters as \(k_{p}=3\), \(A_{p}=0.325\), and \(\sigma_{y}=250\) MPa, and then collect a set of stress points that satisfies \(f=0\). Specifically, we sample 20 data points along the mean pressure axis (from \(-1\) GPa to 1 GPa) and 120 points along the Lode's angle axis (from 0 to \(2\pi\)), such that a total of 2,400 different admissible stress states are considered as the original dataset. The original data points are then pre-processed via the signed distance function by setting \(N=11\), such that the number of stress points in our full dataset is 26,400. Here, compared to the previous studies (Vlassis and Sun, 2021, 2022), where the Lode's radii of the training dataset range from 0 to \(2\rho\), as illustrated in Figure 8, our augmented dataset only covers a narrow band region around the original yield surface (i.e., \([0.85\rho,1.15\rho]\)) in order to test the extrapolation capability of our symbolic regression model obtained from the trained NAM.
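This sampling procedure can be sketched directly: solving Eq. 19 for \(\rho\) at fixed \(\theta\) gives the admissible Lode radius (the array layout is our own choice).

```python
import numpy as np

k_p, A_p, sig_yld = 3, 0.325, 250.0    # parameters of Eq. 19 (yield stress in MPa)

def lode_radius(theta):
    """Solve f = 0 of Eq. 19 for rho at a given Lode's angle theta."""
    return np.sqrt(2.0 / 3.0) * sig_yld / (1.0 + A_p * np.sin(k_p * theta))

p = np.linspace(-1000.0, 1000.0, 20)                 # 20 mean pressures, -1 GPa to 1 GPa (MPa)
theta = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)  # 120 Lode's angles
P, T = np.meshgrid(p, theta, indexing="ij")
data = np.stack([P.ravel(), lode_radius(T).ravel(), T.ravel()], axis=1)  # (p, rho, theta) rows
```

The grid yields the 2,400 admissible stress states of the original dataset, each satisfying \(f=0\) exactly.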
Our experiments show that the NAM assumption is sufficient to recover the correct yield surface in this problem. Therefore, we focus on presenting the NAM results and omit the QNM results for brevity. In Fig. 9(a), we compare two NAM models trained with the same parameters, one with a Fourier layer and one without, using the full dataset. The results show that the network with the Fourier layer achieves higher accuracy with fewer iterations. We also conduct a training process on a random dataset, and the corresponding results are shown in Fig. 9(b). The random dataset is a subset of the entire dataset, consisting of 2,000 random points with zero level-set values and 3,000 random points with non-zero level-set values. In this case, we observe a significant performance difference between the two methods when using a random subset of the full dataset.
In Fig. 10, we study the effect of input data representation on the learning task using the same neural network architecture for both cylindrical and Cartesian coordinate systems, with the spectral layer utilized. The results suggest that the cylindrical coordinate system can outperform the Cartesian coordinate system, making a difficult training process much easier. Finding an appropriate data representation may become more critical in our proposed framework based on the NAM or QNM, as we have stronger assumptions regarding feature separability compared to classical surrogate modeling methods; our approach is also less flexible than those methods. This is consistent with classical approaches in mechanics, where researchers have introduced different coordinate systems to transform a complex problem into an easier one in the new coordinate system, for example, by taking into account the underlying symmetries in the new coordinate system.
We focus solely on the model trained with the full dataset in the cylindrical coordinate system, utilizing the Fourier layer.
Figure 8: Training dataset (colored symbols) augmented from the original dataset that satisfies \(f=0\) (black curve).
Figure 11 displays the shape functions learned by NAM after training, together with their associated symbolic equations extracted by the symbolic regression algorithm. An advantageous feature of the NAM or QNM modeling idea is its ability to find appropriate univariate, separable representations of complex, multivariate data. This allows each shape function to be inspected visually, and a reasonable symbolic equation can even be derived for each univariate dataset by hand. In this example, even without a symbolic regression algorithm, it is clear that constant, linear, and sinusoidal functions describe the NAM shape functions reasonably well. This is one of the primary advantages of a divide-and-conquer approach: breaking complexities down into simpler tasks that humans can handle more efficiently.
The weights associated with each shape function are as follows: \(w_{p}=0.43\), \(w_{\rho}=5.27\), and \(w_{\theta}=3.82\), where these weights are denoted as \(w_{i}\) in Eq. 2. Notably, the weight corresponding to the pressure coordinate is about one order of magnitude smaller than the weights of the other shape functions. Furthermore, as seen in Fig. 11(a), the pressure shape function behaves almost constantly around 1. These observations confirm that the NAM is capable of discarding the effect of the pressure coordinate on the final prediction, which is expected since the data is pressure-insensitive.
Figure 10: Prediction loss vs. training epochs when stress data is represented in cylindrical and Cartesian coordinate systems.
Figure 9: Comparing training performance between neural networks with and without the Fourier layer: (a) using all the data; (b) using fewer data.
We apply the symbolic regression algorithm to determine the remaining shape functions, allowing for flexibility in the equation forms. The algorithm employs binary and unary operations, including plus, multiplication, division, cosine, exponential, sine, and logarithm. We selected equations with varying complexities and displayed them in Figs. 11(b, c), along with their explicit forms listed in Tables 1 and 2.
The second shape function \(f_{2}(\bar{\rho})\) exhibits almost the same accuracy with the least complex equation, which is a linear function. It is noteworthy that the other options in Table 1 with higher complexity scores include a linear term and a sinusoidal function. However, the amplitude of the sinusoidal term is two orders of magnitude smaller than that of the linear term, allowing it to be ignored.
Analyzing the third shape function \(f_{3}(\bar{\theta})\) in Table 2 poses more difficulty, as higher complexity levels correspond to significant changes in accuracy. In the context of plasticity applications, accuracy may be valued more than simplicity, since the return mapping algorithm requires both function values and their gradients to be accurate and stable; failure to meet these requirements can lead to failure of the Newton-Raphson algorithm in an implicit method. On the other hand, as per Occam's Razor, simpler models may generalize better (Thorburn, 1918). In this case, however, there are no extrapolation concerns for this shape function, since it is only defined for the angle \(\theta\), which falls within the training data range of zero to \(2\pi\). Therefore, we can focus solely on finding a sufficiently accurate function in the interpolation regime. It is worth noting that we already accounted for Occam's Razor at the outset by restricting the approximation function to be univariate and separable.
The divide-and-conquer scheme introduced here has an additional advantage in that, in certain cases, one can rely on intuition to identify the appropriate equation without using the symbolic regression algorithm. This advantage stems from the one-dimensional nature of curve-fitting tasks. For instance, one may hypothesize that the third shape function is a sinusoidal function of the form \(f_{3}(\bar{\theta})=a\sin(b\pi\bar{\theta}+c)+d\) and determine the unknown parameters \(a\), \(b\), \(c\), and \(d\) through a nonlinear least squares method. In this case, we obtain \(f_{3}(\bar{\theta})=0.89\sin(5.95\pi\bar{\theta}-0.02)+0.16\), which is labeled "manual curve fitting" in Fig. 11(c). In terms of the trade-off between complexity and accuracy, one may prefer this equation over those obtained through symbolic regression algorithms, which are, in fact, closer to the ground truth function, Eq. (19). This simple exercise demonstrates that human intuition may outperform symbolic regression algorithms in some cases.
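The manual curve-fitting step described above can be sketched as follows. Once the frequency \(b\) is fixed, the model \(a\sin(b\pi\bar{\theta}+c)+d=A\sin(b\pi\bar{\theta})+B\cos(b\pi\bar{\theta})+d\) is linear in the remaining parameters, so a grid search over \(b\) combined with linear least squares suffices; the data below are synthetic stand-ins, not the paper's dataset.

```python
import numpy as np

def fit_sinusoid(theta, y, b_grid):
    """Fit y ~ a*sin(b*pi*theta + c) + d by scanning b and solving the
    remaining linear problem A*sin + B*cos + d in closed form."""
    best = None
    for b in b_grid:
        X = np.column_stack([np.sin(b * np.pi * theta),
                             np.cos(b * np.pi * theta),
                             np.ones_like(theta)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = np.sum((X @ coef - y) ** 2)
        if best is None or resid < best[0]:
            best = (resid, b, coef)
    _, b, (A, B, d) = best
    # a*sin(x + c) = a*cos(c)*sin(x) + a*sin(c)*cos(x)  =>  A = a*cos(c), B = a*sin(c)
    return np.hypot(A, B), b, np.arctan2(B, A), d

theta = np.linspace(0.0, 1.0, 400)
y = 0.89 * np.sin(5.95 * np.pi * theta - 0.02) + 0.16  # synthetic stand-in data
a, b, c, d = fit_sinusoid(theta, y, np.arange(5.0, 7.0, 0.05))
```

On this noiseless stand-in the scan recovers the generating amplitude, frequency, and offset to high precision.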
Figure 12(a) illustrates that our symbolic regression model (red curve) is capable of reproducing the shape of the benchmark yield function (black curve). Here, based on the full dataset, we also train a single multivariate MLP that consists of a number of fully connected layers and Multiply layers (Vlassis and Sun, 2021) to compare its predictive capability against our proposed framework. Although a multivariate MLP trained on the level-set augmented data can capture a yield surface (blue dots) similar to the benchmark, it fails to reproduce the stress-strain curve based upon an implicit
Figure 11: Shape functions learned by neural network and extracted accordingly by the symbolic regression algorithm: (a) the function corresponding to the normalized pressure \(\bar{p}\), (b) the function corresponding to the normalized radius \(\bar{\rho}\), (c) the shape function corresponding to the normalized angle \(\bar{\theta}\). Normalization in this study is a linear transformation of data into the range \([0,1]\). Complexity labels indicate the level of complexity for the selected symbolic equations. Higher complexity is an indication of more terms. These symbolic equations are shown in Tables 1 and 2.
stress integration scheme [Figure 12(b)] due to its limited capacity to make predictions outside the training domain [0.85\(\rho\), 1.15\(\rho\)]. On the other hand, as illustrated in Figure 12(b), the symbolic regression model results in a stress-strain curve that is identical to the benchmark, which highlights that it can make accurate predictions outside the range of the training data based upon its extrapolation capacity.
### Discovery of symbolic level set plasticity model from porous metal
In this section, we benchmark the application of the proposed method for finding the plastic yield surface of a porous metal material. The data in this problem lie in a five-dimensional space, including the level set. We also discuss equation discovery under sparsity control.
Specifically, we chose a model discovered by Bomarito et al. (2021), which describes the plastic behavior of a porous material depending on the hydrostatic pressure \(\bar{\sigma}_{h}\), the von Mises stress \(\bar{\sigma}_{vm}\), the volume-averaged Lode parameter \(\bar{L}=3\sqrt{3}(\sigma_{1}-\bar{\sigma}_{h})(\sigma_{2}-\bar{\sigma}_{h})(\sigma_{3}-\bar{\sigma}_{h})/(2J_{2}^{3/2})\), and a parameter \(\bar{v}\) that describes the void fraction. To generate the training data, we adopt the expression for the yield function that can be found in Eq. (48) in (Bomarito et al., 2021), and sample 20 data points along the \(p\)-axis (from 0 to 1.8 MPa), 30 points along the \(\theta\)-axis (from 0 to \(2\pi\)) based on the cylindrical coordinate system, and 10 points along the \(\bar{v}\)-axis (from 0.063 to 0.065), such that a total of 6,000 different admissible stress states are considered. Here, we add uniformly distributed noise along the radial direction, whose magnitude ranges from \(-4\) % to 4 % of Lode's radius, to test the performance of the model trained on a dataset that may contain noise. Then, similar to the previous example, the original dataset is pre-processed via level-set augmentation by setting \(N=11\), which covers a narrow band region of [0.85\(\rho\), 1.15\(\rho\)], such that our full dataset consists of 66,000 stress points.
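The radial level-set augmentation described above can be sketched as follows; the function name and the exact labeling of off-surface points are illustrative assumptions.

```python
import numpy as np

def levelset_augment(rho, N=11, lo=0.85, hi=1.15):
    """Replicate each on-surface Lode radius rho at N radial levels in the
    narrow band [lo*rho, hi*rho]; the radial offset from the yield surface
    serves as a level-set-style label (zero on the surface itself)."""
    scales = np.linspace(lo, hi, N)
    radii = rho[:, None] * scales[None, :]   # (n_points, N)
    levels = radii - rho[:, None]            # signed radial offset
    return radii, levels

rho = np.ones(6000)                    # e.g. the 20 x 30 x 10 sampled stress states
radii, levels = levelset_augment(rho)  # 6,000 -> 66,000 augmented points
```

With \(N=11\) levels per point, the 6,000 sampled states expand to the 66,000-point dataset quoted above.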
Figure 13 displays the yield surfaces found at different levels of hydrostatic stress and void volume fraction using the NAM and QNM methods. Both methods accurately represent the underlying yield surfaces, but the QNM method performs slightly better, especially at higher levels of hydrostatic stress, due to its higher level of flexibility. The QNM results shown in this figure were obtained by training the model with sparsity control, with \(\alpha_{\text{lo}}=0.01\) and \(\alpha_{\text{ho}}=0.001\); see Eq. (7). The QNM method uses four learnable
\begin{table}
\begin{tabular}{|l|c|c|} \hline Expression & Complexity score & Loss \\ \hline \(f_{2}(\bar{\rho})=1.0\bar{\rho}\) & 3 & 5.546e-05 \\ \(f_{2}(\bar{\rho})=\bar{\rho}-0.01\sin\left(\sin\left(\sin\left(\bar{\rho}+ \sin\left(\bar{\rho}\right)\right)\right)\right)\cos\left(1.32\bar{\rho}\right)\) & 17 & 4.908e-05 \\ \(f_{2}(\bar{\rho})=\bar{\rho}-0.01\sin\left(\sin\left(0.77\bar{\rho}+\sin \left(\bar{\rho}\right)\right)+0.29\right)\cos\left(1.32\bar{\rho}\right)\) & 21 & 4.894e-05 \\ \hline \end{tabular}
\end{table}
Table 1: Found symbolic shape function \(f_{2}(\bar{\rho})\) for pressure-insensitive benchmark
\begin{table}
\begin{tabular}{|l|c|c|} \hline Expression & Complexity score & Loss \\ \hline \(f_{3}(\bar{\theta})=-\sin\left(4.84\bar{\theta}\right)\) & 4 & 9.466e-02 \\ \(f_{3}(\bar{\theta})=-\dfrac{\sin\left(4.83\bar{\theta}\right)}{\cos\left(\sin\left(\cos\left(\sin\left(\sin\left(\cos\left(\bar{\theta}\right)\right)\right)\right)\right)\right)}\) & 13 & 2.367e-02 \\ \(f_{3}(\bar{\theta})=-1.36\sin\left(4.83\bar{\theta}\right)+1.36\cos\left(0.86\sin\left(4.83\bar{\theta}\right)+\ldots\right)\) & -- & -- \\ \hline \end{tabular}
\end{table}
Table 2: Found symbolic shape function \(f_{3}(\bar{\theta})\) for pressure-insensitive benchmark
shape functions, each with an associated learnable weight (\(w_{1}\), \(w_{2}\), \(w_{3}\), and \(w_{4}\)). The complete quadratic approximation based on these four shape functions has ten additional terms that are controlled by trainable weights (\(w_{ij}\)) for \(1\leq i\leq j\leq 4\).
In Figure 14, we can see how the weights change during training when sparsity is enforced compared to when it is not. When sparsity control is used, all of the lower-order contributions (shown by different colors) eventually diminish, with \(w_{i}\) approaching zero in later epochs, improving the model's simplicity and interpretability. Notably, this is consistent with the benchmark equation, which features couplings between multiple features and does not involve any single-variable term.
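A minimal sketch of the QNM prediction and its sparsity-penalized loss reads as follows, assuming L1-type penalties on the first- and second-order weights (the exact penalty form of Eq. (7) is not reproduced here, so this is an illustrative stand-in).

```python
import numpy as np

def qnm_predict(F, w, W):
    """F: (n, m) evaluated shape functions f_i; w: (m,) first-order weights w_i;
    W: (m, m) matrix whose upper triangle holds the second-order weights w_ij
    for 1 <= i <= j <= m (ten terms when m = 4)."""
    Wu = np.triu(W)
    return F @ w + np.einsum('ni,ij,nj->n', F, Wu, F)

def penalized_loss(y, F, w, W, a_lo=0.01, a_ho=0.001):
    # data misfit plus sparsity penalties on lower- and higher-order weights
    mse = np.mean((y - qnm_predict(F, w, W)) ** 2)
    return mse + a_lo * np.abs(w).sum() + a_ho * np.abs(np.triu(W)).sum()

# toy evaluation: only w_1 and w_23 are non-zero, mimicking a sparse solution
F = np.array([[1.0, 2.0, 3.0, 4.0]])
w = np.array([0.38, 0.0, 0.0, 0.0])
W = np.zeros((4, 4)); W[1, 2] = 0.32
pred = qnm_predict(F, w, W)  # 0.38*f_1 + 0.32*f_2*f_3
```

Driving the penalized weights to zero during training is what prunes the single-variable terms in Figure 14.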
Tables 3 and 4 report the corresponding symbolic equations discovered by the symbolic regression algorithm. In this study, we deliberately chose the candidate function with the lowest loss to validate our modeling performance for unseen data in the interpolation regime, where data falls inside the convex hull of the training data but was not seen during training. To this end, we plot the yield surface at two different levels in Figure 15. The results suggest that the modeling assumption in Equation (4), in terms of separability, may be sufficiently robust to avoid overfitting, at least in the interpolation regime.
In Section 4.1, we conduct a comparison with the brute-force symbolic regression approach directly applied to the data, highlighting the interpretability advantages of our proposed scheme.
Figure 12: (a) the recovered yield surface by the introduced scheme and single multi-input MLP; (b) the stress-strain curve obtained by the return mapping algorithm for the ML-based yield surfaces. The discovered symbolic yield surface offers a good extrapolation capability for loading conditions beyond the range of the training data in comparison to the purely neural network-based yield surface.
\begin{table}
\begin{tabular}{|l|c|c|} \hline shape function & Complexity score & Loss \\ \hline \(f_{1}(\bar{\sigma}_{h})=\bar{\sigma}_{h}-\sin\left(0.07\left(\bar{\sigma}_{h}+\exp(\bar{\sigma}_{h})\right)\cos\left(0.96\bar{\sigma}_{h}+0.43\right)\right)\) & 19 & 4.691e-5 \\ \(f_{2}(\bar{\sigma}_{vm})=\bar{\sigma}_{vm}-(\bar{\sigma}_{vm}+\cos\left(\sin\left(\bar{\sigma}_{vm}\right)-0.07\right))\sin\left(0.15\sin\left(\bar{\sigma}_{vm}-1.01\right)\right)\) & 19 & 4.815e-5 \\ \(f_{3}(\bar{L})=\bar{L}\sin\left(\sin\left(\sin\left(0.56\bar{L}+0.56\cos\left(\sin\left(\bar{L}+\cos\left(1.0\sin\left(\sin\left(\sin\left(\bar{L}\right)\right)\right)+0.99\right)\right)\right)\right)\right)\right)\) & -- & -- \\ \hline \end{tabular}
\end{table}
Table 3: Symbolic shape functions discovered for the porous metal benchmark
_Remark 5_ (_Model and training setup_): Each utilized MLP consists of one Fourier layer with 20 randomly selected frequencies, followed by three additional hidden layers with 40, 20, and 20 hidden units, respectively. The hidden and output activation functions are ReLU and Tanh, respectively. Penalty factors are set to \(\alpha_{\text{lo}}=0\) and \(\alpha_{\text{ho}}=0.01\). We set the initial learning rate to 0.005 and continued training for 22,000 epochs. While these hyperparameters were determined through manual trial and error, they are not necessarily optimal.
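A random Fourier feature layer of the kind mentioned in Remark 5 can be sketched as follows; the frequency range is an assumption, and this numpy version is a stand-in for the actual trainable PyTorch layer used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = rng.uniform(0.0, 10.0, size=20)  # 20 randomly selected frequencies (range assumed)

def fourier_layer(x, freqs):
    """Map a scalar input to sin/cos features at fixed random frequencies,
    which eases fitting oscillatory shape functions such as f_3(theta)."""
    ang = 2.0 * np.pi * np.outer(x, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

feats = fourier_layer(np.linspace(0.0, 1.0, 5), freqs)  # shape (5, 40)
```

The 40-dimensional feature vector then feeds the subsequent fully connected hidden layers.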
### Applications in finite element simulations
In this problem, we illustrate how our end-to-end framework can be used to discover symbolic equations for the plastic yield surface, which can then be directly incorporated into finite element simulations. Through this example, we will demonstrate how the QNM approach, with its greater flexibility in modeling assumptions, can lead to more appropriate and simpler symbolic equations compared to the NAM approach.
Our benchmark material model to be replicated via QNM is the Matsuoka-Nakai criterion (Matsuoka and Nakai, 1974):
\[f=-(I_{1}I_{2})^{1/3}+(\beta I_{3})^{1/3}, \tag{20}\]
where the stress invariants are defined as: \(I_{1}=\sigma_{1}+\sigma_{2}+\sigma_{3}\), \(I_{2}=\sigma_{1}\sigma_{2}+\sigma_{2}\sigma_{3}+\sigma_{3}\sigma_{1}\), and \(I_{3}=\sigma_{1}\sigma_{2}\sigma_{3}\), while the material parameter \(\beta\) depends on the friction angle \(\phi_{f}\):
\[\beta=\frac{9-\sin^{2}\phi_{f}}{1-\sin^{2}\phi_{f}}. \tag{21}\]
By setting the friction angle to \(\phi_{f}=30^{\circ}\), we collected a total of 13,200 stress points as a training dataset. Specifically, we sampled 20 points along the \(p\)-axis from 0 to 1,000 MPa and 60 points along the \(\theta\)-axis from 0 to \(2\pi\), while choosing \(N=11\) levels from \(0.85\rho\) to \(1.15\rho\).
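For reference, the Matsuoka-Nakai yield value of Eqs. (20)-(21) can be evaluated directly from the principal stresses; the snippet below is a direct transcription of those two equations.

```python
import numpy as np

def beta_from_friction(phi_deg):
    """Eq. (21): material parameter beta from the friction angle phi_f."""
    s2 = np.sin(np.radians(phi_deg)) ** 2
    return (9.0 - s2) / (1.0 - s2)

def matsuoka_nakai(s1, s2, s3, beta):
    """Eq. (20): f = -(I1*I2)^(1/3) + (beta*I3)^(1/3) from principal stresses."""
    I1 = s1 + s2 + s3
    I2 = s1 * s2 + s2 * s3 + s3 * s1
    I3 = s1 * s2 * s3
    return -(I1 * I2) ** (1.0 / 3.0) + (beta * I3) ** (1.0 / 3.0)

beta = beta_from_friction(30.0)  # (9 - 0.25) / (1 - 0.25) = 35/3
```

For \(\phi_{f}=30^{\circ}\) this gives \(\beta=35/3\approx 11.67\), the value underlying the 13,200-point training set.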
The shape functions learned through NAM and QNM are presented in Figs. 16(a)-16(c). The yield surfaces discovered using these methods, as shown in Fig. 16(d), are in good agreement with the benchmark. However, the shape functions learned through NAM in Figs. 16(a)-16(c) exhibit greater complexity and noise, particularly for pressure and radius. This is not surprising, given that the NAM model is unable to
\begin{table}
\begin{tabular}{|l|c|c|} \hline shape function & Complexity score & Loss \\ \hline \(f_{1}(\bar{\sigma}_{h})=\bar{\sigma}_{h}-0.29\sin\left(\exp(0.76\bar{\sigma}_{h})\right)+0.17\) & 17 & 4.364e-05 \\ \(f_{2}(\bar{\sigma}_{vm})=\bar{\sigma}_{vm}+(1.47\bar{\sigma}_{vm}+0.9)\left(0.04\cos\left(\bar{\sigma}_{vm}+0.04\cos\left(\bar{\sigma}_{vm}+0.9\right)\right)\right)\) & 21 & 8.601e-05 \\ \((\bar{L}\left(\sin\left(0.48\bar{L}\right)-0.03\cos\left(\bar{L}\right)-1.2\right)-0.48)\cos\left(\sin\left(\cos\left(0.58\bar{L}+0.03\cos\left(\bar{L}\right)\right)\right)\right)=f_{3}(\bar{L})\) & 36 & 2.003e-05 \\ \(f_{4}(\bar{v})=\bar{v}+\cos\left(\bar{v}+\cos\left(\bar{v}\left(-0.57\bar{v}\cos\left(\ldots\right)\right)\right)\right)\) & -- & -- \\ \hline \end{tabular}
\end{table}
Table 4: Symbolic shape functions discovered for the porous metal benchmark, including the void-fraction shape function \(f_{4}(\bar{v})\)
account for interactions between input features and, therefore, may increase the complexity of each shape function to improve overall flexibility in capturing the target response.
The learned QNM is expressed as \(f=1.39f_{1}(\bar{p})+2.18f_{2}(\bar{\rho})+0.24f_{3}(\bar{\theta})-0.22f_{1}(\bar{p})f_{3}(\bar{\theta})\). All other second-order interactions among the shape functions are nearly zero, except for \(f_{1}(\bar{p})f_{3}(\bar{\theta})\). The linear dependence found for pressure (as seen in Fig. 16(a)) and the form of the obtained equation are consistent with the benchmark. This demonstrates the QNM's ability to uncover interpretable relationships among different features and the underlying functional form, which can be useful for the second step of the symbolic regression algorithm. Table 5 summarizes the results of the symbolic regression for the shape function \(f_{3}(\bar{\theta})\). The last row in this table is used for the finite element analysis.
We now incorporate the obtained QNM-based symbolic expressions in a boundary value problem solved via the finite element method to showcase the applicability of our proposed approach. Specifically, as illustrated in Figure 17, we consider a 20 mm \(\times\) 20 mm rectangular plate that is weakened by a circular hole of a radius of 5 mm at its center. For simplicity, we limit our attention to a two-dimensional case by assuming plane strain conditions while only considering the upper right quarter of the problem domain. Our domain of interest is spatially discretized with a mesh that consists of a total of 871 triangular elements, each with one integration point. By assuming that our target material behaves linearly in the elastic regime, with Young's modulus \(E=25\) GPa and Poisson's ratio \(\nu=0.3\), we conduct a finite element simulation under a displacement-controlled regime by prescribing a vertical displacement \(\hat{\mathbf{u}}\) at a rate of \(-0.1\) mm/sec on the top, while imposing a 100 MPa compressive traction along the inner radius and the right-hand side of the domain as confinement.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Expression & Complexity score & Loss \\ \hline \(f_{3}(\bar{\theta})=-1.35\sin\left(4.78\bar{\theta}\right)\) & 6 & 5.286e-02 \\ \(f_{3}(\bar{\theta})=\dfrac{0.15-\sin\left(4.79\bar{\theta}\right)}{\cos\left( \cos\left(2.38\bar{\theta}-0.87\right)\right)}\) & 18 & 1.143e-02 \\ \(f_{3}(\bar{\theta})=\dfrac{0.13-\sin\left(4.79\bar{\theta}+6.19\right)}{\cos \left(1.04\cos\left(2.38\bar{\theta}-\cos\left(\sin\left(\sin\left(2.38\bar{ \theta}\right)\right)\right)\right)\right)}+0.03\) & 31 & 2.928e-03 \\ \hline \end{tabular}
\end{table}
Table 5: Symbolic shape functions found for \(f_{3}(\bar{\theta})\) in the case of pressure-sensitive material
Figures 18 and 19 compare the von Mises stress and the accumulated plastic strain contours obtained from the (a) benchmark and the (b) QNM-based symbolic expressions at \(\hat{u}_{y}=-0.04\,\mathrm{mm}\), \(-0.06\,\mathrm{mm}\), \(-0.08\) mm, and \(-0.1\) mm, respectively. We observe that the plastic strain first accumulates at the right-hand side of the perforation, where stresses are concentrated, and evolves towards the upper right part of the domain of interest, such that it forms a localized pattern. Therefore, the stress history recorded at point B near the region where the accumulated plastic strain is localized exhibits a higher level of von Mises stress compared to point A, as illustrated in Figure 20. More importantly, the finite element analysis based upon the QNM-based symbolic regression replicates the classical finite element simulation with a benchmark material model, highlighting that our approach is not only capable of discovering the mathematical expression of the yield function from the given set of data without a priori knowledge but can also easily replace the constitutive model in continuum-scale simulations.
Figure 16: (a-c) found shape functions for Matsuoka-Nakai data. (d) found yield surfaces at different confining pressure: \(200,400,600\) MPa.
## 4 Discussions
In this section, we conduct additional numerical experiments to benchmark the performances of the proposed models against other state-of-the-art approaches.
### Comparisons with the direct symbolic regressions
In this study, we compare the results obtained using our proposed two-step symbolic regression framework to those obtained by applying brute-force single-step symbolic regression directly to the multivariate dataset.
Figure 17: Geometry and boundary conditions for the perforated rectangular plate. The coordinates of points A and B are (2.50, 8.03) and (7.26, 2.47), respectively.
Figure 18: Comparison between the von Mises stress distribution at different stages of loading.
The total CPU time required for training QNM is approximately 83 minutes. Each univariate symbolic regression process took around one minute. Therefore, the estimated total computational time for our two-step framework is approximately 87 minutes. In contrast, when applying the same configuration used for the univariate symbolic regressions to the direct multivariate SR, the SR algorithm finds \(\phi_{1}=1.88\sin\tilde{\phi}_{1}\) in approximately 43 minutes where,
\[\begin{split}\tilde{\phi}_{1}=&\tilde{\sigma}_{vm} \sin\left(\sin\left(\sin\left(0.41\bar{\sigma}_{vm}+0.41\cos\left(\sin\left( \frac{1.08\sin\left(0.24\bar{\sigma}_{vm}\bar{\sigma}\right)\cos\left(0.69\bar{ \sigma}\right)\right.\right.\right.}{\bar{v}}\right)-0.16\left)\right)\right) \right)\\ &+\frac{\tilde{\sigma}_{h}+\bar{\sigma}_{vm}+\frac{\phi+\sin\left( \bar{L}\right)}{2.855+9.4}+\sin\left(0.22\sin\left(\cos\left(\bar{L}\cos\left( \cos\left(\tilde{\sigma}_{h}+\sin\left(\bar{L}+0.86\right)\right)\right) \right)\right)\right)-0.33)}{\cos\left(\cos\left(\sin\left(\cos\left(\bar{ \sigma}_{h}+0.73\right)\right)\right)\right)}.\end{split} \tag{22}\]
If the SR algorithm is allocated more time, it discovers \(\phi_{2}=1.94\sin\tilde{\phi}_{2}\) within approximately 3.4 hours where,
Figure 19: Comparison between the accumulated plastic strain distribution at different stages of loading.
Figure 20: Stress evolution in points A and B during the loading.
\[\begin{array}{l}\tilde{\phi}_{2}=1.17(\tilde{v}_{h}+\tilde{v}_{vm})+0.19(\tilde{v }+\cos(\tilde{v}_{vm})+\cos(\tilde{v}))+0.17e^{\tilde{v}_{h}}+\sin\left(\tilde{v }_{vm}-0.95\right)+\\ 0.17\sin\left(\tilde{v}_{vm}+(\bar{L}+1.11)\sin\left(\tilde{v}_{h}+\sin\left( \cos\left(\tilde{v}_{vm}-0.71\right)\right)\right)\right)+0.17\sin\left(\bar{L }+\sin\left(\cos\left(\bar{L}+0.17\right)\right)\right)-0.14.\end{array} \tag{23}\]
The training MSE values for \(\phi_{1}\) and \(\phi_{2}\) are 0.100 and 0.093, respectively. The RMSE values for random test data (not seen during training) are 0.0144 and 0.0082, respectively. However, the RMSE for the proposed two-step method is 0.0070, slightly better than both, and achieved with less execution time.
\(\phi_{2}\) is more desirable than \(\phi_{1}\) in terms of simplicity. However, both of them may be less desirable in terms of interpretability compared to the proposed two-step framework, since the contribution of each variable to the final yield surface is less apparent. Additionally, it is unclear how to simplify these equations and reduce their complexity, which is easily achievable in our framework, as discussed in Section 3.1.
Figure 21 demonstrates that QNM-based symbolic regression provides a higher accuracy representation of the target yield surface. This may be due to the proposed divide-and-conquer approach, which has the potential to break down complex learning objectives into simpler ones, possibly resulting in improved learning outcomes.
_Remark 6_: Directly comparing the computational time between the proposed method and the direct SR method in this manner may not provide a comprehensive analysis. It should be noted that the proposed method utilizes both PySR and PyTorch, which are developed by different groups of developers and optimized for different purposes. On the other hand, the direct SR method solely relies on PySR. Thus, due to the differences in the underlying packages and their optimizations, a direct time comparison may not accurately reflect the performance of each method.
### Comparisons among different methods for sparse data
In this example, we evaluate the effectiveness of various methods for a regression task involving input features with four dimensions, where the data is relatively sparse.
In this study, we create a regression task with predetermined shape functions, such as polynomial, exponential decay, and multiscale sinusoidal, to assess the method's effectiveness in capturing various shape functions with distinct characteristics. Additionally, we intentionally exclude one of the input features (\(x_{4}\))
Figure 21: Yield surface: direct multivariate symbolic vs. QNM-based symbolic regression (a) \(\tilde{v}_{h}=1.5\) MPa, \(\bar{v}=0.0645\); (b) \(\tilde{v}_{h}=0.75\) MPa, \(\bar{v}=0.0635\).
in the data generation process to evaluate the method's ability to identify irrelevant features. The data is generated as follows:
\[f_{1}(x_{1}) =3(x_{1}^{3}-x_{1}), \tag{24}\] \[f_{2}(x_{2}) =\frac{1}{x_{2}+1.2},\] (25) \[f_{3}(x_{3}) =1.5\left(-x_{3}^{2}+0.3\sin(10\pi x_{3})+0.4\right),\] (26) \[f(x_{1},x_{2},x_{3},x_{4})=f_{1}(x_{1})+0.25f_{2}(x_{2})f_{3}(x _{3})+\mathcal{N}(0,0.1), \tag{27}\]
where input variables \(x_{i}\) are sampled randomly from a uniform distribution over the interval \([-1,1]\). The size of each of the randomly generated training and test datasets is 500 data points. Note that the shape function \(f_{3}\) exhibits parabolic behavior at the coarse scale and sinusoidal behavior at the fine scale, as shown in Fig. 23(c).
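The data generation process of Eqs. (24)-(27) can be reproduced with a few lines of numpy; the random seed and the interpretation of \(\mathcal{N}(0,0.1)\) as a standard deviation are assumptions.

```python
import numpy as np

def target(x1, x2, x3):
    """Noise-free response of Eq. (27); x4 does not enter by construction."""
    f1 = 3.0 * (x1 ** 3 - x1)                                      # Eq. (24)
    f2 = 1.0 / (x2 + 1.2)                                          # Eq. (25)
    f3 = 1.5 * (-x3 ** 2 + 0.3 * np.sin(10.0 * np.pi * x3) + 0.4)  # Eq. (26)
    return f1 + 0.25 * f2 * f3

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 4))  # x4 sampled but irrelevant
y = target(X[:, 0], X[:, 1], X[:, 2]) + rng.normal(0.0, 0.1, size=500)
```

A well-behaved model should recover the additive \(f_{1}\) term and the \(f_{2}f_{3}\) interaction while assigning (near-)zero weight to \(x_{4}\).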
Figures 22(b-c) display the residuals (\(y_{\text{true}}-y_{\text{pred}}\)) of the models' predictions using different methods. Figure 22(a) reports the mean squared errors of the predictions during the training process for NAM, QNM, and the vanilla single MLP. The training error for the single MLP method is almost zero, but it performs poorly on the test data, as shown in Figure 22(c). This is expected because 500 data points are too sparse for a four-dimensional response surface without any inductive bias. In contrast, NAM and QNM show better generalization than the single MLP since their model assumptions have a more appropriate bias-variance tradeoff and stronger compatibility with the underlying data generation process. Furthermore, QNM outperforms NAM slightly in terms of residual errors and train mean squared error, which is expected due to its higher flexibility and structural assumptions fully compatible with the data.
The quadratic expression of QNM contains only two non-zero terms, with \(w_{1}\approx 0.38\) and \(w_{23}\approx 0.32\). This means that the structural model discovered by QNM can be written as \(\tilde{f}_{\text{QNM}}(x_{1},x_{2},x_{3})\approx 0.38\tilde{f}_{1}(x_{1})+0.32\tilde{f}_{2}(x_{2})\tilde{f}_{3}(x_{3})\), where \(\tilde{f}_{i}\) are learned shape functions. QNM was able to identify the underlying data generation process and discard the irrelevant feature \(x_{4}\). Interestingly, QNM accurately captured even the complex, multiscale sinusoidal shape function \(f_{3}(x_{3})\). While marginal errors can be observed in \(f_{2}(x_{2})\), it effectively reflects the exponential decay behavior.
In contrast, NAM identified all terms as non-zero, resulting in the following structural equation:
\[\tilde{f}_{\text{NAM}}(x_{1},x_{2},x_{3},x_{4})=0.43\tilde{f}_{1}(x_{1})+0.51\tilde{f}_{2}(x_{2})+0.58\tilde{f}_{3}(x_{3})+0.48\tilde{f}_{4}(x_{4}). \tag{28}\]
Although NAM and QNM perform similarly in terms of train and test errors, NAM's learned structural equation is misleading. Not only does the irrelevant feature \(x_{4}\) contribute to the model, but its effect is even higher than that of feature \(x_{1}\) (\(w_{4}\) is higher than \(w_{1}\)). This can lead to misinterpretation and confusion
Figure 22: (a) regression loss values of different models at each training epoch. (b) distribution of the residual error among various models for training data. (c) distribution of the residual error among various models for test data. In legends, SR stands for Symbolic Regression.
regarding causality. One possible explanation for this behavior is that, since NAM cannot incorporate interactions among features, it attempts to use \(x_{4}\) as an additional degree of flexibility to minimize prediction loss during training. The learned shape functions are shown in Figure 24, where only the polynomial shape function \(f_{1}(x_{1})\) is discovered by the model.
Figure 22 additionally shows the results for two symbolic regression models: "SR-UniVar" and "SR-MultiVar". The former corresponds to the model obtained by performing symbolic regression on the shape functions learned by the QNM, while the latter is the vanilla multivariate symbolic regression directly performed on the data. While SR-MultiVar does not have the least amount of error in the train data, its performance is comparable to that of SR-UniVar. The symbolic representation discovered by SR-MultiVar is presented below:
\[-\sin\left(1.77x_{1}+0.17\right)+\frac{0.07e^{-x_{2}}\cos\left(1.4x_{3}\right)}{\sin\left(\cos\left(\cos\left(\frac{\zeta_{3}}{x_{3}}\right)-0.09\right)\right)\cos\left(\cos\left(\cos\left(\sin\left(x_{1}\right)\right)\right)\right)}. \tag{29}\]
The symbolic representation discovered by SR-MultiVar is fully transparent, and it clearly discards the contribution of the irrelevant feature \(x_{4}\). This is an essential ingredient for model interpretability. However, the equation itself does not provide an easy way to uncover the underlying data generation process, making it less interpretable than the proposed divide-and-conquer scheme.
Figure 23: Comparison among the learned shape functions based on the QNM model, their corresponding symbolic equations, and their expected ground truth.
## 5 Conclusions
We introduce a framework that combines the expressivity of neural networks and the interpretability of symbolic regression to yield a plasticity model that can be expressed analytically while (1) achieving the necessary accuracy for engineering applications and (2) overcoming the technical barrier of symbolic regression. In particular, we take advantage of the divide-and-conquer nature of the feature generation step of the proposed neural quadratic method such that a series of one-dimensional symbolic regressions on the basis of the feature space may replace the NP-hard high-dimensional symbolic regression problem. The proposed machine learning tool is tested against synthetic data with a known analytical yield function that is not convex, as well as a data set for porous metal with no known analytical solution. In both cases, we found that the proposed method is feasible to train, and the generated model is capable of discovering yield functions with accuracy superior to those obtained from the classical neural additive model. To ensure third-party validation, the source code is open-sourced.
## Acknowledgments
The authors are supported by the National Science Foundation under grant contracts CMMI-1846875 and the Dynamic Materials and Interactions Program from the Air Force Office of Scientific Research under grant contracts FA9550-21-1-0391 with the major equipment supported by FA9550-21-1-0027, with additional funding from the Department of Energy DE-NA0003962 and the UPS Foundation Visiting Professorship at Stanford. These supports are gratefully acknowledged. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either
Figure 24: Comparison between the learned shape functions based on the NAM model and their expected ground truth.
expressed or implied, of the sponsors, including the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
## CRediT authorship contribution statement
Bahador Bahmani: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing - Original Draft. Hyoung Suk Suh: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing - Original Draft. WaiChing Sun: Conceptualization, Methodology, Investigation, Validation, Resource, Writing - Original Draft, Supervision, Project administration, Funding acquisition.
## Appendix A Limitations of QNM for non-smooth cases
For completeness, we also present a regression example that demonstrates NAM and QNM's inability to achieve the desired level of accuracy. We synthesized data using the following function:
\[f(x_{1},x_{2})=|x_{1}-x_{2}|+|x_{1}+x_{2}|. \tag{11}\]
This function represents a pyramid, which is depicted in Figure 25(a). We obtained prediction results using two models, QNM and NAM; these results are presented in Figures 25(b-c). However, since both models rely on restricted modeling assumptions that limit the possible interactions among input features, they may not be able to represent complex tasks requiring higher-order interactions among features. To improve the expressivity needed for this non-smooth learned function, we can extend the QNM to higher-order polynomials or introduce specific enrichment functions (manually or through machine learning) in the feature space to handle the sharp gradient. Alternatively, one may construct the feature space locally for coordinate charts that constitute a yielding manifold (Xiao and Sun, 2022). These potential improvements are out of the scope of this study but will be considered in the future. We include the prediction residuals of these models in Figure 26 for completeness.
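As a sanity check on the synthetic data of Eq. (11), the pyramid function admits the closed form \(2\max(|x_{1}|,|x_{2}|)\); the short numerical sketch below (illustrative, not part of the original study; the grid bounds and sample count are arbitrary choices) confirms the identity and makes the non-smooth diagonal ridges explicit:

```python
import numpy as np

# Illustrative check (not from the paper): the pyramid function of Eq. (11)
# satisfies |x1 - x2| + |x1 + x2| = 2 * max(|x1|, |x2|), so its gradient is
# discontinuous across the diagonals x1 = +/- x2. Bounds are arbitrary.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-2.0, 2.0, size=(2, 100_000))

f = np.abs(x1 - x2) + np.abs(x1 + x2)
g = 2.0 * np.maximum(np.abs(x1), np.abs(x2))

assert np.allclose(f, g)  # the two closed forms agree on every sample
```

The ridges along \(x_{1}=\pm x_{2}\) are exactly the sharp-gradient loci that the restricted feature models struggle with.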
Figure 25: (a) The pyramid function and the corresponding predictions of (b) QNM and (c) NAM; neither model achieves a satisfactory level of accuracy.
# Weyl transform on nonunimodular groups

Santosh Kumar Nayak

arXiv:2306.13260v1 (http://arxiv.org/abs/2306.13260v1), 2023-06-23
###### Abstract.
For \(p>2\), B. Simon [14] studied the unboundedness of the Weyl transform for symbol belonging to \(L^{p}(\mathbb{R}^{n}\times\mathbb{R}^{n})\). In this article, we study the analog of unboundedness of the Weyl transform on some nonunimodular groups, namely, the affine group, similitude group, and affine Poincare group.
Key words and phrases: Representation theory, Nonunimodular group, Fourier transform, Affine group, Similitude group, Affine Poincaré group, Weyl transform.

2020 Mathematics Subject Classification: Primary 47G10, 47G30, 43A15, 43A30; Secondary 42B35.

S. K. Nayak is supported by the Institute Fellowship.
## 1. Introduction
The Weyl transform is an operator introduced by H. Weyl in 1950 [18]. It is a self-adjoint pseudo-differential operator on \(\mathbb{R}^{n}\) [15], which Weyl studied while solving quantization problems in quantum mechanics [10]; physicists prefer to work with self-adjoint operators. The theory of Weyl transforms covers a broad area of great interest in both mathematical analysis and physics: in a number of problems, such as elliptic theory, regularity problems, and spectral asymptotics, Weyl transforms have proved to be a useful tool [19]. The Weyl transform has been deeply investigated mainly in the case where the symbol is a smooth function on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) belonging to some symbol class; for more details, we refer to [1, 2, 16, 17]. The Weyl transform is an operator acting on \(L^{2}(\mathbb{R}^{n})\) that belongs to the Hilbert-Schmidt class when the symbol lies in \(L^{2}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) [18], and when the symbol \(\sigma\) lies in \(L^{p}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), \(1\leqslant p\leqslant 2\), the Weyl transform is bounded on \(L^{2}(\mathbb{R}^{n})\). For \(p>2\), B. Simon [14] studied the unboundedness of the Weyl transform for symbols belonging to \(L^{p}(\mathbb{R}^{n}\times\mathbb{R}^{n})\). Proofs of both the boundedness and the unboundedness of the Weyl transform can be found in [19].
In this paper, we assume the group \(G\) to be any one of three groups, namely, the affine group, the similitude group, and the affine Poincaré group; their duals \(\widehat{G}\) are discrete sets.
Let \(L^{p}(G\times\widehat{G},S_{p})\) be the set of all \(S_{p}\) valued functions such that
\[\|f\|_{p,\mu}^{p}=\sum_{\rho\in\widehat{G}}\int_{G}\|f(x,\rho)K_{\rho}^{\frac {1}{p}}\|_{S_{p}}^{p}d\mu(x)<\infty,\quad 1\leqslant p<\infty,\]
\[\|f\|_{\infty,\mu}=\mbox{ess sup}_{(x,\rho)\in G\times\widehat{G}}\|f(x,\rho) \|_{S_{\infty}}<\infty.\]
**Definition 1.1**.: Let \(f,g\in C_{c}(G)\). Then the Wigner transform associated to \(f\) and \(g\) is given by
\[W(f,g)(x,\rho)=\int_{G}f(x^{\prime})\tau_{x^{\prime}}g(x)\rho(x^{\prime})d\mu( x^{\prime}),\]
where \(\tau_{x}f(y)=f(x^{-1}y)\) is the left translation operator on \(L^{p}(G)\).
The proofs of the following results follow step by step from [9]. We therefore omit them, stating the results only to keep the article self-contained.
**Theorem 1.2**.: _Let \(f,g\in C_{c}(G)\). Then for \(2\leq p\leq\infty\), we have \(W(f,g)\in L^{p}(G\times\widehat{G},S_{p})\) and_
\[\|W(f,g)\|_{p,\mu}\leq\|f\|_{L^{2}(G)}\|g\|_{L^{2}(G)}.\]
The following proposition represents the Fourier transform in terms of the Wigner transform.
**Proposition 1.3**.: _Let \(f\in L^{1}(G)\cap L^{2}(G)\) and \(C=\int\limits_{G}g(y)d\mu(y)\neq 0\). Then_
\[\widehat{f}(\rho)=C^{-1}\int_{G}W(f,g)(y,\rho)d\mu(y),\]
_for all \(\rho\in\widehat{G}\)._
**Corollary 1.4**.: _Let \(f\in L^{1}(G)\cap A(G)\) and \(g\in L^{1}(G)\cap L^{2}(G)\) with \(C=\int\limits_{G}g(x)d\mu(x)\neq 0\). Then_
\[f(x)=C^{-1}\sum\limits_{\rho\in\widehat{G}}\operatorname{Tr}\left(\rho^{*}(x) \left(\int_{G}W(f,g)(x^{\prime},\rho)d\mu(x^{\prime})\right)K_{\rho}\right).\]
**Definition 1.5**.: Let \(\sigma\in L^{p}(G\times\widehat{G},S_{p}),1\leq p\leq 2.\) Then the Weyl transform \(W_{\sigma}:L^{2}(G)\to L^{2}(G)\) is defined by
\[\langle W_{\sigma}f,\overline{g}\rangle=\langle W(f,g),\sigma\rangle_{\mu}= \sum\limits_{\rho\in\widehat{G}}\int_{G}\operatorname{Tr}\left(\sigma(x,\rho) ^{*}W(f,g)(x,\rho)K_{\rho}\right)d\mu(x), \tag{1.1}\]
where \(f,g\) are in \(L^{2}(G)\) and \(\langle\cdot,\cdot\rangle_{\mu}\) denotes the inner product in \(L^{2}(G\times\widehat{G},S_{2})\).
**Theorem 1.6**.: _Let \(\sigma\in L^{p}(G\times\widehat{G},S_{p}),1\leq p\leq 2\). Then_
\[\|W_{\sigma}\|\leq\|\sigma\|_{p,\mu}.\]
For \(G=\mathbb{R}^{n}\), B. Simon [14] gave examples of symbols in \(L^{p}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), \(p>2\), for which the corresponding Weyl transform becomes unbounded. This idea has motivated many authors to study the unboundedness of the Weyl transform in other settings, such as the Heisenberg group [11], the quaternion Heisenberg group [3], the upper half plane [12], motion groups [9], and the reduced Heisenberg group with multidimensional center [7]; other Weyl transforms can be found in [13, 20]. In this article, we study the unboundedness of the Weyl transform on the affine group \(U\), the similitude group \(\mathbb{SIM}(2)\), and the affine Poincaré group \(\mathcal{P}_{\text{aff}}\).
The organization of the paper is as follows: we investigate the unboundedness of the Weyl transform on the affine group, the similitude group, and the affine Poincaré group in Sections 2, 3, and 4, respectively.
## 2. Affine Group
In this section, we consider the group \(G\) to be the affine group \(U\). We first recall the Fourier analysis of the affine group \(U\) in detail. In this setup, we will define the Weyl transform and show that it cannot be extended to a bounded operator for symbols in the corresponding \(L^{p}\) spaces with \(2<p<\infty\).
Let \(U=\{(b,a):b\in\mathbb{R},a>0\}\) be the upper half plane. Then with respect to the binary operation '\(\cdot\)' on \(U\), defined as
\[(b,a)\cdot(c,d)=(b+ac,ad),\quad(b,a),(c,d)\in U, \tag{2.1}\]
the upper half plane \(U\) forms a non-abelian group. It can be shown that \((\frac{-b}{a},\frac{1}{a})\) is the inverse of \((b,a)\) and that \((0,1)\) is the identity element in \(U\). The left and right Haar measures on \(U\) are \(d\mu=\frac{dbda}{a^{2}}\) and \(d\gamma=\frac{dbda}{a}\), respectively. With respect to the multiplication '\(\cdot\)' defined by (2.1), \(U\) is a locally compact Hausdorff group on which the left Haar measure differs from the right Haar measure; hence \(U\) is a non-unimodular group.
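The group axioms for (2.1) can be sanity-checked numerically; the following sketch (illustrative, not from the paper; the sample elements use dyadic coordinates so the float arithmetic is exact) verifies the identity, the inverse, and associativity:

```python
# Illustrative check (not from the paper) of the affine group law (2.1):
# identity (0, 1), inverse (-b/a, 1/a), and associativity. The sample
# elements are arbitrary dyadic values so every operation is exact in floats.
def mul(g, h):
    (b, a), (c, d) = g, h
    return (b + a * c, a * d)        # (b, a) . (c, d) = (b + ac, ad)

def inv(g):
    b, a = g
    return (-b / a, 1.0 / a)

e = (0.0, 1.0)
g, h, k = (1.5, 2.0), (-0.75, 0.25), (3.0, 4.0)

assert mul(g, e) == g and mul(e, g) == g              # identity element
assert mul(g, inv(g)) == e and mul(inv(g), g) == e    # inverse element
assert mul(mul(g, h), k) == mul(g, mul(h, k))         # associativity
```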
Let \(\mathcal{U}(L^{2}(\mathbb{R}_{\pm}))\) be the set of all unitary operators on \(L^{2}(\mathbb{R}_{\pm})\). Then the unitary irreducible representations of \(U\) on \(L^{2}(\mathbb{R}_{\pm})\) are given by the mapping \(\rho_{\pm}:U\rightarrow\mathcal{U}(L^{2}(\mathbb{R}_{\pm}))\) defined as
\[(\rho_{+}(b,a)u)(s)=a^{1/2}e^{-ibs}u(as),\quad s\in\mathbb{R}_{+}=[0,\infty),\]
for every \(u\in L^{2}(\mathbb{R}_{+})\), and
\[(\rho_{-}(b,a)v)(s)=a^{1/2}e^{-ibs}v(as),\quad s\in\mathbb{R}_{-}=(-\infty,0],\]
for every \(v\in L^{2}(\mathbb{R}_{-})\). Up to equivalence, \(\{\rho_{+},\rho_{-}\}\) are the only irreducible inequivalent unitary representations, i.e., \(\widehat{U}=\{\rho_{+},\rho_{-}\}\). The unbounded operators \(K_{\pm}\) on \(L^{2}(\mathbb{R}_{\pm})\), given by
\[(K_{\pm}\phi)(s)=|s|\phi(s),\quad s\in\mathbb{R}_{\pm}, \tag{2.2}\]
are known as the Duflo-Moore operators [8].
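The homomorphism property \(\rho_{+}(b,a)\rho_{+}(c,d)=\rho_{+}((b,a)\cdot(c,d))\) can be checked pointwise; the sketch below (illustrative, not from the paper; the test vector \(u\) and the group elements are arbitrary choices) does so on a sample grid:

```python
import numpy as np

# Illustrative pointwise check (not from the paper) that rho_+ is a
# homomorphism on L^2(R_+): rho_+(b,a) rho_+(c,d) = rho_+((b,a).(c,d)).
def rho(b, a, u):
    """(rho_+(b, a) u)(s) = a^{1/2} e^{-i b s} u(a s)."""
    return lambda s: np.sqrt(a) * np.exp(-1j * b * s) * u(a * s)

u = lambda s: np.exp(-s) * np.cos(3 * s)   # arbitrary test vector
(b, a), (c, d) = (1.2, 2.0), (-0.4, 0.5)

s = np.linspace(0.0, 10.0, 201)
lhs = rho(b, a, rho(c, d, u))(s)           # rho(b,a) applied after rho(c,d)
rhs = rho(b + a * c, a * d, u)(s)          # rho of the product (b + ac, ad)
assert np.allclose(lhs, rhs)
```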
Let \(f\in L^{1}(U)\cap L^{2}(U)\). The Fourier transform of \(f\) is defined by
\[\widehat{f}(\rho_{\pm})=\int_{U}f(b,a)\rho_{\pm}(b,a)\frac{dbda}{a^{2}}.\]
For \(\phi\in L^{2}(\mathbb{R}_{\pm})\), we have \(\widehat{f}(\rho_{\pm})\phi(s)=\int\limits_{0}^{\infty}\int\limits_{-\infty}^{\infty}f(b,a)\left(\rho_{\pm}(b,a)\phi\right)(s)\frac{dbda}{a^{2}}\). Define the \(L^{p}\)-Fourier transform by \(\mathcal{F}_{p}(f)\rho_{\pm}=\widehat{f}(\rho_{\pm})K_{\pm}^{\frac{1}{p^{\prime}}}\), \(1\leq p\leq\infty\), \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\).
**Theorem 2.1** (Fourier inversion formula).: _[_4_]_ _Let \(f\in L^{1}(U)\). Then_
\[f(x)=\sum\limits_{j\in\{\pm\}}\operatorname{Tr}\left(\rho_{j}(b,a)^{*}\widehat {f}(\rho_{j})K_{j}\right).\]
**Theorem 2.2** (Plancherel Formula).: _[_4_]_ _Let \(f\in L^{1}(U)\cap L^{2}(U)\). Then_
\[\int_{U}|f(b,a)|^{2}\frac{dbda}{a^{2}}=\sum\limits_{j\in\{\pm\}}\|\widehat{f }(\rho_{j})K_{j}^{\frac{1}{2}}\|_{S_{2}}^{2}.\]
For \(G=U\), recall the definition of the Weyl transform given in (1.1); Theorem 1.6 says that it is bounded when \(1\leq p\leq 2\). We now prove the unboundedness of the Weyl transform when \(p>2\).
**Theorem 2.3**.: _For \(2<p<\infty\), there exists an operator valued function \(\sigma\) in \(L^{p}(U\times\widehat{U},S_{p})\) such that \(W_{\sigma}\) is not a bounded linear operator on \(L^{2}(U)\)._
We prove Theorem 2.3 by contraposition, via the following three propositions.
**Proposition 2.4**.: _Let \(2<p<\infty\) and \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\). Then for all \(\sigma\in L^{p}(U\times\widehat{U},S_{p})\), the Weyl transform \(W_{\sigma}\) is a bounded linear operator on \(L^{2}(U)\) if and only if there exists a constant \(C\) such that_
\[\|W(f,g)\|_{p^{\prime},\mu}\leq C\|g\|_{L^{2}(U)}\|f\|_{L^{2}(U)},\]
_for all \(f,g\) in \(L^{2}(U)\)._
Proof.: Assume that
\[\|W(f,g)\|_{p^{\prime},\mu}\leqslant C\|g\|_{L^{2}(U)}\|f\|_{L^{2}(U)},\quad f,g \in L^{2}(U),\]
for some \(C>0\). By Definition 1.5, we obtain the necessary part of the proposition. Conversely, suppose that for each \(\sigma\in L^{p}(U\times\widehat{U},S_{p})\), \(W_{\sigma}\) is bounded, i.e.,
\[\|W_{\sigma}f\|_{L^{2}(U)}\leqslant C(\sigma)\|f\|_{L^{2}(U)},\]
for all \(f\in L^{2}(U)\). Define a family of linear functionals \(M_{f,g}:L^{p}(U\times\widehat{U},S_{p})\to\mathbb{C}\) by
\[M_{f,g}(\sigma)=\langle W_{\sigma}f,g\rangle,\quad f,g\in C_{c}(U).\]
For each \(f,g\in C_{c}(U)\), it is easy to check that \(M_{f,g}\) is a bounded linear functional and that the family is pointwise bounded. By the uniform boundedness principle, there exists \(C\) such that \(\|M_{f,g}\|\leqslant C\) for all \(f,g\in C_{c}(U)\) with \(\|f\|_{2}=\|g\|_{2}=1\). Hence \(|\langle W_{\sigma}f,g\rangle|\leqslant C\|\sigma\|_{p,\mu}\). Thus for \(f,g\in C_{c}(U)\),
\[\|W(f,g)\|_{p^{\prime},\mu} =\sup_{\|\sigma\|_{p,\mu}=1}|\langle W(f,g),\sigma\rangle_{\mu}|\] \[=\sup_{\|\sigma\|_{p,\mu}=1}|\langle W_{\sigma}f,\overline{g}\rangle|\leqslant C\|f\|_{2}\|g\|_{2}.\]
By a density argument, we conclude the proposition.
**Proposition 2.5**.: _Let \(2<p<\infty\) and \(f\) be a square integrable, compactly supported function on \(U\) such that \(\int\limits_{U}f(b,a)d\mu(b,a)\neq 0\). If \(W_{\sigma}\) is a bounded operator on \(L^{2}(U)\) for all \(\sigma\) in \(L^{p}(U\times\widehat{U},S_{p})\), then_
\[\|\mathcal{F}_{p}(f)\rho_{+}\|_{S_{p^{\prime}}}^{p^{\prime}}+\|\mathcal{F}_{p} (f)\rho_{-}\|_{S_{p^{\prime}}}^{p^{\prime}}<\infty.\]
Proof.: From Proposition 1.3, it is enough to show
\[\sum_{j\in\{\pm\}}\|\int_{U}W(f,f)(b,a,\rho_{j})K_{j}^{\frac{1}{p^{\prime}}}d \mu(b,a)\|_{S_{p^{\prime}}}^{p^{\prime}}<\infty,\]
for every square integrable, compactly supported function \(f\) on \(U\) such that \(\int\limits_{U}f(b,a)d\mu(b,a)\neq 0\). Suppose \(f\) is supported in a compact subset \(A\) of \(U\); then \(W(f,f)\) is supported in \(AA\times\widehat{U}\). Now by Hölder's inequality, Minkowski's integral inequality, and Proposition 2.4, we get
\[\sum_{j\in\{\pm\}}\|\int_{U}W(f,f)(b,a,\rho_{j})K_{j}^{\frac{1}{p ^{\prime}}}d\mu(b,a)\|_{S_{p^{\prime}}}^{p^{\prime}}\] \[\leqslant\left(\int_{AA}\left(\sum_{j\in\{\pm\}}\|W(f,f)(b,a,\rho _{j})K_{j}^{\frac{1}{p^{\prime}}}\|_{S_{p^{\prime}}}^{p^{\prime}}\right)^{ \frac{1}{p^{\prime}}}d\mu(b,a)\right)^{p^{\prime}}\] \[\leqslant\left(\int_{AA}d\mu(b,a)\right)^{\frac{p^{\prime}}{p}} \int_{AA}\sum_{j\in\{\pm\}}\|W(f,f)(b,a,\rho_{j})K_{j}^{\frac{1}{p^{\prime}}} \|_{S_{p^{\prime}}}^{p^{\prime}}d\mu(b,a)<\infty.\]
This completes the proof.
**Proposition 2.6**.: _For \(p\in(2,\infty)\), does there exist a square-integrable, compactly supported function \(f\) on \(U\) with \(\int\limits_{U}f(b,a)\frac{dbda}{a^{2}}\neq 0\) such that \(\|\mathcal{F}_{p}(f)\rho_{+}\|_{S_{p^{\prime}}}^{p^{\prime}}+\|\mathcal{F}_{p}(f)\rho_{-}\|_{S_{p^{\prime}}}^{p^{\prime}}=\infty\)?_
The answer to this question is yes: at the end of this section we exhibit a function \(f_{\alpha}\), \(0<\alpha<\frac{1}{2}\), with the property required in Proposition 2.6. Let \(f\in L^{2}(U)\). Now for all \(\phi\in L^{2}(\mathbb{R}_{+})\),
\[\left(\widehat{f}(\rho_{+})K_{+}^{\frac{1}{p^{\prime}}}\phi\right) (s) =\int_{0}^{\infty}\int_{-\infty}^{\infty}f(b,a)\left(\rho_{+}(b,a )K_{+}^{\frac{1}{p^{\prime}}}\phi\right)(s)\frac{dbda}{a^{2}}\] \[=\int_{0}^{\infty}\int_{-\infty}^{\infty}f(b,a)a^{\frac{1}{2}}e^{ -ibs}\left(K_{+}^{\frac{1}{p^{\prime}}}\phi\right)(as)\frac{dbda}{a^{2}}\] \[=\int_{0}^{\infty}(\mathcal{F}_{1}f)(s,a)a^{\frac{1}{2}}|as|^{ \frac{1}{p^{\prime}}}\phi(as)\frac{da}{a^{2}}\] \[=\int_{0}^{\infty}(\mathcal{F}_{1}f)(s,a)a^{\frac{1}{2}+\frac{1}{ p^{\prime}}}|s|^{\frac{1}{p^{\prime}}}\phi(as)\frac{da}{a^{2}}.\]
Now substituting \(as=t\), we get \(da=\frac{dt}{s}\).
\[\left(\widehat{f}(\rho_{+})K_{+}^{\frac{1}{p^{\prime}}}\phi\right)(s)=\int_{0}^{\infty}(\mathcal{F}_{1}f)\left(s,\frac{t}{s}\right)\left(\frac{t}{s}\right)^{\frac{1}{2}+\frac{1}{p^{\prime}}}|s|^{\frac{1}{p^{\prime}}}\frac{s^{2}}{t^{2}}\phi(t)\frac{dt}{s}.\]
Again for all \(\phi\in L^{2}(\mathbb{R}_{-})\), we have
\[\left(\widehat{f}(\rho_{-})K_{-}^{\frac{1}{p^{\prime}}}\phi\right)(s)=\int_{-\infty}^{0}(\mathcal{F}_{1}f)\left(s,\frac{t}{s}\right)\left(\frac{t}{s}\right)^{\frac{1}{2}+\frac{1}{p^{\prime}}}|s|^{\frac{1}{p^{\prime}}}\frac{s^{2}}{t^{2}}\phi(t)\frac{dt}{s}.\]
Now
\[\|\widehat{f}(\rho_{+})K_{+}^{\frac{1}{p^{\prime}}}\|_{S_{2}}^{2}+\|\widehat{f}(\rho_{-})K_{-}^{\frac{1}{p^{\prime}}}\|_{S_{2}}^{2} =\int_{0}^{\infty}\int_{0}^{\infty}\left|(\mathcal{F}_{1}f)\left(s,\frac{t}{s}\right)\right|^{2}\left(\frac{t}{s}\right)^{1+\frac{2}{p^{\prime}}}|s|^{\frac{2}{p^{\prime}}-2}\frac{s^{4}}{t^{4}}\,dt\,ds\] \[+\int_{-\infty}^{0}\int_{-\infty}^{0}\left|(\mathcal{F}_{1}f)\left(s,\frac{t}{s}\right)\right|^{2}\left(\frac{t}{s}\right)^{1+\frac{2}{p^{\prime}}}|s|^{\frac{2}{p^{\prime}}-2}\frac{s^{4}}{t^{4}}\,dt\,ds.\]
Substituting \(\frac{t}{s}=a\), then we get \(\frac{dt}{s}=da\).
\[\|\widehat{f}(\rho_{+})K_{+}^{\frac{1}{p^{\prime}}}\|_{S_{2}}^{2}+\|\widehat{f}(\rho_{-})K_{-}^{\frac{1}{p^{\prime}}}\|_{S_{2}}^{2} =\int_{0}^{\infty}\int_{0}^{\infty}\left|(\mathcal{F}_{1}f)(s,a)\right|^{2}a^{1+\frac{2}{p^{\prime}}}|s|^{\frac{2}{p^{\prime}}-1}\frac{1}{a^{4}}\,da\,ds\] \[+\int_{-\infty}^{0}\int_{0}^{\infty}\left|(\mathcal{F}_{1}f)(s,a)\right|^{2}a^{1+\frac{2}{p^{\prime}}}|s|^{\frac{2}{p^{\prime}}-1}\frac{1}{a^{4}}\,da\,ds\] \[=\int_{-\infty}^{\infty}\int_{0}^{\infty}\left|(\mathcal{F}_{1}f)(s,a)\right|^{2}a^{1+\frac{2}{p^{\prime}}-4}|s|^{\frac{2}{p^{\prime}}-1}\,da\,ds. \tag{2.3}\]
For \(0<\alpha<\frac{1}{2}\), let us choose
\[f_{\alpha}(b,a)=\begin{cases}|b|^{-\alpha}a^{2},\quad(b,a)\in Q\\ 0,\qquad\text{otherwise},\end{cases}\]
where \(Q=\{(b,a):-p\leqslant b,a\leqslant p\},p>0\). Hence
\[(\mathcal{F}_{1}f_{\alpha})(s,a)=\left[(2\pi)^{-1/2}\int_{-p}^{p}e^{-ibs}|b| ^{-\alpha}db\right]a^{2}.\]
Now for \(s>0\),
\[\int_{-p}^{p}e^{-ibs}|b|^{-\alpha}db=2\left(\int_{0}^{sp}t^{-\alpha}\cos t\,dt\right)s^{-1+\alpha}.\]
When \(\alpha\in(0,\frac{1}{2})\), the improper integral \(\int\limits_{0}^{\infty}t^{-\alpha}\cos t\,dt\) converges to the positive value \(\Gamma(1-\alpha)\sin(\frac{\pi\alpha}{2})\), so there exist \(B,R>0\) such that \(|\int\limits_{0}^{sp}t^{-\alpha}\cos t\,dt|\geq B\) for all \(s\geq R\). Now
\[\int_{-\infty}^{\infty}\int_{0}^{\infty}|(\mathcal{F}_{1}f)\left(s,a\right)|^{2 }a^{1+\frac{2}{p^{\prime}}-4}|s|^{\frac{2}{p^{\prime}}-1}dsda\geq\frac{2B^{2}} {\pi}\int_{0}^{p}\int_{R}^{\infty}s^{2(-1+\alpha)+\frac{2}{p^{\prime}}-1}a^{1+ \frac{2}{p^{\prime}}}dsda.\]
But
\[\int_{R}^{\infty}s^{2(-1+\alpha)+\frac{2}{p^{\prime}}-1}ds=\infty,\]
when \(\alpha>1-\frac{1}{p^{\prime}}\). Also, \(f_{\alpha}\) is square integrable if \(\alpha<\frac{1}{2}\). Let us choose \(\frac{1}{2}>\alpha>1-\frac{1}{p^{\prime}}\) (possible since \(p^{\prime}<2\)); then Equation (2.3) becomes
\[\|\widehat{f}_{\alpha}(\rho_{+})K_{+}^{\frac{1}{p^{\prime}}}\|_{S_{2}}^{2}+\| \widehat{f}_{\alpha}(\rho_{-})K_{-}^{\frac{1}{p^{\prime}}}\|_{S_{2}}^{2}=\infty.\]
Hence, since \(\|T\|_{S_{2}}\leq\|T\|_{S_{p^{\prime}}}\) for \(1<p^{\prime}<2\), for the given \(f_{\alpha}\), \(\frac{1}{2}>\alpha>1-\frac{1}{p^{\prime}}\), we get
\[\|\mathcal{F}_{p}(f_{\alpha})\rho_{+}\|_{S_{p^{\prime}}}^{p^{\prime}}+\| \mathcal{F}_{p}(f_{\alpha})\rho_{-}\|_{S_{p^{\prime}}}^{p^{\prime}}=\infty,1<p ^{\prime}<2.\]
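The lower bound \(|\int_{0}^{sp}t^{-\alpha}\cos t\,dt|\geq B\) used above can be probed numerically: for \(\alpha\in(0,\frac{1}{2})\) the partial integrals approach the positive limit \(\Gamma(1-\alpha)\sin(\frac{\pi\alpha}{2})\). The sketch below (illustrative, not from the paper; \(\alpha=0.4\) and the integration grids are arbitrary choices) checks both the uniform lower bound and the convergence:

```python
import numpy as np
from math import gamma, sin, pi

# Illustrative check (not from the paper): the partial integrals
# I(X) = int_0^X t^{-alpha} cos t dt stay bounded away from 0 for large X and
# approach Gamma(1 - alpha) * sin(pi * alpha / 2). Here alpha = 0.4.
alpha = 0.4
limit = gamma(1 - alpha) * sin(pi * alpha / 2)

# Log-spaced points handle the integrable singularity at 0; linear points
# resolve the oscillation of cos t out to X = 2000.
t = np.concatenate([np.logspace(-10, 0, 20_000),
                    np.linspace(1.0, 2000.0, 100_000)[1:]])
f = t ** (-alpha) * np.cos(t)
segments = 0.5 * (f[1:] + f[:-1]) * np.diff(t)   # cumulative trapezoid rule
I = np.concatenate([[0.0], np.cumsum(segments)])

tail = I[t >= 20.0]
assert tail.min() > 0.25          # a uniform lower bound B exists for X >= 20
assert abs(I[-1] - limit) < 0.15  # the partial integrals approach the limit
```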
## 3. Similitude Group
This section details the Fourier analysis of the similitude group, \(\mathbb{SIM}(2)\). In this setup, we will define the Weyl transform formula and show that it cannot be extended as a bounded operator for the symbol in the corresponding \(L^{p}\) spaces with \(2<p<\infty\).
The similitude group \(\mathbb{SIM}(2)\) can be considered a generalization of the affine group: it contains the affine group as a subgroup, and one can also view \(\mathbb{SIM}(2)\) as a complexification of the affine group \(U\). This group arises in the study of two-dimensional wavelet transforms. It is a four-parameter group containing the translations in the image plane \(\mathbb{R}^{2}\), global dilations (zooming in and out by \(a>0\)), and rotations around the origin (\(\theta\in[0,2\pi)\)). The action on the plane is given by
\[x=(b,a,\theta)y=aR_{\theta}y+b,\quad x,y\in\mathbb{R}^{2},\]
where \(b\in\mathbb{R}^{2}\), \(a>0\), and \(R_{\theta}\) is the \(2\times 2\) rotation matrix,
\[R_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}. \tag{3.1}\]
A useful representation of \((b,a,\theta)\) is the \(3\times 3\) matrix
\[(b,a,\theta)=\begin{pmatrix}aR_{\theta}&b\\ 0^{\intercal}&1\end{pmatrix},\ 0^{\intercal}=(0,0). \tag{3.2}\]
Matrix multiplication then gives the composition of successive transformations, and the group law is derived from it:
\[(b,a,\theta)*(b^{\prime},a^{\prime},\theta^{\prime}) = (b+aR_{\theta}b^{\prime},aa^{\prime},\theta+\theta^{\prime})\] \[e = (0,1,0)\] \[(b,a,\theta)^{-1} = (-a^{-1}R_{-\theta}b,a^{-1},-\theta).\]
With respect to the operation \(*\), \(\mathbb{SIM}(2)\) is a non-abelian group in which \((0,1,0)\) is the identity element and \((\frac{-1}{a}R_{-\theta}b,\frac{1}{a},-\theta)\) is the inverse of \((b,a,\theta)\) in \(\mathbb{SIM}(2)\). Also, it can be shown that \(\mathbb{SIM}(2)\) is a non-unimodular group as its left and right Haar measures given by
\[d\mu_{L}(b,a,\theta)=\frac{dbdad\theta}{a^{3}},\ d\mu_{R}(b,a,\theta)=\frac{ dbdad\theta}{a},\]
respectively, are different.
Moreover, the similitude group \(\mathbb{SIM}(2)\) has the structure of a semidirect product:
\[\mathbb{SIM}(2)=\mathbb{R}^{2}\rtimes(\mathbb{R}_{*}^{+}\times\mathrm{SO}(2)),\]
where \(\mathbb{R}^{2}\) is the subgroup of translations, \(\mathbb{R}_{*}^{+}\) that of dilations, and \(\mathrm{SO}(2)\) that of rotations. Topologically, one can identify \(\mathbb{R}^{2}\) with \(\mathbb{C}\), the complex plane, and \(\mathbb{R}_{*}^{+}\times\mathrm{SO}(2)\) with \(\mathbb{C}^{*}\), the complex plane with the origin removed. Then
\[\mathbb{SIM}(2)=\mathbb{C}\rtimes\mathbb{C}^{*},\]
with elements \((z,w)\), where \(z\in\mathbb{C}\) and \(w\in\mathbb{C}^{*}\), and group composition law
\[(z_{1},w_{1})(z_{2},w_{2})=(z_{1}+w_{1}z_{2},w_{1}w_{2}).\]
In particular, the elements \((z,w)\) with \(z=b+ic\), \(c=0\), and \(w=ae^{i\theta}\), \(\theta=0\), clearly constitute a subgroup, which is just the affine group of the line. Thus the affine group is a subgroup of \(\mathbb{SIM}(2)\).
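A quick numerical sketch (illustrative, not from the paper; the group elements are arbitrary choices) confirms that the matrix representation (3.2) reproduces the stated group law and that the law agrees with the complex form \((z_{1}+w_{1}z_{2},w_{1}w_{2})\):

```python
import numpy as np

# Illustrative check (not from the paper): the 3x3 matrices (3.2) reproduce
# the SIM(2) group law (b + a R_theta b', a a', theta + theta'), and the law
# matches the complex form with z = b1 + i b2 and w = a e^{i theta}.
def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def as_matrix(b, a, theta):
    M = np.eye(3)
    M[:2, :2] = a * R(theta)
    M[:2, 2] = b
    return M

(b1, a1, t1) = (np.array([1.0, -2.0]), 2.0, 0.3)
(b2, a2, t2) = (np.array([0.5, 1.0]), 0.5, 1.1)

# Matrix product vs. the stated group law
prod = as_matrix(b1 + a1 * (R(t1) @ b2), a1 * a2, t1 + t2)
assert np.allclose(as_matrix(b1, a1, t1) @ as_matrix(b2, a2, t2), prod)

# Complex form of the same law: a rotation acts as multiplication by e^{i theta}
to_c = lambda v: v[0] + 1j * v[1]
z1, w1 = to_c(b1), a1 * np.exp(1j * t1)
z2, w2 = to_c(b2), a2 * np.exp(1j * t2)
assert np.isclose(z1 + w1 * z2, to_c(b1 + a1 * (R(t1) @ b2)))
assert np.isclose(w1 * w2, (a1 * a2) * np.exp(1j * (t1 + t2)))
```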
Let \(\mathcal{U}(L^{2}(\mathbb{R}^{2}))\) be the set of all unitary operators on \(L^{2}(\mathbb{R}^{2})\), and let \(\pi:\mathbb{SIM}(2)\to\mathcal{U}(L^{2}(\mathbb{R}^{2}))\) be the mapping of \(\mathbb{SIM}(2)\) into the group \(\mathcal{U}(L^{2}(\mathbb{R}^{2}))\) given by
\[\big{(}\pi(b,a,\theta)\phi\big{)}(x)=ae^{-ib\cdot x}\phi(aR_{-\theta}x),\quad x \in\mathbb{R}^{2},\]
for all \((b,a,\theta)\) in \(\mathbb{SIM}(2)\) and all \(\phi\) in \(L^{2}(\mathbb{R}^{2})\). Up to equivalence, \(\pi\) is the only irreducible unitary representation of \(\mathbb{SIM}(2)\) on \(L^{2}(\mathbb{R}^{2})\) [5]; i.e., the dual of \(\mathbb{SIM}(2)\) is a singleton.
Let \(f\in L^{1}(\mathbb{SIM}(2))\cap L^{2}(\mathbb{SIM}(2))\) and define the Fourier transform \(\widehat{f}\) of \(f\) on \(\widehat{\mathbb{SIM}(2)}=\{\pi\}\) by
\[(\widehat{f}(\pi)\phi)(x)=\int\limits_{\mathbb{SIM}(2)}f(b,a,\theta)(\pi(b,a, \theta)\phi)(x)\frac{dbdad\theta}{a^{3}},\quad x\in\mathbb{R}^{2},\]
for all \(\phi\in L^{2}(\mathbb{R}^{2})\).
Define the function \(K_{\pi}\phi\) on \(\mathbb{R}^{2}\) by,
\[(K_{\pi}\phi)(x)=\|x\|_{2}^{2}\phi(x),\quad x\in\mathbb{R}^{2}, \tag{3.3}\]
where \(\|x\|_{2}=\sqrt{x_{1}^{2}+x_{2}^{2}}\). Define the \(L^{p}\)-Fourier transform \(\mathcal{F}_{p}(f)\pi=\widehat{f}(\pi)K_{\pi}^{\frac{1}{p^{\prime}}}\), \(1\leq p\leq\infty\), \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\). We now recall the Fourier inversion and Plancherel formulas for the similitude group from [5].
**Theorem 3.1**.: _(Inversion theorem)_
_Let \(f\) be in \(L^{1}(\mathbb{SIM}(2))\cap L^{2}(\mathbb{SIM}(2))\). Then_
\[f(b,a,\theta)=\mathrm{Tr}\left(\pi(b,a,\theta)^{*}\widehat{f}(\pi)K_{\pi} \right).\]
Let \(S_{r}\) be the space of all \(r\)-Schatten class operators, a Banach space with the norm \(\|A\|_{S_{r}}^{r}=\mathrm{Tr}\left(A^{*}A\right)^{r/2}\) for \(1\leq r<\infty\); for \(r=\infty\), the norm is the usual operator norm.
**Theorem 3.2**.: _(Plancherel formula) Let \(f\) be in \(L^{1}(\mathbb{SIM}(2))\cap L^{2}(\mathbb{SIM}(2))\). Then_
\[\int_{\mathbb{SIM}(2)}|f(b,a,\theta)|^{2}\frac{dbdad\theta}{a^{3}}=\| \widehat{f}(\pi)K_{\pi}^{1/2}\|_{S_{2}}^{2}.\]
For \(G=\mathbb{SIM}(2)\), recall the definition of the Weyl transform given in (1.1); Theorem 1.6 says that it is bounded when \(1\leq p\leq 2\). We now prove the unboundedness of the Weyl transform when \(p>2\) for the similitude group.
**Theorem 3.3**.: _For \(2<p<\infty\), there exists an operator valued function \(\sigma\) in \(L^{p}(\mathbb{SIM}(2)\times\widehat{\mathbb{SIM}(2)},S_{p})\) such that \(W_{\sigma}\) is not a bounded linear operator on \(L^{2}(\mathbb{SIM}(2))\)._
We prove Theorem 3.3 by contraposition, via the following three propositions.
**Proposition 3.4**.: _Let \(2<p<\infty\) and \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\). Then for all \(\sigma\in L^{p}(\mathbb{SIM}(2)\times\widehat{\mathbb{SIM}(2)},S_{p})\), the Weyl transform \(W_{\sigma}\) is a bounded linear operator on \(L^{2}(\mathbb{SIM}(2))\) if and only if there exists a constant \(C\) such that_
\[\|W(f,g)\|_{p^{\prime},\mu}\leqslant C\|g\|_{L^{2}(\mathbb{SIM}(2))}\|f\|_{L^{ 2}(\mathbb{SIM}(2))},\]
_for all \(f,g\) in \(L^{2}(\mathbb{SIM}(2))\)._
Proof.: The proof is analogous to that of Proposition 2.4.
**Proposition 3.5**.: _Let \(2<p<\infty\) and \(f\) be a square integrable, compactly supported function on \(\mathbb{SIM}(2)\) such that \(\int\limits_{\mathbb{SIM}(2)}f(b,a,\theta)d\mu_{L}(b,a,\theta)\neq 0\). If \(W_{\sigma}\) is a bounded operator on \(L^{2}(\mathbb{SIM}(2))\) for all \(\sigma\) in \(L^{p}(\mathbb{SIM}(2)\times\widehat{\mathbb{SIM}(2)},S_{p})\), then_
\[\|\mathcal{F}_{p}(f)\pi\|_{S_{p^{\prime}}}^{p^{\prime}}<\infty.\]
Proof.: The proof is analogous to that of Proposition 2.5.
**Proposition 3.6**.: _For \(p\in(2,\infty)\), does there exist a square-integrable, compactly supported function \(f\) on \(\mathbb{SIM}(2)\) with \(\int\limits_{\mathbb{SIM}(2)}f(b,a,\theta)d\mu(b,a,\theta)\neq 0\) such that \(\|\mathcal{F}_{p}(f)\pi\|_{S_{p^{\prime}}}^{p^{\prime}}=\infty\)?_
The answer to this question is yes: at the end of this section we exhibit a function \(f_{\alpha}\), \(0<\alpha<\frac{1}{2}\), with the property required in Proposition 3.6. The \(L^{p}\)-Fourier transform, for \(f\in L^{1}(\mathbb{SIM}(2))\cap L^{p}(\mathbb{SIM}(2))\), is defined by
\[\mathcal{F}_{p}(f)\pi=\widehat{f}(\pi)K_{\pi}^{\frac{1}{p^{\prime}}},\]
where \((K_{\pi}\phi)(x)=\|x\|_{2}^{2}\phi(x)\). The integral representation of the operator \(\mathcal{F}_{p}(f)\pi\) is given by
\[\left(\widehat{f}(\pi)K_{\pi}^{\frac{1}{p^{\prime}}}\phi\right)(x)=\iint\limits_{\mathbb{R}^{2}}(\mathcal{F}_{1}f)\left(x,\frac{\|y\|}{\|x\|},\cos^{-1}\left(\frac{x\cdot y}{\|x\|\|y\|}\right)\right)\|y\|^{\frac{2}{p^{\prime}}-1}\frac{\|x\|}{\|y\|^{2}}\phi(y)dy,\]
and the kernel
\[K^{f}(x,y)=\begin{cases}(\mathcal{F}_{1}f)\left(x,\frac{\|y\|}{\|x\|},\cos^{-1}\left(\frac{x\cdot y}{\|x\|\|y\|}\right)\right)\|y\|^{\frac{2}{p^{\prime}}-1}\frac{\|x\|}{\|y\|^{2}},&x\neq 0,y\neq 0,\\ 0,&\text{otherwise}.\end{cases} \tag{3.4}\]
Now
\[\iint\limits_{\mathbb{R}^{4}}|K^{f}(x,y)|^{2}dxdy =\iint\limits_{\mathbb{R}^{4}}\left|(\mathcal{F}_{1}f)\left(x, \frac{\|y\|}{\|x\|},\cos^{-1}\left(\frac{x\cdot y}{\|x\|\|y\|}\right)\right) \right|^{2}\|y\|^{\frac{4}{p^{\prime}}-2}\frac{\|x\|^{2}}{\|y\|^{4}}dxdy\] \[=\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\int_{0}^{2\pi}|(\mathcal{ F}_{1}f)(x,a,\theta)|^{2}(a\|x\|_{2})^{\frac{4}{p^{\prime}}-2}\frac{dxdad\theta}{a^{3}}.\]
For \(0<\alpha<\frac{1}{2}\), let us choose
\[f_{\alpha}(b,a,\theta)=\begin{cases}|b_{1}|^{-\alpha}|b_{2}|^{-\alpha}a^{3} \theta,\quad(b,a,\theta)\in Q\\ 0,\qquad\text{otherwise},\end{cases}\]
where \(Q=\{(b,a,\theta):-p\leq b_{1},b_{2},a\leq p,\theta\in[0,2\pi)\}\). Hence
\[(\mathcal{F}_{1}f_{\alpha})(\xi,a,\theta)=\left[(2\pi)^{-1}\int_{-p}^{p}e^{-ib_{1 }\xi_{1}}|b_{1}|^{-\alpha}db_{1}\int_{-p}^{p}e^{-ib_{2}\xi_{2}}|b_{2}|^{-\alpha }db_{2}\right]a^{3}\theta. \tag{3.5}\]
Now for \(\xi_{1},\xi_{2}>0\),
\[\int_{-p}^{p}e^{-ib_{j}\xi_{j}}|b_{j}|^{-\alpha}db_{j}=2\left(\int_{0}^{\xi_{j }p}t^{-\alpha}\cos tdt\right)\xi_{j}^{-1+\alpha},\quad j=1,2. \tag{3.6}\]
When \(\alpha\in(0,\frac{1}{2})\), we have \(|\int\limits_{0}^{\xi_{j}p}t^{-\alpha}\cos t\,dt|\geq A\) for some \(A>0\) and all \(\xi_{j}\geq R\). Since \(\frac{1}{\sqrt{2}}(|\xi_{1}|+|\xi_{2}|)\leq\|\xi\|_{2}\leq|\xi_{1}|+|\xi_{2}|\), we have \(\|\xi\|_{2}\geq\frac{1}{\sqrt{2}}|\xi_{1}|\). Now using equations (3.5) and (3.6), we get
\[\iint\limits_{\mathbb{R}^{4}}|K^{f_{\alpha}}(x,y)|^{2}dxdy =\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\int_{0}^{2\pi}|(\mathcal{F}_{1}f_{\alpha})(\xi,a,\theta)|^{2}(a\|\xi\|_{2})^{\frac{4}{p^{\prime}}-2}\frac{d\xi\,da\,d\theta}{a^{3}}\] \[\geq(2\pi)^{-2}(2A)^{4}\,2^{1-\frac{2}{p^{\prime}}}\int_{0}^{2\pi}\int_{0}^{p}\int_{R}^{\infty}\int_{R}^{\infty}\xi_{1}^{2(-1+\alpha)}\xi_{2}^{2(-1+\alpha)}a^{6}\theta^{2}\left(a|\xi_{1}|\right)^{\frac{4}{p^{\prime}}-2}\frac{d\xi_{1}d\xi_{2}\,da\,d\theta}{a^{3}}\] \[=\frac{4A^{4}}{\pi^{2}}\,2^{1-\frac{2}{p^{\prime}}}\int_{0}^{2\pi}\theta^{2}d\theta\int_{0}^{p}a^{1+\frac{4}{p^{\prime}}}da\int_{R}^{\infty}\xi_{1}^{2(-1+\alpha)+\frac{4}{p^{\prime}}-2}d\xi_{1}\int_{R}^{\infty}\xi_{2}^{2(-1+\alpha)}d\xi_{2}.\]
But
\[\int_{R}^{\infty}\xi_{1}^{2(-1+\alpha)+\frac{4}{p^{\prime}}-2}d\xi_{1}= \infty,\]
when \(\alpha>\frac{3}{2}-\frac{2}{p^{\prime}}\). Also, \(f_{\alpha}\) is square integrable if \(\alpha<\frac{1}{2}\). Choosing \(\frac{1}{2}>\alpha>\frac{3}{2}-\frac{2}{p^{\prime}}\) when \(\frac{4}{3}<p^{\prime}<2\), and \(\frac{1}{2}>\alpha>0>\frac{3}{2}-\frac{2}{p^{\prime}}\) when \(1<p^{\prime}\leq\frac{4}{3}\), we get \(\widehat{f}_{\alpha}(\pi)K_{\pi}^{\frac{1}{p^{\prime}}}\notin S_{2}\). Thus, since \(\|T\|_{S_{2}}\leq\|T\|_{S_{p^{\prime}}}\) for \(p^{\prime}\leq 2\), for \(2<p<\infty\),
\[\|\widehat{f}_{\alpha}(\pi)K_{\pi}^{\frac{1}{p^{\prime}}}\|_{S_{p^{\prime}}}= \infty,\frac{1}{p}+\frac{1}{p^{\prime}}=1,\]
where \(\frac{1}{2}>\alpha>\frac{3}{2}-\frac{2}{p^{\prime}},2>p^{\prime}>\frac{4}{3}\), and \(\frac{1}{2}>\alpha>0>\frac{3}{2}-\frac{2}{p^{\prime}},1<p^{\prime}\leq\frac{4} {3}\).
## 4. Affine Poincare Group
This section covers the Fourier analysis of the affine Poincare group, \(\mathcal{P}_{\text{aff}}\), in detail. In this setup, we will define the Weyl transform and show that it cannot be extended as a bounded operator for the symbol in the corresponding \(L^{p}\) spaces with \(2<p<\infty\).
The affine Poincaré group \(\mathcal{P}_{\text{aff}}\) is a generalization of the affine group, or more precisely, a complexification of it. This group emerges from the investigation of the two-dimensional wavelet transform. Like the four-parameter similitude group, \(\mathcal{P}_{\text{aff}}\) contains the translations \(b\) in the image plane \(\mathbb{R}^{2}\) and global dilations (zooming in and out by \(a>0\)), but with hyperbolic rotations around the origin (\(\vartheta\in\mathbb{R}\)). The action on the plane is given by
\[x=(b,a,\vartheta)y=a\Lambda_{\vartheta}y+b,\]
where \(b\in\mathbb{R}^{2}\), \(a>0\), and \(\Lambda_{\vartheta}\) is the \(2\times 2\) hyperbolic rotation matrix
\[\Lambda_{\vartheta}=\begin{pmatrix}\cosh\vartheta&\sinh\vartheta\\ \sinh\vartheta&\cosh\vartheta\end{pmatrix}. \tag{4.1}\]
A convenient representation of the joint transformation \((b,a,\vartheta)\) is in the form of \(3\times 3\) matrices
\[(b,a,\vartheta)=\begin{pmatrix}a\Lambda_{\vartheta}&b\\ 0^{T}&1\end{pmatrix},\ 0^{T}=(0,0). \tag{4.2}\]
Then the matrix multiplications gives the composition of successive transformations and thus the group law is derived as
\[(b,a,\vartheta)\ast(b^{\prime},a^{\prime},\vartheta^{\prime})=(b+a\Lambda_{\vartheta}b^{\prime},aa^{\prime},\vartheta+\vartheta^{\prime}).\]
With respect to the operation \(\ast\), \(\mathcal{P}_{\text{aff}}\) is a non-abelian group in which \((0,1,0)\) is the identity element and \((\frac{-1}{a}\Lambda_{-\vartheta}b,\frac{1}{a},-\vartheta)\) is the inverse of \((b,a,\vartheta)\) in \(\mathcal{P}_{\text{aff}}\). Also, it can be shown that \(\mathcal{P}_{\text{aff}}\) is a non-unimodular group as its left and right Haar measures
\[d\mu_{L}(b,a,\vartheta)=\frac{dbdad\vartheta}{a^{3}},\ d\mu_{R}(b,a,\vartheta) =\frac{dbdad\vartheta}{a},\]
respectively, are different; hence the modular function is given by \(\Delta(b,a,\vartheta)=\frac{1}{a^{2}}\). Moreover, the affine Poincare group \(\mathcal{P}_{\text{aff}}\) has the structure of a semi-direct product:
\[\mathcal{P}_{\text{aff}}=\mathbb{R}^{2}\rtimes(\mathbb{R}_{\ast}^{+}\times \operatorname{SO}(1,1)),\]
where \(\mathbb{R}^{2}\) is the subgroup of translations, \(\mathbb{R}_{\ast}^{+}\) that of dilations, and \(\operatorname{SO}(1,1)\) that of hyperbolic rotations. Topologically, one can write \(\mathcal{P}_{\text{aff}}=\mathbb{R}^{2}\times\mathcal{C}\), where \(\mathcal{C}\) is any one of the four cones:
\[C_{1}^{1} =\{x\in\mathbb{R}^{2}:x_{1}^{2}>x_{2}^{2},\ x_{1}>0\} \tag{4.3}\] \[C_{2}^{1} =\{x\in\mathbb{R}^{2}:x_{1}^{2}>x_{2}^{2},\ x_{1}<0\}\] (4.4) \[C_{1}^{2} =\{x\in\mathbb{R}^{2}:x_{1}^{2}<x_{2}^{2},\ x_{2}>0\}\] (4.5) \[C_{2}^{2} =\{x\in\mathbb{R}^{2}:x_{1}^{2}<x_{2}^{2},\ x_{2}<0\}. \tag{4.6}\]
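As a quick numerical sanity check of the matrix realization (4.2), the group law, and the stated inverse, one can run the small script below. This is an illustrative sketch, not part of the original text; all helper names are ad hoc.

```python
import math

def mat(b, a, t):
    """3x3 matrix of the element (b, a, theta) as in (4.2)."""
    ch, sh = math.cosh(t), math.sinh(t)
    return [[a * ch, a * sh, b[0]],
            [a * sh, a * ch, b[1]],
            [0.0,    0.0,    1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(3) for j in range(3))

g = ((0.3, -1.2), 2.0, 0.7)
h = ((1.5, 0.4), 0.5, -0.2)
(b, a, t), (bp, ap, tp) = g, h

# group law: (b,a,theta)*(b',a',theta') = (b + a*Lambda_theta b', a a', theta + theta')
ch, sh = math.cosh(t), math.sinh(t)
comp_b = (b[0] + a * (ch * bp[0] + sh * bp[1]),
          b[1] + a * (sh * bp[0] + ch * bp[1]))
assert close(matmul(mat(*g), mat(*h)), mat(comp_b, a * ap, t + tp))

# inverse: ((-1/a) Lambda_{-theta} b, 1/a, -theta) composed with g gives the identity
chm, shm = math.cosh(-t), math.sinh(-t)
inv_b = (-(chm * b[0] + shm * b[1]) / a, -(shm * b[0] + chm * b[1]) / a)
identity = mat((0.0, 0.0), 1.0, 0.0)
assert close(matmul(mat(*g), mat(inv_b, 1.0 / a, -t)), identity)
```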
Let us define the Fourier transform \(\mathcal{F}\) and inverse Fourier transform \(\mathcal{F}^{-1}\), by
\[(\mathcal{F}\varphi)(\xi)=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}e^{i\langle\xi;x\rangle}\varphi(x)dx, \tag{4.7}\] \[(\mathcal{F}^{-1}\varphi)(x)=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}e^{-i\langle\xi;x\rangle}\varphi(\xi)d\xi, \tag{4.8}\]
for all \(\varphi\in S(\mathbb{R}^{2})\), where \(\langle x;y\rangle=x_{1}y_{1}-x_{2}y_{2}\) is the Minkowski inner product. Throughout this section, \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the Minkowski-Fourier transform and the inverse Minkowski-Fourier transform on \(\mathbb{R}^{2}\), respectively.
**Remark 1**.: _The relation between Euclidean Fourier transform and Minkowski-Fourier transform on \(\mathbb{R}^{2}\) is_
\[(\mathcal{F}\varphi)(\xi_{1},\xi_{2})=\widehat{\varphi}(-\xi_{1},\xi_{2}),( \mathcal{F}^{-1}\varphi)(\xi_{1},\xi_{2})=\widetilde{\varphi}(-\xi_{1},\xi_{2}),\]
_where \(\widehat{\varphi}\) and \(\widetilde{\varphi}\) are the Euclidean Fourier transform and the inverse Euclidean Fourier transform of \(\varphi\) on \(\mathbb{R}^{2}\), respectively. The reader can easily verify that the Minkowski-Fourier transform enjoys almost all the properties of the Euclidean Fourier transform on \(\mathbb{R}^{2}\), such as the Plancherel formula and the Parseval identity._
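Remark 1 can be illustrated numerically. The ad hoc check below (not part of the text) compares grid-quadrature approximations of both transforms; it assumes the Euclidean transform is taken with the kernel \(e^{-i\eta\cdot x}\), which is the convention under which the stated relation holds.

```python
import cmath, math

N, L = 41, 4.0
xs = [-L + 2 * L * k / (N - 1) for k in range(N)]
dx = xs[1] - xs[0]

def phi(x1, x2):
    # an asymmetric test function, so the sign flip in xi_1 actually matters
    return (1.0 + x1) * math.exp(-x1 * x1 - 2.0 * x2 * x2)

def minkowski_ft(xi1, xi2):
    # kernel e^{i <xi; x>} with <xi; x> = xi1*x1 - xi2*x2, as in (4.7)
    s = sum(cmath.exp(1j * (xi1 * x1 - xi2 * x2)) * phi(x1, x2)
            for x1 in xs for x2 in xs)
    return s * dx * dx / (2 * math.pi)

def euclidean_ft(e1, e2):
    # Euclidean kernel e^{-i (e1*x1 + e2*x2)}
    s = sum(cmath.exp(-1j * (e1 * x1 + e2 * x2)) * phi(x1, x2)
            for x1 in xs for x2 in xs)
    return s * dx * dx / (2 * math.pi)

# Remark 1: (F phi)(xi1, xi2) = hat{phi}(-xi1, xi2)
for xi in [(0.5, 1.3), (-1.0, 0.2)]:
    assert abs(minkowski_ft(*xi) - euclidean_ft(-xi[0], xi[1])) < 1e-9
```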
Define the mappings \(\pi_{i}^{j}:\mathcal{P}_{\text{aff}}\to\mathcal{U}(L^{2}(C_{i}^{j}))\), by
\[\left(\pi_{i}^{j}(b,a,\vartheta)\phi\right)(x)=ae^{i\langle x;b\rangle}\phi(a\Lambda_{-\vartheta}x),\]
for all \((b,a,\vartheta)\in\mathcal{P}_{\text{aff}}\) and \(\phi\in L^{2}(C_{i}^{j})\), \(i,j=1,2\). Let \(f\in L^{1}(\mathcal{P}_{\text{aff}})\cap L^{2}(\mathcal{P}_{\text{aff}})\). The Fourier transform of \(f\) is defined by
\[\left(\widehat{f}(\pi_{i}^{j})\phi\right)(x)=\int_{\mathcal{P}_{\text{aff}}}f( b,a,\vartheta)\left(\pi_{i}^{j}(b,a,\vartheta)\phi\right)(x)d\mu_{L}(b,a, \vartheta),\]
for all \(\phi\in L^{2}(C_{i}^{j})\), \(i,j=1,2\). Define the Duflo-Moore operators [6] by
\[(K_{i,j}\phi)(x)=\frac{1}{2\pi}|\langle x;x\rangle|\phi(x),\quad i,j=1,2.\]
Define the \(L^{p}\)-Fourier transform
\[\mathcal{F}_{p}f(\pi_{i}^{j})=\widehat{f}(\pi_{i}^{j})K_{i,j}^{\frac{1}{p^{ \prime}}},i,j=1,2,\]
where \(\frac{1}{p}+\frac{1}{p^{\prime}}=1,1\leqslant p\leqslant\infty\).
**Theorem 4.1** (Plancherel Theorem).: _[_6_]_ _Let \(f\) be in \(L^{1}(\mathcal{P}_{\text{aff}})\cap L^{2}(\mathcal{P}_{\text{aff}})\). Then_
\[\int_{\mathcal{P}_{\text{aff}}}|f(b,a,\vartheta)|^{2}\frac{dbdad\vartheta}{a^{3}} =\sum_{i,j=1}^{2}\|\widehat{f}(\pi_{i}^{j})K_{i,j}^{\frac{1}{2}}\|_{S_{2}}^{2}.\]
**Theorem 4.2** (Inversion Theorem).: _[_6_]_ _Let \(f\) be in \(L^{1}(\mathcal{P}_{\text{aff}})\cap L^{2}(\mathcal{P}_{\text{aff}})\). Then_
\[f(b,a,\vartheta)=\sum_{i,j=1}^{2}\operatorname{Tr}\left(\pi_{i}^{j}(b,a, \vartheta)^{*}\widehat{f}(\pi_{i}^{j})K_{i,j}\right).\]
For \(G=\mathcal{P}_{\text{aff}}\), recall the definition of the Weyl transform in (1.1); Theorem 1.6 says that it is bounded when \(1\leqslant p\leqslant 2\). We now prove the unboundedness of the Weyl transform when \(p>2\).
**Theorem 4.3**.: _For \(2<p<\infty\), there exists an operator valued function \(\sigma\) in \(L^{p}(U\times\widehat{U},S_{p})\) such that \(W_{\sigma}\) is not a bounded linear operator on \(L^{2}(U)\)._
We prove Theorem 4.3 by contraposition, via the following three propositions.
**Proposition 4.4**.: _Let \(2<p<\infty\) and \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\). Then for all \(\sigma\in L^{p}(\mathcal{P}_{\text{aff}}\times\widehat{\mathcal{P}_{\text{aff}}},S_{p})\), the Weyl transform \(W_{\sigma}\) is a bounded linear operator on \(L^{2}(\mathcal{P}_{\text{aff}})\) if and only if there exists a constant \(C\) such that_
\[\|W(f,g)\|_{p^{\prime},\mu}\leqslant C\|g\|_{L^{2}(\mathcal{P}_{\text{aff}})} \|f\|_{L^{2}(\mathcal{P}_{\text{aff}})},\]
_for all \(f,g\) in \(L^{2}(\mathcal{P}_{\text{aff}})\)._
Proof.: This follows from Theorem 2.4.
**Proposition 4.5**.: _Let \(2<p<\infty\) and \(f\) be a square integrable, compactly supported function on \(\mathcal{P}_{\text{aff}}\) such that \(\underset{\mathcal{P}_{\text{aff}}}{\int}f(b,a,\vartheta)d\mu_{L}(b,a, \vartheta)\neq 0\). If \(W_{\sigma}\) is a bounded operator on \(L^{2}(\mathcal{P}_{\text{aff}})\) for all \(\sigma\) in \(L^{p}(\mathcal{P}_{\text{aff}}\times\widehat{\mathcal{P}_{\text{aff}}},S_{p})\), then_
\[\|\mathcal{F}_{p}(f)\pi_{1}^{1}\|_{S_{p^{\prime}}}^{p^{\prime}}+\|\mathcal{F}_{ p}(f)\pi_{2}^{2}\|_{S_{p^{\prime}}}^{p^{\prime}}+\|\mathcal{F}_{p}(f)\pi_{2}^{1}\|_{S_{p^ {\prime}}}^{p^{\prime}}+\|\mathcal{F}_{p}(f)\pi_{1}^{2}\|_{S_{p^{\prime}}}^{p^ {\prime}}<\infty.\]
Proof.: This follows from Theorem 2.5.
**Proposition 4.6**.: _For \(p\in(2,\infty)\), does there exist a square-integrable, compactly supported function \(f\) on \(\mathcal{P}_{\text{aff}}\) with \(\underset{\mathcal{P}_{\text{aff}}}{\int}f(b,a,\vartheta)d\mu_{L}(b,a,\vartheta)\neq 0\) such that_
\[\|\mathcal{F}_{p}(f)\pi_{1}^{1}\|_{S_{p^{\prime}}}^{p^{\prime}}+\|\mathcal{F}_{ p}(f)\pi_{2}^{2}\|_{S_{p^{\prime}}}^{p^{\prime}}+\|\mathcal{F}_{p}(f)\pi_{2}^{1}\|_{S_{p^ {\prime}}}^{p^{\prime}}+\|\mathcal{F}_{p}(f)\pi_{1}^{2}\|_{S_{p^{\prime}}}^{p^{ \prime}}=\infty?\]
The answer to this question is yes. At the end of this section we exhibit a function \(f_{\alpha}\), \(0<\alpha<\frac{1}{2}\), with the properties required in Proposition 4.6. Let \(f\in L^{2}(\mathcal{P}_{\text{aff}})\). For \(i,j=1,2\), consider the operators
\[\left(\widehat{f}(\pi_{i}^{j})K_{i,j}^{\frac{1}{p^{\prime}}}\phi \right)(x) =\int_{\mathcal{P}_{\text{aff}}}f(b,a,\vartheta)\left(\pi_{i}^{j}(b,a, \vartheta)K_{i,j}^{\frac{1}{p^{\prime}}}\phi\right)(x)\frac{dbdad\vartheta}{a^ {3}}\] \[=\frac{1}{2\pi}\int_{\mathcal{P}_{\text{aff}}}f(b,a,\vartheta)ae^{ i\langle x;b\rangle}|\langle a\Lambda_{-\vartheta}x;a\Lambda_{-\vartheta}x\rangle|^{ \frac{1}{p^{\prime}}}\phi(a\Lambda_{-\vartheta}x)\frac{dbdad\vartheta}{a^{3}}\] \[=\int_{0}^{\infty}\int_{\mathbb{R}}a^{1+\frac{2}{p^{\prime}}}( \mathcal{F}_{1}f)(x,a,\vartheta)|\langle x;x\rangle|^{\frac{1}{p^{\prime}}}\phi(a \Lambda_{-\vartheta}x)\frac{dad\vartheta}{a^{3}},\]
where \(\phi\in L^{2}(C_{i}^{j})\) and \(\mathcal{F}_{1}\) denotes the Minkowski-Fourier transform in the variable \(b\). Substituting \(a\Lambda_{-\vartheta}x=y\), we get \(a=\left(\frac{\langle y;y\rangle}{\langle x;x\rangle}\right)^{\frac{1}{2}}\), \(\vartheta=\cosh^{-1}\left(\frac{\langle x;y\rangle}{\langle x;x\rangle}\left(\frac{\langle x;x\rangle}{\langle y;y\rangle}\right)^{\frac{1}{2}}\right)\), and \(dy=a|\langle x;x\rangle|\,dad\vartheta\).
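For completeness, the substitution formulas can be justified as follows (a brief derivation sketch, not in the original text; the swap matrix \(K\) is an auxiliary symbol introduced only here):

```latex
% y = a\Lambda_{-\vartheta}x. Since \Lambda_{\vartheta} preserves the Minkowski
% form, \langle y;y\rangle = a^{2}\langle x;x\rangle, which gives the formula for a.
% Also \langle x;\Lambda_{-\vartheta}x\rangle = \cosh\vartheta\,\langle x;x\rangle, hence
\begin{align*}
  \langle x;y\rangle &= a\cosh\vartheta\,\langle x;x\rangle
  \quad\Longrightarrow\quad
  \vartheta=\cosh^{-1}\!\left(\frac{\langle x;y\rangle}{\langle x;x\rangle}
     \left(\frac{\langle x;x\rangle}{\langle y;y\rangle}\right)^{\frac{1}{2}}\right).\\
  \intertext{For the Jacobian, with $K=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$,}
  \frac{\partial y}{\partial a}&=\Lambda_{-\vartheta}x,\qquad
  \frac{\partial y}{\partial\vartheta}=-a\,\Lambda_{-\vartheta}Kx,\\
  \left|\det\frac{\partial y}{\partial(a,\vartheta)}\right|
  &=a\,\bigl|\det\Lambda_{-\vartheta}\bigr|\,\bigl|\det[x\ \ Kx]\bigr|
   =a\,|x_{1}^{2}-x_{2}^{2}|=a\,|\langle x;x\rangle|.
\end{align*}
```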
Hence the integral representations become
\[\left(\widehat{f}(\pi_{i}^{j})K_{i,j}^{\frac{1}{p^{\prime}}}\phi\right)(x)= \int_{C_{i}^{j}}K_{i,j}^{f}(x,y)\phi(y)dy,\]
where
\[K_{i,j}^{f}(x,y)=(\mathcal{F}_{1}f)\Bigg{(}x,\left(\frac{\langle y;y\rangle}{ \langle x;x\rangle}\right)^{\frac{1}{2}},\cosh^{-1}\left(\frac{\langle x;y \rangle}{\langle x;x\rangle}\times\left(\frac{\langle x;x\rangle}{\langle y;y \rangle}\right)^{\frac{1}{2}}\right)\Bigg{)}\left(\frac{\langle y;y\rangle}{\langle x ;x\rangle}\right)^{\frac{1}{2}+\frac{1}{p^{\prime}}}|\langle x;x\rangle|^{ \frac{1}{p}}\frac{|\langle x;x\rangle|}{|\langle y;y\rangle|^{2}}, \tag{4.9}\]
for all \((x,y)\in C_{i}^{j}\times C_{i}^{j}\), \(i,j=1,2\). Now using the Plancherel formula for Minkowski-Fourier transform \(\mathcal{F}\), we get
\[\|\mathcal{F}_{p}(f)\pi_{1}^{1}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f)\pi_{1}^{2}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f)\pi_{2}^{1}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f)\pi_{2}^{2}\|_{S_{2}}^{2}\] \[=\sum_{i,j=1}^{2}\int_{C_{i}^{j}}\int_{C_{i}^{j}}|K_{i,j}^{f}(x,y)|^{2}dxdy\] \[=\sum_{i,j=1}^{2}\int_{C_{i}^{j}}\int_{C_{i}^{j}}\left|(\mathcal{F}_{1}f)\Bigg{(}x,\left(\frac{\langle y;y\rangle}{\langle x;x\rangle}\right)^{\frac{1}{2}},\cosh^{-1}\left(\frac{\langle x;y\rangle}{\langle x;x\rangle}\times\left(\frac{\langle x;x\rangle}{\langle y;y\rangle}\right)^{\frac{1}{2}}\right)\Bigg{)}\right|^{2}\left(\frac{\langle y;y\rangle}{\langle x;x\rangle}\right)^{1+\frac{2}{p^{\prime}}}\] \[\times|\langle x;x\rangle|^{\frac{2}{p}}\frac{|\langle x;x\rangle|^{2}}{|\langle y;y\rangle|^{4}}dxdy\] \[=\sum_{i,j=1}^{2}\int_{C_{i}^{j}}\int_{0}^{\infty}\int_{\mathbb{R}}|(\mathcal{F}_{1}f)(x,a,\vartheta)|^{2}\,a^{1+\frac{2}{p^{\prime}}}|\langle x;x\rangle|^{\frac{2}{p^{\prime}}}\frac{|\langle x;x\rangle|^{2}}{|\langle y;y\rangle|^{4}}a|\langle x;x\rangle|dad\vartheta dx\] \[=\sum_{i,j=1}^{2}\int_{C_{i}^{j}}\int_{0}^{\infty}\int_{\mathbb{R}}|(\mathcal{F}_{1}f)(x,a,\vartheta)|^{2}\,a^{\frac{2}{p^{\prime}}-6}|\langle x;x\rangle|^{\frac{2}{p^{\prime}}-1}dad\vartheta dx\] \[=\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\int_{\mathbb{R}}|(\mathcal{F}_{1}f)(x,a,\vartheta)|^{2}\,a^{\frac{2}{p^{\prime}}-6}|\langle x;x\rangle|^{\frac{2}{p^{\prime}}-1}dxdad\vartheta. \tag{4.10}\]
Let us consider a function on \(\mathbb{R}^{2}\), with \(0<\alpha<\frac{1}{2}\),
\[\phi_{\alpha}(b)=\begin{cases}|b_{1}|^{-\alpha}|b_{2}|^{-\alpha},\quad b\in Q,\\ 0,\qquad\text{otherwise},\end{cases}\]
where \(Q=\{b\in\mathbb{R}^{2}:-p\leq b_{j}\leq p,j=1,2\}\). Then the Minkowski-Fourier transform of \(\phi_{\alpha}\) becomes
\[(\mathcal{F}\phi_{\alpha})(x)=\frac{1}{2\pi}\int_{-p}^{p}e^{ib_{1}x_{1}}|b_{1}| ^{-\alpha}db_{1}\int_{-p}^{p}e^{-ib_{2}x_{2}}|b_{2}|^{-\alpha}db_{2}.\]
Now for \(x_{2}>0\) and \(0<\alpha<\frac{1}{2}\), with \(A>0\) a lower bound for the cosine integral below (valid when the argument is bounded away from zero; only \(x_{2},x_{1}\geq R\) is used later),
\[\int_{-p}^{p}e^{-ib_{2}x_{2}}|b_{2}|^{-\alpha}db_{2}=2\left(\int\limits_{0}^{ px_{2}}t^{-\alpha}\cos tdt\right)\!x_{2}^{-1+\alpha}\geq 2Ax_{2}^{-1+\alpha}, \tag{4.11}\]
and for \(x_{1}>0\), \(0<\alpha<\frac{1}{2}\),
\[\int_{-p}^{p}e^{ib_{1}x_{1}}|b_{1}|^{-\alpha}db_{1}=2\left(\int\limits_{0}^{ px_{1}}t^{-\alpha}\cos tdt\right)\!x_{1}^{-1+\alpha}\geq 2Ax_{1}^{-1+\alpha}. \tag{4.12}\]
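The identity behind (4.11) and (4.12) can be verified numerically. The snippet below is an ad hoc midpoint-rule check (not from the paper); the midpoint rule avoids the integrable singularity of \(|b|^{-\alpha}\) at \(b=0\), and the odd sine part of the exponential cancels by symmetry.

```python
import math

def lhs(x, alpha, p, n=4000):
    # int_{-p}^{p} e^{-i b x} |b|^{-alpha} db; the sin part cancels, leaving
    # 2 * int_0^p cos(b x) b^{-alpha} db, approximated by the midpoint rule.
    h = p / n
    return 2 * sum(math.cos((k + 0.5) * h * x) * ((k + 0.5) * h) ** (-alpha) * h
                   for k in range(n))

def rhs(x, alpha, p, n=4000):
    # 2 x^{alpha-1} * int_0^{p x} t^{-alpha} cos t dt, midpoint rule again
    T = p * x
    h = T / n
    integral = sum(((k + 0.5) * h) ** (-alpha) * math.cos((k + 0.5) * h) * h
                   for k in range(n))
    return 2 * x ** (alpha - 1) * integral

for x in (0.7, 2.0, 5.0):
    assert abs(lhs(x, 0.3, 3.0) - rhs(x, 0.3, 3.0)) < 1e-6
```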
Let us choose the function \(f_{\alpha}\) on \(\mathcal{P}_{\text{aff}}\), with \(0<\alpha<\frac{1}{2}\),
\[f_{\alpha}(b,a,\vartheta)=\begin{cases}\phi_{\alpha}(b)a^{5}\vartheta,\quad b \in Q,0<a<p,-p\leq\vartheta\leq p,\\ 0,\qquad\text{otherwise}.\end{cases}\]
Now, using equations (4.11) and (4.12), equation (4.10) becomes
\[\|\mathcal{F}_{p}(f_{\alpha})\pi_{1}^{1}\|_{S_{2}}^{2}+\|\mathcal{ F}_{p}(f_{\alpha})\pi_{1}^{2}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f_{\alpha})\pi_{2}^{ 1}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f_{\alpha})\pi_{2}^{2}\|_{S_{2}}^{2}\] \[\geq 16A^{4}\int_{R}^{\infty}\int_{R}^{\infty}\int_{0}^{p}\int_{-p }^{p}x_{1}^{2(-1+\alpha)}x_{2}^{2(-1+\alpha)}a^{\frac{2}{p^{\prime}}-1} \vartheta^{2}|\langle x;x\rangle|^{\frac{2}{p^{\prime}}-1}dxdad\vartheta. \tag{4.13}\]
Let \(B=\{(x_{1},x_{2}):x_{1},x_{2}>R,\ x_{2}<x_{1}-1\}\). It is easy to check that
\[x_{1}^{2}-x_{2}^{2}=x_{1}^{2}+x_{2}^{2}-2x_{2}^{2}\geq 2x_{1}x_{2}-2x_{2}^{2}= 2x_{2}(x_{1}-x_{2})\geq 2x_{2},\]
on \(B\). Considering the double integral in \(x\) on the right-hand side of equation (4.13), we have
\[\int_{R}^{\infty}\int_{R}^{\infty}x_{1}^{2(-1+\alpha)}x_{2}^{2(-1+\alpha)}|\langle x;x\rangle|^{\frac{2}{p^{\prime}}-1}dx \geq\iint_{B}x_{1}^{2(-1+\alpha)}x_{2}^{2(-1+\alpha)}(2x_{2})^{\frac{2}{p^{\prime}}-1}dx\] \[=\int_{x_{1}=R}^{\infty}\int_{x_{2}=R}^{x_{2}=x_{1}-1}x_{1}^{2(-1+\alpha)}x_{2}^{2(-1+\alpha)}(2x_{2})^{\frac{2}{p^{\prime}}-1}dx_{2}dx_{1}\] \[=2^{\frac{2}{p^{\prime}}-1}\int_{x_{1}=R}^{\infty}\int_{x_{2}=R}^{x_{2}=x_{1}-1}x_{1}^{2(-1+\alpha)}x_{2}^{2(-1+\alpha)+\frac{2}{p^{\prime}}-1}dx_{2}dx_{1}\] \[=K\int_{x_{1}=R}^{\infty}x_{1}^{2(-1+\alpha)}(x_{1}-1)^{2(-1+\alpha)+\frac{2}{p^{\prime}}}dx_{1}\] \[-KR^{2(-1+\alpha)+\frac{2}{p^{\prime}}}\int_{R}^{\infty}x_{1}^{2(-1+\alpha)}dx_{1},\]
where \(K=\frac{1}{2(-1+\alpha)+\frac{2}{p^{\prime}}}\). The integral
\[\int_{x_{1}=R}^{\infty}x_{1}^{2(-1+\alpha)}(x_{1}-1)^{2(-1+\alpha)+\frac{2}{p^{ \prime}}}dx_{1}=\infty,\]
when \(\frac{1}{2}>\alpha\geq\frac{3}{4}-\frac{1}{2p^{\prime}}>0\), \(1<p^{\prime}<2\), while the integral \(\int\limits_{R}^{\infty}x_{1}^{2(-1+\alpha)}dx_{1}<\infty\) when \(\alpha<\frac{1}{2}\). Hence the left-hand side of equation (4.13) is infinite, and therefore
\[\|\mathcal{F}_{p}(f_{\alpha})\pi_{1}^{1}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f_{ \alpha})\pi_{1}^{2}\|_{S_{2}}^{2}+\|\mathcal{F}_{p}(f_{\alpha})\pi_{2}^{1}\|_ {S_{2}}^{2}+\|\mathcal{F}_{p}(f_{\alpha})\pi_{2}^{2}\|_{S_{2}}^{2}=\infty,\]
when \(\frac{1}{2}>\alpha\geq\frac{3}{4}-\frac{1}{2p^{\prime}}>0,1<p^{\prime}<2\). Thus for these values of \(\alpha\),
\[\|\mathcal{F}_{p}(f_{\alpha})\pi_{1}^{1}\|_{S_{p^{\prime}}}^{p^{\prime}}+\| \mathcal{F}_{p}(f_{\alpha})\pi_{2}^{2}\|_{S_{p^{\prime}}}^{p^{\prime}}+\| \mathcal{F}_{p}(f_{\alpha})\pi_{2}^{1}\|_{S_{p^{\prime}}}^{p^{\prime}}+\| \mathcal{F}_{p}(f_{\alpha})\pi_{1}^{2}\|_{S_{p^{\prime}}}^{p^{\prime}}=\infty.\]
## Acknowledgment
The author is thankful to Prof. Aparajita Dasgupta for many fruitful discussions on the problem.
|
2304.02514 | APIHarvest: Harvesting API Information from Various Online Sources | Using APIs to develop software applications is the norm. APIs help developers
to build applications faster as they do not need to reinvent the wheel. It is
therefore important for developers to understand the APIs that they plan to
use. Developers should also make themselves aware of relevant information
updates about APIs. In order to do so, developers need to find and keep track
of relevant information about the APIs that they are concerned with. Yet, the
API information is scattered across various online sources, which makes it
difficult to track by hand. Moreover, identifying content that is related to an
API is not trivial. Motivated by these challenges, in this work, we introduce a
tool named APIHarvest that aims to ease the process of finding API information from
various online sources. APIHarvest is built on works that link APIs or libraries to
various online sources. It supports finding API information on GitHub
repositories, Stack Overflow's posts, tweets, YouTube videos, and common
vulnerability and exposure (CVE) entries; and is extensible to support other
sources. | Ferdian Thung, Kisub Kim, Ting Zhang, Ivana Clairine Irsan, Ratnadira Widyasari, Zhou Yang, David Lo | 2023-04-05T15:38:43Z | http://arxiv.org/abs/2304.02514v1 | # APIHARVEST: Harvesting API Information from Various Online Sources
###### Abstract
Using APIs to develop software applications is the norm. APIs help developers to build applications faster as they do not need to reinvent the wheel. It is therefore important for developers to understand the APIs that they plan to use well. Developers should also make themselves aware of relevant information updates about APIs. In order to do so, developers need to find and keep track of relevant information about the APIs that they are concerned with. Yet, the API information is scattered across various online sources, which makes it difficult to track by hand. Moreover, identifying content that is related to an API is not trivial. Motivated by these challenges, in this work, we introduce a tool named APIHarvest that aims to ease the process of finding API information from various online sources. APIHarvest is built on works that link APIs or libraries to various online sources. It supports finding API information on GitHub repositories, Stack Overflow's posts, tweets, YouTube videos, and common vulnerability and exposure (CVE) entries; and is extensible to support other sources.
API mining, API information, multi-source
## I Introduction
APIs are indispensable components of modern software development [1]. APIs allow developers to focus on developing the main functionality of their software as they can rely on APIs for common functionalities, which consequently lowers software production time and improves developer productivity. Due to its importance, developers should understand the APIs that they plan to use well. They can read API-related information in online sources, such as related code in GitHub, related posts in Stack Overflow, or related tweets in Twitter.
While developers can find the API information themselves, manually finding it could be tedious and time-consuming. First, the API information is scattered across various online sources. Thus, developers would need to open many different websites and perform a search on each one of them. Second, the search functionality of these websites is typically not targeted at API search. Thus, developers may need to go through a considerable number of search results before finding the content that is related to the API they search for. Each piece of content may consist of code or text that contains possible mentions of the API in question. These mentions may be ambiguous, especially if the API name is generic (e.g., read), which may refer to APIs from different libraries or simply the English word _read_.
Existing works have dealt with the challenges of linking content that is related to either an API or a library while handling the ambiguity that may exist in the possible mentions of API [2, 3, 4, 5]. Luong et al. [2] proposed an approach that leverages semantic and syntactical analysis to determine whether a Stack Overflow thread is related to an API. Asryofi et al. [3] developed an approach that enables a more accurate code search on GitHub by performing type resolution on the retrieved code and returns only the code related to a particular API. Zhang et al. [4] investigated the effectiveness of several pre-trained models in the task of determining whether a tweet is related to a library. They found that RoBERTa [6] performs the best. Haryono et al. [5] investigated several eXtreme Multi-label Learning (XML) techniques to identify libraries from CVE entries, with LightXML [7] achieving the best performance.
Although the aforementioned work can be used to identify API content, they can only be used individually. As such, the overall experience is not much of a departure from opening multiple websites at once. Moreover, due to the nature of some contents (e.g., tweets), a direct mention of an API is either extremely rare or non-existent altogether. For such content, it is likely that only a library linking approach is available. In such a case, an extra step of identifying a unique library name that an API belongs to is needed before the available linking
approach can be utilized to find API information.
In this work, we built a tool named APIHarvest that can integrate the existing API/library linking approaches in one interface. It aims to streamline the process of finding APIs from many different contents. For content that only has a library linking approach, APIHarvest can automatically map an API to its library in order to leverage the library linking approach. It also supports content that has no linking approach available and defaults to a generic text search. With these features, APIHarvest is extensible to support various types of content.
The rest of the paper is structured as follows: Section II discusses the preliminaries that contain some API/library linking approaches that we build our tool upon. Section III provides the details of our proposed tool. Section IV describe the evaluation of APIHarvest. Finally, related work and conclusion are presented in Section V and Section VI, respectively.
## II Preliminaries
We describe API/library linking approaches that we leverage in APIHarvest. These approaches can be used to identify whether a content is related to an API or a library.
### _ARSeek_
ARSeek [2] is a method for determining if a post on Stack Overflow is discussing a specific API by analyzing the natural language paragraphs and the code snippets in the post. It has two components: DATYS+ and API Relevance Classifier. The first component, DATYS+, takes in potential posts, which are posts that contain a word matching the simple name of the given API method, and API candidates, which are API methods with the same simple name as the given API method. It then calculates a confidence score for the given API method being referred to in the thread. The second component, API Relevance Classifier, converts the natural language paragraphs and code snippets in a Stack Overflow thread, together with the comment and implementation code of the given API method, into API Relevance Embeddings. A score is then calculated from these embeddings to determine the likelihood that the thread is discussing the given API method. If the score is above a certain threshold, the thread is considered relevant to the given API method.
### _AUSearch_
Asyrofi et al. [3] created AUSearch, an approach that uses type system and API method signatures, which consists of a class name, simple name, and parameters, to improve the precision of GitHub Code Search.1 This approach solves the problem of ambiguous API invocations in GitHub code examples, which was a limitation of the GitHub Code Search. By implementing type resolution, AUSearch eliminates irrelevant code examples and significantly increases GitHub Code Search's accuracy for finding code containing the API.
Footnote 1: [https://github.com/search](https://github.com/search)
### _Pretrained Models for Library-related Tweets_
Zhang et al. [4] explored several pre-trained models to identify whether a tweet is related to a library. They experimented with both general-purpose (i.e., BERT [8], RoBERTa [6], and XLNet [9]) and domain-specific pre-trained models (i.e., BERTweet [10] and BERTOverflow [11]). They fine-tuned these models to the task of classifying library-related tweets. They found that RoBERTa [6], which is a robustly optimized BERT [8], performs the best and beats existing state-of-the-art.
### _XML Techniques for Identifying Libraries in CVE_
Haryono et al. [5] explored several eXtreme Multi-label Learning (XML) techniques to identify libraries that are mentioned in CVE entries. They evaluated five traditional models, which are FastXML [12], DiSMEC [13], Parabel [14], Bonsai [15], and ExtremeText [16]. They also evaluated two deep-learning based models, which are XML-CNN [17] and LightXML [7]. These models are trained on a CVE dataset where the labels are the libraries mentioned across all CVE entries in the dataset. Bonsai and LightXML were shown to be the most effective, with LightXML leading the pack.
## III Tool Architecture
### _Overview_
Figure 1 shows the overall architecture of APIHarvest. APIHarvest consists of three main components: (1) _API Database Builder_, (2) _API Map Builder_, and (3) _API Information Retrieval Engine_. API Database Builder and API Map Builder belong to the _Data Preparation_ phase, while API Information Retrieval Engine belongs to the _Deployment_ phase.

Fig. 1: The architecture of APIHarvest
In the Data Preparation phase, APIHarvest populates the _API Database_ and _API Map Database_ with API knowledge from _Input Content_ and _Libraries_, respectively. API Database holds contents from various sources that have been linked to APIs, while API Map Database holds mappings from APIs to their libraries. _API Database Builder_ uses existing API/library linking approaches (if available) to label the Input Content. It only stores the Input Content in the API Database if the Input Content is labelled as relevant by an existing API/library linking approach. _API Map Builder_ adds the mapping from APIs to libraries when only a library linking approach is available to extract information from the Input Content. In the Deployment phase, _API Information Retrieval Engine_ accepts an API query and makes use of the API Database and the API Map Database to return the relevant API content. We describe the APIHarvest components in detail in the following subsections.
### _API Database Builder_
Given an Input Content, the API Database Builder first identifies the content source. If there exists an API linking approach that supports the content source, API Database Builder runs the corresponding API linking approach on the Input Content. If the Input Content is labelled as relevant to API(s), the Input Content is stored in the API Database with links to all relevant APIs. Similarly, if there exists only a library linking approach that supports the content source, API Database Builder runs the corresponding library linking approach and stores the Input Content to the API Database, with links to all relevant libraries. On the other hand, if there is no API/library linking approach that supports the content source, the Input Content is stored without any links. In all of the above cases, API Database Builder runs a script that adapts the approach's output format to the database format.
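To make the routing just described concrete, here is a minimal sketch of the dispatch logic. All names (the linker registries, `build_entry`, and the toy linker) are hypothetical and illustrative only; this is not APIHarvest's actual code.

```python
# Hypothetical registries: which linking approach supports which source.
API_LINKERS = {"stackoverflow": "ARSeek", "github": "AUSearch"}   # API-level linking
LIB_LINKERS = {"twitter": "RoBERTa", "cve": "LightXML"}           # library-level linking

def build_entry(content, source, run_linker):
    """Label `content` with the appropriate linker and shape a DB record.

    `run_linker(name, content)` is assumed to return the list of relevant
    APIs/libraries (empty if the content is irrelevant).
    """
    if source in API_LINKERS:
        apis = run_linker(API_LINKERS[source], content)
        return {"content": content, "source": source, "apis": apis} if apis else None
    if source in LIB_LINKERS:
        libs = run_linker(LIB_LINKERS[source], content)
        return {"content": content, "source": source, "libs": libs} if libs else None
    # no linking approach available: store without links (BM25 search later)
    return {"content": content, "source": source}

# toy linker: "relevant" if the text mentions the API simple name "hashCode"
fake = lambda name, text: ["hashCode"] if "hashCode" in text else []

assert build_entry("post about hashCode", "stackoverflow", fake) == \
    {"content": "post about hashCode", "source": "stackoverflow", "apis": ["hashCode"]}
assert build_entry("unrelated tweet", "twitter", fake) is None
assert build_entry("some video", "youtube", fake) == \
    {"content": "some video", "source": "youtube"}
```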
### _API Map Builder_
If the Input Content is from a source that is only supported by a library linking approach, API Database Builder sends Libraries that are outputted by the library linking approach to API Map Builder. API Map Builder scans all of these libraries and extracts all public APIs from each library (e.g., by using tools such as javap2). It then stores the library name and the public APIs' names in the API Map Database.
Footnote 2: [https://docs.oracle.com/javase/7/docs/technotes/tools/windows/javap.html](https://docs.oracle.com/javase/7/docs/technotes/tools/windows/javap.html)
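A rough illustration of this step follows; the helper names are hypothetical, and the `javap`-style parsing is deliberately simplistic (real `javap` output has more cases than this sketch handles).

```python
def parse_javap(javap_output):
    """Very rough parse of javap-style output into public method names."""
    apis = []
    for line in javap_output.splitlines():
        line = line.strip()
        if line.startswith("public") and "(" in line:
            # e.g. "public static int hashCode(java.lang.Object...);"
            apis.append(line.split("(")[0].split()[-1])
    return apis

def update_api_map(api_map, library, javap_output):
    """Record an API name -> set of library names mapping."""
    for api in parse_javap(javap_output):
        api_map.setdefault(api, set()).add(library)
    return api_map

sample = """Compiled from "Objects.java"
public final class com.google.common.base.Objects {
  public static int hashCode(java.lang.Object...);
  public static boolean equal(java.lang.Object, java.lang.Object);
}"""
m = update_api_map({}, "guava", sample)
assert m == {"hashCode": {"guava"}, "equal": {"guava"}}
```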
### _API Information Retrieval Engine_
Given an API query, API Information Retrieval Engine first passes the query to the _API Mapper_. The API Mapper queries the API Map Database for the name of the library that the API in the query belongs to. It then passes both the API name and the library name to the _API Content Retriever_. API Content Retriever goes through all the available API information sources (e.g., Stack Overflow, GitHub, etc.) recorded in the API Database. For each source, it queries the API Database using either the API name or the library name, depending on the linking approach used for the source: the API name if an API linking approach was used, and the library name if a library linking approach was used. If neither linking approach was used, API Content Retriever falls back to a classical information retrieval strategy: it computes the BM25 score [18] between the API name and each content item from the source and ranks the items accordingly. After API Content Retriever obtains the relevant contents from each source, it aggregates all contents and returns them as the API Content.
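For the fallback case, the sketch below shows one common variant of BM25 scoring (not necessarily the exact formulation APIHarvest uses); it ranks documents by summed per-term scores.

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.2, b=0.75):
    """Rank `docs` against `query` with a standard BM25 scoring variant."""
    toks = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in toks) / N
    df = Counter()                      # document frequency per term
    for t in toks:
        df.update(set(t))
    q = query.lower().split()
    scores = []
    for i, t in enumerate(toks):
        tf = Counter(t)
        s = 0.0
        for w in q:
            if w not in tf:
                continue
            idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append((s, i))
    return [docs[i] for s, i in sorted(scores, reverse=True)]

videos = ["java hashmap tutorial",
          "hashCode and equals explained in java",
          "cooking pasta at home"]
assert bm25_rank("hashCode", videos)[0] == "hashCode and equals explained in java"
```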
Figure 2 shows the user interface when searching contents related to an API method com.google.common.base.Object.hashCode().
The interface allows users to type the API name into the search bar. Once the user stops typing, the API name is sent to the API Information Retrieval Engine, which retrieves the related contents and shows them to the user. The contents from different sources are organized in the form of tabs: StackOverflow, GitHub, Tweet, CVE, and YouTube. Each tab contains the relevant API content from each source (if any). The figure shows examples of relevant content in StackOverflow for the given API name.
## IV Evaluation
### _API Information Sources_
For evaluation and prototyping, we integrate the following information sources into APIHarvest: Stack Overflow posts, GitHub code snippets, tweets, CVE entries, and YouTube videos. We run ARSeek [2] and AUSearch [3] to link APIs to Stack Overflow posts and GitHub code snippets, respectively.
Fig. 2: The user interface of APIHarvest
Similarly, we run Zhang et al.'s approach [4] and Haryono et al.'s approach [5] to link software libraries to tweets and CVE entries, respectively. We use the datasets from their studies and input the test sets as the Input Content. We also use the implementations provided in their replication packages. For each source, we wrote a script that adapts the data format of the linked content to the database format. Note that such a script is also needed when extending APIHarvest to support new API/library linking approaches.
For YouTube videos, we randomly selected 100 classes from the Java Standard Library.3 We then query the YouTube Search API and take the top-100 video metadata returned by the API. We then wrote a script to input these metadata into the API Database. As no API/library linking approach is available for this source, APIHarvest performs information retrieval based on the BM25 score.
Footnote 3: [https://docs.oracle.com/javase/7/docs/api/overview-summary.html](https://docs.oracle.com/javase/7/docs/api/overview-summary.html)
### _Usability_
We evaluate APIHarvest in terms of its usability. We do not evaluate the linking effectiveness as it is inherited from the respective API/library linking approaches. We invited 3 Ph.D. students in Computer Science and 2 Research Engineers in Computer Science to evaluate the usability of APIHarvest. All of them have more than 5 years of programming experience. We provided them access to APIHarvest and asked them to score three aspects: whether APIHarvest is easy to use, whether APIHarvest is useful, and whether they would like to use APIHarvest in the future. The scores range from 1 to 5, indicating strongly disagree, disagree, neutral, agree, and strongly agree, respectively. The evaluators consider APIHarvest to be easy to use (4.8 out of 5) and useful (4.6 out of 5), and are willing to use APIHarvest in the future (4 out of 5).
## V Related Work
### _API Linking Approaches_
API linking approaches aim to link an API to content that is related to it. A group of approaches aims to link an API to a code snippet [3, 19, 20, 21]. To identify API mentions in code snippets, Baker [19] iteratively performs a deductive linking analysis. Both COSTER [20] and STATTYPE [21] work by capturing and learning the tokens surrounding a code element and associating these tokens with similar tokens that they have encountered before. On the other hand, AUSearch [3] leverages type resolution to resolve the API that is being called from within a code.
Another group of approaches aims to link an API to software engineering texts such as Stack Overflow posts [22, 23, 24, 25]. In this line of work, several studies [22, 23, 24, 25] make use of classical information retrieval techniques and/or some heuristics. Miler, which was developed by Bacchelli et al. [24], used string matching and IR techniques to link emails to source code entities in software systems, such as classes in object-oriented systems and functions in procedural language systems. Dagenais and Robillard [25] used filtering heuristics to link APIs mentioned in software support channels (e.g., mailing lists and forums). ARSeek, which was developed by Luong et al. [2], performs a semantic and a syntactical analysis on a Stack Overflow post to determine if the post is related to an API.
### _Library Linking Approaches_
Library linking approaches aim to link a library to content that is related to it. A group of approaches can be used to link a library to microblogs such as tweets [4, 26, 27]. Prasetyo et al. [26] proposed an approach to identify whether a microblog is relevant to software. They extracted features from textual content and URLs in tweets. They then trained a classifier using Support Vector Machine (SVM) to determine whether a tweet is software-related. Sulistya et al. [27] proposed an approach to exploit knowledge from rich platforms such as Stack Exchange and Stack Overflow in other platforms such as Twitter. They extracted word embeddings from Stack Exchange and Stack Overflow, and used these embeddings as features in a classifier that identifies software-related tweets. These two approaches can be used to identify library-related tweets by filtering software-related tweets that contain library names. More recently, Zhang et al. [4] explored the capability of pre-trained models to identify library-related tweets and found them to be more effective than baseline approaches.
Another group of approaches aims to link a library to CVE entries [5, 28]. Chen et al. [28] were the first to propose automatically linking libraries to the vulnerabilities mentioned in CVE entries. They cast the problem as eXtreme Multi-label Learning (XML), where libraries are the labels, and used FastXML [12] to identify the libraries related to CVE entries. Haryono et al. [5] continued this direction by evaluating many XML algorithms and found that LightXML [7] is the best XML algorithm for this task.
## VI Conclusion and Future Work
API information is scattered across various online sources. Linking this information to APIs is not a trivial task, as some API mentions are ambiguous because different APIs can share the same method name. Although approaches have been developed to perform the linking, they are typically designed to target only a specific source, and for some sources no API linking approach is available at all. Due to these issues, we present APIHarvest, a tool that finds API information from various online sources and presents it in a single interface. APIHarvest makes use of both API and library linking approaches to find API information, and it accommodates all kinds of sources, regardless of the availability of a dedicated API/library linking approach. Our preliminary evaluation supports the usability of APIHarvest. In the future, we plan to add more online sources by leveraging more API/library linking approaches. We release the source code of APIHarvest at [https://github.com/soarsmu/APIHarvest](https://github.com/soarsmu/APIHarvest).
# Accuracy of the slow-rotation approximation for black holes in modified gravity in light of astrophysical observables

Pablo A. Cano, Alexander Deich, Nicolás Yunes

2023-05-24 · [arXiv:2305.15341v3](http://arxiv.org/abs/2305.15341v3)
###### Abstract
Near-future, space-based, radio- and gravitational-wave interferometry missions will enable us to rigorously test whether the Kerr solution of general relativity accurately describes astrophysical black holes, or if it requires some kind of modification. At the same time, recent work has greatly improved our understanding of theories of gravity that modify the Einstein-Hilbert action with terms quadratic in the curvature, allowing us to calculate black hole solutions to (essentially) arbitrary order in a slow-rotation expansion. Observational constraints of such quadratic gravity theories require the calculation of observables that are robust against the expansion order of the black hole solution used.
We carry out such a study here and determine the accuracy with respect to expansion order of ten observables associated with the spacetime outside a rotating black hole in two quadratic theories of gravity, dynamical-Chern-Simons and scalar-Gauss-Bonnet gravity. We find that for all but the most rapidly rotating black holes, only about the first eight terms in the spin expansion are necessary to achieve an accuracy that is better than the statistical uncertainties of current and future missions.
## I Introduction
From gravitational wave detectors, such as LIGO/VIRGO [1; 2], to very long baseline interferometers, like the Event Horizon Telescope [3], we have never been better able to probe the extreme gravity environment near black holes (BHs) [4; 5]. The remarkable precision of these, as well as of future space-based detectors, allows us to interrogate Einstein's theory of general relativity (GR) to a finer degree than ever before [6; 4]. Among the myriad applications of these tests, placing bounds or constraints on well-motivated theoretical modifications to GR is the topic of this work [7; 8].
The action underlying the field equations of general relativity, the Einstein-Hilbert (EH) action, is a gem of predictive success, having survived a century of rigorous testing [9; 10]. However, one might expect that the EH action is not the whole story. For one, multiple candidates for theories of quantum gravity conjecture modifications to the EH action; the latter is linear in the curvature through the Ricci scalar, while (effective) quantum gravity models introduce curvature terms that are higher than linear [11; 12; 13]. Second, that the EH action produces a theory that is so successful across a wide range of energy scales suggests that any modification to GR may only appear at high curvatures; therefore, we might expect the EH action to merely be the leading-order term of an effective theory, expanded in powers of curvature [14; 15; 16].
In this work, we focus on two such modifications to GR that introduce quadratic curvature terms, dynamical-Chern-Simons (dCS) [11] and scalar-Gauss-Bonnet (sGB) gravity [7; 17]. These theories introduce a dynamical scalar (sGB) and a pseudo-scalar (dCS) degree of freedom that couples non-minimally to the metric through the Gauss-Bonnet (sGB) and Pontryagin (dCS) topological invariant, respectively. Both theories are well motivated, either from quantum gravity extensions of GR such as heterotic string theory [11; 18] or from effective field theories of gravity [19; 11]. These theories naturally avoid binary pulsar constraints [20; 21], but they are now beginning to be bounded through observations of gravitational waves with advanced LIGO (aLIGO) in the sGB case [22; 23; 24; 25; 26] or observations of neutron stars with aLIGO and the Neutron star Interior Composition ExploreR (NICER) in the dCS case [27].
Finding rotating BH solutions to modified field equations, such as in dCS and sGB gravity, is in general extremely complicated. One way to do so is through non-perturbative, numerical methods [28; 29; 30; 31; 32], but, when doing so, one must be careful to properly resolve steep gradients, which may arise near horizons and curvature singularities. Another way to find modified BH solutions is perturbatively, as a simultaneous series expansion in small rotation or spin and in small (modified gravity) coupling [33; 34; 35; 36]. This is difficult because the perturbed modified field equations can become quite complicated at high enough spin order. Previous studies have often been stymied by this difficulty, and have therefore, until recently, been truncated to relatively low expansion order [33; 37; 38; 39; 13]. Such a truncation limits
any study to BHs with relatively low spin values, i.e., to BHs in a spin regime where the perturbative expansion is valid.
Recent work has made great progress in understanding how the modifications to the field equations produce modified BH solutions, allowing for analytic modified BH metrics of arbitrary order in a small spin expansion [18]. However, the high-order solutions generated through this procedure can be extremely cumbersome due to the high number of terms required in the expansions. We then have a conundrum for the working physicist who wishes to use these modified metrics: if one wishes to calculate a given observable in a modified theory to some accuracy, what order of expansion should one use? While it would technically be feasible to simply use the highest-order solution possible, the unwieldy size of these solutions renders this route computationally impractical. Instead, it would be much better if one could calculate the sought-after observable using the BH metric at the lowest order needed for a given accuracy, leveraging the fact that all observations have finite statistical certainty, and thus, saving considerable computation time.
But how do we determine what this order is for a given modified BH metric? This is the main topic of this paper. We focus on BHs in dCS gravity and sGB gravity as an example and on the following ten observables: (i) the mass quadrupole moment; (ii) the photon ring perimeter radius; (iii) the angular momentum on the photon ring; (iv) the orbital frequency on the photon ring; (v) the Lyapunov exponent on the photon ring; (vi) the perimeter radius of the ergosphere; (vii) the perimeter radius of the innermost stable circular orbit (ISCO); (viii) the angular momentum on the ISCO; (ix) the binding energy on the ISCO; and (x) the orbital frequency on the ISCO. For each of these observables, we calculate their corrections at each spin order in the metric up to order 24.
Calculating these observables becomes tricky at high enough order in spin. If done entirely analytically, the number of terms makes the calculations take an extremely long time, even on high-performance computing clusters. On the other hand, if done entirely numerically, the precision required quickly overwhelms what is available with double precision. To overcome these issues, we have developed a novel, semi-analytic method where we calculate only the observable's _correction_ analytically, and then store the previous order value numerically. This allows us to perform the calculations in a reasonable amount of time, while also minimizing any numerical instabilities.
We find that for BH spins of less than 0.7, only the first 6 orders in the spin expansion are required to calculate the observables to a relative difference of \(10^{-2}\). Figure 1 shows how the error in the calculation of observables scales with the order kept in the spin expansion, for dCS BHs of various spins. Observe that only when the dimensionless spin is very high must a high order in the spin expansion be used. Throughout this paper, we will present the order required in the BH metric as a function of the BH spin value and the sought-after accuracy.
Of course, ours is not the first work that has studied the spin order required in an approximate modified BH metric to place constraints on certain observables. Previous studies have performed similar analyses, for example when looking at indicators of chaos in particle trajectories of these modified metrics [38; 39] or when looking at BH shadow observations [40; 41]. Our work extends these previous studies by not only considering far more observables, but also far higher orders in the small spin expansion, which have only recently been made possible by the (essentially) arbitrary-order metrics of [18]. Our work, therefore, now allows for careful data analysis studies of these theories against observations that will be robust to the approximate nature of the modified BH metrics used.
The remainder of this paper presents the details that lead to the conclusions summarized above, and it is organized as follows. In Sec. II, we discuss the two theories of quadratic gravity and their BH solutions. In Sec. III, we give a description of the observables in question and describe how they are calculated. Section IV covers how the error in the observables behaves as a function of spin and expansion order. Finally, Sec. V discusses the implications of the work presented. Henceforth, we use geometric units in which \(G=1=c\).
## II Rotating black holes in SGB and dCS gravity
Here, we present the basics of sGB and dCS gravity, and then briefly describe the BH solutions of each.
### The quadratic gravity action
We can modify the conventional EH action through an expansion in curvature terms, casting the EH term as merely the leading-order term in a broader effective field theory (EFT). In addition to this general EFT argument, quadratic theories also arise naturally from certain low-energy expansions of specific string theories [11; 33].
The action for these theories is defined as
\[S=S_{\rm EH}+S_{\rm mat}+S_{\vartheta}+S_{RR}, \tag{1}\]
where \(S_{\rm EH}\) is the EH action, \(S_{\rm mat}\) is the matter action, \(S_{\vartheta}\) is an action for a dynamical scalar or pseudo-scalar field, and \(S_{RR}\) couples a quadratic curvature term to the field. The only distinction between the two quadratic theories we are concerned with is in this final term. The EH action reads
\[S_{\rm EH}=\kappa\int d^{4}x\,\sqrt{-g}\,R\,, \tag{2}\]
where \(\kappa=(16\pi)^{-1}\), \(g\) is the metric tensor determinant and \(R=g^{\alpha\beta}g^{\rho\sigma}R_{\rho\alpha\sigma\beta}\) the Ricci scalar, with the Riemann tensor \(R_{\rho\alpha\sigma\beta}\). The action for the scalar or the pseudo-scalar field \(S_{\vartheta}\) is
\[S_{\vartheta}=-\frac{1}{2}\int d^{4}x\sqrt{-g}\left[\nabla_{\mu}\vartheta \nabla^{\mu}\vartheta+2V(\vartheta)\right], \tag{3}\]
where \(V(\vartheta)\) is the potential of the scalar field. In practice, we set \(V(\vartheta)=0\) to ensure a massless theory [42].
For the specific case of sGB and dCS gravity, we can write down a generic action that encompasses both theories after some parameter selection. Generically, we can say
\[\begin{split} S_{RR}=\int d^{4}x\sqrt{|g|}\Big{\{}& \alpha_{\rm sGB}\vartheta_{\rm sGB}RR\\ &+\alpha_{\rm dCS}\vartheta_{\rm dCS}R\tilde{R}\Big{\}},\end{split} \tag{4}\]
where
\[RR=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^{2} \tag{5}\]
is the so-called Gauss-Bonnet density, and
\[R\tilde{R}\equiv{}^{*}R^{\alpha}{}_{\beta}{}^{\gamma\delta}R^{\beta}{}_{ \alpha}{}_{\gamma\delta}\,, \tag{6}\]
is the Pontryagin density with \({}^{*}R^{\alpha}{}_{\beta}{}^{\gamma\delta}=\frac{1}{2}\epsilon^{\gamma \delta\rho\lambda}R^{\alpha}{}_{\beta\rho\lambda}\) the dual of the Riemann tensor. The parameters \(\alpha_{\rm sGB}\) and \(\alpha_{\rm dCS}\) determine the coupling parameter strength of the particular theory being described, and they have dimensions of length squared in geometric units.
From this generic non-minimal coupling action, we can now define sGB theory and dCS gravity. The action for sGB gravity is given by setting \(\alpha_{\rm dCS}=0\) in Eq. (4) and \(\vartheta=\vartheta_{\rm sGB}\) in Eq. (3). This finds motivation in a certain low-energy limit of string theory [43]. Gravitational wave observations have already constrained \(\alpha_{\rm sGB}^{1/2}\leq 5.6\) km within a 90% confidence interval [22]. Uniquely among these two theories, sGB gravity induces modifications in the spacetime even when the spacetime is spherically symmetric (i.e., regardless of whether the BH is spinning or not).
On the other hand, dCS gravity is defined when \(\alpha_{\rm sGB}=0\) in Eq. (4) and \(\vartheta=\vartheta_{\rm dCS}\) in Eq. (3). In this case, \(\vartheta_{\rm dCS}\) behaves as a pseudo-scalar, on account of the fact that the Pontryagin density is odd under parity transformations. DCS gravity finds motivation from a few sources, including loop quantum gravity [21; 33], the standard model gravitational anomaly [20; 33], and investigations in string theory1 [34; 45]. Neutron star multi-messenger observations have been able to constrain \(\alpha_{\rm dCS}^{1/2}\leq 8.5\) km within a 90% confidence interval [27]. Unlike the sGB case, dCS gravity does not modify spherically symmetric spacetimes. For this reason, it does not induce a change in a non-rotating BH. For a more thorough discussion, including the explicit field equations of both theories, see [18].

Figure 1: Spread of error in the calculation of all observables studied in this paper, as a function of the spin order kept in the approximate metric for dCS BHs (left) and sGB BHs (right) of two spin values and with the largest coupling allowed by the small-coupling approximation used to derive these metrics. Observe that an error of \(\epsilon<10^{-2}\) can be achieved for all observables with fewer than 8 terms in the expansion for BHs of moderate spin (red line on the right). This improves significantly for lower spin BHs and for smaller coupling parameters, allowing the same error to be achieved for spins less than \(\chi=0.5\) with only 2 expansion orders (red line on the left).

Footnote 1: A combination of both theories, sGB and dCS, with two scalar fields, also arises naturally in the effective action of heterotic string theory [44].
In both cases, the modifications to the metric are proportional to the dimensionless coupling parameter
\[\zeta_{\rm q}\equiv\frac{\alpha_{\rm q}^{2}}{\kappa M^{4}}, \tag{7}\]
which is the parameter we will use to present our results (where q stands for the theory being considered, either dCS or sGB). To ensure that these theories remain valid as effective theories, we assume the parameters \(\zeta_{\rm q}\) are small, in a sense that we will make more precise below [46].
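As a concrete illustration of Eq. (7), the sketch below converts a dimensionful coupling bound into \(\zeta_{\rm q}\) for a BH of a given mass (a minimal Python sketch; the numerical values of the mass and coupling are illustrative only, using \(M_{\odot}\approx 1.477\) km in geometric units):

```python
import math

KAPPA = 1.0 / (16.0 * math.pi)  # kappa = (16*pi)^(-1) in geometric units

def zeta(alpha, M):
    """Dimensionless coupling zeta_q = alpha_q^2 / (kappa * M^4), Eq. (7).

    alpha : coupling with dimensions of length^2 (geometric units)
    M     : black-hole mass in the same length units
    """
    return alpha**2 / (KAPPA * M**4)

# Example: the aLIGO bound alpha_sGB^(1/2) <= 5.6 km applied to a
# 10 solar-mass BH (M_sun ~ 1.477 km in geometric units).
M = 10 * 1.477          # km
alpha = 5.6**2          # km^2
print(zeta(alpha, M))   # order unity for this mass, so the bound is informative
```

Since \(\zeta_{\rm q}\propto M^{-4}\), the same coupling bound becomes negligible for supermassive BHs.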
### The corrected Kerr metric and its formal regime of validity
Vacuum solutions in sGB and dCS gravity for axisymmetric and stationary spacetimes (in other words, the corrected forms of the Kerr metric in each theory) are found by starting with an ansatz for a corrected line element in Boyer-Lindquist-like coordinates \((t,r,\theta,\phi)\)[18]:
\[\begin{split} ds^{2}=&-\left(1-\frac{2Mr}{\Sigma}-\zeta_{\rm q}H_{1}\right)dt^{2}-(1+\zeta_{\rm q}H_{2})\frac{4M^{2}\chi r(1-x^{2})}{\Sigma}dtd\phi+(1+\zeta_{\rm q}H_{3})\,\Sigma\left(\frac{dr^{2}}{\Delta}+\frac{dx^{2}}{1-x^{2}}\right)\\ &+(1+\zeta_{\rm q}H_{4})\left(r^{2}+M^{2}\chi^{2}+\frac{2M^{3}\chi^{2}r\left(1-x^{2}\right)}{\Sigma}\right)(1-x^{2})d\phi^{2},\end{split} \tag{8}\]
with \(\Sigma=r^{2}+M^{2}\chi^{2}x^{2}\) and \(\Delta=r^{2}-2Mr+M^{2}\chi^{2}\). Here, \(M\) is the mass of the BH, \(\chi=a/M\) is the dimensionless spin, and \(x=\cos\theta\). The corrections \(H_{i}\) are functions only of \(r\) and \(x\), and it is assumed that \(|\zeta_{\rm q}H_{i}|\ll 1\) everywhere in the BH exterior. The quantity \(H_{i}\) can be expressed as a power series in the spin:
\[H_{i}=\sum_{n=0}^{\infty}H_{i}^{(n)}\chi^{n}, \tag{9}\]
where the \(H_{i}^{(n)}\) can always, for the theories under consideration, be written as a polynomial in both \(1/r\) and \(x\):
\[H_{i}^{(n)}=\sum_{p=0}^{p_{\rm max}}\sum_{k=0}^{k_{\rm max}}H_{i}^{(n,p,k)}x^{ p}r^{-k}. \tag{10}\]
Here, \(H_{i}^{(n,p,k)}\) are constant coefficients containing powers of \(M\) and \(k_{\rm max}\) depends on \(n\) and \(p\). For a complete treatment, see [18].
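The double expansion in Eqs. (9)-(10) can be evaluated with a simple truncated sum. The sketch below is for illustration only: the coefficient values are placeholders, not the actual \(H_i^{(n,p,k)}\), which must be taken from the arbitrary-order solutions of [18]:

```python
def H_trunc(coeffs, chi, r, x, N_tr):
    """Evaluate H_i = sum_n chi^n sum_{p,k} H_i^{(n,p,k)} x^p r^{-k},
    truncated at spin order N_tr (Eqs. (9)-(10)).

    coeffs: dict mapping (n, p, k) -> H_i^{(n,p,k)}; toy values here,
    standing in for the coefficients of the real solutions.
    """
    total = 0.0
    for (n, p, k), c in coeffs.items():
        if n <= N_tr:  # keep only terms up to the truncation order in spin
            total += c * chi**n * x**p * r**(-k)
    return total

# Toy coefficients purely for illustration (not from dCS or sGB gravity):
toy = {(0, 0, 3): 1.0, (2, 2, 5): -0.4, (4, 0, 4): 0.1}
print(H_trunc(toy, chi=0.5, r=3.0, x=0.0, N_tr=2))
```

Raising `N_tr` simply admits more spin orders into the sum, which is how the truncation experiments in the rest of the paper are organized.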
Using the corrections to the metric found in [18], we can now establish the highest allowed spin values at a given spin order from purely theoretical considerations. Let us define the relative error in the function \(H_{i}\) via
\[\delta_{i}\equiv 1-\frac{\sum_{n=0}^{N_{\rm tr}}H_{i}^{(n)}\chi^{n}}{\sum_{n=0 }^{N_{\rm hi}}H_{i}^{(n)}\chi^{n}}\,, \tag{11}\]
where \(N_{\rm hi}\) is the highest spin order considered in this paper. Figure 2 shows this relative error for the \(H_{1}\) function for various values of \(N_{\rm tr}\). Observe that, while the dCS error monotonically decreases as the spin order is increased (even at high values of spin), this is not always the case for sGB corrections. At small values of \(N_{\rm tr}\) and for large values of spin, the accuracy of \(H_{1}\) does not improve monotonically, until \(N_{\rm tr}>6\). The slow error reduction of the sGB spin expansion limits us to using a maximum spin value of \(\chi=0.8\), while dCS allows calculations through to \(\chi=0.9\).
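The truncation-error diagnostic of Eq. (11) is straightforward to implement. The sketch below uses a toy coefficient series (not the actual \(H_i^{(n)}\)) to show how the error shrinks as \(N_{\rm tr}\) grows for \(\chi<1\):

```python
def rel_error(terms, chi, N_tr):
    """delta = 1 - (truncated sum) / (highest-order sum), Eq. (11);
    terms[n] plays the role of H_i^{(n)} and N_hi = len(terms) - 1."""
    full = sum(c * chi**n for n, c in enumerate(terms))
    trunc = sum(c * chi**n for n, c in enumerate(terms[:N_tr + 1]))
    return 1.0 - trunc / full

# Toy coefficients standing in for H_i^{(n)}:
terms = [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125]
for N in range(0, 7, 2):
    print(N, rel_error(terms, 0.8, N))
```

For this toy series the error decreases monotonically with `N_tr`; as discussed above, the actual sGB corrections do not always behave this way at low truncation orders and high spin.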
We can now establish the maximum allowed value of the coupling parameters, \(\zeta_{\rm sGB}^{\rm max}\) and \(\zeta_{\rm dCS}^{\rm max}\), that are allowed by the perturbative solution. Recall that these approximate solutions are bivariate expansions in small spin (to higher order) and in the small coupling \(\zeta_{\rm q}\) (to leading order), as explained around Eq. (9). We must therefore enforce that \(|\zeta_{\rm q}H_{i}|\ll 1\), which we will use to find the maximum value of \(\zeta_{\rm q}\) allowed. Evaluating the \(H_{i}\) functions on the equator and at the event horizon with \(\chi=0.9\) in dCS gravity and \(\chi=0.8\) in sGB theory, and solving for the
smallest value of the coupling parameter that makes one of the correction factors \(|\zeta_{\rm q}H_{i}|\) larger than \(0.5\), we find
\[\zeta_{\rm sGB}^{\rm max}\ \approx\ 0.5\,,\quad\zeta_{\rm dCS}^{\rm max}\ \approx\ 0.15\,. \tag{12}\]
This clearly indicates that the \(H_{i}\) functions (evaluated at the above values of \(\chi\), on the horizon and the equator) are of order unity in both sGB and dCS gravity. Henceforth, we will limit our analysis to these maximum values of the dimensionless coupling, i.e. \(\zeta_{\rm q}<\zeta_{\rm q}^{\rm max}\) and to \(\chi\leq 0.8\) in sGB and \(\chi\leq 0.9\) in dCS gravity. Just doing so, however, does not imply that observables calculated from these truncated approximate metrics will all be equally accurate, which is the topic of the rest of this paper.
## III Definition of observables
In this section, we describe every observable we will study in this paper. All observables are summarized in Table 1.
### Multipole moments
Multipole moments characterize the exterior field of a gravitating body. For stationary spacetimes in GR, they come in two classes: mass and angular multipoles, \(M_{l}\) and \(S_{l}\). For the Kerr BH, all of these are determined by the mass and angular momentum, and they satisfy
\[M_{l}+iS_{l}=M(ia)^{l}\,. \tag{13}\]
This is nothing but a manifestation of the no-hair theorems of GR, _e.g._[47; 48]. Hence, the measurement of at least one multipole moment, besides the mass and angular momentum, provides a test of the Kerr hypothesis. Multipole moments, in fact, leave an imprint in the inspiral phase of a compact binary [49; 50; 51], and thus, on the gravitational waves that such a binary emits, allowing us to look for signatures of beyond-GR theories [52; 23; 12; 24]. The mass quadrupole \(M_{2}\) is the most relevant for observational purposes, so we focus on it here.
The multipole moments can be identified by writing the metric in ACMC (asymptotically Cartesian and mass-centered) coordinates and reading off certain terms in the \(g_{tt}\) and \(g_{t\phi}\) components [53]. This method was used in [54] to compute the multipole moments in several higher-derivative theories, including dCS and sGB gravity. The value of \(M_{2}\) for these theories reads
\[\frac{M_{2}^{\rm sGB}}{M^{3}} =-\chi^{2}+\zeta_{\rm sGB}\left(-\frac{4463\chi^{2}}{2625}+\frac{ 33863\chi^{4}}{68600}+\ldots\right)\,, \tag{14}\] \[\frac{M_{2}^{\rm dCS}}{M^{3}} =-\chi^{2}+\zeta_{\rm dCS}\left(\frac{201\chi^{2}}{112}-\frac{181 9\chi^{4}}{3528}+\ldots\right)\,,\]
and we have obtained the expansion of this quantity to order \(\mathcal{O}(\chi^{28})\) for dCS gravity and to order \(\mathcal{O}(\chi^{20})\) for sGB theory.
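For reference, the Kerr baseline of Eq. (13) is trivial to evaluate; a minimal sketch (with \(a=\chi M\)) that recovers the familiar low-order moments:

```python
def kerr_multipoles(M, chi, l):
    """Mass and current multipoles of Kerr from M_l + i S_l = M (i a)^l,
    with a = chi * M (Eq. (13)). Returns the pair (M_l, S_l)."""
    z = M * (1j * chi * M)**l
    return z.real, z.imag

M0, S0 = kerr_multipoles(1.0, 0.7, 0)  # (M, 0): the mass
M1, S1 = kerr_multipoles(1.0, 0.7, 1)  # (0, Ma): the angular momentum J
M2, S2 = kerr_multipoles(1.0, 0.7, 2)  # (-M a^2, 0): the mass quadrupole
print(M2)  # ≈ -0.49, i.e. -chi^2 in units of M^3
```

The \(\zeta_{\rm q}\)-dependent series above are precisely the deformations of this Kerr value of \(M_2\).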
### Geodesic Properties
Here we describe the trajectory parameters we calculate, which are related to null and timelike geodesics. To facilitate our discussion of these trajectories, let us first review a few details about particle dynamics, which are common to all stationary and axisymmetric spacetimes. First, stationarity and axisymmetry give rise to two Killing vectors, which in turn beget two conserved quantities of the motion. Stationarity gives a conserved energy via the equation \(E=-\mu^{-1}\xi^{\nu}_{(t)}p_{\nu}\), and axisymmetry gives a conserved angular momentum via \(L=\mu^{-1}\xi^{\nu}_{(\phi)}p_{\nu}\), where \(p_{\nu}=\mu u_{\nu}\) is the particle's 4-momentum (with \(u_{\nu}\) its 4-velocity) and \(\xi^{\nu}_{(X)}\) is the Killing vector associated with the \(X\)-direction [55]. A third conserved quantity is the rest mass of the particle itself, \(\mu\), whose conservation is guaranteed by the conservation of the metric signature during the evolution of a geodesic. In this work, \(E\) (in the form of the binding energy, \(E_{b}\equiv(\mu-E)/\mu\)) and \(L\) are crucial quantities we will derive from the approximate dCS and sGB metrics.

Figure 2: The \(H_{1}\) correction functions for dCS (blue) and sGB (orange), evaluated at \(r_{\rm ISCO}\), with \(\chi=0.4\) (left), and \(\chi=0.8\) (right). These plots show the non-monotonicity of the correction size for \(H_{1,\rm sGB}\) at low expansion orders and high spins. Although the relative errors of the \(H_{i,\rm sGB}\) do decrease at high expansion orders, this undesirable behavior should be noted for \(N_{\rm tr}<6\). The other \(H_{i}\) functions are superficially identical to the above.
More precisely, the derivation of the \(E_{b}\) and \(L\) observables starts with the particular effective potential, \(V_{\text{eff.}}\), that governs geodesic motion. To derive the effective potential, we start with the normalization condition,
\[u^{\alpha}u_{\alpha}=-k, \tag{15}\]
where \(k=1\) for timelike geodesics parametrized by the particle's proper time and \(k=0\) for null geodesics. Then, making use of the conservation of energy and angular momentum, we can recast this equation in the particularly useful form [38],
\[\frac{1}{2}\left(u_{r}^{2}+u_{\theta}^{2}\right)+V_{\text{eff.}}=-\frac{1}{2}k, \tag{16}\]
from which it can be shown that the effective potential \(V_{\text{eff.}}\) is simply
\[V_{\text{eff.}}=\frac{1}{2}\left(\frac{g_{\phi\phi}E^{2}+2g_{t\phi}EL+L^{2}g_{ tt}}{g_{tt}g_{\phi\phi}-g_{t\phi}^{2}}\right). \tag{17}\]
Effective potentials can be analyzed in familiar ways from classical mechanics: bound orbits correspond to the potential's extrema (Fig. 3), and the potential's second derivative informs the orbit's stability to small perturbations. The potential is in general a function of \(r\) and \(\theta\), but here we only focus on equatorial trajectories, in which case the potential becomes a function of \(r\) only.
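As a sanity check of Eq. (17), the potential can be evaluated in the Schwarzschild limit (\(g_{t\phi}=0\), equatorial plane, units with \(M=1\)). This is only a GR sketch; the dCS/sGB calculations use the same formula with the \(\zeta_{\rm q}\)-corrected metric components instead:

```python
def v_eff(r, E, L, M=1.0):
    """Equatorial effective potential of Eq. (17), specialized to
    Schwarzschild (g_tphi = 0) as a sanity check of the general formula."""
    g_tt = -(1.0 - 2.0 * M / r)
    g_pp = r**2
    g_tp = 0.0
    num = g_pp * E**2 + 2.0 * g_tp * E * L + g_tt * L**2
    den = g_tt * g_pp - g_tp**2
    return 0.5 * num / den

# Photon-ring check: at r = 3M with impact parameter b = L/E = sqrt(27) M,
# both V_eff and its radial derivative vanish, as required by Eq. (18) below.
E, L = 1.0, 27**0.5
print(v_eff(3.0, E, L))  # ~ 0
```

The same two lines, with corrected \(g_{tt}\), \(g_{t\phi}\), \(g_{\phi\phi}\), give the modified-gravity potential whose extrema define the orbits studied next.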
#### ii.2.1 Photon rings
The set of equatorial null orbits that are bound is called the _photon ring_. The radii and angular momenta corresponding to these orbits are found from the conditions
\[V_{\text{eff.}}=0=\partial_{r}V_{\text{eff.}}, \tag{18}\]
along with specifying that on the ring, \(p_{r}=p_{\theta}=0\)[55; 56]. All photon ring orbits have fixed \(r\)-coordinate values, which depend only on the BH spin. Photon ring locations are of particular interest to VLBI missions because they define the edges of the black hole shadow [5].
#### ii.2.2 Lyapunov Exponents
Owing to their position at the top of the "hill" of an effective potential (Fig. 3), null bound orbits on the photon shell are inherently unstable to small perturbations, a fact that can be quantified through the orbit's _Lyapunov exponents_. In general, for any dynamical Hamiltonian system, the stability of a phase space trajectory can be described by a Lyapunov exponent, \(\lambda\). While a full treatment of Lyapunov exponents of photon shell orbits does not permit analytic solutions, the symmetries of a photon ring orbit allow us to make significant simplifications. An equatorial orbit has a two-dimensional phase space, in \(r\) and \(p_{r}\), which simplifies the form of the Lyapunov exponent significantly; it can be written as
\[\lambda=\sqrt{\partial_{r}^{2}V_{\text{eff.}}}\,. \tag{19}\]
While this form is sufficient for our present purposes, a full treatment of Lyapunov exponents in a generally relativistic context can be found in [57; 58; 59]. It should be noted that, in general, Lyapunov exponents in GR depend on the time parametrization of the given null geodesics. The definition given in (19) is implicitly in proper-time parametrization. Like photon ring locations, Lyapunov exponents are of interest for VLBI missions, as the Lyapunov exponent controls various aspects of the BH image, including magnification and ratio between fluxes of adjacent subrings [60].
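A minimal numerical sketch of Eq. (19): for the Schwarzschild photon ring at \(r=3M\) (with \(E=1\), \(L=\sqrt{27}\,M\)), the curvature of the effective potential at its maximum sets the instability rate. Note the convention assumed here, under which \(\partial_r^2 V_{\text{eff.}}<0\) at the unstable orbit, so the magnitude of the curvature is taken:

```python
import math

def lyapunov(v, r0, h=1e-4):
    """Lyapunov exponent of an unstable circular orbit from the curvature
    of the effective potential at its maximum (cf. Eq. (19)); a central
    finite difference estimates the second derivative."""
    d2 = (v(r0 + h) - 2.0 * v(r0) + v(r0 - h)) / h**2
    return math.sqrt(abs(d2))

# Schwarzschild (M = 1) equatorial potential with E = 1, L = sqrt(27):
v = lambda r: 0.5 * (-r / (r - 2.0) + 27.0 / r**2)
print(lyapunov(v, 3.0))  # ≈ 1 in these units
```

Replacing `v` with the \(\zeta_{\rm q}\)-corrected potential gives the modified-gravity exponent whose truncation error is studied below.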
#### ii.2.3 Innermost Stable Circular Orbit
The ISCO is the timelike orbit around a BH with the smallest value of \(r\) that is stable to small perturbations [55]. The ISCO is an equatorial feature, being confined to the plane with \(\theta=\pi/2\). One can identify the ISCO by demanding that the effective potential of a massive particle equal \(-1/2\), and that its first two radial derivatives vanish:
\[V_{\text{eff.}}=-1/2,\] \[\partial_{r}V_{\text{eff.}}=0=\partial_{r}^{2}V_{\text{eff.}}. \tag{20}\]
ISCOs are of particular interest to observational BH astronomy, as they represent the inner boundary where light-emitting matter may be stably found. For this reason, the ISCO is often considered the "inner edge" of astrophysical accretion disks [61, 62]. For the ISCO, we will study the accuracy of the calculation of \(r_{\rm ISCO}\), \(L_{\rm ISCO}\), and \(E_{\rm ISCO}\), as well as \(\omega_{\rm ISCO}\), the orbital frequency of massive particles at the ISCO. The latter is constructed from the Hamiltonian equations of motion, because \(\omega_{\rm ISCO}=\phi^{\prime}/t^{\prime}\)[63], where primes indicate derivatives with respect to proper time.
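The conditions in Eq. (20) can be cross-checked in the Schwarzschild limit, where circular-orbit quantities have standard closed forms. The sketch below recovers \(r_{\rm ISCO}=6M\) by locating the minimum of the angular momentum over circular orbits (an equivalent statement of the marginal-stability condition):

```python
import math

def L_circ(r, M=1.0):
    """Specific angular momentum of a circular timelike orbit in
    Schwarzschild, L^2 = M r^2 / (r - 3M) (standard closed form)."""
    return math.sqrt(M) * r / math.sqrt(r - 3.0 * M)

# The ISCO sits where L(r) is minimized; scan a grid of radii above r = 3M.
radii = [3.5 + 0.001 * i for i in range(6000)]
r_isco = min(radii, key=L_circ)
omega_isco = math.sqrt(1.0 / r_isco**3)  # Omega = sqrt(M/r^3) in Schwarzschild
print(r_isco)  # ≈ 6M
```

In the corrected metrics the closed form is unavailable, and one instead solves Eq. (20) order by order in spin, as described in Sec. IV.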
#### iii.2.4 Ergosphere
The ergosphere is the region outside a BH in which it is impossible for a massive observer to remain stationary. That is to say, it is the region where the tangent to the world line of a would-be stationary observer, \(u^{\alpha}_{(t)}=dx^{\alpha}/dt=(1,0,0,0)\), ceases to be timelike, i.e. \(g_{\mu\nu}u^{\mu}_{(t)}u^{\nu}_{(t)}>0\). In practice, its boundary is found by solving \(g_{tt}=0\) for \(r\) [55; 64], which for Kerr reduces to the well-known result \(r_{\rm ergo}=M\pm\sqrt{M^{2}-a^{2}\cos^{2}(\theta)}\), with the outer root defining the ergosphere boundary. Rather than reporting the value of the \(r\)-coordinate for the photon ring, ISCO and ergosphere, we will focus on the "perimeter radius" \(R=\sqrt{g_{\phi\phi}}|_{r}\), which represents the physical radius of each trajectory in the sense that \(2\pi R\) is its arc length.
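The Kerr expressions above can be sketched directly; the perimeter radius here uses the equatorial \(g_{\phi\phi}\) of Kerr in Boyer-Lindquist coordinates (a GR check only; the dCS/sGB versions replace \(g_{\phi\phi}\) with its corrected form):

```python
import math

def r_ergo(M, a, theta):
    """Outer ergosurface of Kerr from g_tt = 0:
    r = M + sqrt(M^2 - a^2 cos^2 theta)."""
    return M + math.sqrt(M**2 - a**2 * math.cos(theta)**2)

def perimeter_radius(r, M, a):
    """R = sqrt(g_phiphi) on the Kerr equator, so that 2*pi*R is the
    proper circumference of the circle at coordinate radius r."""
    return math.sqrt(r**2 + a**2 + 2.0 * M * a**2 / r)

r = r_ergo(1.0, 0.9, math.pi / 2)       # equatorial ergo-radius = 2M
print(r, perimeter_radius(r, 1.0, 0.9))  # perimeter radius exceeds r
```

The same perimeter-radius construction is what is applied to the photon ring and ISCO in the results below.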
## IV Accuracy Required for each observable
In this section, we analyze the order in the spin expansion that is required to calculate several observables to a given accuracy. First, we outline the scheme by which we measure the errors as a function of expansion order, and then we summarize the results of calculating the errors for all quantities listed in Table 1.

| Symbol | Description |
| --- | --- |
| \(M_{2}\) | mass quadrupole moment |
| \(R_{\rm ph}\) | photon ring perimeter radius |
| \(L_{\rm ph}\) | angular momentum on the photon ring |
| \(\omega_{\rm ph}\) | orbital frequency on the photon ring |
| \(\lambda_{\rm ph}\) | Lyapunov exponent on the photon ring |
| \(R_{\rm ergo}\) | perimeter radius of the ergosphere |
| \(R_{\rm ISCO}\) | perimeter radius of the ISCO |
| \(L_{\rm ISCO}\) | angular momentum on the ISCO |
| \(E_{b}^{\rm ISCO}\) | binding energy on the ISCO |
| \(\omega_{\rm ISCO}\) | orbital frequency on the ISCO |

Table 1: A summary of all observables studied in this paper for approximate metrics truncated at various orders in a small-spin expansion.

Figure 3: A collection of null orbit effective potentials in dCS gravity (left) and sGB gravity (right) for varying values of \(\zeta_{\rm q}\), with \(\chi=0.9\) and \(L\approx-6.832\). The red lines at the peak of the \(\zeta_{\rm q}=0\) curve correspond to the photon orbit in Kerr. Observe that this point moves as the coupling strength is increased.
### Error estimation scheme
We will fix the modified gravity coupling constants \(\zeta_{\rm q}\) to their maximum allowed values to ensure the approximate metrics remain valid outside the horizon [see Eq. (12)]. We will then study the relative difference between an observable computed with an "exact" metric and with a metric truncated at a given order in spin. By "exact" metric, we here mean the series-expanded metric truncated at a very high order in spin, such that the terms neglected introduce modifications below double precision. Since the series is convergent for \(\chi<1\) outside the horizon, taking the truncation above 20th order will suffice. In practice, we set the maximum truncation order, which defines an exact metric, to 24.
More precisely, any observable \(A\) that depends only on the metric can be calculated to linear order in the deformation as
\[A=A_{\rm Kerr}+\zeta_{\rm q}\delta A+\mathcal{O}(\zeta_{\rm q}^{2}) \tag{21}\]
where \(A_{\rm Kerr}\) is the observable computed with the Kerr metric (without expanding in small spins) and \(\delta A\) is the deformation from Kerr introduced by the \(\zeta_{\rm q}\)-dependent terms in the metric. Since the modified gravity metric is known only as an expansion in small spins, the term \(\delta A\) must also be cast as a series in spin, namely
\[\delta A=\sum_{n=0}^{N}\delta A^{(n)}\chi^{n}+\mathcal{O}(\chi^{N+1})\,. \tag{22}\]
In deriving these expressions, we have first expanded in small coupling \(\zeta_{\rm q}\ll 1\), and then expanded only the deformation in small spins \(\chi\ll 1\), without expanding the Kerr contribution in small spins, as this is known to all orders.
With this in hand, we can define the relative fractional error between an observable computed with an "exact" metric \(A_{\rm ex}\) and one computed with a truncated metric \(A_{\rm tr}\) via
\[\epsilon=1-\frac{A_{\rm tr}}{A_{\rm ex}}=1-\frac{A_{\rm Kerr}+\zeta_{\rm q}\sum_{n=0}^{N_{\rm tr}}\delta A^{(n)}\chi^{n}}{A_{\rm Kerr}+\zeta_{\rm q}\sum_{n=0}^{N_{\rm hi}}\delta A^{(n)}\chi^{n}}\,, \tag{23}\]
where \(N_{\rm hi}=24\), while \(N_{\rm tr}\) is a number we will vary. In what follows, we will study how the accuracy of the calculation of various observables changes as we increase \(N_{\rm tr}\) toward \(N_{\rm hi}\).
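As an illustration, Eq. (23) can be evaluated directly from the expansion coefficients. The sketch below uses random placeholder values for the \(\delta A^{(n)}\), not the actual dCS/sGB coefficients:

```python
import numpy as np

def relative_error(A_kerr, dA, zeta, chi, N_tr, N_hi=24):
    """Eq. (23) for an observable linear in the coupling; dA[n] plays delta A^(n)."""
    n = np.arange(N_hi + 1)
    terms = dA[: N_hi + 1] * chi ** n
    A_ex = A_kerr + zeta * terms.sum()           # "exact": truncated at N_hi
    A_tr = A_kerr + zeta * terms[: N_tr + 1].sum()
    return abs(1.0 - A_tr / A_ex)

rng = np.random.default_rng(0)
dA = rng.normal(size=25)                          # placeholder coefficients
errs = [relative_error(3.0, dA, 0.15, 0.9, N) for N in range(0, 25, 2)]
# errs shrinks to zero as N_tr approaches N_hi
```

By construction the error vanishes identically when \(N_{\rm tr}=N_{\rm hi}\), which is the sense in which the high-order truncation serves as the "exact" reference.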
### Implementation of the calculation of observables with a truncated slow-spin series
We must be careful when calculating the observables described above in order to make their computation feasible. The reason for this is twofold. If we try to perform the calculation entirely analytically, term-by-term, the computation becomes intractably slow for expansion orders greater than \(N_{\rm tr}\sim 10\), even on high-performance computing clusters. This is simply because the number of terms in the metric increases rapidly with the expansion order, making it extremely large beyond roughly \(\mathcal{O}(\chi^{10})\). If, on the other hand, we try to perform the computation entirely numerically, we find numerically unstable behavior at large \(N_{\rm tr}\). This is because the \(H_{i}\) terms, when expanded to order \(N_{\rm tr}>10\), contain pieces that decay faster than \(r^{-20}\) far from the black hole. These pieces are very small, even when evaluated close to the event horizon; for example, when evaluated at the ISCO, they are smaller than \(10^{-16}\). The background metric and other pieces in the \(H_{i}\) terms that are proportional to lower powers of the spin, however, are of order unity. Therefore, when the small terms are added to the order-unity terms, they can quickly be lost below double precision, rendering the numerical calculations unstable.
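This precision loss is easy to reproduce: adding a term below machine epsilon to an order-unity background silently discards it, which is why the small corrections must be accumulated separately before being combined with the background (a minimal illustration, with a magnitude chosen to mimic the \(10^{-16}\) pieces quoted above):

```python
big = 1.0
tiny = 1e-17                  # mimics the r^{-20}-decaying pieces at the ISCO

lost = (big + tiny) - big     # 0.0: the small piece is rounded away entirely

naive = big
for _ in range(1000):
    naive += tiny             # each addition is individually rounded away

careful = big + 1000 * tiny   # summing the small pieces first preserves them
```

The `naive` accumulation never moves off `1.0`, while the `careful` one retains the accumulated \(10^{-14}\) contribution, which is above double-precision epsilon.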
To get around these issues, we here develop a semi-analytic approach, which allows us to limit the numerical noise, while also achieving useful speed. The general idea is to always store the \((N_{\rm tr}-1)^{\rm th}\) term of the given observable to double precision numerically, and then use the perturbation to the metric to analytically calculate _only the perturbation to the observable_. For a concrete example, let us describe explicitly the process of calculating the first two corrections to the parameters of an ISCO orbit.
The location of the ISCO in Boyer-Lindquist coordinates \(R_{\rm ISCO}\), the energy \(E_{\rm ISCO}\), and the angular momentum \(L_{\rm ISCO}\) are found simultaneously by solving the system of equations in (20). Let us then define the following notation. First, we denote by \({\rm x}_{i}\equiv(R_{\rm ISCO},E_{\rm ISCO},L_{\rm ISCO})\) a 3-vector containing the orbital parameters we are going to solve for. Then, we define \(Y_{i}\equiv(V_{\rm eff.}+1/2,\partial_{r}V_{\rm eff.},\partial_{r}^{2}V_{\rm eff.})\) as a vector containing the symbolic effective potential (plus one-half), and its first two radial derivatives. With this notation, the location of the ISCO, its energy and angular momentum are simply found by solving the system of equations
\[Y_{i}({\rm x}_{j})=0\,, \tag{24}\]
for the \({\rm x}_{j}\), which is simply a rewriting of Eq. (20) in more compact notation.
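The Kerr starting point \({\rm x}_{i}^{\rm Kerr}\) of this system is known in closed form; for instance, the ISCO radius follows the classic Bardeen–Press–Teukolsky expression. A sanity-check sketch (geometric units \(G=c=1\); the function name is our own):

```python
import math

def kerr_isco_radius(M, chi, prograde=True):
    """Boyer-Lindquist ISCO radius for Kerr (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1 + (1 - chi ** 2) ** (1 / 3) * ((1 + chi) ** (1 / 3) + (1 - chi) ** (1 / 3))
    z2 = math.sqrt(3 * chi ** 2 + z1 ** 2)
    sgn = -1.0 if prograde else 1.0
    return M * (3 + z2 + sgn * math.sqrt((3 - z1) * (3 + z1 + 2 * z2)))

r_schw = kerr_isco_radius(1.0, 0.0)   # Schwarzschild limit: 6 M
r_fast = kerr_isco_radius(1.0, 0.9)   # prograde, chi = 0.9: about 2.32 M
```

These closed-form values are what the numerically stored \({\rm x}_{i}^{\rm Kerr}\) would reduce to in the Kerr limit, and they anchor the order-by-order bootstrap described next.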
Let us now find these orbital parameters order by
order. To start, we evaluate Eq. (24),
\[Y_{i}^{\rm Kerr}\Big{(}{\rm x}_{i}^{\rm Kerr}\Big{)}=0, \tag{25}\]
to find the usual \({\rm x}_{i}^{\rm Kerr}\). To find the zeroth-order, spherically-symmetric (i.e. \(N_{\rm tr}=0\)) correction, \(\delta{\rm x}_{i}^{(0)}\), we start by deriving the correction to the effective potential. At this order, the effective potential is
\[V_{\rm eff.}^{(0)}=V_{\rm eff.}^{\rm Kerr}+\delta V_{\rm eff.}^{(0)}, \tag{26}\]
and \(\delta V_{\rm eff.}^{(0)}\) is the zeroth-order perturbation:
\[\delta V_{\rm eff.}^{(0)}=\delta g_{tt}^{(0)}\frac{\partial V_{\rm eff.}^{\rm Kerr}}{\partial g_{tt}}+\delta g_{t\phi}^{(0)}\frac{\partial V_{\rm eff.}^{\rm Kerr}}{\partial g_{t\phi}}+\delta g_{\phi\phi}^{(0)}\frac{\partial V_{\rm eff.}^{\rm Kerr}}{\partial g_{\phi\phi}}, \tag{27}\]
where the \(\delta g_{ij}^{(0)}\) terms are the \(\mathcal{O}(\zeta_{\rm q}\chi^{0})\) perturbations to the metric given by Eq. (8). Equipped with the perturbation to the effective potential (and the corresponding vector \(Y_{i}^{(0)}=Y_{i}^{\rm Kerr}+\delta Y_{i}^{(0)}\)), we are now ready to calculate the correction to \({\rm x}_{i}\), which we can find by linearizing Eq. (25) about a small \(\delta{\rm x}_{i}\):
\[Y_{i}^{(0)}({\rm x}_{i}) = Y_{i}^{\rm Kerr}\Big({\rm x}_{i}^{\rm Kerr}+\delta{\rm x}_{i}\Big)+\delta Y_{i}\Big({\rm x}_{i}^{\rm Kerr}+\delta{\rm x}_{i}\Big), \tag{28}\]
\[\approx Y_{i}^{\rm Kerr}\Big({\rm x}_{i}^{\rm Kerr}\Big)+\delta{\rm x}^{j}\left.\frac{\partial Y_{i}^{\rm Kerr}}{\partial{\rm x}^{j}}\right|_{{\rm x}_{i}^{\rm Kerr}}+\delta Y_{i}\Big({\rm x}_{i}^{\rm Kerr}\Big)+\delta{\rm x}^{j}\left.\frac{\partial\delta Y_{i}}{\partial{\rm x}^{j}}\right|_{{\rm x}_{i}^{\rm Kerr}},\]
\[=\delta{\rm x}^{j}\underbrace{\left.\frac{\partial Y_{i}^{\rm Kerr}}{\partial{\rm x}^{j}}\right|_{{\rm x}_{i}^{\rm Kerr}}}_{(1)}+\underbrace{\delta Y_{i}\Big({\rm x}_{i}^{\rm Kerr}\Big)}_{(2)}. \tag{29}\]
The first term of the second line vanishes by virtue of Eq. (25). Further, the last term of the second line is of second order in the perturbation, so we drop it for the calculation of \(\delta{\rm x}_{i}\). Then, in Eq. (29), term (2) is the perturbation to the effective potential vector, which is analytic and must be evaluated at the orbital parameters found in the previous iteration. The latter are known analytically in the Kerr case, but we will store them numerically, allowing us to also store term (2) numerically. Term (1) is the Jacobian of \(Y_{i}^{\rm Kerr}\) evaluated at the Kerr orbital parameters, which is calculated analytically, and then stored numerically. Finally, Eq. (29) can be solved for the values of \(\delta{\rm x}_{i}\) that make this equation vanish. Before we move on to the next order, we update the Jacobian of term (1) by calculating \(\partial\delta Y_{i}/\partial{\rm x}^{j}\) analytically, and then evaluating it at the Kerr orbital parameters and storing the result numerically.
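The linearized solve in Eq. (29) is just one Newton-type step, \(\delta{\rm x}=-J^{-1}\,\delta Y({\rm x}^{\rm prev})\). A toy one-dimensional analogue (not the actual \(3\times 3\) ISCO system) illustrates the update:

```python
import numpy as np

def order_update(jacobian, x_prev, dY):
    """One bootstrap step of Eq. (29): solve J . dx = -dY(x_prev)."""
    return x_prev + np.linalg.solve(jacobian, -dY(x_prev))

# 1D toy: the background Y(x) = x^2 - 4 has root x = 2; adding a perturbation
# dY(x) = eps * x shifts the exact root to 2 - eps/2 + O(eps^2).
eps = 1e-3
J = np.array([[2.0 * 2.0]])                        # dY/dx evaluated at x = 2
x_new = order_update(J, np.array([2.0]), lambda x: eps * x)
```

The updated root agrees with the perturbative prediction \(2-\epsilon/2\) up to \(\mathcal{O}(\epsilon^{2})\), mirroring how each order of \(\delta{\rm x}_{i}\) is obtained from the stored Jacobian and the new perturbation vector.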
Let us now move to the \(N_{\rm tr}=1\) correction. First, we find the effective potential as in the \(N_{\rm tr}=0\) case, but now
\[V_{\rm eff.}^{(1)} = V_{\rm eff.}^{\rm Kerr}+\delta V_{\rm eff.}^{(0)}+\delta V_{\rm eff.}^{(1)}\,, \tag{30}\]
\[= V_{\rm eff.}^{\rm bg}+\delta V_{\rm eff.}^{(1)}\,, \tag{31}\]
where we have absorbed the \(\mathcal{O}(\zeta_{\rm q})\) correction into a modified background \(V_{\rm eff.}^{\rm bg}\). The correction to this background effective potential, \(\delta V_{\rm eff.}^{(1)}\), is then simply \(\delta V_{\rm eff.}^{(0)}\) with the replacements \(V_{\rm eff.}^{\rm Kerr}\to V_{\rm eff.}^{\rm bg}\) and \(\delta g_{ij}^{(0)}\to\delta g_{ij}^{(1)}\), where \(\delta g_{ij}^{(1)}\) is of \(\mathcal{O}(\zeta_{\rm q}\chi)\). Making the same replacements in Eq. (29) and using the same procedure described above gives the corrections \(\delta{\rm x}_{i}\) at this order. Clearly then, this method allows one to bootstrap the correction to the ISCO observables (and to any other observable by following a similar procedure) to arbitrary order in perturbation theory, while minimizing the number of numerically troublesome terms. At every step after
Figure 4: Accuracy of the calculation of the Lyapunov exponent \(\lambda_{\rm ph}\), as a function of the truncation order of the metric in a slow-spin expansion. In this figure, we have saturated the coupling constants \(\zeta_{\rm q}\) at the largest values allowed by the small-coupling approximation, with \(\zeta_{\rm dCS}=0.15\) (left) and \(\zeta_{\rm dGB}=0.5\) (right).
\(N_{\rm tr}=0\), one must be careful to keep only the order in \(\chi\) being considered, and expand all analytic expressions to \(\mathcal{O}(\zeta_{\rm q}\chi^{N_{\rm tr}})\) whenever possible, to minimize numerical error.
### Accuracy of observables computed with a truncated slow-spin series
Having laid out the scheme above, we are now ready to calculate the quantities described in Sec. III. We here seek to understand how the relative error in the calculation of observables behaves as we vary the truncation order, for a given set of dimensionless spins and coupling constants. To get a grasp of this behavior, we present two types of figures that describe the error in the observables. The first type presents the relative error in Eq. (23) as a function of expansion order for several values of \(\chi\), while setting \(N_{\rm hi}=24\) and \(\zeta_{\rm q}\) to the maximum values of Eq. (12).
Figure 4 shows an example of this first type of figure for the calculation of the Lyapunov exponent, \(\lambda_{\rm ph}\). Observe that as the truncation order is increased, the relative error decreases, as expected. Observe also that we only plot even values of \(N_{\rm tr}\). This is because the \(H_{i}\) functions only modify the metric at even orders in \(\chi\), a fact already shown in Fig. 2. Observe finally that as the spin increases, the number of terms needed to maintain a given accuracy also increases. For example, if one wishes to calculate \(\lambda_{\rm ph}\) in dCS gravity to \(0.1\%\) accuracy, then the Kerr deformations in the metric must be known to \(\mathcal{O}(\chi^{2})\) when the spin is \(\chi<0.4\), but the deformations need to be known to \(\mathcal{O}(\chi^{20})\) if \(\chi\sim 0.9\). sGB gravity presents similar behavior, but, in this case, even fewer terms are needed in the metric deformation to achieve a certain accuracy. At this point, we should also explain some unexpected behavior that appears in the relative error calculations for several observables in sGB gravity -- see the plots in Appendix A. The error is observed to increase briefly at low orders before continuing to decrease. This is simply because the relevant calculated quantity changes sign, which can produce this behavior when the absolute value is taken, as is done in the calculation of \(\epsilon\).
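The growth of the required order with spin is what one expects from a series whose truncation error scales like \(\chi^{N+1}\). A crude model of Eq. (22) with unit coefficients already reproduces the trend (illustrative only, not the paper's actual error curves):

```python
import math

def order_needed(chi, tol):
    """Smallest N with chi**(N + 1) <= tol: the truncation error of a
    geometric-like series sum_n chi^n, a crude stand-in for Eq. (22)."""
    return max(0, math.ceil(math.log(tol) / math.log(chi)) - 1)

low = order_needed(0.4, 1e-3)    # slowly spinning: only a handful of terms
high = order_needed(0.9, 1e-3)   # near-extremal: dozens of terms
```

Since the required order grows like \(\log(\mathrm{tol})/\log(\chi)\), it diverges as \(\chi\to 1\), consistent with the slow convergence seen for rapidly spinning BHs.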
The second type of plot is, in some sense, an inversion of the first type. For a given value of the coupling constants, we present the expansion order required to achieve an error less than a given threshold. Figure 5 presents an example of such a figure, again focusing on the Lyapunov exponent, \(\lambda_{\rm ph}\). Observe that indeed, if the spin is low enough, then a few terms suffice to achieve good accuracy. For example, for an accuracy of \(1\%\) in dCS gravity, the metric deformation must only be known to \(\mathcal{O}(\chi^{5})\) if the spin \(\chi<0.6\). However, if one wishes to study more rapidly spinning BHs, such as one with \(\chi=0.9\), then the metric deformation must be known to \(\mathcal{O}(\chi^{16})\). Observe that this is also the case for sGB gravity, although here fewer terms are needed. For example, for an accuracy of \(1\%\) in sGB gravity, the metric deformation must only be known to \(\mathcal{O}(\chi^{4})\) when the spin \(\chi<0.6\).
The other observables we presented in Sec. III show very similar behavior. In order to avoid cluttering the paper, we have included these figures in Appendix A and B. To summarize all results, Fig. 1 presents the spin order to which different metric deformations need to be kept to achieve a given accuracy, for BHs of various spins. Each band presented in that figure corresponds to the set of curves for all observables studied, at a fixed value of spin. For example, the blue-shaded region in Fig. 1 corresponds to the region occupied by the relative accuracy curves for all observables of Table 1, corresponding to the blue curves in the plots of Appendix A. Observe that, once more, the number of terms required to obtain \(1\%\) accuracy is rather small for spins \(\chi<0.5\), but for more rapidly spinning BHs, more terms need to be kept.
These plots make it clear that for \(\chi\lesssim 0.6\), corrections of \(\mathcal{O}(\zeta_{\rm q}\chi^{8})\) provide excellent accuracy, with a relative difference \(\epsilon\) being less than \(10^{-4}\) in almost all cases of the coupling parameter (Fig. 1). For intermediate spins, with \(0.6<\chi<0.8\), the story is more complicated, depending on the observable in question. For high spins of \(\chi>0.8\), the accuracy increases much more slowly.
## V Conclusions
We have here calculated several observables associated with the spacetime outside BHs described by two different quadratic theories, sGB and dCS gravity. These BH metrics are now known to linear order in the coupling but to very high order in a small spin expansion, raising the question of what order must be kept in the latter to obtain sufficiently accurate calculations. The answer to this question is non-trivial because various observables that one may wish to calculate depend non-linearly on the various components of the metric. We have here carried out a careful and extensive exploration of the accuracy in the calculation of various observables and showed that, for large regions in spin and coupling parameter space, only relatively few orders of expansion in spin are required to achieve quite impressive accuracy.
We expect these results to be of interest when computing other quantities with direct observational
interest in detectors, such as quasinormal modes, gravitational waveforms and shadows. Since the computation of these more realistic observables can be challenging, it is crucial to reduce the computational cost by truncating the spin expansion of the metric at the minimum order required to obtain a given desired accuracy. The work presented here provides a suggestion for what this minimum order should be. Future work could study whether indeed the order requirements suggested here for a given set of observables also applies to other observables of interest.
For instance, photon ring properties are sometimes related to the quasinormal mode spectrum of BHs through the geodesic analogy [59, 65, 66], even in some modified gravity theories [67]. Future experiments may be able to measure these modes with an accuracy of around 1% [68, 69]. Thus, to be on the safe side, one should compute the QNMs with an accuracy higher than this. Using our results for the photon ring frequency as a reference -- see Figs. 6 and 9 -- if one wishes to calculate QNMs with an accuracy of 1% for a BH of \(\chi=0.7\), one would need expansions of at least order \(N_{\rm tr,sGB}\sim N_{\rm tr,dCS}\sim 6\). On the other hand, we need expansions of around twice this order in order to achieve an accuracy of 0.1%. One could therefore attempt to compute quasinormal modes with such a metric and determine whether indeed this order in small-spin expansion is sufficient. Of course, BH perturbation theory for small-spin metrics can itself be a very challenging calculation, with results only known to first [70, 71, 72] or second [73] orders in spin in dCS and sGB gravity -- see also [74, 75, 76, 77]. Other, numerical methods for QNM calculations, such as wave-packet scattering, may be more suitable for calculations with higher-order-in-spin metrics.
Further, there is more exploration to be done of other high-curvature modifications to gravity, such as those with quartic [78, 79] and cubic [18] terms in the curvature, Einstein-aether theory [80], or even other coupling functions in Gauss-Bonnet gravity that are not merely the shift-symmetric version considered here, such as in [81]. Finally, it would be useful to confirm the results found here through a parameter estimation study of a set of observables. One could imagine a study wherein GRMHD simulation data was compared to real-life interferometric data in order to perform a similar error analysis to what has been presented here. Our study lays the foundations for such follow-up work.
###### Acknowledgements.
The work of PAC is supported by a postdoctoral fellowship from the Research Foundation - Flanders (FWO grant 12ZH121N). NY and AD are supported through the Simons Foundation Award No. 896696 and the NSF Award PHY-2207650.
## Appendix A Relative error plots
Below we show the relative error plots for every observable listed in Table 1.
## Appendix B Required order plots
Below we show the order required to achieve a given error for every observable listed in Table 1.
Figure 5: The expansion order required to achieve a given relative error as a function of spin, for the Lyapunov exponent \(\lambda_{\rm ph}\), with \(\zeta_{\rm dCS}=0.15\) (left) and \(\zeta_{\rm GB}=0.5\) (right).
2302.11180 | DISCO: Distributed Inference with Sparse Communications | Minghai Qin, Chao Sun, Jaco Hofmann, Dejan Vucinic | 2023-02-22T07:20:34Z | http://arxiv.org/abs/2302.11180v1

# DISCO: Distributed Inference with Sparse Communications
###### Abstract
Deep neural networks (DNNs) have great potential to solve many real-world problems, but they usually require an extensive amount of computation and memory. It is of great difficulty to deploy a large DNN model to a single resource-limited device with small memory capacity. Distributed computing is a common approach to reduce single-node memory consumption and to accelerate the inference of DNN models. In this paper, we explore the "within-layer model parallelism", which distributes the inference of _each layer_ into multiple nodes. In this way, the memory requirement can be distributed to many nodes, making it possible to use several edge devices to infer a large DNN model. Due to the dependency within each layer, data communications between nodes during this parallel inference can be a bottleneck when the communication bandwidth is limited. We propose a framework to train DNN models for Distributed Inference with Sparse Communications (DISCO). We convert the problem of selecting which subset of data to transmit between nodes into a model optimization problem, and derive models with both computation and communication reduction when each layer is inferred on multiple nodes. We show the benefit of the DISCO framework on a variety of CV tasks such as image classification, object detection, semantic segmentation, and image super resolution. The corresponding models include important DNN building blocks such as convolutions and transformers. For example, each layer of a ResNet-50 model can be distributively inferred across two nodes with five times less data communications, almost half overall computations and half memory requirement for a single node, and achieve comparable accuracy to the original ResNet-50 model. This also results in 4.7 times overall inference speedup.
## 1 Introduction
Deep neural networks (DNNs) have made great progress in solving real-world problems [33]. A majority of studies assume that DNN models are inferred on a single device, such as a GPU. There are a few reasons to study the distributed inference of DNN models on multiple devices. First, good DNN models usually require large memory for inference. This issue becomes more critical as the size of modern DNN models grows larger and the popularity of resource-limited devices on which they are deployed grows rapidly. For example, the ESP32 [54] series of SoCs have only hundreds of kilobytes of RAM/ROM, while a ResNet50 model requires hundreds of megabytes of RAM to store its weights and features, making it extremely challenging to infer a large DNN model on the edge device. By distributing the memory consumption of DNN models to multiple nodes, it then becomes possible to infer large models on small devices. Second, the inference latency can be reduced by distributing the computation of inference to multiple nodes as well. Third, some applications can benefit from cooperative inference across several branched DNN models. For example, in the scenario of multi-camera surveillance, if the DNN model in each camera can not only infer features from its own view, but also collaboratively receive/send data from/to other cameras, the detection and classification of an object or an action can be more accurate. The overall system can be viewed as a distributed inference of a larger DNN model with communicated features between nodes.
When the memory capacity is satisfied by distributed hardware, two of the most popular metrics to measure the performance of a DNN-based system are the latency and the throughput [22]. The latency is the time difference between a data sample is fed into the DNN and the output is obtained. The throughput is the number of data samples the system can process per time unit. While throughput plays an important role in training, the latency in inference is a critical criterion of the system for time-sensitive applications, such as autonomous driving [18], tele-surgery [14], security cameras [42], and so on. Therefore, the inference latency is our primary interest in this paper.
Parallelism by distributing the computation across many nodes is one of the most popular methods to accelerate the DNN and it can also reduce the memory requirement for each node [8, 35]. In the _data parallelism_, each node is responsible for processing a subset of input samples in one mini-batch. In the _pipeline parallelism_[12, 28, 31, 41], the DNN model is partitioned into sequential parts based on the execution order, and each node is responsible for processing one part. The DNN is therefore processed in the pipeline manner. However, neither data nor pipeline parallelism can reduce the latency during the inference of one input. This is because the input data has to be sequentially processed through all DNN layers, and within each layer the computation is not distributed.
In order to distribute the computation within a layer, the _model parallelism_ is proposed where each node is responsible for computing a subset of the output of a layer [35]. However, each feature in the output of a layer is usually dependent on **all** input data of the layer, such that all input data needs to be distributed to all nodes (Figure 1(a)). The data communication latency between nodes will be the bottleneck of the distributed system with low communication bandwidth. For example, based on the configuration of a head-mounted system in [10], the communication latency can be one or two orders of magnitude higher than the computation latency (See Figure 3 in Section 3.4) for layers of a ResNet50 model. This phenomenon also happens for high-end (e.g., A100 GPUs and NV-Links) and low-end (e.g., ARM cores and wireless links) platforms. Many prior works [27, 30, 43, 45, 55] tried to use better scheduling methods to improve the overall system performance. However, the potential of jointly designing the DNN architectures based on the system configurations is not explored. Another trend to distribute the computation is to split some stages of the DNN into independent branches [32, 10, 20] (Figure 1(b)(c)(d)). In these works, the separate branches can be
Figure 1: Comparison of different distributed DNN inference methods. (a) Conventional dense communications. (b) Dense-then-split ([32]). (c) Split-then-aggregate ([10]). (d) Independent branches ([20]). (e) Distributed inference with sparse communications (DISCO, ours). (f) Details of one layer in an example 2-node DISCO. Each node has 8 features from previous layers. Node-0 and Node-1 select 3 and 2 data features (out of 8) to transfer to the other node, respectively. The transferred features are combined with its own features to form the input to the convolutional kernels. The output of the convolution (after point-wise non-linear activation functions) will again be served as the input to the next layer with sparse communications.
processed independently on different nodes and there is no data communication between them. This eliminates the data communication latency for the corresponding stages. However, completely separated branches hurt the accuracy.
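A back-of-the-envelope latency model makes the communication bottleneck concrete; all numbers below are illustrative placeholders, not measurements from [10]:

```python
def layer_latency_us(flops, bytes_to_send, flops_per_us, bytes_per_us):
    """Toy per-layer latency: compute time plus non-overlapped transfer time."""
    return flops / flops_per_us + bytes_to_send / bytes_per_us

# One mid-size conv layer split across 2 nodes (hypothetical numbers):
flops_per_node = 231e6 / 2                 # half the layer's FLOPs per node
feat_bytes = 56 * 56 * 64 * 2              # fp16 feature map exchanged
dense = layer_latency_us(flops_per_node, feat_bytes, 1e6, 50.0)
sparse = layer_latency_us(flops_per_node, feat_bytes * 0.2, 1e6, 50.0)
# At ~50 MB/s the transfer dominates, so sending 5x fewer bytes cuts
# the per-layer latency by several times.
```

Under these assumed rates the transfer term dwarfs the compute term, so shrinking the communicated volume, rather than the FLOPs, is what moves end-to-end latency.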
In this paper, we jointly design the DNN architecture and system configurations for distributed inference. In our proposal, only a subset of input features are transmitted between nodes (Figure 1(e)). The problem of selecting an optimal subset of them to communicate between nodes is NP-hard. We first demonstrate that this problem is equivalent to the problem of DNN weight pruning with certain patterns. To be more specific, weights in a convolutional or fully-connected layer can be represented as a 2-dimensional matrix \(M\). And if Node-\(i\) does not need a specific input feature in Node-\(j\) to compute the output features in Node-\(i\), it is equivalent to setting a sub-row of appropriate size and location in \(M\) to be all-zero. Details are presented in Section 3.2. In our framework, we first train a complete model with dense communication. Then we identify the non-zeros weights corresponding to the sparse features to communicate. We gradually sparsify the communication by fine-tuning the remaining weights. This framework, called _distributed inference with sparse communications (DISCO)_, can search for models with a better trade-off between data communication latency, computation latency, and accuracy. In addition, observing that the data communications and computation can be pipelined to hide their latency, the proposed method can further reduce the end-to-end system latency at almost no cost to accuracy. By experimenting on multiple machine learning tasks, including image recognition, object detection, semantic segmentation, and super-resolution, we conclude that 1) Compared to DNNs with completely independent branches, the accuracy can be significantly improved by a very tiny proportion of the data communications; 2) Compared to DNNs with completely dependent input-output layers (dense data communications), our proposed method can significantly reduce data communications with negligible accuracy loss.
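A one-shot version of this sub-row selection can be sketched as follows (the magnitude-based importance score and the 2-node layout are our simplifying assumptions; the framework described above sparsifies gradually during fine-tuning rather than in a single shot):

```python
import numpy as np

def prune_cross_node_rows(W, n_local, keep_frac):
    """Zero the sub-rows of W for the least important remote input features.

    W: (I, O) weight matrix held by one node; rows [0, n_local) are local
    inputs, rows [n_local, I) arrive from the other node. A fully zeroed
    remote row means that feature never needs to be transmitted.
    """
    remote = np.abs(W[n_local:]).sum(axis=1)     # importance proxy per feature
    n_keep = int(round(keep_frac * remote.size))
    drop = np.argsort(remote)[: remote.size - n_keep]
    Wp = W.copy()
    Wp[n_local + drop] = 0.0
    return Wp

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))                     # 16 inputs: 8 local, 8 remote
Wp = prune_cross_node_rows(W, n_local=8, keep_frac=0.25)
sent = int((np.abs(Wp[8:]).sum(axis=1) > 0).sum())  # remote features to transmit
```

Only the surviving remote rows correspond to features that must cross the link, so the pruning ratio on those rows directly sets the communication volume.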
The contribution of this paper is summarized as follows.
**1)**. We propose a framework, called _DISCO_, to design distributed DNN model architectures with sparse communications based on system configurations.
**2)**. We proposed to convert the problem of selecting the subset of data to communicate (which is NP hard) to a DNN weight pruning problem with certain structured patterns.
**3)**. The proposed DISCO can reduce the average inference latency by 4.4x based on system configurations in [10]. We also calculate the latency improvement based on four publicly available system specifications and observed 3.5x average speedup without accuracy degradation. The results are based on the geometric mean of five types of DNN architectures and four system configurations (See Table 1). Compared with prior arts, the DISCO can increase accuracy by 1.6% on image classification, 2.8 mAP on object detection, 6.6 mIoU on semantic segmentation, and 0.6 dB PSNR on super-resolution. As far as we know, this covers the widest range of applications compared to similar works.
## 2 Related Works
### Distributed inference
#### 2.1.1 Network splitting
Some prior works assume a device-cloud scenario where the layers of neural network are distributed among them [49, 10, 21, 38, 39]. Many works focus on selecting the splitting point for separate device and cloud execution [12, 28, 31, 41]. In these methods, the communication happens at the splitting point and no other communication latency is considered within each side. Our work, on the contrary, focuses on the fundamentals of distributed inference where no difference in the computational capabilities of processors is assumed and we consider the communication latency of **all** layers. Therefore, our work can also be used to improve their latency on either the device or the cloud side.
[34, 32, 26, 20] partition the neural network into several branches and each branch is distributed to a node for independent execution. Our work is different in that our network design is not completely separate such that there is communication between nodes. We can show that a tiny fraction of communication (e.g., 1%) can improve accuracy significantly with our framework.
#### 2.1.2 Network scheduling
Unlike works with changes to DNNs, there are some other works focusing on the system scheduling of inference without changing the models. [27] uses compression and dynamic scheduling to improve the throughput. [30] casts the splitting problem as a graph routing problem. [43] presents a framework to dispatch the inference data to compute nodes. [55, 45] minimize the footprint in parallelism and enable dynamic workload distribution. [25] considers heterogeneous and non-linear device characteristics and proposes splitting and distributing methods. Compared to system design methods where the focus is to find a good schedule for unchanged models, our work is different, and can
be superimposed on theirs as well, in that we modify the network architecture to directly reduce data communications to achieve a better accuracy-latency trade-off.
### Distributed training
Model and pipeline parallelism can accelerate distributed training. Graph partitioning [15, 16, 17, 19, 23, 6, 48] splits the training data into micro-batches, which are pipelined across the training devices. [44, 29, 51] split the tensors and place them onto different devices to distribute the computing. Data parallelism encounters the problem of high communication bandwidth for transmitting gradients; [2, 4, 46, 5, 1] propose methods to sparsify or quantize the gradients to reduce communication.
While the distributed training research focuses on improving the training throughput, our work, on the other hand, focuses on improving the **distributed inference latency** of each DNN layer by a slight architectural change.
## 3 Methodology
### Problem statement
The basic operation of DNN model inference is the multiplication of input features and weights. The input features are feature maps for convolutional neural networks (CNNs) and neuron vectors/matrices for multi-layer perceptrons (MLPs) and Transformers. Assume there are \(I\) input features. In order to distribute the computation of each DNN layer across \(N\) nodes, the features are equally distributed, with \(\frac{I}{N}\) input features per node. If an output feature on Node-\(i\) depends on input features on Node-\(j\), the corresponding input features must be transmitted from Node-\(j\) to Node-\(i\), inducing communication from Node-\(j\) to Node-\(i\). Figure 1(f) demonstrates an example of distributing a convolutional layer with \(3\times 3\) kernels across \(N=2\) nodes. The total number of input features is \(I=16\) and each node carries \(8\) of them. For conventional DNNs with dense communications, shown in Figure 1(a), the convolution tensor on each node has the shape \((W,H,I,O)=(3,3,16,8)\), where \(W\) and \(H\) are the kernel sizes, and each node consumes all \(16\) input features and produces \(8\) output features. The total number of output feature maps is then \(8+8=16\). Though the computation of the convolution operation is distributed across 2 nodes, the communication latency may be a bottleneck of the system, since each node needs to transmit all of its 8 input features across the communication channel.
In DISCO, in order to reduce the communication latency, Node-0 can instead select 3 input features, say the 2nd, 3rd, and 6th feature maps, and send only these to Node-1, resulting in a \(\frac{5}{8}\) one-way communication reduction. Similarly, in the other direction, selecting only the 0th and 5th feature maps to send from Node-1 to Node-0 reduces the data transmission by \(\frac{6}{8}\). The problem of designing the DNN model for distributed inference can then be formulated as the following questions:
Figure 2: Equivalence of feature selection and weight selection. (a) One of two features is transmitted from Node-0 to Node-1 and from Node-1 to Node-0, respectively. The non-transmitted feature can be viewed as all-zero (a blank feature). (b) The equivalent representation where all features are transmitted but the corresponding weights are all-zero (blank 3-by-3 kernels). (c) A real example of the weight matrix in a ResNet-50 model found by our DISCO. The number of both input and output feature maps are 256. Each black subrow in the anti-diagonal represents one transmitted input feature between the two nodes.
1. If the total number of features to be communicated is known, which subset of input features should be selected to transmit to other nodes?
2. If the subset of transmitted input features is determined, how should the weights of the DNN be trained?
3. How many input features should be communicated between nodes?
### Selection of the subset of input features to communicate
In this section, we use a convolutional layer to demonstrate the subset selection problem; the formulation generalizes to MLPs and Transformers as well.
Let \(I_{\text{each}}=\frac{I}{N}\) be the number of local input features on each node, and assume the number of input features communicated from Node-\(i\) to Node-\(j\) is \(I_{\text{comm}}\); then there are \(\binom{I_{\text{each}}}{I_{\text{comm}}}\) possible subsets of input features to select. This problem is NP-hard for a moderate fraction \(\frac{I_{\text{comm}}}{I_{\text{each}}}\). For example, if \(I_{\text{each}}=128\) and \(I_{\text{comm}}=32\) (transmitting a quarter of the features), then the number of subset selections is larger than \(10^{30}\), making enumeration impossible.
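As a quick sanity check on this combinatorial blow-up, the subset count for the numbers quoted above can be computed directly (a sketch for illustration, not the authors' code):

```python
# Number of ways to choose I_comm = 32 of I_each = 128 local features
# to transmit; the count exceeds 10^30, so enumeration is infeasible.
import math

I_each, I_comm = 128, 32
n_subsets = math.comb(I_each, I_comm)
print(n_subsets > 10**30)  # True
```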
Let \(\mathcal{F}\) denote the subset of input features that are _not_ transmitted to the other node. Since zero times any value equals zero, not transmitting \(\mathcal{F}\) is mathematically equivalent to transmitting them as all-zero features \(\mathcal{F}_{\text{zero}}\), in which case the corresponding weights \(W_{\mathcal{F}}\) convolving these all-zero features can be arbitrary (Figure 2(a)). Equivalently, one can transmit all input features, including \(\mathcal{F}\), and set \(W_{\mathcal{F}}=\mathbf{0}\), as in Figure 2(b). Therefore, the problem of selecting a subset of features, which is dynamic and related to the dataset distribution, can be converted into a weight pruning problem, where the pruning pattern is a sequence of convolution kernels (e.g., \(3\times 3\) kernels) corresponding to input features residing on the other nodes. If the 4-D convolution weight tensor is projected to a 2-D mask-matrix by condensing each 2-D convolution kernel into one value indicating whether the kernel is all-zero, then the pruned pattern is half a row of the matrix (for 2 nodes), or in general a \(\frac{1}{N}\)-th of a row, because each sub-row represents the weights that convolve one input feature from another node. Note that for Node-\(i\), the convolution kernels corresponding to its own input features are always kept; hence, the diagonal blocks of the mask-matrix are always kept. Figure 2(c) shows the 2-node mask-matrix for one of the layers in ResNet-50, which has 256 input and 256 output feature maps. The communication sparsity is 90%. Each point in the mask represents a \(3\times 3\) convolutional kernel. The two dark blocks on the diagonal represent the dense weights and communications within each node. In the anti-diagonal blocks, white regions mean the corresponding weights are pruned, and the dark sub-rows correspond to the input features that will be communicated between the two nodes.
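The zero-feature/zero-weight equivalence of Figure 2 can be checked numerically. The following sketch uses illustrative shapes (1-D features for brevity) and is not the paper's code:

```python
# Zeroing the weights that touch a feature gives the same output as zeroing
# (i.e., never transmitting) the feature itself, since 0 * anything = 0.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5))   # 4 input features, 5 positions each (toy 1-D case)
w = rng.normal(size=(4, 3))   # per-feature weights producing 3 outputs
skip = [1, 3]                 # features we choose not to transmit

x_blank = x.copy(); x_blank[skip] = 0   # "blank features" view (Fig. 2(a))
w_blank = w.copy(); w_blank[skip] = 0   # "blank kernels" view (Fig. 2(b))
print(np.allclose(w_blank.T @ x, w.T @ x_blank))  # True
```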
After converting the input feature selection problem to a weight pruning problem, we can approximate its solution efficiently by estimating the significance of the weights. First, a full DNN model with dense communications is trained. Second, for each layer, we create a mask matrix \(M\in\{0,1\}^{I\times O}\), where each entry condenses one convolutional kernel of shape \(W\times H\) (e.g., \(3\times 3\) or \(1\times 1\)), and \(I\) and \(O\) are the number of input and output features, respectively. Then, for each non-diagonal sub-row of size \(1\)-by-\(\frac{O}{N}\), which corresponds to \(\frac{OWH}{N}\) weights in the full model, we calculate the \(L_{1}\) norm of those weights and prune the sub-rows with the smallest \(L_{1}\) norms to all-zero values. If we partition the mask \(M\) into sub-matrices of size \(\frac{I}{N}\times\frac{O}{N}\), then pruning one non-diagonal sub-row in the \((i,j)\)-th sub-matrix corresponds to preventing one input feature from being transmitted from Node-\(i\) to Node-\(j\). The total number of pruned sub-rows depends on the desired sparsity in the non-diagonal sub-matrices, which is the communication sparsity of this layer. The remaining sub-rows correspond to the input features with greater significance, which should be communicated between nodes for optimal latency-accuracy trade-offs. We ablate against random sub-row pruning to show the benefit of DISCO.
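A minimal sketch of this selection heuristic follows. It is our illustration, not the released implementation; the function name, shapes, and the toy invocation at the end are assumptions:

```python
# Score each off-diagonal sub-row of the condensed mask by the L1 norm of its
# OWH/N weights, then zero the smallest-scoring sub-rows (Sec. 3.2).
import numpy as np

def prune_subrows(weight, N, sparsity):
    """weight: (I, O, W, H) conv tensor. Returns a {0,1} mask of shape (I, O)."""
    I, O = weight.shape[:2]
    mask = np.ones((I, O))
    kernel_l1 = np.abs(weight).sum(axis=(2, 3))          # (I, O) kernel magnitudes
    subrow_l1 = kernel_l1.reshape(I, N, O // N).sum(2)   # (I, N) sub-row scores
    node_of = np.arange(I) // (I // N)                   # owner node of each input
    offdiag = [(i, j) for i in range(I) for j in range(N) if node_of[i] != j]
    offdiag.sort(key=lambda ij: subrow_l1[ij])           # least significant first
    for i, j in offdiag[: int(sparsity * len(offdiag))]:
        mask[i, j * (O // N):(j + 1) * (O // N)] = 0     # feature i not sent to j
    return mask

# Toy check: 8 features, 2 nodes, prune half of the off-diagonal sub-rows.
demo_mask = prune_subrows(np.random.default_rng(1).normal(size=(8, 8, 3, 3)),
                          N=2, sparsity=0.5)
```

Diagonal blocks (an input feature used by its own node) are never touched, matching the dense diagonal of Figure 2(c).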
### Training DNNs with sparse communications
After establishing the equivalence between sparse weights and sparse communications, we use a simple yet effective iterative magnitude prune-and-finetune procedure to obtain trained DNNs with sparse communications. First, we train a full model with dense communications. Then we prune the least significant sub-rows in the weight matrix (see Section 3.2) by a fraction \(p_{1}\), and finetune the remaining weights to recover accuracy. This completes one iteration. We repeat this fraction-\(p_{i}\) prune-and-finetune process for a few iterations to generate a sequence of models with increasing weight sparsity, which corresponds to increasing communication and computation sparsity. The relationships between weight, communication, and computation sparsity are discussed in Section 3.4. If we set the target communication sparsity to 100% in DISCO, the resultant model has the same architecture as a model with all independent branches (Figure 1(d)). Nonetheless, the model obtained by DISCO significantly outperforms directly training models with all independent branches under the same training effort (e.g., number of epochs).
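The iterative procedure can be sketched as a short loop. The helper callables `prune_step` and `finetune_step` are hypothetical stand-ins, not the paper's training code:

```python
# Iterative magnitude prune-and-finetune (Sec. 3.3): each round increases the
# weight/communication sparsity, then finetunes to recover accuracy.
import copy

def disco_train(model, sparsity_schedule, prune_step, finetune_step):
    checkpoints = []
    for p in sparsity_schedule:        # e.g. [0.5, 0.75, 0.9, 0.99]
        prune_step(model, p)           # zero least-significant sub-rows (Sec. 3.2)
        finetune_step(model)           # recover accuracy with the mask held fixed
        checkpoints.append(copy.deepcopy(model))
    return checkpoints                 # models with increasing sparsity

# Toy run with stub steps: the "model" is a dict and pruning just records p.
demo = disco_train({"s": 0.0}, [0.5, 0.9],
                   prune_step=lambda m, p: m.update(s=p),
                   finetune_step=lambda m: None)
```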
### Number of input features to communicate
The total latency of the system depends on the latency of computation and communication. Assume a conventional DNN layer has computation complexity \(C_{c}\) (in the number of floating-point operations, FLOPs) and a total feature size of \(F_{s}\) (in bytes). The inference of the layer is distributed across \(N\) nodes. If there is no sparsity in communications, each node will send out its own input features of size \(\frac{F_{s}}{N}\) and receive all other features from other nodes, of total size \(\frac{(N-1)F_{s}}{N}\). Therefore, the total data communication of each node is \(\frac{F_{s}}{N}+\frac{(N-1)F_{s}}{N}=F_{s}\). Note that if the speeds of sending and receiving differ, we can calculate the _weighted_ sum of \(\frac{F_{s}}{N}\) and \(\frac{(N-1)F_{s}}{N}\). To avoid the complication of extra system parameters, we assume equal sending/receiving communication bandwidth, denoted by \(B\) (in bytes/second). Let \(S_{\text{comm}}\) denote the communication sparsity of each node; then the latency of the communications is
\[L_{\text{comm}}=\frac{F_{s}(1-S_{\text{comm}})}{B}. \tag{1}\]
Let \(S_{\text{comp}}\) denote the computation sparsity of the _entire_ layer (which also equals the weight sparsity \(S_{\text{wt}}\)), and let \(C\) denote the computation speed (in operations/second); then the latency of the computation for each node is
\[L_{\text{comp}}=\frac{C_{c}(1-S_{\text{comp}})}{NC}. \tag{2}\]
Figure 3 shows \(L_{\text{comm}}\) and \(L_{\text{comp}}\) for each layer of a ResNet-50 model with dense communications (\(S_{\text{comp}}=S_{\text{comm}}=0\)) when the inference is distributed to \(N=2\) nodes. We assume each value in the feature maps is represented as a 4-byte floating-point number. The system configuration is assumed to be the same as in [10], where \(C=125\) GOP/s and \(B=37.5\) MB/s. Note that the Y-axis is on a logarithmic scale and the communication latency can be larger than the computation latency by one or two orders of magnitude. This phenomenon can also be observed on other typical computing platforms, from high-end GPUs (NVIDIA A100 with NVLink) to low-end edge devices (Arm Cortex-M4 with wireless connections). Therefore, reducing communication latency can significantly improve the overall inference performance.
From Figure 1(f), we can observe that sparse communication can also bring computation reduction. The relationship can be described as
\[S_{\text{comm}}=\frac{N}{N-1}S_{\text{comp}}=\frac{N}{N-1}S_{\text{wt}}. \tag{3}\]
The proof is given in the appendix.
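A small numeric check of Eq. 3 on a toy condensed mask (illustrative sizes, not from the paper) also conveys the intuition: with dense diagonal blocks, only the \((N-1)/N\) off-diagonal fraction of weights can be pruned.

```python
# Censoring feature transfers zeroes off-diagonal sub-rows only, so
# S_comm = N/(N-1) * S_wt (Eq. 3).
import numpy as np

I = O = 8; N = 2
mask = np.ones((I, O))
# censor 3 of the 8 off-diagonal sub-rows, i.e. S_comm = 3/8
mask[0, 4:8] = mask[1, 4:8] = mask[5, 0:4] = 0
S_wt = (mask == 0).mean()            # weight (= computation) sparsity
S_comm = 3 / (I * (N - 1))           # fraction of censored feature transfers
print(np.isclose(S_comm, N / (N - 1) * S_wt))  # True
```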
#### 3.4.1 Communications and computation pipelining
Figure 3: Computation and communication latency of 54 layers (including 4 skip links) in a ResNet-50 with dense communications between \(N=2\) nodes. The system configuration is from [10], with \(B=37.5\) MB/s and \(C=125\) GOP/s.

Prior work, e.g., [10], calculates the end-to-end system inference latency by adding the latencies of communication and computation. In this work, on the other hand, we note that the computation involves only a small portion of communicated features and a large portion of local features. Thus, computation and communication can be pipelined asynchronously to "hide" each other's latency. For example, one node can start convolving its own features and weights while simultaneously receiving features from another node. Therefore, in the "pipeline" mode, the latency of inferring Layer-\(i\) is the maximum of its computation and communication latency, i.e.,
\[L(i)=\max\{L_{\text{comm}}(i),L_{\text{comp}}(i)\}, \tag{4}\]
where \(L_{\text{comm}}(i)\) and \(L_{\text{comp}}(i)\) denote the corresponding latencies of Layer-\(i\), computed by Eq. 1 and Eq. 2. In contrast, the "waiting" mode means that computation starts only after the feature communication is completed, i.e., \(L(i)=L_{\text{comm}}(i)+L_{\text{comp}}(i)\). The overall latency of the DNN is the sum of the latencies of all layers. We present the results of the "pipeline" mode in the main text to demonstrate our performance improvement; the improvements of our method in the "waiting" mode are similar.
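Eqs. 1-4 combine into a small per-layer latency model. The sketch below uses the system constants quoted from [10]; the layer constants `Cc` and `Fs` in the demo call are illustrative, not measured values:

```python
# Per-layer latency under "pipeline" (max) and "waiting" (sum) modes.
def layer_latency(Cc, Fs, N, S_comm, B=37.5e6, C=125e9, pipeline=True):
    S_comp = (N - 1) / N * S_comm            # Eq. 3
    L_comm = Fs * (1 - S_comm) / B           # Eq. 1
    L_comp = Cc * (1 - S_comp) / (N * C)     # Eq. 2
    return max(L_comm, L_comp) if pipeline else L_comm + L_comp  # Eq. 4

# Dense communication (S_comm = 0): communication dominates by far.
dense = layer_latency(Cc=231e6, Fs=3.2e6, N=2, S_comm=0.0)
```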
#### 3.4.2 Computation and communication equilibriums
From Eq. 4, the latency of a layer is determined by the slower of \(L_{\text{comm}}\) and \(L_{\text{comp}}\). From Eqs. 1, 2, and 3, both \(L_{\text{comm}}\) and \(L_{\text{comp}}\) decrease as \(S_{\text{comm}}\) increases. Therefore, there exists an equilibrium point \(S_{\text{comm}}^{\text{eql}}\) where \(L_{\text{comm}}=L_{\text{comp}}\). If \(S_{\text{comm}}<S_{\text{comm}}^{\text{eql}}\), the latency reduction of the layer comes from reducing \(L_{\text{comm}}\). On the other hand, if \(S_{\text{comm}}>S_{\text{comm}}^{\text{eql}}\), further increasing the sparsity yields a less effective accuracy-latency trade-off, because beyond the equilibrium point \(L_{\text{comp}}\) decreases more slowly than \(L_{\text{comm}}\) and dominates the latency of the layer. By setting \(L_{\text{comm}}=L_{\text{comp}}\) in Eqs. 1, 2, and 3, we obtain the equilibrium communication sparsity for each layer as
\[S_{\text{comm}}^{\text{eql}}=\frac{AN-N}{AN-N+1}, \tag{5}\]
where \(A=\frac{NC}{C_{c}}\cdot\frac{F_{s}}{B}\). Note that \(S_{\text{comm}}^{\text{eql}}\in[0,1]\) if \(A\geq 1\), i.e., when the system bottleneck is the communication latency. Figure 4 shows \(S_{\text{comm}}\) at the equilibrium for the system configuration of Figure 3. It can be observed that earlier layers favor higher communication sparsity than later layers. We also observe that for the ResNet-50 architecture, keeping smaller sparsity in later layers helps maintain accuracy, which coincides with the equilibrium point. Therefore, DISCO at the equilibrium is more likely to achieve a better accuracy-latency trade-off. This is confirmed by the ablation of the communication sparsity in the ResNet-50 experiments.
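The equilibrium point can be verified numerically by plugging Eq. 5 back into Eqs. 1-3 (a sketch; the layer constants `Cc` and `Fs` are illustrative):

```python
# At S_comm^eql = (AN - N)/(AN - N + 1) with A = (N*C/Cc)*(Fs/B),
# communication latency (Eq. 1) equals computation latency (Eq. 2).
def equilibrium_sparsity(Cc, Fs, N, B, C):
    A = (N * C / Cc) * (Fs / B)
    return (A * N - N) / (A * N - N + 1)

Cc, Fs, N, B, C = 1e9, 1e6, 2, 37.5e6, 125e9
S = equilibrium_sparsity(Cc, Fs, N, B, C)
L_comm = Fs * (1 - S) / B
L_comp = Cc * (1 - (N - 1) / N * S) / (N * C)
print(abs(L_comm - L_comp) < 1e-12)  # True: the two latencies coincide
```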
## 4 Experimental Results
In this section, we demonstrate the benefits of DISCO on several machine learning tasks, including image classification (ResNet and DeiT models), object detection (SSD models), semantic segmentation (DeepLabV3+ models), super-resolution (ESRGAN models), and machine translation (encoder-decoder Transformer models). All models are trained on NVIDIA A100 GPUs using PyTorch.
In Figures 5, 6, 7, 8, 9, we investigate the latency-accuracy trade-offs of various DNN models, datasets, and evaluation metrics when inference is distributed across two nodes. We also show the \(S_{\text{comm}}\) of each DNN next to the plotted result. Results with larger numbers of nodes are presented in Section 5. Details of the training recipes are presented in the Appendix. We vary \(S_{\text{comm}}\) (and thus \(S_{\text{comp}}\), based on Eq. 3) in the DISCO framework to obtain a sequence of DNN models with different distributed inference latency and accuracy. The latency estimation is based on Eq. 1, Eq. 2, and Eq. 4 for the "pipeline" mode. The system configuration is based on [10], where \(B=37.5\) MB/s and \(C=125\) GOP/s. Input features are assumed to be represented as 4-byte floating-point numbers. Results on other configurations with publicly available specifications, such as NVIDIA A100 with NVLink, T4 with PCIe, Xeon CPU with Ethernet, and Cortex-M4 processor with wireless communications, are summarized in Table 1.
We compare our framework (DISCO) with three different existing or baseline approaches.
Figure 4: Non-uniform communication sparsity of each layer in a ResNet-50 model such that \(L_{\text{comm}}(i)=L_{\text{comp}}(i)\).
1. Random sparse communications: The transmitted input features are randomly selected instead of being selected based on the significance of weights in Section 3.2. In particular, when \(S_{\text{comm}}=1\), there is no communication between nodes and DNNs are partitioned into "independent branches" [20].
2. Dense-then-split: The features in the first few layers are densely communicated, and the remaining layers are all separate without feature communication (Figure 1(b)). Different splitting points result in different accuracy-latency trade-off. This architecture is proposed in [32].
3. Split-then-aggregate: The first few layers are all separate without communicated features, and the remaining layers have dense feature communications (Figure 1(c)). Different aggregation points result in different accuracy-latency trade-offs. This architecture is proposed in [10].
To quantify the benefits of DISCO, we report two metrics. The first metric is the **latency reduction** relative to the densely communicated model without accuracy loss (the horizontal dashed double arrow in Figures 5, 6, 7, 8, 9). The second metric is the **accuracy improvement** over all baseline and existing methods at the latency specified in the first metric (the vertical dashed double arrow in Figures 5, 6, 7, 8, 9).
The observation that our results outperform random sparse communications shows the effectiveness of DISCO in selecting features and training the DNN (Sections 3.2, 3.3). Moreover, our results outperform "Dense-then-split" and "Split-then-aggregate", which shows that sparse feature communications between nodes should exist in most of the layers, instead of being concentrated in a subset of layers and completely absent from the rest.
### ResNet50 models on image classification
The ResNet architecture is widely used in computer vision tasks. We use ResNet-50 models on ImageNet [9] to demonstrate the benefit of DISCO compared to existing and baseline methods.
Figure 5 shows that DISCO outperforms prior art with the following advantages: 1) The accuracy improvement is 1.6%. 2) The latency reduction is 4.7x (from 1.14s to 0.24s). 3) Compared to the model with independent branches (\(S_{\text{comm}}=100\%\)), adding only 1% feature communications (\(S_{\text{comm}}=99\%\)), which slightly increases the inference latency from 32ms to 35ms, improves the accuracy by a large margin, from 73.4% to 75.9%. This demonstrates that a small portion of communicated features can significantly increase the model accuracy.
### DeiT models on image classification
There is growing interest in Vision Transformers [11, 50], so we also apply DISCO to DeiT-small models. We follow the same training recipe as in [53]. From Figure 6 we observe: 1) DISCO has over 1% accuracy advantage over all existing or baseline methods. 2) By adding 1% of data communications between the two nodes, DISCO improves the accuracy from 74.58% to 78.70%.
### SSD models on object detection
The single-shot detection (SSD) model [37] is a fast one-phase object detection model. We use the COCO2017 [36] dataset and follow the training recipe in [40].
From Figure 7, we observe the following: 1) DISCO reduces inference latency by 9.2x (from 3.4s to 0.37s) at an equal accuracy of 26 mAP [40]. 2) DISCO has a clear accuracy advantage (+2.8 mAP at the same inference latency). 3) By introducing 1% feature communications, the accuracy is significantly improved from 19.6 to 24.6 mAP while the latency only slightly increases, from 109ms to 116ms.
### DeepLabV3+ models on semantic segmentation
We use the standard DeepLabV3+ model with a ResNet-101 backbone [7] for the semantic segmentation task. The dataset is PASCAL VOC2012 [13]. We follow the training recipe in [7].
Figure 8 shows the following: 1) DISCO reduces the inference latency by 9.9x (from 12.3s to 1.24s) with slightly better accuracy (77.5% vs. 77.21% [7]). 2) DISCO has a clear accuracy advantage (+6.6 mIoU) over existing or baseline methods. 3) By introducing 1% feature communications, the accuracy is improved from 64.5% to 76.7% while the latency only increases from 0.87s to 0.89s.
### ESRGAN models on super resolution
We use the ESRGAN architecture [52] with 23 residual-in-residual dense blocks (RRDB) to demonstrate DISCO on image super-resolution (up-scaling height and width by 4x each). The dataset is DIV2K [3].
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Models & ResNet-50 & DeiT & SSD & DeepLabV3+ & ESRGAN \\ \hline NVIDIA-T4 (65TFLOPs) & 4.7x & 1.7x & 8.5x & 8.8x & 2.3x \\ PCIe (32GB/s) & & & & & \\ \hline NVIDIA-A100 (312TFLOPs) & 4.2x & 1.7x & 5.0x & 4.5x & 2.6x \\ NVLink (600GB/s) & & & & & \\ \hline Xeon CPU (32\(\times\)3GFLOPs) & 3.9x & 1.7x & 5.4x & 5.7x & 2.1x \\ Ethernet (125MB/s) & & & & & \\ \hline ARM Cortex-M4 (64MFLOPs) & 4.2x & 1.7x & 5.0x & 4.4x & 2.3x \\ Wireless (1Mbps) & & & & & \\ \hline Accuracy Improv. & +1.6 & +1.6 & +2.8 & +6.6 & +0.6 \\ \hline \end{tabular}
\end{table}
Table 1: A summary of latency and accuracy improvement by DISCO
Figure 9 shows that: 1) there is a clear PSNR advantage (around 0.6 dB) for DISCO; 2) DISCO reduces the inference latency by 2.3x (from 83s to 36s) at the same PSNR of 32.56 dB.
### Results summaries
In addition to the system configuration used to generate Figures 5, 6, 7, 8, 9 (\(B=37.5\) MB/s and \(C=125\) GOP/s [10]), Table 1 shows the latency and accuracy improvements for other typical computation-communication systems with publicly available specifications. When our proposed models and the baseline models are inferred on these systems, the accuracy does not change but the latency does. Therefore, we report the accuracy improvement of each DNN model once for all systems, and the latency improvement of each DNN model for each system.
The geometric mean of the latency reduction obtained by DISCO is 3.5x over the 20 model-system configuration pairs. DISCO also has clear accuracy advantages over the baseline and existing designs.
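As a sanity check (our sketch, not the authors' code), the geometric mean of the 20 per-system speedups listed in Table 1 reproduces the ~3.5x figure:

```python
# Geometric mean of the latency reductions in Table 1 (5 models x 4 systems).
import math

speedups = [4.7, 1.7, 8.5, 8.8, 2.3,   # NVIDIA-T4 / PCIe
            4.2, 1.7, 5.0, 4.5, 2.6,   # A100 / NVLink
            3.9, 1.7, 5.4, 5.7, 2.1,   # Xeon / Ethernet
            4.2, 1.7, 5.0, 4.4, 2.3]   # Cortex-M4 / wireless
gmean = math.exp(sum(map(math.log, speedups)) / len(speedups))
print(round(gmean, 1))  # ≈ 3.5
```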
## 5 Ablation Study
### Number of Nodes
One of the motivations for distributing DNN inference to multiple nodes is to reduce the memory requirement of each node. To demonstrate the benefit of multiple nodes, we make the following assumptions: 1) We denote by ResNet50/8 the DNN with 1/8 of the layer width of a standard ResNet-50 model, and we assume each node has a memory capacity slightly larger than the requirement of inferring ResNet50/8. 2) The system has comparable communication and computation latency; we assume \(B=37.5\) MB/s and \(C=3.75\) GOP/s in this subsection.
Suppose we have \(N\) nodes and the memory of each node restricts it to inferring a ResNet50/8, but not a significantly larger model. We compare two cases: 1) the nodes form \(N\) independent branches [20]; 2) the \(N\) nodes exchange a tiny amount of communication (\(S_{\text{comm}}\approx 99\%\)) between any pair of them (DISCO), such that the memory required to handle the extra 1% of data does not exceed the capacity of each node.
Table 2 shows the comparison, where a few phenomena can be observed. 1) With less than 2% latency overhead, multi-node distributed inference significantly improves the accuracy over single-node inference of a small ResNet50/8 model (from 55.55% to 67.59%). 2) DISCO has a (2.84%, 6.51%, 7.15%) accuracy advantage over \(N=2,4,8\) independent branches of ResNet50/8 with a small (0.2ms to 4ms) latency overhead, respectively.
### Equilibrium sparsity distribution
In Table 3, we compare communication sparsity distributed 1) uniformly across layers and 2) non-uniformly at the equilibrium point of Section 3.4.2. Since the equilibrium sparsity of each layer guarantees equal latency for communication and computation, the two perfectly hide each other. Therefore, at the same inference latency, the average communication and computation sparsities are smaller, leading to 0.74% better top-1 accuracy for the ResNet-50 model. Note that for a fair comparison, both models are obtained by the DISCO framework directly from a densely communicated ResNet-50 model without the iterative pruning method. Also note that \(S_{\text{comm}}\) at the equilibrium is very high. This is because for a system with \(B=37.5\) MB/s and \(C=125\) GOP/s [10], the communication latency is much larger than the computation latency. If \(B\) increases, \(S_{\text{comm}}\) at the equilibrium becomes smaller.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Method & \(N\) & \(S_{\text{comm}}\) & Lat.(ms) & Accu.(\%) \\ \hline ResNet50/8 & 1 & n/a & 96.2 & 55.55 \\ \hline Independent & 2 & 100\% & 96.4 & 59.47 \\ branches & 4 & 100\% & 96.6 & 61.08 \\
[20] & 8 & 100\% & 97.2 & 64.74 \\ \hline \multirow{3}{*}{DISCO} & 2 & 99\% & 96.7 & 62.31 \\ & 4 & 98.7\% & 98.2 & 67.59 \\ \cline{1-1} & 8 & 98.9\% & 101.2 & 71.89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of independent branches and DISCO with different number of nodes \(N\). System configuration: \(B=37.5\)MB/s and \(C=3.75\)GOP/s.
## 6 Conclusions
In this paper, we propose DISCO, a framework to train DNNs for distributed inference with sparse communications. We convert the problem of selecting the subset of features to transmit into a DNN sparsification problem and show that DISCO can reduce the latency by an average of 3.5x or significantly improve the accuracy.
|
2302.12489 | Channel State Information Based User Censoring in Irregular Repetition
Slotted Aloha | Irregular repetition slotted aloha (IRSA) is a massive random access protocol
which can be used to serve a large number of users while achieving a packet
loss rate (PLR) close to zero. However, if the number of users is too high,
then the system is interference limited and the PLR is close to one. In this
paper, we propose a variant of IRSA in the interference limited regime, namely
Censored-IRSA (C-IRSA), wherein users with poor channel states censor
themselves from transmitting their packets. We theoretically analyze the
throughput performance of C-IRSA via density evolution. Using this, we derive
closed-form expressions for the optimal choice of the censor threshold which
maximizes the throughput while achieving zero PLR among uncensored users.
Through extensive numerical simulations, we show that C-IRSA can achieve a
4$\times$ improvement in the peak throughput compared to conventional IRSA. | Chirag Ramesh Srivatsa, Chandra R. Murthy | 2023-02-24T07:10:17Z | http://arxiv.org/abs/2302.12489v1 | # Channel State Information Based User Censoring in Irregular Repetition Slotted Aloha
###### Abstract
Irregular repetition slotted aloha (IRSA) is a massive random access protocol which can be used to serve a large number of users while achieving a packet loss rate (PLR) close to zero. However, if the number of users is too high, then the system is interference limited and the PLR is close to one. In this paper, we propose a variant of IRSA in the interference limited regime, namely Censored-IRSA (C-IRSA), wherein users with poor channel states censor themselves from transmitting their packets. We theoretically analyze the throughput performance of C-IRSA via density evolution. Using this, we derive closed-form expressions for the optimal choice of the censor threshold which maximizes the throughput while achieving zero PLR among uncensored users. Through extensive numerical simulations, we show that C-IRSA can achieve a 4\(\times\) improvement in the peak throughput compared to conventional IRSA.
Irregular repetition slotted aloha, massive machine-type communications, user censoring, random access
## I Introduction
Massive machine-type communications (mMTC) is an evolving use-case, expected to serve about a million users per square km [1]. In this context, irregular repetition slotted aloha (IRSA) [2] is a distributed massive random access protocol which has received much attention in the literature [3, 4]. The performance of IRSA mainly depends on the system load, i.e., the number of users participating in the protocol per frame. At low system loads, the system is not interference-limited, and the packet loss rate (PLR) is close to zero. As the system load increases beyond a so-called _inflection load_, IRSA becomes interference limited, and the system throughput rapidly drops to zero, with PLR approaching one [3]. In this paper, we address the issue of the poor throughput of IRSA in the high load regime by introducing a _distributed self-censoring scheme,_ which allows the system to maintain the throughput at the maximum possible value even as the load increases.
In IRSA, users transmit packet replicas in multiple randomly selected slots (within a frame) to a base station (BS); the latter decodes the packets using successive interference cancellation (SIC) [5]. If the BS successfully decodes a user in a slot, it uses the decoded data to perform SIC in all other slots in which that user has transmitted a replica. The decodability of any user in IRSA depends on the signal to interference plus noise ratio (SINR) of that user [6]. If the users have poor channel states or there are many collisions in a slot resulting in increased multi-user interference (MUI), then the SINR decreases, leading to users not getting decoded.
When the system load is low, all users with sufficiently good channel states can be decoded at the end of the SIC process [7]. However, as the load increases, more collisions lead to the system becoming interference limited, and the throughput quickly drops close to zero [5]. To tackle this issue, in this paper, we propose a modified IRSA protocol, as follows. The BS transmits a pilot signal at the start of each frame, using which the users estimate their channel state information (CSI). Then, users with poor CSI _self-censor_, i.e., they refrain from transmitting, thereby reducing collisions and enabling the decoding of the transmissions from the active users to succeed. Here, choosing too high a (CSI-based) censor threshold leads to very few users participating in the IRSA protocol, while too low a threshold leads to too many collisions and the system becoming interference-limited. Thus, given the system load, there is an optimal threshold that maximizes the throughput. The censor threshold is calculated by the BS based on the load and the SNR, and its value is periodically broadcast to the users. Note that this approach retains the fully distributed nature of IRSA.
IRSA has been studied for the collision channel [2], with multiple antennas [3], with activity detection [4], with path loss [6], for the Rayleigh fading channel [7], with channel estimation errors [8], and with multi-cell effects [9]. Density evolution has been used to characterize the asymptotic throughput of IRSA [3, 6, 7]. Variants of aloha such as \(K\)-repetition have also been studied [10, 11]. However, none of these papers address the dramatic increase in PLR and the corresponding reduction in throughput as the system load increases, which is our focus in this paper. Our specific contributions are as follows:
1. We propose a censored-IRSA (C-IRSA) protocol in the interference limited regime, where users with poor CSI self-censor to decrease the effective system load, thereby enabling the uncensored users to be decoded at the BS.
2. We theoretically analyze the asymptotic performance of C-IRSA using density evolution.
3. We provide the optimal choice of the censor threshold with which the PLR of uncensored users can be driven close to zero at all system loads, while maintaining the throughput of the system at its highest value.
With CSI-based censoring in C-IRSA, we can achieve a 4\(\times\) improvement in the peak performance of the system compared to conventional IRSA. Further, C-IRSA can be operated at the peak performance for all system loads, whereas the throughput of conventional IRSA becomes zero at high loads.
_Notation:_ The symbols \(a\), \(\mathbf{a}\), and \(\mathbf{A}\), denote a scalar, a vector, and a matrix, respectively. \([N]\) denotes the set \(\{1,2,\ldots,N\}\). \(\mathbb{1}\{\cdot\}\), \(|\cdot|\), \([\cdot]^{*}\), and \(\mathbb{E}[\cdot]\), denote the indicator, magnitude (or cardinality of a set), conjugate, and expectation, respectively.
# Quadruply robust estimation of marginal structural models in observational studies subject to covariate-driven observations

Janie Coulombe, Shu Yang

arXiv:2304.08987v1, 2023-04-18. http://arxiv.org/abs/2304.08987v1
###### Abstract
Electronic health records and other sources of observational data are increasingly used for drawing causal inferences. The estimation of a causal effect using these data not meant for research purposes is subject to confounding and irregular covariate-driven observation times affecting the inference. A doubly-weighted estimator accounting for these features has previously been proposed that relies on the correct specification of two nuisance models used for the weights. In this work, we propose a novel consistent quadruply robust estimator and demonstrate analytically and in large simulation studies that it is more flexible and more efficient than its only proposed alternative. It is further applied to data from the _Add Health_ study in the United States to estimate the causal effect of therapy counselling on alcohol consumption in American adolescents.
## 1 Introduction
The study of causes and effects is an important scientific topic and an essential component of learning healthcare systems aimed at improving public health (see e.g., Krumholz (2014); Dahabreh & Kent (2014)). When prescribing a treatment, physicians need to know the causal effect of that treatment on the outcome they aim to improve, rather than the mere association (or statistical dependence) between the treatment and the outcome. Statistical methods with good properties must, therefore, be developed to estimate causal effects consistently with the least bias and variance possible. This manuscript proposes a novel, consistent and efficient estimator for the marginal causal effect of a treatment (exposure) on a longitudinal outcome using observational data.
Randomized controlled trials are the gold standard for drawing causal inferences. The randomization of participants to treatments ensures a balance in patient characteristics between treatment groups at study entry, allowing a fair comparison of clinical outcomes across treatment groups (Rubin, 2008). Often, randomized controlled trials also have clear protocols for the timing of patients' visits (i.e., observation times) at which patient health status is measured.
It is not always possible to conduct a randomized controlled trial designed specifically for answering a causal question, such that researchers often turn to observational data (Black, 1996). In this work, we focus on the particular features often met in observational data from electronic health records. Electronic health records data may contain information on patients' demographic variables, diagnoses, health system usage, drug dispensations, and outcomes, making them highly useful to infer drug safety and effectiveness.
While electronic health records are increasingly available for analysis, they are not meant for research purposes. The treatments measured in electronic health records are not randomized to patients. Their observation rather depends on real-world prescription mechanisms, which depend on patients' characteristics. This leads to spurious associations in the data called _confounding_(Greenland & Morgenstern, 2001). These data are also measured irregularly across patients. They are not collected under a controlled scheme. Instead, each patient follows their pattern in how they access care, also likely to depend on their characteristics (Lokku et al., 2020). For instance, sicker patients tend to interact with the healthcare system more often than healthier patients. Statistically, this creates a long-term dependence structure between the outcome
and the visit processes that can result in biased estimators of causal or associational parameters (see e.g., Lin & Ying (2001); Lipsitz et al. (2002); Farewell (2010); Pullenayegum & Lim (2016) and most recently Coulombe et al. (2021a,b, 2022); Yang (2022) in a context of causal inference). When aiming for a causal effect, this bias can be due to confounding by the visit process or, if the visit indicators act as colliders (i.e. are themselves affected by the treatment prescribed and the study outcome), to collider-stratification bias described in Greenland (2003).
Under a set of causal assumptions, the causal marginal effect of a binary treatment on a longitudinal continuous outcome can generally be inferred by estimating the parameters of a marginal structural model fitted on the data from a pseudo-population that is free of confounding and other types of spurious associations, such as collider-stratification bias (Robins et al., 2000). Previous work has tackled this problem in settings with covariate-driven observation times and confounding, leading to the _Flexible Inverse Probability of Treatment and Monitoring_ weighted (FIPTM) estimator (Coulombe et al., 2021b, 2022). However, that method suffers from two important problems. First, it relies on the correct specification of the treatment and outcome observation models as a function of patient characteristics. This implies both a correct specification of the variables to include in each model and their functional forms. When one or both models are not correctly specified, the FIPTM can be biased. Secondly, that estimator can be variable due to its inverse weights and could be made more efficient by using the geometry of influence functions and by augmenting the estimating equations (Robins et al., 1994; Tsiatis, 2006).
To address the two issues raised above, we propose the first quadruply robust estimator for the causal marginal effect of a binary treatment on a longitudinal, continuous outcome, that accounts for confounding and irregular covariate-driven observation times of the outcome simultaneously.
## 2 Methods
### Notation
Our interest is in the causal marginal effect of a binary treatment or exposure (a choice between two possible options, say, treatment with aspirin versus no aspirin) on a longitudinal continuous outcome that is measured repeatedly. That effect could consist of a "contrast" causal effect between two different active drugs, or of the causal effect of an active drug when compared with a placebo.
We assume working with a random sample of size \(n\) from a larger population and denote by \(i\) the patient index and by \(t\in[0,\tau]\) the time, with \(\tau\) a maximum follow-up time in the cohort. Let \(A_{i}(t)\) represent the binary treatment taking values in \(\{0,1\}\) and \(Y_{i}(t)\) be the continuous, longitudinal study outcome for patient \(i\) at time \(t\), that is only observed at certain points in time.
The type of data-generating mechanism we focus on is presented in the left panel of Fig. 1. The proposed approach is not specific to that data-generating mechanism and the novel estimator could be tailored according to various data-generating mechanisms. In Fig. 1, variables \(K_{i}(t)\) are confounders that create (open) backdoor paths from the treatment \(A_{i}(t)\) to the outcome \(Y_{i}(t)\). Associations that are not due to the causal effect of the treatment on the outcome can pass through them (Pearl, 2009). The set \(M_{i}(t)\) denotes a set of mediators for the treatment effect on the outcome. The set \(P_{i}(t)\) contains pure predictors of the outcome that potentially also affect the observation of a patient outcome \(Y_{i}(t)\).
We use a counting process to model observation times. Only the outcome process is assumed to be measured sporadically (one may think of weight or systolic blood pressure measured irregularly according to an observation scheme depending on patient characteristics such as a change in medication or smoking status). All the other variables necessary to the estimation of the marginal causal effect of treatment are assumed to be available at all times during follow-up, for each patient. While this may seem like a strong assumption, the drugs and comorbidities are typically recorded in electronic health records data anytime there is a new diagnosis or a new prescription is made or dispensed. Let \(N_{i}(t)\) be the total number of observation times of the outcome \(Y_{i}(t)\) between times 0 and \(t\) for individual \(i\). The indicator \(\mathrm{dN}_{i}(\mathrm{t})\) equals to 1 when there is a jump in the process at time \(t\), i.e., the observation of the outcome \(Y_{i}(t)\), and 0 otherwise. We denote by \(T_{i,1},...,T_{i,Q_{i}}\) the observation times of the outcome, with \(Q_{i}\) the total number of observation times of individual \(i\). The set \(V_{i}(t)\) includes all the variables causing observation times (i.e., causing \(\mathrm{dN}_{i}(\mathrm{t})\) in Fig. 1) and _also_ including all the confounders of the treatment-outcome relationship. We must include \(K_{i}(t)\) in the set \(V_{i}(t)\) for the proposed methodology to be consistent. In Fig. 1, the set \(V_{i}(t)\) contains the treatment, the mediator(s) of the treatment effect on the outcome, the confounder(s), and the pure predictor(s).
Patients are allowed to have different follow-up times \(C_{i}\), assumed to be non-informative conditional on the outcome model design matrix, an assumption denoted by \(Y_{i}(t)\perp C_{i}\mid A_{i}(t),V_{i}(t)\). Finally, \(\xi_{i}(t)=\mathbf{1}\{C_{i}>t\}\) is an indicator that patient \(i\) is still in the study at time \(t\), meaning they are not lost to follow-up.
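To make the counting-process notation concrete, here is one possible discrete-time encoding on a daily grid (the day granularity, visit days, and censoring day below are illustrative, not from the paper):

```python
def counting_process(visit_days, censor_day, tau):
    """Build the visit indicator dN_i(t), the at-risk indicator xi_i(t),
    and the cumulative visit count N_i(t) on a grid t = 0, ..., tau, from
    a patient's recorded visit days and end of follow-up C_i."""
    visits = {d for d in visit_days if d <= censor_day}
    dN = [1 if t in visits else 0 for t in range(tau + 1)]
    xi = [1 if censor_day > t else 0 for t in range(tau + 1)]
    N, total = [], 0
    for jump in dN:
        total += jump
        N.append(total)
    return dN, xi, N

# A patient observed on days 3 and 10, censored on day 14, grid up to day 20.
dN, xi, N = counting_process([3, 10], censor_day=14, tau=20)
print(sum(dN), N[-1], xi[13], xi[14])   # 2 2 1 0
```

Here \(Q_i=2\), with \(T_{i,1}=3\) and \(T_{i,2}=10\); the outcome \(Y_i(t)\) would only be recorded at the grid points where `dN` equals 1.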
### Causal Estimand
The potential outcome framework (Neyman, 1923; Rubin, 1976) is used to define our estimand. Denote by \(Y_{i}^{1}(t)\) the potential outcome of individual \(i\) at time \(t\) if they received treatment option \(1\), and \(Y_{i}^{0}(t)\) for treatment \(0\). The causal marginal effect of a binary treatment on a continuous outcome is defined as \(\beta_{1}=E\left[Y_{i}^{1}(t)-Y_{i}^{0}(t)\right]\). The treatment and the outcome are allowed to vary in time, but our interest lies in a cross-sectional effect \(\beta_{1}\) that does not vary in time.
Suppose a time discretization fine enough that there can be at most one jump in the counting process \(N(t)\) per time unit. For instance, suppose visits at the doctor's office can occur daily such that the time granularity is the day. If we had access to all potential outcomes under both treatments and at each time \(t\) (for the time granularity chosen above) for a random sample of participants of size \(n\), then we could estimate \(\beta_{1}\) straightforwardly using sample means. This is impossible in practice and sometimes referred to as the fundamental problem of causal inference (Holland, 1986).
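In that hypothetical full-information world, \(\beta_{1}\) is just a difference of sample means. A minimal sketch (the parameter values \(\beta_0=2\), \(\beta_1=1.5\) and unit-variance Gaussian noise are illustrative assumptions):

```python
import random

random.seed(1)
n = 100_000
beta0, beta1 = 2.0, 1.5          # assumed true parameters, for illustration

# Hypothetically observe BOTH potential outcomes for every subject --
# exactly what the fundamental problem of causal inference rules out.
y1 = [beta0 + beta1 + random.gauss(0, 1) for _ in range(n)]
y0 = [beta0 + random.gauss(0, 1) for _ in range(n)]

beta1_hat = sum(y1) / n - sum(y0) / n
print(f"sample-mean estimate of the causal effect: {beta1_hat:.3f}")
```

The estimate lands near 1.5; everything that follows in the paper is about recovering this quantity when only one potential outcome per subject, at irregular times, is observed.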
On the other hand, by conducting a randomized controlled trial and randomly allocating patients to one of the two treatment options, and observing patients at prespecified visit times, there is no reason why patients allocated to, e.g., treatment \(1\), would differ from the others before receiving the treatment. One could then use the marginal structural model to estimate \(\beta_{1}\):
\[E\left[Y_{i}^{a}(t)\right]=E\left[Y_{i}^{a}(t)\mid A_{i}(t)=a\right]=\beta_{0 }+\beta_{1}a. \tag{1}\]
Under this model, \(E\left[Y_{i}^{1}(t)\right]=\beta_{0}+\beta_{1}\) and \(E\left[Y_{i}^{0}(t)\right]=\beta_{0}\), and the causal contrast of interest is the parameter \(\beta_{1}\). An outcome consistency assumption (discussed in §2.3) is required to estimate \(\beta_{1}\) with the marginal structural model above, but it generally does not require any adjustment for confounding under random treatment allocation.
In observational data from electronic health records, unfortunately, data are affected by a confounding mechanism, whereby we observe the potential outcomes \(Y_{i}^{1}(t)\) in those who had greater chances of being treated with \(A_{i}(t)=1\), and the potential outcomes \(Y_{i}^{0}(t)\) in those who had greater chances of being treated with \(A_{i}(t)=0\) (as a consequence, \(E\left[Y_{i}^{a}(t)\mid A_{i}(t)=a\right]\neq E\left[Y_{i}^{a}(t)\right]\) in general). In addition to that confounding, the potential outcomes for an individual \(i\) are only observed at times \(t\) when \(\mathrm{dN_{i}(t)}=1\), which may also depend on patient characteristics. Therefore, we do not have access to all potential outcomes and require a set of causal assumptions to equate the estimand, which depends on \(Y_{i}^{a}(t)\), to equations depending on \(Y_{i}(t)\).
### Causal Assumptions
Five causal assumptions are required for consistent estimation of the causal marginal effect of treatment (Table 1). Other (non-causal) assumptions on the correct specification of the nuisance models are also required and discussed in §2.4.
Figure 1: Causal diagram illustrating the assumed data generating mechanism
The first is outcome consistency: \(Y_{i}(t)=Y_{i}^{1}(t)\) if patient \(i\) received treatment 1 at time \(t\) and \(Y_{i}(t)=Y_{i}^{0}(t)\) otherwise. In other words, the treatment is well defined, so that the observed outcome effectively corresponds to one of the two potential outcomes. An example of a consistency violation is one in which a potential outcome is defined as the outcome under the treatment _physical exercise 5 times a week_ but the actual treatment received corresponds to _having done exercise 3 times a week_. The outcome observed under that regimen would not correspond to the defined potential outcome.
We assume positivity of treatment, meaning that anyone should have a chance of receiving any of the two treatment options, and positivity of observation, such that they had a chance to have their outcome observed at any time given their characteristics. For instance, scenarios in which some patient characteristics used as predictors in the treatment model are only represented in one of the two treatment groups would violate the positivity of treatment assumption, and similarly for the observed and non-observed groups.
Finally, we assume conditional exchangeability, which includes the assumptions of 1) no unmeasured confounder, i.e., all confounders \(K_{i}(t)\) of the relationship between the treatment and the outcome are available in the analysis; and 2) conditional independence of the observation indicators, i.e., adjusting for \(V_{i}(t)\) makes the observation indicator independent of other variables in the analysis. In our setting, \(K_{i}(t)\subset V_{i}(t)\) and exchangeability is conditional on \(V_{i}(t)\).
These five assumptions allow the use of so-called _G-methods_(Naimi et al., 2017; Robins & Hernan, 2008, Ch. 23, p. 553) for consistent estimation of causal effects. The methods discussed next can be thought of as being part of a larger _G-methods_ framework.
### Novel Estimator
The conditional exchangeability can be recovered by breaking the spurious associations due to the treatment and observation mechanisms via inverse weights (marginal approach), by conditioning on the sets \(A_{i}(t),K_{i}(t),V_{i}(t)\) in a regression model for the outcome and using methods such as g-computation (Robins, 1986) (standardization approach), or by using both approaches simultaneously to make it more robust to models misspecification, which is our proposal.
Using the marginal approach corresponds to using the FIPTM estimator proposed in Coulombe et al. (2021b). It consists of a doubly-weighted least squares estimator that incorporates inverse probability of treatment weights (Horvitz & Thompson, 1952; Rosenbaum & Rubin, 1983; Rosenbaum, 1987) and inverse intensity of visit weights (Lin et al., 2004). The inverse probability of treatment weights are functions of the confounders \(K_{i}(t)\) and the inverse intensity of visit weights are functions of the visit predictors \(V_{i}(t)\). The estimator is consistent for \(\beta_{1}\) when both weights are correctly specified. A parametric model can be used to model the treatment, and the inverse probability of treatment weights be obtained as follows:
\[\mathbf{1}\{A_{i}(t)=a\}/pr\{A_{i}(t)=a\mid K_{i}(t);\psi\} \tag{2}\]
where \(pr\{A_{i}(t)=1\mid K_{i}(t);\psi\}\), the propensity score, is the probability of receiving the treatment \(1\) as a function of predictors \(K_{i}(t)\) and parameters \(\psi\)(Rosenbaum & Rubin, 1983). A logistic regression can be used to compute an estimated propensity score. The inverse intensity of visit weights, on the other hand, can be obtained from the proportional rate model:
\[E[\mathrm{d}N_{i}(t)\mid V_{i}(t);\gamma]=\xi_{i}(t)\exp\{\gamma^{\mathrm{T}}V_{i}(t)\}\lambda_{0}(t)\,\mathrm{d}t. \tag{3}\]
The baseline rate of observation \(\lambda_{0}(t)\) in (3) consists of the visit rate when all variables \(V_{i}(t)\) are set to their reference level. With the FIPTM estimator, the baseline rate can be dropped from the inverse intensity of visit weights without affecting the causal marginal effect of treatment estimate since removing it would still make the weights in (4) proportional to the intensity of being observed as a function of \(V_{i}(t)\). The inverse intensity of visit weights can also be stabilized, in which case the baseline rate cancels automatically in the weights and need not be estimated (Buzkova & Lumley, 2009). This leads to the following intensity of visit
\begin{table}
\begin{tabular}{|l|l|} \hline Assumption & Definition \\ \hline \hline Outcome consistency & \(Y_{i}(t)=A_{i}(t)Y_{i}^{1}(t)+\{1-A_{i}(t)\}Y_{i}^{0}(t)\) \\ \hline Positivity of treatment & \(0<pr\{A_{i}(t)=a\mid K_{i}(t)\}<1\) \\ \hline Positivity of observation & \(0<E[\mathrm{d}N_{i}(t)\mid V_{i}(t)]<1\) \\ \hline No unmeasured confounder & \(\{Y_{i}^{0}(t),Y_{i}^{1}(t)\}\perp A_{i}(t)\mid K_{i}(t)\) \\ \hline Conditional exchangeability & \(\{Y_{i}^{0}(t),Y_{i}^{1}(t)\}\perp A_{i}(t)\mid K_{i}(t)\) and \(\{A_{i}(t),Y_{i}^{0}(t),Y_{i}^{1}(t)\}\perp\mathrm{d}N_{i}(t)\mid V_{i}(t)\) \\ \hline \end{tabular}
\end{table}
Table 1: Causal assumptions required for the proposed estimator to be consistent
weights (we take the inverse in the estimating equations), from which \(\gamma\) parameters can be estimated using the Andersen & Gill (1982) model:
\[E[\mathrm{d}N_{i}(t)\mid V_{i}(t);\gamma]=\exp\{\gamma^{\mathrm{T}}V_{i}(t)\}. \tag{4}\]
Then, the FIPTM estimator solves the following equation:
\[E_{n}\left[\int_{0}^{\tau}\frac{\frac{\mathbf{1}\{A_{i}(t)=a\}}{pr\{A_{i}(t)=a\mid K_{i}(t);\hat{\psi}\}}Y_{i}(t)-\zeta_{i}(t;\beta_{a})}{E\{\mathrm{d}N_{i}(t)\mid V_{i}(t);\hat{\gamma}\}}\,\mathrm{d}N_{i}(t)\right]=0, \tag{5}\]
where \(\zeta_{i}(t;\beta_{a})=\beta_{0}+\beta_{1}a\) is the structural model from equation (1) and \(E_{n}\) stands for the empirical mean. A drawback of the doubly-weighted estimator is that it requires both the treatment and the observation models to be correctly specified. This is not easy in practice.
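The following single-time-point sketch illustrates why both weights are needed. It plugs in the true (known) propensity and visit probabilities rather than estimating them from models (2)-(4), and all data-generating values are invented for illustration: the doubly-weighted contrast recovers the true effect of 1.5, while the complete-case and treatment-weighted-only contrasts do not.

```python
import random

random.seed(3)
obs = []                                         # one record per recorded outcome
for _ in range(300_000):
    k = random.random() < 0.5                    # confounder (in K and in V)
    p = random.random() < 0.5                    # visit/outcome predictor (in V)
    e1 = 0.8 if k else 0.2                       # true pr(A = 1 | K)
    a = random.random() < e1
    rho = 0.9 if (a and p) else 0.3              # true visit probability, depends on V
    y = 2.0 + 1.5 * a + 2.0 * k + 2.0 * p + random.gauss(0, 1)
    if random.random() < rho:                    # outcome recorded only at visits
        obs.append((a, y, e1 if a else 1 - e1, rho))

def arm_mean(records, weight):
    num = sum(y * weight(e, r) for _, y, e, r in records)
    return num / sum(weight(e, r) for _, y, e, r in records)

arm1 = [rec for rec in obs if rec[0]]
arm0 = [rec for rec in obs if not rec[0]]
naive = arm_mean(arm1, lambda e, r: 1.0) - arm_mean(arm0, lambda e, r: 1.0)
ipt = arm_mean(arm1, lambda e, r: 1 / e) - arm_mean(arm0, lambda e, r: 1 / e)
dw = arm_mean(arm1, lambda e, r: 1 / (e * r)) - arm_mean(arm0, lambda e, r: 1 / (e * r))
print(f"complete-case {naive:.2f}, IPT only {ipt:.2f}, doubly weighted {dw:.2f}")
```

Treatment weighting alone removes the confounding through \(K\) but not the selection induced by the visit mechanism; only the double weighting by \(1/(e\cdot\rho)\) corrects both, which is exactly the structure of equation (5).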
We propose the augmented AAIIW estimator (whose acronym stands for _doubly augmented and doubly inverse weighted_) that is more flexible and allows two out of four different models (see Table 2) to be misspecified while the estimator remains consistent. The estimator uses the theory introduced in Robins et al. (1994), very well laid out in Funk et al. (2011), Tsiatis & Davidian (2007) and Cao et al. (2009), and further related to model-assisted estimation from the survey sampling field (see e.g., the discussion in Robins & Rotnitzky (1998)). Using the model-assisted estimation approach to justify the construction of the novel estimator may be more intuitive to the reader than using the semiparametric theory of influence functions (see e.g., Jiang et al. (2022) for related discussions), so we briefly discuss that framework. Roughly speaking, the estimating equations of the FIPTM estimator are to be transformed twice following the model-assisted estimation approach. Based on their work in Robins et al. (1994), Robins & Rotnitzky (1998) introduce the equation
\[N\hat{T}_{diff}(\mu)=\sum_{i=1}^{N}A_{i}\mu(X_{i})+\sum_{i:A_{i}=a}A_{i}\left\{ Y_{i}-\mu(X_{i})\right\}/\pi_{i}\]
and connect it to model-assisted estimation, where \(\hat{T}_{diff}(\mu)\) is a design-based standard difference estimator for the parameter \(E[A_{i}Y_{i}]\) of interest, and \(\mu(X)\) is a function of \(X\). Theorem 1 in Robins & Rotnitzky (1998) implies that the class of such estimators \(\hat{T}_{diff}(\mu)\) contains all the semiparametric estimators and that evaluating the estimator at \(\mu_{eff}(x)=E[Y_{i}\mid X_{i}=x]\) (in their notation) yields the smallest possible asymptotic variance. We use that construction twice, starting first with \(A_{i}=\mathbf{1}\{A_{i}(t)=a\}\), \(\pi_{i}=pr\{A_{i}(t)=a\mid K_{i}(t)\}\) and "\(\mu(X_{i})\)" \(=E\left[Y_{i}(t)\mid A_{i}(t)=a,K_{i}(t)\right]\) using their notation. Once this projection is obtained, we use the strategy a second time, with the "\(Y_{i}\)" now corresponding to the previous expression obtained, and with "\(A_{i}\)" \(=\mathrm{dN_{i}(t)}\), "\(\pi_{i}\)" \(=pr\{\mathrm{dN_{i}(t)}=1\mid\mathrm{V_{i}(t)}\}\) and a novel "\(\mu(X_{i})\)" function corresponding to the expectation of the previous term before augmentation (in equation (6) below, this expectation corresponds to \(E[\eta_{i}(t)\mid A_{i}(t)=a,K_{i}(t),V_{i}(t)]\)). By construction, the novel proposed estimator is the most efficient among its class of semiparametric estimators, which also includes the FIPTM estimator.
The correspondence between the model-assisted estimation approach and the geometry of influence functions is in that the novel estimator's influence function corresponds to sequential projections of the FIPTM's influence function onto spaces orthogonal to the residuals from the weight models and orthogonal to projections due to the outcome mean models onto the spaces of \(K_{i}(t)\) and \(V_{i}(t)\). The novel estimator is obtained by solving the following estimating equations, which are augmented versions of the equations in (5):
\[E_{n}\left[\int_{0}^{\tau}\frac{\eta_{i}(t)}{E\{\mathrm{d}N_{i}(t)\mid V_{i}(t);\hat{\gamma}\}}\,\mathrm{d}N_{i}(t)\right]-E_{n}\left[\int_{0}^{\tau}\frac{\mathrm{d}M_{i}(t)\,E\{\eta_{i}(t)\mid A_{i}(t)=a,K_{i}(t),V_{i}(t)\}}{E\{\mathrm{d}N_{i}(t)\mid V_{i}(t);\hat{\gamma}\}}\right]=0, \tag{6}\]
where the nuisance terms are estimated using parametric models, with
\[\eta_{i}(t)=\frac{\mathbf{1}\{A_{i}(t)=a\}}{pr\{A_{i}(t)=a\mid K_{i}(t);\hat{\psi}\}}Y_{i}(t)-\frac{\mathbf{1}\{A_{i}(t)=a\}-pr\{A_{i}(t)=a\mid K_{i}(t);\hat{\psi}\}}{pr\{A_{i}(t)=a\mid K_{i}(t);\hat{\psi}\}}\mu_{a}\{K_{i}(t);\hat{\alpha}_{K}\}-\zeta_{i}(t;\beta_{a}),\]
with \(\mathrm{d}M_{i}(t)=\mathrm{d}N_{i}(t)-\xi_{i}(t)\exp\{\hat{\gamma}^{\mathrm{T}}V_{i}(t)\}\hat{\lambda}_{0}(t)\,\mathrm{d}t\) the martingale residual for the observation process. The conditional outcome mean models in the augmented terms are \(\mu_{a}\left\{K_{i}(t);\alpha_{K}\right\}=E[Y_{i}(t)\mid A_{i}(t)=a,K_{i}(t);\alpha_{K}]\) and \(\mu_{a}\left\{V_{i}(t);\alpha_{V}\right\}=E[Y_{i}(t)\mid A_{i}(t)=a,V_{i}(t);\alpha_{V}]\). The latter model arises when taking the expectation \(E\left[Y_{i}(t)\mid A_{i}(t)=a,V_{i}(t)\right]\) in the term \(E\{\eta_{i}(t)\mid A_{i}(t)=a,K_{i}(t),V_{i}(t)\}\) in equation (6)
(more details are found in Supplementary Material A and B, discussed later). For the novel estimator, the baseline rate \(\lambda_{0}(t)\) in (3) must be estimated before calculating the inverse intensity of visit weights. The inverse intensity of visit weights in the equations for the AAIIW are the inverse of \(E[\mathrm{d}N_{i}(t)\mid V_{i}(t);\hat{\gamma}]=\hat{\lambda}_{0}(t)\exp\{\hat{\gamma}^{\mathrm{T}}V_{i}(t)\}\). We use the following Breslow's estimator:
\[\hat{\lambda}_{0}(t)=\frac{\sum_{i=1}^{n}\mathbf{1}\{\mathrm{d}N_{i}(t)=1\}}{\sum_{i=1}^{n}\xi_{i}(t)\exp\left\{\hat{\gamma}^{\mathrm{T}}V_{i}(t)\right\}},\]
(Cox, 1972). Table 2 shows the possible combinations of correctly specified models leading to a consistent AAIIW estimator. For the estimator to be unbiased, at least one of the two models related to confounders (either the treatment or the outcome mean model conditional on the confounders) and at least one of the two models related to the observation predictors (either the observation or the outcome mean model conditional on the observation predictors) must be correctly specified. Correct specification for a nuisance model requires that the corresponding data-generating mechanism can be modelled parametrically and that there exists a true set of parameters leading to the actual data-generating mechanism that we can estimate consistently (we denote the true sets by \(\psi_{0}\), \(\gamma_{0}\), \(\alpha_{K0}\) and \(\alpha_{V0}\) for the treatment, observation, and two conditional mean outcome models, respectively). For instance, for the treatment model, it means that the data generating mechanism \(pr\{A_{i}(t)=a\mid K_{i}(t)\}=pr\{A_{i}(t)=a\mid K_{i}(t);\psi_{0}\}\), that we can model this data generating mechanism using the correct functional formats for covariates \(K_{i}(t)\) in the model, and that estimators \(\hat{\psi}\) converge in probability to the true parameters \(\psi_{0}\).
Naturally, the AAIIW estimator is unbiased under specific combinations of two correctly specified models (Table 2, see the proof of multiple robustness in Supplementary Material A) _but also_ when three out of the four models in Table 2 are correctly specified, or when all models are correctly specified. Thus, using the same nuisance models as those used in the FIPTM estimator, if they are correctly specified, leads to an unbiased estimator, but the advantage of the AAIIW estimator is that it also has several other opportunities to be unbiased.
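The robustness mechanism is easiest to see in a stripped-down, single-time-point analogue that augments only the treatment side (the full AAIIW augments the observation process as well). In this sketch all data-generating values are invented for illustration, and either working model may be wrong while the estimate of \(E[Y^{1}]\) stays on target:

```python
import random

random.seed(4)
n = 200_000
rows = []
for _ in range(n):
    k = 1.0 if random.random() < 0.5 else 0.0    # confounder
    e = 0.8 if k else 0.2                        # true pr(A = 1 | K)
    a = random.random() < e
    y = 2.0 + 1.5 * a + 2.0 * k + random.gauss(0, 1)
    rows.append((k, a, y))

def aug_mean(prop, outmodel):
    """Augmented IPW estimate of E[Y^1]; prop(k) and outmodel(k) are
    working models for pr(A=1 | K) and E[Y | A=1, K]."""
    tot = 0.0
    for k, a, y in rows:
        ind = 1.0 if a else 0.0
        p = prop(k)
        tot += ind * y / p - (ind - p) / p * outmodel(k)
    return tot / n

true_prop = lambda k: 0.8 if k else 0.2
true_mu1 = lambda k: 3.5 + 2.0 * k               # true E[Y | A=1, K]
wrong_prop = lambda k: 0.5                       # misspecified propensity
wrong_mu = lambda k: 0.0                         # misspecified outcome model

# Either nuisance model may be wrong -- both estimates land near E[Y^1] = 4.5.
est_a = aug_mean(true_prop, wrong_mu)    # propensity right, outcome wrong
est_b = aug_mean(wrong_prop, true_mu1)   # outcome right, propensity wrong
print(f"{est_a:.2f} {est_b:.2f}")
```

The AAIIW applies this same augmentation idea a second time to the visit process, which is what turns the pairwise (double) robustness above into the quadruple robustness of Table 2.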
In practice, the AAIIW estimator can be computed by first estimating the parameters of all nuisance models (using a logistic regression, a proportional rate model fitted with, e.g., coxph in R, and two linear models for the outcome means conditional on \(K_{i}(t)\) or \(V_{i}(t)\)). The estimates can be plugged into the estimating equations of the AAIIW. A root solver such as uniroot in R can then be used to estimate \(\beta_{0}\), and that estimate can be further plugged into the second estimating equation, which is solved for \(\beta_{1}\).
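As a rough stand-in for that two-step recipe, the following sketch solves a simplified inverse-probability-of-treatment estimating equation sequentially, with a stdlib bisection routine playing the role of uniroot (the paper's equations additionally carry the visit weights and augmentation terms, and the propensity here is known rather than estimated; all simulated values are illustrative):

```python
import random

def bisect_root(g, lo, hi, tol=1e-8):
    """Minimal stand-in for R's uniroot: bisection for a monotone g
    with a sign change on [lo, hi]."""
    glo = g(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        gm = g(mid)
        if (glo > 0) == (gm > 0):
            lo, glo = mid, gm
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(5)
rows = []
for _ in range(50_000):
    k = random.random() < 0.5
    e1 = 0.8 if k else 0.2                       # known pr(A = 1 | K)
    a = random.random() < e1
    y = 2.0 + 1.5 * a + 2.0 * k + random.gauss(0, 1)
    rows.append((a, y, e1))
arm0 = [(y, 1 - e1) for a, y, e1 in rows if not a]
arm1 = [(y, e1) for a, y, e1 in rows if a]

# Step 1: solve the arm-0 estimating equation for beta0; step 2: plug
# beta0 into the arm-1 equation and solve for beta1.
beta0 = bisect_root(lambda b: sum((y - b) / e for y, e in arm0), -50, 50)
beta1 = bisect_root(lambda b: sum((y - beta0 - b) / e for y, e in arm1), -50, 50)
print(f"beta0 ~ {beta0:.2f}, beta1 ~ {beta1:.2f}")   # near 3.0 and 1.5
```

Because each estimating function is monotone in its parameter, bisection on a wide bracket is enough; in the full AAIIW the same sequential root-solving applies with the augmented equations (6).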
### Efficiency and Asymptotic Properties
The asymptotic variance of the AAIIW estimator can be derived using theory on two-step estimators (Newey & McFadden, 1994) or deriving the variance of its influence function (Tsiatis, 2006). We use the latter approach. Derivations are shown in Supplementary Material B, in which we also show that the AAIIW estimator is more efficient than the FIPTM estimator when all nuisance models are correctly specified for both estimators. We find that under correct nuisance models specification, the FIPTM asymptotic variance equals to
\[\sigma^{2}_{FIPTM}=E\left[\frac{\{Y_{i}^{1}(t)-\mu_{1}\}^{2}}{\rho\{V_{i}(t) \}e_{1}\{K_{i}(t)\}}\right]+E\left[\frac{\{Y_{i}^{0}(t)-\mu_{0}\}^{2}}{\rho\{ V_{i}(t)\}e_{0}\{K_{i}(t)\}}\right],\]
where \(\mu_{a}=E[Y_{i}^{a}(t)]\), \(\rho\{V_{i}(t)\}=E[\mathrm{dN_{i}(t)}\mid\mathrm{V_{i}(t)};\gamma_{0}]\), and \(e_{a}\{K_{i}(t)\}=pr\{A_{i}(t)=a\mid K_{i}(t);\psi_{0}\}\). The augmented AAIIW estimator is more efficient, with asymptotic variance
\[\sigma^{2}_{AAIIW}=\sigma^{2}_{FIPTM}-\mu_{1}^{2}E\left[\frac{1+e_{1}\{K_{i}( t)\}}{e_{1}\{K_{i}(t)\}}\right]-\mu_{0}^{2}E\left[\frac{1+e_{0}\{K_{i}(t)\}}{e_{0} \{K_{i}(t)\}}\right].\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Scenario & \(pr\{A_{i}(t)=a\mid K_{i}(t);\psi\}\) & \(\mu_{a}\{K_{i}(t);\alpha_{K}\}\) & \(E[\mathrm{d}N_{i}(t)\mid V_{i}(t);\gamma]\) & \(\mu_{a}\{V_{i}(t);\alpha_{V}\}\) \\ \hline \hline (a) & ✓ & X & ✓ & X \\ \hline (b) & X & ✓ & X & ✓ \\ \hline (c) & X & ✓ & ✓ & X \\ \hline (d) & ✓ & X & X & ✓ \\ \hline \end{tabular}
\end{table}
Table 2: Quadruple robustness of AAIIW: AAIIW is consistent under Scenarios (a)–(d); ✓ means correctly specified and X means no requirement
Furthermore, the theory of influence functions can be used to show the asymptotic normality of the estimator. The AAIIW estimator is consistent for the true causal effect of the binary treatment, and its limiting distribution is normal, centred at \(\beta_{1}\).
## 3 Simulation study
### Comparators
We compared four different estimators in large simulation studies: an ordinary least squares estimator that does not account for confounding or informative observation (OLS); an inverse probability of treatment-weighted estimator that accounts for the confounding process with a propensity score that is either correctly specified (IPT\({}_{c}\)) or not correct (IPT\({}_{nc}\)); a doubly-weighted estimator that corresponds to the FIPTM estimator from Coulombe et al. (2021) with either both the inverse intensity of visit and inverse probability of treatment weights correctly specified (DW\({}_{c}\)), only the inverse probability of treatment weights correctly specified (DW\({}_{iptc}\)), only the inverse intensity of visit weights correctly specified (DW\({}_{iuc}\)), or both weights misspecified (DW\({}_{nc}\)); and the novel AAIIW estimator with either all four nuisance models correctly specified (AAIIW\({}_{c}\)), both weight models correctly specified and both conditional outcome models misspecified (AAIIW\({}_{s.a}\), with s.a referring to _scenario a_ in Table 2), both conditional outcome models correctly specified and both weights misspecified (AAIIW\({}_{s.b}\)), the inverse intensity of visit weights and the outcome model conditional on confounders being the only correctly specified models (AAIIW\({}_{s.c}\)), and the inverse probability of treatment weights and the outcome model conditional on observation predictors being the only correctly specified models (AAIIW\({}_{s.d}\)).
The data-generating mechanism was strongly inspired by similar simulation studies presented in Buzkova and Lumley (2009); Coulombe et al. (2021, 2022) and is described in much more detail in Supplementary Material C. The data-generating mechanism included a set of confounders at baseline repeated through follow-up, a time-varying binary treatment, a set of observation predictors that varied in time, and irregular observation of the outcome. The main results for 1000 simulations using a nonhomogeneous Poisson rate to simulate the observation times of the outcome and a sample of size 1000 are presented in the following section. In another set of simulations, we replaced the nonhomogeneous Poisson rate with a nonhomogeneous Bernoulli probability for the observation indicator and used a logistic regression instead of the Andersen and Gill model to fit the probability of observation at each time point. These results are presented in Supplementary Material D (Suppl. Fig. 2 and 3), along with the results under a sample of size 250 instead of 1000 (Suppl. Fig. 1) and all Monte Carlo biases and mean square errors (Suppl. Table 1). In both settings, using either the Poisson rate or the Bernoulli probability, we tested four different sets of \(\gamma\) parameters in the observation model, including one set of zeros (which we call "set 1" in the results) corresponding to uninformative observation.
### Results
The distributions of 1000 estimates obtained with each estimator using a sample of size 1000 patients are presented in Fig. 2.
The results are as expected. First, the ordinary least squares estimator is strongly biased in all \(\gamma\) parameter settings 1) to 4). In scenario 1), in which we expected no bias due to the visit process, the inverse probability of treatment-weighted estimator IPT\({}_{c}\) is empirically unbiased, while the inverse probability of treatment-weighted estimator using a wrong treatment model, IPT\({}_{nc}\), exhibits bias. In scenarios 2) to 4), both inverse probability of treatment-weighted estimators are biased since they do not properly account for the outcome observation process.
The doubly-weighted estimator DWc is consistently unbiased as it accounts properly for both types of bias. When the visit process is uninformative, in scenario 1) for the observation process, it is also unbiased even when the inverse intensity of visit weights are not correctly specified, as long as the inverse probability of treatment weights are correctly specified. In scenarios 2) to 4), the doubly-weighted estimator is only unbiased in settings in which its two weight models are correctly specified.
The multiply robust AAIIW estimator is empirically unbiased in all scenarios 1) to 4) for the observation process, whenever one of the four combinations of correctly specified models shown in Table 2 holds or when all four nuisance models are correctly specified. It exhibits particularly small variance when the two conditional outcome mean models are correctly specified (scenario b from Table 2) or, as expected, when all four models are correctly specified.
Results for the second set of simulations using the Bernoulli probability to simulate the observations, and those for a sample of size 250, are presented in Supplementary Material D. As expected, the estimators are more variable when using a sample size of 250, although the same patterns in the comparison of estimators are observed (Suppl. Fig. 1). Similar results are observed when using the Bernoulli probability instead of the Poisson rate for the simulation of observation indicators (Suppl. Fig. 2 and 3). The simulations using the Bernoulli probability did not require the use of Breslow's estimator for the baseline rate, which would partly explain the smaller variances observed overall (e.g., compare Fig. 2 and Suppl. Fig. 2).
## 4 Motivating Example
We applied the proposed AAIIW estimator and several more naive comparators to data from the _Add Health_ study in the United States (Harris, 2009, 2013; Harris and Udry, 2022). It is a longitudinal study with multiple waves. The study started in 1994, when a representative pool of adolescents from the United States was selected. These adolescents grew up to become adults during the study. At each wave, they were asked to fill out questionnaires.
We have access to data from the first four waves of the _Add Health_ study, corresponding to the years 1994-1995, 1996, 2001-2002 and 2008-2009 respectively. Although these data do not consist of electronic health records, and adolescents in the _Add Health_ study were observed at pre-specified observation times, they tended not to be observed at each wave for all variables measured in the study, and one can consider the four waves as four consecutive time points at which a visit or an observation may or may not occur, just like in medical records. Various types of information such as demographics, health status, nutrition, family dynamics, sexual activity and substance use were collected for the study. Some questions varied across the four waves and we focused on a causal research question for which we have data at all four waves. Our goal was to estimate the marginal causal effect of counselling (psychotherapy) on alcohol consumption. We think that the effect of counselling on alcohol consumption is mediated by the depressive mood of adolescents: their mood can be affected by counselling and may in its turn affect alcohol consumption (see the assumed data generating mechanism in Fig. 1b). Two important challenges we wished to consider in the analysis are the irregular observation of the outcome and, because the study is observational, the potential confounding of the psychotherapy-alcohol consumption relationship.
We selected potential confounders for that relationship, which included the teen's age, sex, socioeconomic status, weight in pounds, and whether they smoked at least once in the previous month. The socioeconomic status was computed by summing two variables that we transformed, namely the parents' total income in 1994 before taxes and one of the parents' education, usually that of the resident mother (Harris, 2009). The parent's total income was transformed into quintiles (1 to 5 with 5 being the highest). One of the parents' education was categorized in 5 levels corresponding to _1- 8th grade or less or never went to school_, _2-more than 8th grade but did not graduate high school_, _3-went to a business, trade or vocational school instead of high school, high school graduate, or completed a general educational development program_, _4-went to a business, trade or vocational school after high school, went to college but did not graduate_, or _5-graduated from college or university, or professional training beyond a 4-year college or university training_. Socioeconomic status was defined as the sum of the two transformed 5-category variables.
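As a concrete illustration, the socioeconomic status score described above can be sketched in code; the function name and the income quintile cut points below are ours for illustration, not values from the study:

```python
def ses_score(income_1994, education_level, income_quintile_cuts):
    """Socioeconomic status = parents' 1994 income mapped to quintiles
    (1-5) plus the parent's 5-level education category, so the score
    ranges from 2 to 10.  `income_quintile_cuts` holds the four sample
    quintile cut points (hypothetical; derived from the data in practice)."""
    assert 1 <= education_level <= 5
    income_quintile = 1 + sum(income_1994 > cut for cut in income_quintile_cuts)
    return income_quintile + education_level
```

For example, an income below the first cut point combined with a college-graduate parent (education level 5) yields a score of 1 + 5 = 6.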
The analysis dataset contained several missing values. Unless we had enough information in the dataset to replace missing values in the variables age and sex (e.g., if age was measured at a previous wave and could be used for extrapolation), we used multiple imputation by chained equations (Rubin, 1988; Azur et al., 2011) to impute missing values in these variables as well as in socioeconomic status, smoking status, weight, depressive mood, and the exposure to counselling. We used all of these variables as predictors when imputing each variable, using fully conditional specification. The alcohol outcome was kept as missing when it was not measured.
The outcome was defined using the question _Think of all the times you had a drink during the past 12 months. How many drinks did you usually have each time?_. It consisted of a self-assessed number of drinks the adolescent would consume, on average, each time they consumed alcohol. It ranged from 0 to 90. In this application, the outcome tended to be assessed at each of the four waves (i.e., not irregularly and with most data not being missing in the outcome). To assess the advantage of our approach, we simulated missingness in the outcome and assessed the different estimators in that setting, knowing the true underlying missingness mechanism.
Figure 2: Results of the simulation studies with a sample size of 1000 using a nonhomogeneous Poisson rate to simulate the observation indicators and the Andersen and Gill model with Breslow estimator to estimate the inverse intensity of visit weights. Each boxplot represents the distribution of 1000 estimates for the corresponding estimator. The dashed line represents the gold standard, i.e., the true value for the marginal effect of exposure, which equals 1. Different strengths of the visit process on covariates are represented with scenarios 1) \(\gamma=(0,0,0,0,0,-5)\) (i.e., no bias due to the visit process expected); 2) \(\gamma=(0.5,0.3,-0.5,-2,0,-3)\); 3) \(\gamma=(0.5,-0.5,-0.2,-1,1,-3)\); and 4) \(\gamma=(-1,-0.8,0.1,0.3,-1,-3)\).

Assuming that all potential confounders as well as the mediator (depressive mood) and the exposure (counselling) affect the chance of observing the alcohol consumption outcome, the outcome observation (i.e., the opposite of missingness) was simulated using the following model across the four waves:
\[E[\mathrm{d}N_{i}(t)\mid\text{age, sex, counselling, depressive mood, socioeconomic status}]=\text{expit}\{18-0.3\,\text{age}(t)+0.8\,\text{sex}+1.8\,\text{counselling}(t)-3\,\text{depressive mood}(t)-1.3\,\text{socioeconomic status}\}\]

for \(t\in\{1,2,3,4\}\), where \(\text{expit}(\cdot)=\exp(\cdot)/\left\{1+\exp(\cdot)\right\}\).
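The simulated observation mechanism can be evaluated directly in code; the sketch below computes the observation probability for one person-wave under the model above (variable names are ours):

```python
import math

def expit(x):
    """expit(x) = exp(x) / (1 + exp(x))."""
    return math.exp(x) / (1.0 + math.exp(x))

def observation_prob(age, sex, counselling, depressive_mood, ses):
    """Probability that the alcohol outcome is observed at a wave under
    the simulated missingness model above (sex and counselling coded
    0/1, depressive mood on its study scale, ses the 2-10 score)."""
    linear_predictor = (18.0 - 0.3 * age + 0.8 * sex + 1.8 * counselling
                        - 3.0 * depressive_mood - 1.3 * ses)
    return expit(linear_predictor)
```

Under this model, receiving counselling raises, and a higher depressive mood lowers, the chance that the outcome is observed, which is what makes the observation process informative.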
We conducted the analysis and fit a propensity score model as a function of age, sex, weight, socioeconomic status and smoking. We fit two different proportional rate models for the observation of the outcome, one correctly specified (as a function of age, sex, counselling, depressive mood, and socioeconomic status) and one that was not correctly specified (as a function of the sine of age and the depressive mood, therefore not using the correct functional form for the age variable and missing some important variables in the model). The estimators compared in the application are a standard ordinary least squares estimator (not adjusted for confounding nor for the observation process), an inverse intensity of visit-weighted estimator that does not account for confounding but does account for the observation process (we tested the two sets of the inverse intensity of visit weights), a doubly-weighted estimator corresponding to the FIPTM estimator (incorporating the inverse probability of treatment weights based on our assumptions on the potential confounders, and inverse intensity of visit weights - we tested the two sets of the inverse intensity of visit weights here again), and the AAIIW estimator in which we incorporated the inverse probability of treatment weights and the two different sets of the inverse intensity of visit weights, one at a time.
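To make the weighting scheme concrete, a minimal numpy sketch of a doubly-weighted marginal-effect estimate follows: each observed outcome is weighted by the product of its inverse probability of treatment weight and its inverse intensity of visit weight. This is an illustration of the idea, not the exact FIPTM implementation, and the function and variable names are ours:

```python
import numpy as np

def doubly_weighted_effect(y, a, ipt_w, iiv_w):
    """Weighted least squares fit of y on (1, a) with weights
    w_i = ipt_w_i * iiv_w_i; returns the coefficient on the binary
    treatment a, i.e. the estimated marginal treatment effect."""
    X = np.column_stack([np.ones(len(a)), np.asarray(a, dtype=float)])
    w = np.asarray(ipt_w, dtype=float) * np.asarray(iiv_w, dtype=float)
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[1]
```

With all weights equal to one this reduces to the ordinary least squares estimator; with correctly specified weights it reweights the observed person-waves to mimic a population with neither confounding nor informative observation.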
We found an important difference in the proportions of males and females across the two exposure groups, with females reporting higher rates of counselling, \(14\%\) more smokers in the counselling group than in the other group, and greater depressive mood in those receiving counselling (Supplementary Material E, Suppl. Table 2). These variables were therefore considered potential confounders in our analysis, except for the mediator depressive mood, which should not be conditioned upon in the confounding set. After inverse probability of treatment weighting, the two exposure groups are similar with respect to all potential confounders (Suppl. Table 2).
In the outcome observation model, we found modest differences in the depressive mood and counselling rates between those for whom the alcohol consumption was observed and the others (Supplementary Material E, Suppl. Table 3). There was also a slight difference in age means and female sex proportions, as expected (since we simulated the missingness mechanism ourselves). After inverse intensity of visit weighting, most differences vanished, with age means, depressive mood means, and female proportions that are much closer (Suppl. Table 3).
In this application using data from the _Add Health_ study, both the adjustment for confounding and the one for outcome missingness bring the estimates for the marginal effect of exposure to counselling towards the null. For instance, the inverse probability of treatment-weighted estimator is closer to \(0\) than the standard ordinary least squares estimator that does not adjust for confounding or the observation process. The estimator using the correctly adjusted inverse intensity of visit weights (IIV\({}^{\dagger}\)) slightly brings the estimate towards the null when compared with using the wrong inverse intensity of visit weights (IIV\({}^{\ddagger}\)). Combining both adjustments, the estimate for the causal marginal effect of counselling on alcohol consumption goes from \(0.62\) (\(95\%\) confidence interval \(0.39\), \(0.75\)) with no adjustment at all, to \(0.35\) (\(0.12\), \(0.51\)) when using the correct inverse intensity of visit weights (and a propensity score based on our assumptions) in the multiply robust AAIIW estimator.
Of most interest is the comparison of the FIPTM and the AAIIW estimators in this application (Table 3). Using a wrong model for the calculation of inverse intensity of visit weights leads to an estimate of \(0.46\) (\(95\%\) confidence interval \(0.24\), \(0.67\)) with the FIPTM and \(0.28\) (\(95\%\) confidence interval \(0.04\), \(0.54\)) with the AAIIW estimator. Both estimators lead to similar estimates when using the correctly specified inverse intensity of visit weights. These results indicate that in a setting in which the true observation mechanism is unknown, the AAIIW estimator may still yield an estimate of the causal effect closer to the null (around \(0.2\) or \(0.3\) here), whereas the FIPTM risks being biased when its weights are not correctly modelled. This suggests that the true effect is probably closer to \(0.2\) or \(0.3\) in this application (and indeed, the \(95\%\) confidence interval for the AAIIW estimator using the wrong inverse intensity of visit weights, which may be protected against bias since its outcome mean model conditional on the observation predictors has a chance of being correctly specified, has a lower bound close to 0).

Table 3: Estimates (95% bootstrap percentile confidence intervals) of the marginal effect of counselling on the average number of alcoholic beverages consumed, _Add Health_ study, United States, 1996-2008

| OLS | IPT\({}^{\diamond}\) | IIV\({}^{\dagger}\) | IIV\({}^{\ddagger}\) |
| --- | --- | --- | --- |
| 0.62 (0.39, 0.75) | 0.34 (0.15, 0.48) | 0.64 (0.40, 0.77) | 0.72 (0.49, 0.92) |

| FIPTM\({}^{\diamond,\dagger}\) | FIPTM\({}^{\diamond,\ddagger}\) | AAIIW\({}^{\diamond,\dagger}\) | AAIIW\({}^{\diamond,\ddagger}\) |
| --- | --- | --- | --- |
| 0.35 (0.12, 0.50) | 0.46 (0.24, 0.67) | 0.35 (0.12, 0.51) | 0.28 (0.04, 0.54) |
## 5 Discussion
This work proposed the first quadruply robust estimator that is consistent when only two out of four nuisance models (the treatment, the visit, and two conditional outcome mean models) are correctly specified, as long as the combination of correctly specified models is one shown in Table 2. The proposed estimator is particularly useful for observational settings subject to confounding and irregular observation times when researchers suspect they can specify correctly at least one of two models related to confounders (either the prescription mechanism, or the outcome biological mechanism), and one of two models related to the outcome observation (either the observation of the outcome or the outcome biological mechanism). The estimator also allows mediators and other variables that could be on the causal path from the treatment to the outcome to affect the observation model and yet be adjusted for.
In addition to being more robust than its only alternative, the FIPTM, which is a doubly-weighted estimator, the AAIIW estimator is also the most efficient estimator in its class of semiparametric estimators. In simulation studies, the AAIIW was demonstrated to be robust and empirically as efficient as the FIPTM when the two weight models are correctly specified, but much more efficient in other scenarios (such as when all four models used in its estimating equations are correctly specified or when the two outcome mean models conditional on confounders or observation predictors are correctly specified).
In an application to the _Add Health_ study in the United States, we have found a difference between more naive estimators and the multiply robust AAIIW estimator in the estimation of the causal marginal effect of therapy counselling on alcohol consumption (e.g., a causal effect of 0.35 more drinks with counselling therapy (95% confidence interval 0.12, 0.51) versus 0.62 more drinks (95% confidence interval 0.39, 0.75) with the most naive estimator). It is possible that unmeasured confounding remains and that a better adjustment would have brought these estimates even closer to the null. Sensitivity analyses could be used to assess the effect of unmeasured confounding or visit predictors that were not accounted properly in the estimator (see Smith et al. (2022) for informative observation or see e.g., Schneeweiss (2006); VanderWeele & Arab (2011) for unmeasured confounding).
The consistency of our proposed estimator relies on specific combinations of correctly specified nuisance models listed in Table 2 and some classical causal assumptions mentioned in §2. An analyst using the novel approach, in collaboration with an expert from the substantive research field, should identify the confounders of the relationship between the exposure and outcome and the outcome observation predictors at risk of creating spurious associations between the exposure and the outcome (conditional exchangeability assumption). The use of a causal diagram can help in depicting the open paths by which dependencies that are not due to causal effects arise. It is not enough to include these variables in the weight models (inverse probability of treatment or inverse intensity of visit weights) or in the conditional outcome mean models discussed in this manuscript: a model that is correctly specified implies that the functional form of the predictors in the model is correctly specified. Second, the analyst should ensure that the observations included in the analysis meet the two positivity assumptions for the treatment and the observation models. They might decide to remove from the analysis a patient with a set of characteristics that are only represented in one of the two exposure groups (otherwise, the positivity of treatment assumption may be violated) or with a set of characteristics that are only represented when the outcome is not observed (otherwise violating the positivity of observation assumption). Third, the causal assumption of consistency of the outcome must be met. In our application to the _Add Health_ study, for instance, we assume that the outcome observed in those who claimed to have received counselling is truly equal to their potential outcome under counselling, and vice-versa. Measurement errors, or patients not having filled in the information properly or truthfully, could alter the estimates.
Interesting future avenues of work include the use of more flexible methods, such as machine learning methods, to correctly model the outcome mean as a function of confounders or observation predictors. The framework of targeted maximum likelihood estimation (Van Der Laan & Rubin, 2006; Schuler & Rose, 2017) could be used for that purpose. Another advantage of the proposed estimator is that it is based on
the general framework of generalized estimating equations (Zeger & Liang, 1986) and could therefore be extended straightforwardly to account for other types of outcomes (e.g., binary outcomes). The extension to categorical or continuous exposure is also possible.
## Acknowledgement
We thank Professor Marie Davidian at North Carolina State University for the enriching discussions we had as part of a virtual research internship of author JC in the Department of Statistics at North Carolina State University in 2021-2022.
This research uses data from Add Health, funded by grant P01 HD31921 (Harris) from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), with cooperative funding from 23 other federal agencies and foundations. Add Health is currently directed by Robert A. Hummer and funded by the National Institute on Aging cooperative agreements U01 AG071448 (Hummer) and U01AG071450 (Aiello and Hummer) at the University of North Carolina at Chapel Hill. Add Health was designed by J. Richard Udry, Peter S. Bearman, and Kathleen Mullan Harris at the University of North Carolina at Chapel Hill.
## Supplementary material
Supplementary material available at the end of this document includes:
* Supplementary material A: Proof of the multiple robustness of the novel estimator under the different scenarios presented in Table 2.
* Supplementary material B: Sketch-proof for the efficiency of the AAIIW estimator.
* Supplementary material C: Details of the simulation studies.
* Supplementary material D: Results of the simulation studies for a sample size of 250 with the non-homogeneous Poisson rate for the observation indicators, for sample sizes 250 or 1000 using the Bernoulli probability for the observation of the outcome, and Monte Carlo bias and mean square error of each estimator, in all scenarios tested.
* Supplementary material E: Tables of characteristics in the _Add Health_ study stratified by two weighting strategies.
---

2307.00653 | Neuro-Symbolic Sudoku Solver | Ashutosh Hathidara, Lalit Pandey | 2023-07-02T20:04:01Z | http://arxiv.org/abs/2307.00653v1

# Neuro-Symbolic Sudoku Solver
###### Abstract
Deep Neural Networks have achieved great success in some of the complex tasks that humans can do with ease. These include image recognition/classification, natural language processing, game playing etc. However, modern Neural Networks fail or perform poorly when trained on tasks that can be solved easily using backtracking and traditional algorithms. Therefore, we use the architecture of the Neuro Logic Machine (NLM) [5] and extend its functionality to solve a 9X9 game of Sudoku. To expand the application of NLMs, we generate a random grid of cells from a dataset of solved games and assign up to 10 new empty cells. The goal of the game is then to find a target value ranging from 1 to 9 and fill in the remaining empty cells while maintaining a valid configuration. In our study, we showcase an NLM which is capable of obtaining 100% accuracy for solving a Sudoku with empty cells ranging from 3 to 10. The purpose of this study is to demonstrate that NLMs can also be used for solving complex problems and games like Sudoku. We also analyse the behaviour of NLMs with a backtracking algorithm by comparing the convergence time using a graph plot on the same problem. With this study we show that Neural Logic Machines can be trained on the tasks that traditional Deep Learning architectures fail using Reinforcement Learning. We also aim to propose the importance of symbolic learning in explaining the systematicity in the hybrid model of NLMs.
Neural Logic Machines · Symbolic Learning · Neuro-Symbolic Sudoku Solver · Neural Networks · Deep Reinforcement Learning
## 1 Introduction
The groundbreaking results of modern deep learning models have proved that they are ideal tools to solve complex problems; however, the lack of systematicity in these models has been a problem for some time. Recent research has addressed this issue by generating hybrid models that combine Neural Networks with Symbolic Learning. In the past, researchers have attempted to create hybrid models for tasks such as Language Translation [1], Curriculum Learning [2], Learnable Recursion Logic [3] and the synthesis of complex logic based on Input-Output example pairs [4]. By testing one such model, called the Neural Logic Machine [5], we emphasise the relevance of symbolic learning in solving complex problems on which modern deep learning methods may fail. More specifically, we validate the NLM model on a different mathematical problem to realize its true potential as well as analyse its performance. Importantly, NLMs can utilize the knowledge gained from generated rules to achieve perfect generalization in several tasks. Therefore, we also aimed to test the NLM's ability to recover these lifted rules and apply them in the later stages of curriculum learning, when the complexity of the problem rises. To accomplish this, we gradually increased the number of empty cells in the grid while training.
We test the architecture of Neural Logic Machines [5] for solving a complex puzzle called Sudoku using our own set of predicates as input. In this experiment, we closely analyse the performance of this model and compare it with traditional algorithms on the same problem. To perform this experiment, we completed three main tasks. First, we trained
the NLM on sudoku grids with pre-defined empty cells, the number of which increased as training progressed. This approach, where the complexity of the problem increases over training, is known as curriculum learning. Secondly, we used symbolic learning with reinforcement rewards to reward the model every time a valid configuration of the empty cells was achieved. Finally, the convergence times of the NLM and the backtracking algorithm were compared using a graph plot. Towards the end of the experiment, we successfully tested the model with random sudoku grids.
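The curriculum described above can be sketched as a simple schedule mapping training epochs to a number of empty cells; the function name and the linear pacing are our assumptions, and the paper's exact schedule may differ:

```python
def curriculum_schedule(total_epochs, min_empty=3, max_empty=10):
    """Return, for each epoch, the number of empty cells to train on:
    difficulty increases linearly from min_empty to max_empty."""
    levels = max_empty - min_empty + 1
    epochs_per_level = max(1, total_epochs // levels)
    return [min(min_empty + epoch // epochs_per_level, max_empty)
            for epoch in range(total_epochs)]
```

For example, `curriculum_schedule(16)` spends two epochs at each difficulty level from 3 through 10 empty cells.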
In the later sections, we elaborate upon the robustness of the algorithm and systematicity of the network layers.
### Key Contributions
Our major contributions in this paper are:
#### Extending the Applications of Neural Logic Machines:
The NLM is trained and tested on a completely different problem set (e.g., Sudoku puzzles) to expand its scope to wider areas of application. In [5], linear-space problems (e.g., sorting arrays) are used to test this model, whereas this paper focuses on using a 2-dimensional problem set on the same model. Instead of function approximation, the reinforcement learning algorithm REINFORCE is used with a policy-gradient mechanism to estimate gradients in a non-differentiable training setting.
#### Time Complexity and Comparison with Backtracking:
Upon successful implementation of the NLM on a 9X9 Sudoku grid, its convergence time is compared with the backtracking algorithm and demonstrated using a graphical representation. A thorough comparison of the NLM with backtracking is also provided in the results section.
#### Testing the Robustness of the Neural Logic Machine:
While the NLM [5] is tested with tasks like list sorting, path finding and BlocksWorld games, we have chosen a more complex problem: solving a 9X9 sudoku puzzle with up to 10 empty cells. To sort an array, we only need to compare elements with each other and swap them if needed. In Sudoku, by contrast, we need to fill the empty cells with appropriate numbers while checking rows, columns and sub-matrices, which makes the problem more complex.
## 2 Related work
The rising demand to train Neural Networks to perform complex tasks has generated great attention among researchers. However, their lack of systematicity and inability to generalize to a greater set of inputs has led them to perform poorly on more systematic tasks. To address these challenges, [5] proposed the Neural Logic Machine, which can solve problems requiring systems to perform actions by following systematic sets of rules. In [5], NLMs utilize the relationships of objects obtained from quantifiers and logic predicates to solve BlocksWorld games, list sorting and path-finding tasks. The study in [5] highlights the difference between conventional RNNs (Recurrent Neural Networks) and the proposed NLM, noting that an RNN trained on smaller lists fails to sort slightly larger lists during testing. The reason is that RNNs trained on smaller lists cannot systematically generalize to larger lists, whereas NLMs can.
An alternate approach to function approximators has been used with the NLM, called the REINFORCE algorithm [6], which performs policy-gradient optimization and estimates the gradient using a Monte-Carlo method. This is commonly used in deep reinforcement learning where actions are sampled and the neural network cannot backpropagate through the sampling step, since sampling is a non-differentiable operation. In such non-differentiable settings, we instead use gradient estimation techniques.
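A minimal score-function sketch shows why REINFORCE works where backpropagation cannot: the gradient of the expected reward is estimated as the Monte-Carlo average of the reward times the gradient of the log-probability of the sampled action. A categorical policy over raw logits is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def reinforce_gradient(logits, reward_fn, n_samples):
    """Monte-Carlo estimate of d E[R(a)] / d logits for a categorical
    policy pi(a) = softmax(logits): average of R(a) * grad log pi(a)
    over sampled actions -- no backprop through the sampling step."""
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for _ in range(n_samples):
        a = rng.choice(len(probs), p=probs)
        grad_log_pi = -probs.copy()
        grad_log_pi[a] += 1.0          # d log pi(a) / d logits
        grad += reward_fn(a) * grad_log_pi
    return grad / n_samples
```

With two actions, equal logits, and a reward of 1 for action 1 only, the true gradient is (-0.25, 0.25), and the Monte-Carlo estimate converges to it as the number of samples grows.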
In 2019, Wang et al. [7] proposed a novel deep-learning architecture called the SATNet, which is a differentiable maximum satisfiability solver that uses CNNs. It is an approximate-differentiable solver which works on a fast coordinate descent approach for solving semidefinite programs (SDP) associated with the Maximum Satisfiability problem.
While previous studies have focused on using NLMs on different problem sets [5] or on solving Sudoku puzzles with fully Deep Learning approaches [7], our experiment emphasizes the combination of Symbolic Learning with Deep Learning, as well as a hybrid architecture, to solve a new set of complex problems. Lastly, this experiment also focuses on realizing the true potential of NLMs in different areas of application.
## 3 Proposed Methodology
Figure 1 illustrates our proposed model architecture for the Neuro-Symbolic Sudoku Solver, a hybrid architecture of Deep Reinforcement Learning techniques and Symbolic Learning. The first half of the model constitutes the learning phase, where the Sigmoid function acts as the activation function between hidden layers and the SoftMax function activates the output layer. The input layer consists of 4 neurons, each accepting a certain type of parameter. The flexibility to allocate a certain type of input to each neuron leads to greater systematicity in the model. For instance, out of the four neurons in the input layer, the first neuron, \(i_{0}\), accepts only a nullary predicate; \(i_{1}\) accepts a unary predicate; \(i_{2}\) accepts a binary predicate; and \(i_{3}\) receives a ternary predicate as an input.
The problem of solving Sudoku puzzles requires checking for each row, column, and submatrix to maintain a valid configuration of the Sudoku grid. In order to check this constraint, the coordinates of the rows and columns along with the target values are passed to the input layer. Therefore, neurons \(i_{2}\) and \(i_{3}\) receive input as binary and ternary predicates, while neurons \(i_{0}\) and \(i_{1}\) receive Null inputs. In contrast to this experiment, [5] shows how their problems require predicates for a different set of neurons in the NLM. For example, [5] uses \(i_{1}\) and \(i_{2}\) since unary and binary predicates are required to solve the array sorting problem. Once the set of predicates is received by the input layer, the input for the following layers are reduced or expanded based on the arity of the previous layer.
The output from the SoftMax layer is fetched by the Reinforcement Learning (RL) module, which constitutes Phase 2 of the architecture (see Figure 2). The RL module takes care of three main functionalities: allocating a +1 reward for a fully solved grid, assigning a -0.01 penalty for every move, and performing environmental resets after checking all target values. The RL module triggers an environmental reset if none of the target values from 1-9 can fill an empty cell while maintaining a valid configuration of the Sudoku grid. During this reset, all filled values are emptied, the sudoku grid is reinitialized, and a new iteration begins.
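The reward-and-reset logic can be sketched in a few lines. This is a minimal illustration in our own code; the helper names `is_valid` and `rl_step` are hypothetical and the paper's exact implementation may differ.

```python
import copy

SOLVE_REWARD = 1.0    # +1 for a fully solved grid
MOVE_PENALTY = -0.01  # -0.01 for every move

def is_valid(grid, r, c, x):
    """Placing x at (r, c) keeps every row, column and 3x3 submatrix distinct."""
    if any(grid[r][j] == x for j in range(9)):
        return False
    if any(grid[i][c] == x for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != x for i in range(3) for j in range(3))

def rl_step(grid, initial_grid, r, c, x):
    """One environment step: place x, emit a reward, reset when no value fits."""
    if is_valid(grid, r, c, x):
        grid[r][c] = x
        solved = all(grid[i][j] != 0 for i in range(9) for j in range(9))
        return grid, (SOLVE_REWARD if solved else MOVE_PENALTY), solved
    # no target value 1-9 fits this cell -> environmental reset
    if not any(is_valid(grid, r, c, v) for v in range(1, 10)):
        return copy.deepcopy(initial_grid), MOVE_PENALTY, False
    return grid, MOVE_PENALTY, False
```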
Given an unsolved Sudoku grid, we have implemented a multistage architecture to solve the grid step-by-step. Figure 2 illustrates the high-level architecture of the Neuro-Symbolic Sudoku solver, which aligns with the problems experimented with in the original NLM paper.
The first step in this implementation diagram consists of calculating the Boolean predicates. There are three important rules for solving a sudoku grid: the solver needs to put a number in an empty cell such that the resulting grid configuration remains valid. Here, valid configurations refer to states in which each number in every row, column, and 3x3 submatrix is distinct. With the help of these conceptual rules, lifted Boolean predicates may be created.
Lifted rules are generalized rules which apply to a task to crack its solvability. These rules are applicable to any example in that task domain, irrespective of its configuration or complexity. They can be seen as the simplest fundamental rules for solving a system. We define the predicate isRow(r, x), which computes whether number x exists anywhere in row r. Similarly, predicate isColumn(c, x) computes whether number x exists anywhere in column c, and predicate isSubMat(r, c, x) computes whether number x exists in the 3x3 submatrix containing the cell (r, c). Based on the above definitions, isRow(r, x) and isColumn(c, x) are binary predicates and both result in [9, 9]-shaped tensors, whereas isSubMat(r, c, x) is a ternary predicate that results in a [9, 9, 9]-shaped tensor.
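The three predicates and their tensor shapes can be reproduced with a small NumPy sketch. This is our own illustration; the function name `lifted_predicates` is not from the paper.

```python
import numpy as np

def lifted_predicates(grid):
    """grid: 9x9 integers, 0 = empty cell, 1-9 = filled value.
    Returns the three lifted Boolean predicates described in the text."""
    g = np.asarray(grid)
    vals = np.arange(1, 10)                       # candidate numbers x = 1..9
    is_row = (g[:, :, None] == vals).any(axis=1)  # [9, 9]: x appears in row r?
    is_col = (g[:, :, None] == vals).any(axis=0)  # [9, 9]: x appears in column c?
    # [9, 9, 9]: x appears in the 3x3 submatrix containing cell (r, c)?
    blocks = g.reshape(3, 3, 3, 3).transpose(0, 2, 1, 3).reshape(3, 3, 9)
    in_block = (blocks[:, :, :, None] == vals).any(axis=2)         # [3, 3, 9]
    is_sub = np.repeat(np.repeat(in_block, 3, axis=0), 3, axis=1)  # [9, 9, 9]
    return is_row, is_col, is_sub

# demo on a grid with a single empty cell at (0, 0); 0 denotes empty
demo = [[0,3,4,6,7,8,9,1,2],
        [6,7,2,1,9,5,3,4,8],
        [1,9,8,3,4,2,5,6,7],
        [8,5,9,7,6,1,4,2,3],
        [4,2,6,8,5,3,7,9,1],
        [7,1,3,9,2,4,8,5,6],
        [9,6,1,5,3,7,2,8,4],
        [2,8,7,4,1,9,6,3,5],
        [3,4,5,2,8,6,9,7,1]]
is_row, is_col, is_sub = lifted_predicates(demo)  # shapes (9,9), (9,9), (9,9,9)
```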
It is also worth mentioning that in this study we do not input the unsolved grid into the input layer of the neural network. Instead, we compute the predicates as described above and pass those predicates as a set of inputs. Since there
Figure 1: Model Architecture of Neural Logic Machine
are two binary predicates, we concatenate the values of isRow(r, x) and isColumn(c, x) and call the resultant tensor a stacked binary predicate. These predicates can now be fed into the input layer of the neural network.
The last layer of the NLM computes the SoftMax value and provides the empty cell position, (r, c), as well as the target value to place in the empty cell. This is where the reinforcement module comes in. The Reinforcement module checks whether placing x into (r, c) yields a valid sudoku grid. Based on this assertion, it generates the next state and computes the reward: positive if the next state is a fully solved grid, negative otherwise. The reinforcement algorithm also checks whether the next state generated is a solved Sudoku grid; in that case, we break from the iteration and print the output, otherwise we repeat the same steps.
However, the above strategy may take an indefinite number of steps to find a solution. To prevent this, we have defined an upper bound on the number of steps the algorithm may take. The proposed algorithm yields a success rate of 1 if it solves the grid, and 0 otherwise.
### Training Details
The hyper-parameters and the training details of the NLM for solving a 9x9 sudoku puzzle are shown in Table 1.
## 4 Result and Analysis
Our experiment involves multiple settings of the grid (number of empty cells, dimensions and optimal steps), which are often modified during the testing phase to understand the performance of the model and obtain a pattern from the results. To begin the experiment, the number of empty cells and the maximum steps in the sudoku grid are limited to 3 and 81, respectively. These are then gradually incremented as the model trains. As these parameters change over time, the complexity of the problem also increases. However, our results suggest that NLMs can cope with this complexity and yield 100% accuracy in most cases.
To give a better understanding of how the model performs with different parameters, we report the success rate with respect to each parameter the model was trained on. Table 2 shows the comparison and performance under each setting tested on our modified version of the NLM. When tested with the minimum number of empty cells (nr empty) set to 3 and max steps set to 81, the model achieves a success rate of 0.94. However, when the maximum number of steps (max steps) is increased from 81 to 150, the model receives a perfect score of 1.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
**Problem** & **Epochs** & **Batch Size** & **Loss** & **Optimizer** & **LR** & **RL Rewards** & \(\gamma\) \\ \hline
**Sudoku Puzzle** & 50 & 4 & Softmax-CE & Adam & 0.005 & +1,-0.01 & 0.99 \\ \hline \end{tabular}
\end{table}
Table 1: Training Details of Neuro-Symbolic Sudoku Solver
Figure 2: High Level System Architecture
A similar case is also observed when the model receives 5 empty cells with 81 and 729 max steps. With fewer allowed steps, the model yields a comparatively lower score than with 3 empty cells. However, when the model has effectively no restriction on the maximum number of steps (set to a maximum of 729), it performs well and reaches 100% accuracy. From this we conclude that the success rate is directly related to the maximum number of steps allowed by the model. Another inference obtained from Table 2 is that when the number of empty cells increases while keeping the maximum number of steps constant, the success rate drops. This shows the inverse relation between the success rate and the number of empty cells. The success rate of the NLM model is therefore strongly determined by the relation:
\[\textit{Success Rate}\propto\frac{\textit{max steps}}{\textit{empty cells}} \tag{1}\]
In addition to fine-tuning the model, we have also compared the time complexity of the NLM with that of the traditional backtracking algorithm for solving Sudoku puzzles. Both the NLM and the backtracking algorithm are provided with the same set of grids, and their time to solve the complete grid is shown in Figure 3. The motivation behind this is to showcase the difference in the working principles of both algorithms and to analyze their convergence time (with limited training in the case of the NLM).
The backtracking algorithm takes a nearly constant average time of about 0.00045 seconds to solve the same set of 9x9 grids; the NLM, on the other hand, takes a considerably higher amount of time to converge. (Figure 3 reports the time taken by the NLM with a maximum of 729 steps.) The reason that backtracking converges faster is that it solves the grid in an optimal number of steps (i.e., \(MaxStepsBacktracking=NumberOfEmptyCells\)). In Figure 3, the peaks in the NLM curve denote the instances in which the environment was reset due to the formation of an invalid configuration of the sudoku grid.
\begin{table}
\begin{tabular}{c|c|c}
**No. of Empty Cells** & **Max. Steps** & **Success Rate** \\ \hline
3 & 81 & 0.94 \\
3 & 150 & 1.00 \\
3 & 400 & 1.00 \\
3 & 729 & 1.00 \\
5 & 81 & 0.80 \\
5 & 150 & 0.96 \\
5 & 400 & 0.99 \\
5 & 729 & 1.00 \\
8 & 80 & 0.68 \\
8 & 150 & 0.92 \\
8 & 400 & 0.98 \\
8 & 729 & 1.00 \\ \end{tabular}
\end{table}
Table 2: Comparison of different training parameters
Figure 3: Comparison of NLM and backtracking convergence time
During such an instance, the model first receives a negative reward through the reinforcement module and then resets the environment once there are no possible target values left to test. In this case, the NLM again tries to fill the empty cells, but with a different set of target values from the beginning. However, even with 10 empty cells, our modified version of the NLM always takes less than 2.0 seconds to converge.
## 5 Conclusion and Future Work
The focus of this study is to tackle one of the drawbacks of traditional Neural Networks, i.e., 'systematicity'. Where Neural Networks perform poorly, NLMs can solve some of the same tasks with 100% accuracy. NLMs have been trained and tested by [5] on various tasks on which Deep Learning models have failed to solve or converge. In our paper, we added to the existing applications of their architecture and solved a more complex problem to test the robustness of Neural Logic Machines.
While the Neural Logic Machine does not converge faster than the backtracking algorithm on Sudoku puzzles, it is evident from this study that a Neural Logic Machine can be trained to solve tasks where conventional Deep Learning models may fail. Lastly, because the NLM receives a random combination of grids and numbers of empty cells, we are confident that the high success rate of NLMs is not due to the model's over-fitting. Thus, with this experiment, we have been able to strengthen the argument of [5] that NLMs can solve tasks with 100% accuracy without relying on over-fitting. In Section 4, we also deduce that the success rate is directly associated with the number of empty cells and the maximum number of steps that the model is allowed to take.
To conclude, Neural Logic Machines can solve complex problems using a hybrid approach of Reinforcement and Symbolic Learning. In future work, we intend to cover the fine-tuning and convergence rate of the algorithm. We propose that the applications of NLMs can be extended further to more games (e.g., KenKen puzzles) and mathematical problems (such as search tasks). We also anticipate that the architecture will cover problems where NLMs have not yet yielded a success rate of 100%.
## 6 Acknowledgment
We thank Dr. Leake, professor at Indiana University Bloomington, for being our instructor and guiding us through this experiment and thereby supporting our work.
### Conflict of Interest
The authors declare that they have no conflict of interest.
Figure 4: Separate convergence time for Backtracking and NLM for same problem
### Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
2306.17006 | Statistically Enhanced Learning: a feature engineering framework to boost (any) learning algorithms | Florian Felice, Christophe Ley, Andreas Groll, Stéphane Bordas | 2023-06-29T15:02:22Z | http://arxiv.org/abs/2306.17006v1

# Statistically Enhanced Learning: a Feature Engineering Framework to Boost (any) Learning Algorithms
###### Abstract
Feature engineering is of critical importance in the field of Data Science. While any data scientist knows the importance of rigorously preparing data to obtain good performing models, only scarce literature formalizes its benefits. In this work, we will present the method of Statistically Enhanced Learning (SEL), a formalization framework of existing feature engineering and extraction tasks in Machine Learning (ML). The difference compared to classical ML consists in the fact that certain predictors are not directly observed but obtained as statistical estimators. Our goal is to study SEL, aiming to establish a formalized framework and illustrate its improved performance by means of simulations as well as applications on real life use cases.
Significance statementStatistically Enhanced Learning (SEL) is a promising approach to improving learning performance. This work provides a formal definition of SEL and presents a framework for understanding its different components. The framework identifies the intersections between Statistics, Enhanced (data processing), and Learning, and defines different levels of SEL features. This work will enable researchers and practitioners to better understand and apply SEL to a wide range of learning problems.
## Introduction
In the field of Machine Learning (ML), the preparation and pre-processing of the data is often considered equally or even more important than the model itself. Students in data science are usually taught that 80% of the workload on an ML project is about preparing the data, while the remaining 20% are concerned with the actual choice of ML model [1]. This is in sharp contrast to the focus put on the modeling part in comparison to the data preparation and its benefit to models. As an illustration, the top 15 questions on Stack Overflow for the keywords "Machine learning" count 57.5 times more views than the top questions for "Data preparation" or "Data engineering"1.
Footnote 1: “Machine learning” counts 831,126 views covering 94 answers while “Data preparation” counts 14,591 views and 18 answers. Accessed on June 29\({}^{\text{th}}\), 2023.
In this work, we therefore introduce Statistically Enhanced Learning, abbreviated SEL, which is a statistical feature engineering framework that allows building new features/covariates which cannot be directly observed. The idea is to enhance the performance of existing learning algorithms by extracting specifically targeted information that is not directly given by the data. This allows adding the information of an unobserved or mis-measured signal under the form of a statistical covariate with a clear meaning. As we shall demonstrate, SEL works for any type of data (tabular, computer vision, text) and is a general approach to improve any learning algorithm. We refer to learning as the general
term, since it considers the large spectrum of data driven learning techniques (from classical statistical to advanced deep learning models). As we will see, contributions from different domains (statistics, machine learning, econometrics, computer science,...) have already unknowingly used SEL as feature engineering technique. By our formalizing framework we will thus reunite and structure seemingly distinct approaches, which will shed new light on feature engineering.
We distinguish three levels of SEL features with increasing technical complexity:
* SEL 1 (proxy features): substitute covariates that directly stand in for an unobserved signal, such as household consumption used as a proxy for the standard of living.
* SEL 2 (summary features): statistical summaries (averages, moments, counts) computed from observed auxiliary data to represent the missing signal.
* SEL 3 (estimated features): parameters of a statistical model, estimated from auxiliary data, that are themselves included as covariates.
SEL comes into play in the opposite situation, when \(\mathbf{X}\) does not contain all the signals influencing the target \(\mathbf{Y}\) (see Figures 1b-1d). Instead, we know that other factors \(\mathbf{W}\) have a direct influence on \(\mathbf{Y}\) but they cannot be observed. The modeler then uses new substitute variables, denoted \(\mathbf{X}^{s}\) in the diagram, to represent the missing signal. In other words, since the relation \(\mathbf{Y}=f(\mathbf{X},\mathbf{W})\) cannot be explicitly written because \(\mathbf{W}\) is not observed, the modeler substitutes \(\mathbf{W}\) by \(\mathbf{X}^{s}\) and focuses on estimating the relation \(\mathbf{Y}=f(\mathbf{X},\mathbf{X}^{s})\). The three levels of SEL correspond to the following representations.
1. The dashed link in Figure 1b exists but cannot be observed, so the scientist has to use \(\mathbf{Z}\) as an alternative source of information to model the phenomenon \(Y\). In situations such as household consumption used to represent unobserved standard of living [3], the substitute variables are proxy variables. [7] reinforces this practice by underlining the benefit of such proxies.
2. The link between the variable of interest \(\mathbf{W}\) and the target \(\mathbf{Y}\) as illustrated in Figure 1c is indirect. In situations like sports predictions, we know that the information contained in \(\mathbf{X}\) is not sufficient to model \(\mathbf{Y}\) accurately. The maturity of players as the true missing signal \(\mathbf{W}\) is too important to be ignored as a predictor. The players' ages (variable \(\mathbf{Z}\)) are a good indicator (proxy) for their maturity. However, the ages of individual players cannot be used alone as predictors and would be meaningless to the model. Hence, instead the players' average age (\(\mathbf{X}^{s}\)) should be used. Therefore, SEL in \(\mathbf{X}^{s}\) acts as a representative summary of the information contained in \(\mathbf{Z}\).
3. In the last situation, depicted in Figure 1d, the signal \(\mathbf{W}\) causing \(\mathbf{Y}\) cannot be observed either, but SEL is used to estimate the relation via \(\mathbf{X}^{s}\). SEL is no longer a proxy with the goal to add information but instead an actual estimation of the missing signal \(\mathbf{W}\). In a wind energy prediction example, we wish to input an estimate of the to-be-expected wind speed (\(\mathbf{W}\) in Figure 1d) based on recent wind speed measurements. After defining the time window, an exponentially weighted moving average (EWMA) allows obtaining such an estimate (\(\mathbf{X}^{s}\)) by weighting the individual measurements by their recency. The true to-be-expected wind speed of course cannot be observed.
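Continuing the wind-speed example, such an SEL 3 estimate can be sketched as follows (the measurements and the smoothing factor below are made up for illustration):

```python
import numpy as np

def ewma(values, alpha=0.3):
    """Exponentially weighted moving average: a simple SEL 3 estimate of the
    to-be-expected signal, weighting recent measurements more heavily."""
    est = values[0]
    for v in values[1:]:
        est = alpha * v + (1 - alpha) * est
    return est

# hypothetical recent wind-speed measurements (m/s), oldest first
wind = np.array([5.1, 5.4, 6.0, 6.3, 7.1, 7.0])
x_s = ewma(wind)   # SEL 3 covariate fed to the prediction model
```

The resulting estimate sits between the plain average and the latest reading, reflecting the weight given to recency.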
Based on this Granger causality discussion, we wish to stress again that the principle of SEL is not to add any information in order to marginally increase predictive performance as it is the case in linear regression with the coefficient of determination \(R^{2}\) that mechanically increases even for variables that are just weakly related to \(Y\). SEL rather is a means to recover information from signals that cannot be detected.
We shall now properly contextualize SEL within the domains of learning and data science, hereby contributing a new overview of various domains that is insightful _per se_. We define Statistically Enhanced Learning as inherited from three different fields:
* Learning: General field aiming to process some input and generate predictions. It can be as general as machine and deep learning as illustrated in Figure 2 below, which is taken from [8]. Quoting [9]: "_Using this data we build a prediction model, or learner, which will enable us to predict the outcome for new unseen objects. A good learner is one that accurately predicts such an outcome._"
* Enhanced (data processing): Field that includes data preparation steps to later enhance the learning performance. It can be compared to information processing as defined by [10] and includes steps such as data cleaning and data preparation.
* Statistics: In a wide sense, field that includes descriptive statistics, inference, modeling and causality.
We visually represent these fields in Figure 3. As they are overlapping domains, we can identify their intersections as follows:
* Statistical learning = Learning \(\cap\) Statistics: any learning algorithm that we work with (e.g., linear regression, Random forests, Neural networks). It is an interdisciplinary field by nature as it intersects with artificial intelligence and areas of engineering or other disciplines [9].
* Feature engineering = Learning \(\cap\) Data processing: First and crucial part before the modeling exercise. It consists in the preparation of the data set by processing the input data (cleaning, scaling, etc.) [11]
* Data mining = Statistics \(\cap\) Data processing: Part of the literature which consists in extracting information/knowledge from data [12]. Extracted information can be used in inferential statistics and learning, or to generate business knowledge and metrics.
SEL is at the intersection of these fields as illustrated in Figure 3. It consists of augmenting a data set with easy-to-understand features from either SEL level in order to improve the performance of any learning algorithm. It thus allows retrieving information about a missing signal. As they are not measured directly, SEL variables bear an extra layer of uncertainty, which one can quantify thanks to the requirement of SEL covariates to be of statistical nature. We also note
that SEL is creating a bridge between the fields of statistics and machine learning which all too often are considered as competitors/distinct [13].
We shall now provide examples of contributions whose methodology falls under the umbrella of SEL. We analyze these works under the formal SEL framework and, hence, shed new light on them. To show the universal applicability of SEL, we will consider a subdivision based on the type of data.
**Tabular linear setting.** Econometricians often face the problem of unobserved variables when modeling complex phenomena. Variables important to a model (such as income, willingness to pay, feelings) are either not available to the modeler or not measurable. The use of proxy variables, hence SEL 1, is a solution to compensate for the lack of necessary signals. As already mentioned, [3] analyze the benefit of using proxies such as household consumption from demographic surveys to represent standards of living, and show that even weak proxy variables can still capture the desired signal from the unobserved feature. They also claim that adding proxies can help reduce the inconsistency of the estimated parameters.
The two-stage least squares approach (aka 2SLS, [2]) is another indirect modeling strategy, used to incorporate a feature that cannot be directly included in the regression model because it contradicts the OLS assumptions (the covariates \(X\) are not independent of the residuals \(\varepsilon\)). In that case, including an instrumental variable (IV) in a two-stage approach is preferred: first the conflicting variable \(Z\) is regressed on some regressors \(W\), i.e., \(Z=\beta_{W}W+\varepsilon^{\prime}\) for some error term \(\varepsilon^{\prime}\). The fitted variable \(\hat{Z}\) then feeds the main regression model \(Y=\beta_{X}X+\beta_{Z}\hat{Z}+\varepsilon\), and this is clearly an instance of SEL 3. A generalization of IV methods [14] formalizes the framework, which can be applied to discrete modeling or in the context of high heteroscedasticity.
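The two-stage procedure can be sketched on simulated data. The data-generating process and all coefficients below are invented for illustration; the point is only that OLS on the endogenous variable is biased while the two-stage fit recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

w = rng.normal(size=n)          # instrument: drives z, unrelated to the error
u = rng.normal(size=n)          # confounder making z endogenous
z = 0.8 * w + u + rng.normal(size=n)
x = rng.normal(size=n)          # exogenous covariate
# u enters the error term, so OLS on z is inconsistent
y = 2.0 * x + 1.5 * z + 3.0 * u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Stage 1: project the endogenous z onto the instrument
z_hat = np.column_stack([ones, w]) @ ols(np.column_stack([ones, w]), z)
# Stage 2: regress y on x and the fitted z
beta_2sls = ols(np.column_stack([ones, x, z_hat]), y)   # [intercept, x, z]

beta_ols = ols(np.column_stack([ones, x, z]), y)        # naive, biased fit
```

With this design, `beta_ols[2]` overshoots the true coefficient 1.5, while `beta_2sls[2]` lands close to it.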
Figure 3: Statistically Enhanced Learning as the intersection between three fields
Figure 2: Venn diagram for Artificial Intelligence and Learning (inspired from [8])
It also happens frequently that observed variables are noisy and therefore cannot be used as predictors. In such cases, modelers pre-process those variables via statistical methods such as kernel composition/decomposition or Fourier transforms [15]. Such techniques help deal with mis-measured variables (or measured with noise) and prepare a cleansed signal that will enhance the learning step. This approach falls under the umbrella of SEL 3.
**Tabular nonlinear setting.** [16] use moments of some longitudinal features to classify abnormal bitcoin network addresses. They add the first four moments (namely mean, variance, skewness and kurtosis) of time-dependent variables to extract intrinsic information of the variable to classify bitcoin addresses. Their method outperforms existing models and shows the high importance of the moment-based variables, hence of SEL 2.
In the context of football goals prediction for national teams, [17, 18] use a so-called _hybrid_ approach to estimate unobserved variables and augment the data set for a Random Forest model. On the one hand, they build a novel covariate as the average age of players, which is of course SEL 2. On the other hand, they add a statistical feature which aims to represent the strength of the two opponents. To this end, they consider historical games of all national teams over an 8-year period preceding the tournament whose matches they intend to predict and model the joint distribution of goals scored by home and away teams (\(i\) and \(j\), respectively) by the bivariate Poisson distribution. Hereby, the parameters \(\lambda_{i}\) and \(\lambda_{j}\) represent the mean parameters of the Poisson process and are assumed to be of the form \(\log(\lambda_{i})=\beta_{0}+(r_{i}-r_{j})+h\cdot\mathds{1}(\text{team $i$ playing at home})\), where \(\beta_{0}\in\mathbb{R}\) is a common intercept and \(h\in\mathbb{R}\) is the effect of playing at home. The real-valued parameters \(r_{i}\) and \(r_{j}\) are the strength parameters of the home team \(i\) and away team \(j\), and they are estimated by means of weighted maximum likelihood. The weights are chosen such that more importance is given to more recent matches. These estimated strength parameters are then included as a new covariate in the final model for predicting scores. As shown in [17, 18], this approach helps reduce the RMSE and even allows outperforming the bookmakers, which are the gold standard in sports prediction. This hybrid approach clearly falls into the category of SEL 3, and the authors also showed that the SEL 3 variables have the highest variable importance in their Random Forest model.
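A simplified re-creation of this strength estimation on simulated match data can look as follows. The team count, the recency weights, the omission of the home effect \(h\), and the plain gradient-ascent fit are our own illustrative choices; the paper's weighted MLE may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 8                                    # number of teams
r_true = rng.normal(0, 0.5, size=T)
r_true -= r_true.mean()                  # strengths identified up to a constant
b0 = 0.3                                 # common intercept

# simulate historical matches between distinct teams
M = 4000
home = rng.integers(0, T, size=M)
away = rng.integers(0, T, size=M)
mask = home != away
home, away = home[mask], away[mask]
lam = np.exp(b0 + r_true[home] - r_true[away])
goals = rng.poisson(lam)
w = np.exp(-rng.uniform(0, 3, size=home.size) / 2.0)  # made-up recency weights

# weighted Poisson MLE for (b0, r) by gradient ascent on the log-likelihood
b, r = 0.0, np.zeros(T)
lr = 0.2
for _ in range(300):
    mu = np.exp(b + r[home] - r[away])
    resid = w * (goals - mu)        # gradient w.r.t. the linear predictor
    b += lr * resid.sum() / w.sum()
    g = np.zeros(T)
    np.add.at(g, home, resid)       # each match pushes the home team's strength up
    np.add.at(g, away, -resid)      # and the away team's strength down
    r += lr * g / w.sum()
    r -= r.mean()                   # keep the identification constraint
```

The recovered `r` then serves as the SEL 3 covariate in the downstream model, exactly in the spirit of the strength parameters above.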
**Computer vision.** To analyze and classify images, [19] build a data set composed only of moment features to determine whether or not an image contains a hidden message. The moments are derived from the wavelet subbands of an image to represent the color histogram of a picture. These extracted color distributions help create features (by means of moments) that are important covariates sensitive to the change of colors and help improve the detection of hidden messages. This approach in computer vision falls under the umbrella of SEL 2. Other contributions also extract moments from images [20] or temporal signals [21] for classification purposes.
**Natural Language Processing.** In the field of text classification and analysis, the main challenge consists in capturing the information carried by the word tokens. Some techniques consist in counting characters in a text to create new features for the data set [22, 23], which is part of SEL 2. A more advanced approach of counting words, which however still falls under SEL 2, is the Term Frequency-Inverse Document Frequency (TF-IDF) technique [24], which weights the counts by the inverse of how frequently a word appears across the corpus, thus representing the importance of a word. Another popular approach to deal with textual inputs is Word2Vec [25]. This semantic-based technique uses neural networks with embeddings to produce numeric representations of words in a high-dimensional vector space. The trained model helps compare words with the vector of semantically similar words. At first sight, Word2Vec does not appear to be part of SEL 3 as it is a complex machine learning feature extraction method; however, very recently [26] showed that, under a copula-based statistical model for text data, Word2Vec can be interpreted as a statistical estimation method for estimating the point-wise mutual information, hence qualifying it as part of SEL 3.
[27] uses a combination of TF-IDF and Word2Vec to classify text into defined sentiment categories. In our framework, their approach can be perceived as a double enhancement, as an SEL 2 technique is applied on SEL 3 type features.
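A bare-bones TF-IDF can be sketched in a few lines. This is our own illustration using one common idf variant; library implementations such as scikit-learn's `TfidfVectorizer` apply different smoothing and normalization, and the toy corpus is made up.

```python
import math
from collections import Counter

docs = [
    "the match was a great match",
    "the team played a poor game",
    "great game great team",
]

def tfidf(docs):
    """Term counts weighted by how rare each word is across the corpus."""
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for d in tokenized for w in d})
    n = len(docs)
    df = {w: sum(w in d for d in tokenized) for w in vocab}   # document frequency
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}       # one smoothing choice
    rows = []
    for d in tokenized:
        tf = Counter(d)
        rows.append([tf[w] * idf[w] for w in vocab])
    return vocab, rows

vocab, X = tfidf(docs)
```

A corpus-wide word like "the" ends up with a lower weight than a rarer, more informative one like "match", which is exactly the importance signal described above.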
**A unifying framework.** As these examples show, our proposed Statistically Enhanced Learning is a general framework that gives a structure to hitherto distinct approaches. For illustrative purposes, we summarize them in the diagram of Figure 4.
Somewhat related to SEL is the recently proposed Probabilistic Random Forest [36], which addresses the setting of mismeasured variables. It is an adaptation of Breiman's Random Forest [37] that accounts for the noise in measured features. It considers quadruplets of the form \((x_{i},\Delta x_{i},y_{i},\Delta y_{i})\) instead of the usual pair \((x_{i},y_{i})\), where \(\Delta x_{i}\) (resp. \(\Delta y_{i}\)) represents the uncertainty when measuring \(x_{i}\) (resp. \(y_{i}\)). In particular, the authors assume that each observed value is drawn from some normal distribution \(X_{i}\sim\mathcal{N}(x_{i},\Delta x_{i})\), so the additional quantity \(\Delta x_{i}\) is added to the model and can be considered as a statistically estimated quantity. Indeed, in fields such as astronomy, data often come from multiple sources (e.g., satellites) where the same observation is measured by different instruments; the measurements then contain uncertainty. Not only did the authors include this additional source of information, but they adapted the
Random Forest logic to account for this uncertainty. The split from a node in a tree depends on this quantity \(\Delta x_{i}\) and is no longer a boolean true or false. This gives the model probabilistic considerations that improve its performance when uncertainty in measurements increases, but also allows deriving probability distributions of the target.
So far we have defined Statistically Enhanced Learning, presented its detailed structure, contextualized it within the realm of Data Science and Artificial Intelligence, and showed how existing approaches from the literature are embraced by SEL. Next, we will demonstrate the learning performance enhancement of SEL by means of various examples, starting with synthetic data.
#### Benchmarking with simulated data
By means of Monte Carlo simulations, we will compare the performance of ML models with SEL covariates versus regular ML models. For our simulations, we consider \(n=1,500\) observations and \(p\) predictors, whose values are simulated from a Gaussian distribution. Before computing the response variable, we generate some underlying process \(\mathbf{Z}_{i},i=1,\ldots,n\) of length \(m=400\) for each of the \(n\) individuals. This process follows a Cauchy distribution whose parameter \(\mu\) (the location parameter) will directly constitute a variable in the data set. Formally, for an individual \(i\), the regression function writes
\[\mathbf{Y}_{i}=\beta^{\prime}\mathbf{X}_{i}+\beta_{\mu}\mu_{i}^{2}+\varepsilon_ {i} \tag{1}\]
where \(\mathbf{X}_{i}\in\mathbb{R}^{p}\) is the \(p\)-dimensional vector of observed covariates, \(\varepsilon_{i}\sim\mathcal{N}(0,1)\) is the residual term, \(\mu_{i}\) is the location parameter of the underlying Cauchy process \(\mathbf{Z}_{i}\) and \(\beta\in\mathbb{R}^{p}\) and \(\beta_{\mu}\in\mathbb{R}\) are the parameters to be learnt/estimated. We assume that we cannot observe the parameter \(\mu_{i}\) but only the underlying process \(\mathbf{Z}_{i}\).
To learn the regression function (1) from data, we prepare three models using XGBoost [38]. A first baseline model considers that we can only observe the matrix of \(p\) covariates \(\mathbf{X}\). A second model - that we denote "SEL 2" - uses the underlying process \(\mathbf{Z}\) and computes the empirical mean, so that for each individual \(i\) we have \(X_{i}^{s}=m^{-1}\sum_{j=1}^{m}Z_{i,j}\). A last model - denoted "SEL 3" - estimates the location parameter of the Cauchy distribution via MLE from the underlying process \(\mathbf{Z}\). We then have \(X_{i}^{s}=\hat{\mu}_{i}\). Since we consider that we do not know the form of the actual variable in the model, we input the estimated parameter \(X_{i}^{s}\) with no further transformation as an additional variable into the XGBoost algorithm.

Figure 4: A list of Statistically Enhanced Learning methods by type of data
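As an illustration, the data-generating process and the two enhanced feature constructions can be sketched in pure NumPy as follows. This is our own minimal sketch, not the paper's code: the sample sizes are reduced, and a median start with damped Newton steps on the Cauchy log-likelihood (scale fixed at its true value of 1) stands in for the full MLE.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 200, 5, 400            # individuals, observed covariates, process length

# Latent location parameters and the observable Cauchy processes Z_i (scale 1)
mu = rng.normal(0.0, 1.0, size=n)
Z = mu[:, None] + rng.standard_cauchy(size=(n, m))

# SEL 2 feature: empirical mean of Z_i (a poor summary -- the Cauchy mean does not exist)
sel2 = Z.mean(axis=1)

# SEL 3 feature: location estimate; median start, damped Newton on the log-likelihood
mu_hat = np.median(Z, axis=1)
for _ in range(50):
    u = Z - mu_hat[:, None]
    score = (2.0 * u / (1.0 + u**2)).sum(axis=1)                 # d logL / d mu
    curv = ((2.0 * u**2 - 2.0) / (1.0 + u**2)**2).sum(axis=1)    # d^2 logL / d mu^2
    mu_hat = mu_hat - np.clip(score / curv, -0.5, 0.5)           # clipped Newton step

# Response from equation (1); mu_hat then enters the learner as an extra column
beta, beta_mu = rng.normal(size=p), 4.5
X = rng.normal(size=(n, p))
Y = X @ beta + beta_mu * mu**2 + rng.normal(size=n)
X_sel3 = np.column_stack([X, mu_hat])
```

The contrast between `sel2` and `mu_hat` is the point of the exercise: the empirical mean of a Cauchy sample does not concentrate, while the location MLE does, so only the SEL 3 feature reliably carries the signal \(\mu_{i}\) into the learner.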
For replicability purposes, the full details on the simulations and data generation can be found in the repository referred to in the Code availability Section.
We report in Figure 5 the ratio of RMSE for the baseline model versus the SEL approaches as a function of the number of variables \(p\). Simulations are run \(10,000\) times to derive values and credible intervals.
We can observe that the performance of the SEL models is consistently better than that of the baseline approach, in particular when only a few observable variables are present in the model. This suggests that the relative importance of the SEL features is quite high compared to the other covariates. We also analyzed the performance of the XGBoost model with the SEL 3 variable for one iteration with \(p=10\). The formula used to generate our data in this case is defined by
\[Y=-1.04X_{0}-1.32X_{1}+4.50X_{2}-1.69X_{3}+0.53X_{4}+1.34X_{5}+3.35X_{6}+4.10X_ {7}-0.99X_{8}+0.98X_{9}+4.50\mu^{2}+\varepsilon. \tag{2}\]
When analyzing the feature importance of the model using TreeSHAP [39, 40], we can observe that the estimated parameter \(\hat{\mu}\) (denoted "SEL" in Figure 6(a)) emerges as the most important variable of the model. The importance of the other features is in line with their weights in the formula defined in equation (2).
Figure 5: Relative RMSE of SEL with \(s=1\) generated feature versus baseline approach (the lower the better)
Figure 6: Analysis of the model with TreeSHAP for the SEL model with \(p=10\) variables
Furthermore, we can also see in Figure 6(b) that the model is able to correctly learn the quadratic relation between the SEL variable \(\hat{\mu}\) and the target \(\mathbf{Y}\), which further highlights the variable's predominant importance in the model.
#### Application to image data
In order to show the benefit of the SEL methodology on a large spectrum of use cases, we apply it to computer vision data sets. We use three common data sets for model benchmarking. The first one, the MNIST data set [41], is composed of \(60,000\) images of handwritten digits from 0 to 9. The data set is widely used in machine learning and has served as a benchmark for comparing human versus machine performance in reading handwritten digits. The second one, the Fashion-MNIST data set [42], is a derivative of the original MNIST data with clothing images. It is also composed of 10 categories of images, ranging from T-shirts and trousers to bags and boots. The third data set we use is CIFAR-10 [43]. This database is composed of \(60,000\) colored images of animals and vehicles. It consists of 10 categories, ranging from dogs and horses to airplanes and trucks.
To compare our methodology on the aforementioned data sets, we use the same deep learning architecture on all three. We use the VGG architecture [44], which consists of stacking double convolutional layers2 to process the image, followed by fully connected layers before the final classification. This Convolutional Neural Network (CNN) architecture is widely used in computer vision, and we will refer to it as the _vanilla_ approach. For the SEL approach, we use the same architecture but add more variables on top of the convolutional layers. For each of our three data sets, we extract the distribution of colors in the images [45]. Since the MNIST and Fashion-MNIST data are in black and white, their color histogram corresponds to the gray intensity. CIFAR-10 being in color, we extract three histograms from the RGB (red, green and blue) representation of the images. An illustration of a color histogram for a colored image (displayed in Figure 8(a)) can be found in Figure 7. From the histogram, we then compute the first four moments (mean, standard deviation, skewness and kurtosis), which will constitute our SEL 2 features to add to the model.
Footnote 2: Stack two layers of 2-dimensional convolutions with max-pooling
Our deep learning VGG architecture is only augmented with fully connected layers in parallel to the CNN to ingest the information from the SEL covariates. Comparing the vanilla and SEL models, the vanilla architecture can be considered a constrained version of our SEL model, in which the weights of the fully connected layers learning from the moment variables are set to zero.
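To make the SEL 2 feature construction concrete, here is a minimal sketch of the four moment features for one image. The image below is random stand-in data, and the helper name is ours; note that computing the first four moments from the histogram is equivalent to computing them from the pixel intensities directly, which is what we do here.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)   # stand-in RGB image

def channel_moments(channel):
    """Mean, standard deviation, skewness, and (non-excess) kurtosis of one channel."""
    x = channel.astype(np.float64).ravel()
    mean, std = x.mean(), x.std()
    z = (x - mean) / std
    return np.array([mean, std, (z**3).mean(), (z**4).mean()])

# Three histograms (R, G, B) for color images, a single gray one for (Fashion-)MNIST;
# the resulting 3 x 4 = 12 features feed the dense branch running next to the CNN
sel_features = np.concatenate([channel_moments(img[..., c]) for c in range(3)])
```

For CIFAR-10 this yields a 12-dimensional SEL vector per image (4 moments per RGB channel), concatenated with the flattened convolutional features before the fully connected layers.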
\begin{table}
\begin{tabular}{c c c} Dataset & Vanilla & SEL \\ \hline MNIST & 99.01\% & 99.05\% \\ Fashion & 90.57\% & 91.17\% \\ CIFAR-10 & 69.09\% & 69.49\% \\ \end{tabular}
\end{table}
Table 1: Comparison of model classification accuracy for regular CNN architecture versus SEL augmented model
Figure 7: Color histogram from the horse example image in Figure 8(a)
We report the classification performance of both methodologies on our different data sets in Table 1. We observe that the SEL features consistently improve the model's performance. Although the accuracy uplift can be modest, such a gain can sometimes be crucial in highly regulated sectors (such as financial institutions, the security industry, etc.), where the performance of a model needs to be as high as possible and is not acceptable below certain thresholds. Note that our goal in this exercise is not to reach the lowest error rate on these data sets (some literature already focuses on this objective by using fine-tuned state-of-the-art model architectures). Our aim is rather to illustrate that data augmentation via SEL is beneficial to any model, even for a fixed architecture.
We can further observe that the modest performance uplift on the MNIST data set can be explained by the already high performance of the model, but also by the limited variability of color/gray over the image, where the color of the object to detect may be quite monotonous. By comparison, the Fashion-MNIST images represent real clothes and objects in their original setting and might have multiple colors. The gradient of colors can thus be larger, and the SEL features added to the model contain more information. As we can see in Figure 8, the colors in an image of a horse and in one of an airplane can be very different. Although one of the goals of the convolutional layers is to recognize the color when analyzing the pixels in the image, having higher-level information such as moments from the color histogram helps the model eliminate obvious non-candidates more easily.
#### Application to sports data
Other applications of data-augmented machine learning lie in the field of sports. [46] compared different regression approaches to model the number of goals scored in international handball games. They concluded that the Gaussian distribution is the most appropriate to model the number of goals scored by a team and used a LASSO-penalized approach to predict future games. Using the same data, we compare the benefit of adding a SEL variable to different models predicting the number of goals scored by the home team during international handball games.
We construct a SEL variable of level 2 using the first moment of the (Gaussian) distribution of past scored goals for each team. This moment is computed as the average number of goals scored by a team and can be interpreted as a form of strength estimation. We then train a LASSO regression, a Random Forest, XGBoost and a Neural Network with and without the SEL variable. The "regular" case corresponds to the same set of features as used in [46]. The SEL case adds the team's strength estimated via this moment on top of these features. We report the RMSE for each model on the training and test sets in Table 2. As can be seen, the Random Forest with SEL features achieves the lowest error on the test set, followed by the LASSO with SEL. We notice that all models benefit from the addition of the SEL covariate.
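The level-2 strength variable described above can be sketched in a few lines of pandas. The team codes, goal counts, and column names below are toy data of our own; the one substantive point is that the estimate uses only *past* games (the `shift(1)`), so the target of the current game never leaks into its own feature.

```python
import pandas as pd

games = pd.DataFrame({
    "team":  ["GER", "FRA", "GER", "FRA", "GER"],   # team, in chronological order
    "goals": [28, 31, 30, 27, 26],                  # goals scored by that team
})

# Strength = first moment (mean) of the team's previous scores; shift(1)
# excludes the current game so the target is not used as its own feature
games["sel_strength"] = (
    games.groupby("team")["goals"]
         .transform(lambda s: s.shift(1).expanding().mean())
)
```

The first appearance of each team has no history and yields a missing value, which in practice must be imputed or the corresponding early games dropped before training.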
We further analyze the performance of the Random Forest with SEL features and the regular LASSO by extracting the feature importances in Figure 9. We can observe that the feature Rank is considered by both the Random Forest and LASSO models as the most important, and that the new SEL feature ranks directly second, showing its high influence.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Feature set & Training & Test \\ \hline LASSO & Regular & 4.70 & 4.56 \\ & SEL & 4.05 & 4.39 \\ Random Forest & Regular & 1.73 & 4.04 \\ & SEL & 1.69 & 4.00 \\ XGBoost & Regular & 0.35 & 4.52 \\ & SEL & 0.33 & 4.47 \\ Neural Net & Regular & 4.96 & 4.80 \\ & SEL & 4.89 & 4.73 \\ \hline \hline \end{tabular}
\end{table}
Table 2: RMSE on train and test sets for regular versus SEL models
Figure 8: Examples of two different categories in the CIFAR-10 database
The feature importances for the Random Forest have been extracted using the built-in module in Scikit-learn, while for the LASSO we have derived the importance by taking the absolute value of the estimated coefficients presented in Table 4 of [46].
The ranking of the important variables is consistent between the original model of [46] and our fitted Random Forest; adding a level-2 SEL variable helps reduce the error of the model, while the feature is considered highly important.
## Code availability
The code and software materials have been deposited to the GitHub page at [https://github.com/florianfelice/StatisticallyEnhancedLearning](https://github.com/florianfelice/StatisticallyEnhancedLearning).
2305.11127 | Unfaulting mechanisms of Frank loops in fluorite oxides | Miaomiao Jin, Jilang Miao, Yongfeng Zhang, Marat Khafizov, Kaustubh K. Bawane, Boopathy Kombaiah, David H. Hurley | 2023-05-18T17:18:25Z | [http://arxiv.org/abs/2305.11127v1](http://arxiv.org/abs/2305.11127v1)

# Unfaulting mechanisms of Frank loops in fluorite oxides
###### Abstract
Unfaulting of Frank loops in irradiated fluorite oxides is of significance to microstructural evolution. However, the mechanisms have not been directly observed. To this end, we utilize molecular dynamics to reveal the atomistic details of the unfaulting process of interstitial Frank loops in ThO\({}_{2}\), which involves the nucleation of single or multiple Shockley partial pairs at the loop circumference. The unfaulting is achieved via a synchronous shear of the partial pairs to remove the extrinsic stacking fault in the cation sublattice and the intrinsic stacking fault in the anion sublattice. The strong oxygen motion at the dislocation core may reduce the activation barriers of dislocation nucleation and migration. These findings provide a fundamental understanding of the transformation of faulted loops in irradiated ThO\({}_{2}\), and could be transferable to other fluorite systems.
Dislocation loop, unfaulting, fluorite structure
**Declaration:** No conflict of interest
Aggregation of radiation-induced defects can form dislocation loops, which are commonly observed under transmission electron microscopes (TEM) in irradiated crystalline specimens [1; 2]. In fluorite-structured compounds, such as \(\mathrm{UO}_{2}\), \(\mathrm{ThO}_{2}\), \(\mathrm{CeO}_{2}\), and \(\mathrm{ZrO}_{2}\), the character of the dislocation loops identified in irradiated samples depends on the irradiation conditions. In \(\mathrm{UO}_{2}\) irradiated with heavy ions, only perfect loops (\(\mathbf{b}=a_{0}/2\langle 110\rangle\), where \(a_{0}\) is the lattice parameter) are reported, while neutron irradiation leads to both perfect and faulted (\(\mathbf{b}=a_{0}/3\langle 111\rangle\)) loops. In Kr-irradiated \(\mathrm{ThO}_{2}\), both perfect and faulted loops are observed [3]. In Kr- and Xe-irradiated \(\mathrm{CeO}_{2}\), only faulted loops are confirmed [4]. In \(\mathrm{ZrO}_{2}\), faulted loops were identified under a high-voltage electron microscope [5]. These loops are recognized as interstitial loops, and their formation is believed to be mainly due to the diffusion and clustering of radiation-induced cation defects [3; 4]. In some other cases with electron irradiation, faulted loops consisting of only oxygen interstitials were also proposed in \(\mathrm{CeO}_{2}\)[6]; however, such a large local charge imbalance may not be energetically stable [7].
Elastic theory indicates that the perfect loop is more energetically favorable than the faulted loop at large sizes due to the energy penalty from the stacking fault [8], rendering faulted loops thermodynamically unstable as their size increases. As the perfect loop resulting from unfaulting is mobile and can interact with other defects (e.g., dislocation networks, grain boundaries, voids, etc.), the unfaulting process is of significance to understanding the microstructure evolution. In \(\mathrm{ZrO}_{2}\) at 200 \({}^{o}\)C, faulted loops show fast growth and instantaneously transform into perfect loops under a high-voltage electron microscope [5]. In Kr-irradiated \(\mathrm{ThO}_{2}\), the transformation from faulted loops to perfect loops is implied in the loop statistics: i) the loop size tends to saturate and ii) the loop density tends to decrease with increasing dose. Although various indirect results suggest the unfaulting, to the authors' knowledge, the unfaulting process in fluorite oxides has not yet been directly resolved in experiments or simulations.
As the microscopic details remain largely challenging to access in experiments, molecular dynamics (MD) simulations have been frequently used to examine the loop unfaulting process in various lattice structures [9; 10; 11; 12]. Based on the understanding of face-centered cubic (FCC) metals, which is pertinent to fluorite structures, there are various mechanisms to remove the stacking fault enclosed by the loop, involving moving Shockley partials (\(\mathbf{b}=a/6\langle 112\rangle\)) to sweep the fault [12]. Removal of an intrinsic stacking fault (SF) can be realized via a single
sweep of a Shockley partial, while for an extrinsic SF, double sweeps of Shockley partials are required. The Shockley partials may nucleate at the loop line or within the loop. The resultant dislocation structure depends on the loop habit plane, the stacking fault energy, and the external stress [12]. Although the understanding of FCC systems is largely established, one key issue to resolve is whether such understanding is transferable to fluorite oxides, where oxygen atoms may mediate the unfaulting process.
To pursue a mechanistic understanding, we utilize large-scale MD simulations to probe the details of loop unfaulting in ThO\({}_{2}\), which is expected to apply to other fluorite systems. Here we use ThO\({}_{2}\) as a demonstration to alleviate the concern that unpaired \(f\) electrons, as in UO\({}_{2}\) and CeO\({}_{2}\), may cause serious inaccuracy in the interatomic potential with the loop structures. The classical MD method as implemented in LAMMPS [13] is used with an embedded-atom method potential for ThO\({}_{2}\) by Cooper et al. [14]. This potential takes a fixed non-formal charge model for both Th (+2.2208) and O (-1.1104) atoms, and it reproduces mechanical properties and defect energies well as compared to first-principles calculations and experimental measurements. Periodic boundary conditions and a Nosé-Hoover thermostat [15; 16] (where applicable) are utilized. All the constructed systems are charge neutral to avoid numerical issues resulting from periodic boundary conditions.
To be consistent with neutron/ion-irradiation experiments, we only focus on loops of interstitial nature. To enable a strong driving force to initiate the loop transformation within the MD timescale, a large faulted loop (20 nm in diameter) is created, with a total of 974,685 atoms and a box size of around 40 nm \(\times\) 40 nm \(\times\) 10 nm (FIG. 1a). This loop size guarantees a higher formation energy than that of a perfect loop, and such a size is scarcely observed in ThO\({}_{2}\)[3]. In fact, the transformation may start to occur in nanometer-sized loops, as suggested in UO\({}_{2}\)[17; 18]. The loop is created by inserting three layers, i.e., O-Th-O (111) layers; such stacking has been suggested to be energetically favorable in other fluorite oxides, e.g., UO\({}_{2}\)[18; 19], CeO\({}_{2}\)[4; 19] and ZrO\({}_{2}\)[5]. The stacking of the layers is carefully controlled to ensure correct stacking (FIG. 1b). Although hexagonal-shaped loops are frequently observed in FCC metals, the loops identified in fluorite oxides commonly appear in a circular shape [5; 20], which is hence assumed in the current study. Atom visualization is accomplished with the OVITO package [21].
In the current calculations, loop stoichiometry (i.e., each inserted layer has the exact same number of atoms to enforce loop charge neutrality) is imposed to avoid numerical
issues. Such an assumption is partly supported by the MD simulations of loop formation in CeO\({}_{2}\)[19]. However, the diffusivity of oxygen interstitials is much higher than that of Th interstitials [3; 22], and the formation of an initial loop may involve the reconfiguration of small interstitial clusters [23]. Hence, the dislocation loop can be off-stoichiometric. To quantify how strongly these oxygen atoms bind to the loop, FIG. 1c shows a heatmap of the individual oxygen binding energy in a 10 nm diameter faulted loop, obtained by calculating the energy difference of the loop with respect to placing the loop-constituent oxygen atom in the far field. The core oxygen atoms are much more strongly bonded than the periphery oxygen atoms due to the local lattice distortion at the dislocation core. As strong pipe diffusion was identified in UO\({}_{2}\)[24], it is possible that under irradiation conditions, the loop tends to become oxygen-rich via trapping oxygen interstitials. This may reduce the energy barrier for loop unfaulting due to the instability of charged loops, which is implied in the electron irradiation experiments of ion-irradiated ZrO\({}_{2}\), where fast loop growth and disappearance were observed when only oxygen defects are created [7; 25].
The elastic strain field [26] is computed near the loop as shown in FIG. 2a-b for the volumetric strain (\(\epsilon_{V}\)) and shear strain (\(\epsilon_{YZ}\)), respectively. As expected, the strain is exceptionally high near the core region. The varying field of shear strain parallel to the plane indicates a possible double-lobe contrast under TEM, as similarly indicated in reference [7], with diminished shear strain near the center. To reveal the effect of the strain field on atom kinetics, system annealing at 1500 K and zero external pressure is performed. FIG. 2c demonstrates a heatmap of oxygen displacements (Th atoms are removed due to negligible displacement) after 400 ps. Large displacement of oxygen atoms mainly occurs at the loop line, consistent with the previous understanding of pipe diffusion in ThO\({}_{2}\) and UO\({}_{2}\)[24; 27]. Given the unstable state of oxygen atoms at the dislocation core, it is expected that Shockley partials are easier to nucleate at the loop circumference than within the loop.

Figure 1: (a) Faulted dislocation loop configuration in ThO\({}_{2}\). (b) Enlarged local atom arrangement, showing the local stacking of Th and O layers. (c) Binding energies of individual oxygen atoms in the loop (loop Th atoms are not shown).
The unfaulting process can in principle be initiated by shear stress or thermal vibration [9; 10]. Hence, two methods have been attempted to initiate the transformation, i.e., shear and high temperature. The shear is achieved by static shear of the simulation box, via iterative i) rigid 0.1 Å displacement of the top layer along the \(y\)-axis with the bottom layer fixed, and ii) subsequent energy minimization. A sudden unfaulting due to mechanical instability occurs when the simulation box is sheared to 8.5 % as a result of lattice translation, without any indication of Shockley partial nucleation (see Supplementary video 1).

Figure 2: Distribution of volumetric strain \(\epsilon_{V}\) (a) and shear strain \(\epsilon_{YZ}\) (b). (c) shows the oxygen displacement \(|\Delta r|\) after 400 ps relaxation at 1500 K.
This phenomenon is consistent with previous simulations of loop unfaulting in FCC Cu [9]. It suggests that thermal activation is necessary to initiate the unfaulting process. Hence, the subsequent simulations are run at 1,500 K at zero external pressure to capture the unfaulting process. For a reasonable computational cost, the initial configuration is slightly sheared (2.68 %) along the \(y\)-axis, as this effectively reduces the energy barrier for the nucleation of Shockley partials at a given temperature. For reference, high-temperature annealing up to 2,500 K of the non-sheared structure does not exhibit the unfaulting process on the nanosecond timescale. FIG. 3 demonstrates the complete unfaulting process within a span of 55 ps: a Shockley partial is nucleated from the loop line and quickly sweeps across the faulted loop plane, which ultimately converts the original loop to a perfect one (see Supplementary video 2). In another case with a 5.15 % sheared simulation cell, a more complex unfaulting process is revealed, which involves the nucleation of multiple Shockley partials along the loop line (FIG. 4). The propagation of the arc segments contributes to the unfaulting simultaneously, leading to reactions between Shockley partials within the loop plane. Since the nucleated Shockley partials are not necessarily of the same type within the (111) plane, the reaction can cause defect formation from local lattice incompatibility (see Supplementary video 3). At the end of the unfaulting, the defects gradually disappear due to atom diffusion. In both unfaulting patterns, by analyzing the faults in both the Th and O layers, two common points are identified: i) the extrinsic SF is removed in a single sweep of Shockley partials; and ii) both the Th and O sublattice stacking faults are removed in a synchronous manner.
We note some important observations of the loop unfaulting behavior in this system. First, it is feasible to have both single and multiple partial nucleation. The large-sized loop utilized in the current work provides a strong driving force for unfaulting. Such nucleation behavior is ultimately determined by the local stress and temperature-activated atomic events. It has also been reported in the unfaulting of both vacancy and interstitial faulted loops in Cu, Al, and Ni [2; 10; 12]. Second, unlike the commonly proposed mechanisms for loop unfaulting where a Shockley partial loop can be nucleated within the loop to explain experimentally observed loop contrast [10; 12], only Shockley partial nucleation at the loop perimeter is observed in this system; this is presumably because the strong strain field and high oxygen mobility near the dislocation core (FIG. 1) facilitate partial nucleation upon thermal fluctuation. Third, we have not observed the two-stage unfaulting reported in metals [2; 9; 12], where a first set of Shockley partials sweeps the extrinsic stacking fault, converting it into an intrinsic stacking fault, and a second set removes the remaining fault. To explain the absence of any intrinsic stacking fault ribbon, it is only possible that a joint Shockley partial pair sweeps the fault simultaneously, with one shearing the atoms above the fault and the other the atoms below. Hence, the total reaction is written as,
\[1/6[\bar{1}2\bar{1}]+1/6[2\bar{1}\bar{1}]+1/3[111]\to 1/2[110]\] \[A\delta+B\delta+D\delta\to DC\] (Thompson tetrahedron notation)
Herein, \(A\delta+B\delta\rightarrow\delta C(1/6[11\bar{2}])\), in the direction of applied shear. The obtuse angle between the two partials suggests attractive interaction, which can justify the close range of two joint partials. Based on energetic estimation proportional to \(\mathbf{b}^{2}\), \(|A\delta|^{2}+|B\delta|^{2}>|\delta C|^{2}\), but overall \(|A\delta|^{2}+|B\delta|^{2}+|D\delta|^{2}>|DC|^{2}\), leading to energy reduction.
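The vector algebra behind this reaction is easy to verify numerically. The following sketch (lattice parameter set to 1, since it cancels in the comparisons) checks the closure of the reaction, the \(\mathbf{b}^{2}\) energy bookkeeping of Frank's rule, and the obtuse angle between the two Shockley partials:

```python
import numpy as np

Ad = np.array([-1,  2, -1]) / 6.0   # Shockley partial A-delta = 1/6[-1 2 -1]
Bd = np.array([ 2, -1, -1]) / 6.0   # Shockley partial B-delta = 1/6[2 -1 -1]
Dd = np.array([ 1,  1,  1]) / 3.0   # Frank partial D-delta   = 1/3[1 1 1]
DC = np.array([ 1,  1,  0]) / 2.0   # perfect dislocation DC  = 1/2[1 1 0]

b2 = lambda b: float(b @ b)                      # energy ~ |b|^2 (Frank's rule)

dC = Ad + Bd                                     # combined shear, delta-C = 1/6[1 1 -2]
cos_AB = (Ad @ Bd) / np.sqrt(b2(Ad) * b2(Bd))    # negative -> obtuse, attractive pair
```

Running this confirms that \(A\delta+B\delta+D\delta\) closes exactly onto \(DC\), that each inequality \(|A\delta|^{2}+|B\delta|^{2}>|\delta C|^{2}\) and \(|A\delta|^{2}+|B\delta|^{2}+|D\delta|^{2}>|DC|^{2}\) holds, and that \(\cos(A\delta,B\delta)<0\).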
To explain the concurrent removal of faults in both the Th and O layers given the sweep of a joint Shockley partial pair, the details of the layer stacking are presented in the following. The layer stacking sequence along [111] is depicted in FIG. 5a for the perfect fluorite structure, where cation, anion, and octahedral site layers are interleaved to form an FCC stacking sequence [18]. We denote the sequence of cations ...abc..., the anions ...ABC..., and the octahedral sites ...\(\alpha\beta\gamma\)..., and the combined sequence is ...aB\(\gamma\)AbC\(\alpha\)BcA\(\beta\)C.... To construct the faulted loop, three layers (O-Th-O) are inserted into the octahedral site layer, forming ...aB\(\gamma\)AbC_BaC_\(\alpha\)BcA\(\beta\)C... (FIG. 5b), where the italic font indicates the inserted fault layers. Hence, the interstitial loop means an extrinsic stacking fault for the cation layers and an intrinsic stacking fault for the anion layers in the corresponding sublattices. Using the Thompson tetrahedron notation, on the (111) plane, only the following Shockley partials can be accommodated, i.e., \(A\delta\), \(\delta A\), \(B\delta\), \(\delta B\), \(C\delta\), and \(\delta C\). To align with the simulation observations, with respect to the fault cation layer \(a\), the layers above and below \(a\) must be simultaneously sheared by the Shockley partials \(1/6[\bar{1}2\bar{1}]\) (\(A\delta\)) and \(-1/6[2\bar{1}\bar{1}]\) (\(-B\delta\), where the sign comes from the inverse view for the layers above and below), respectively, resulting in ...bC\(\alpha\)BcACaBAbC\(\alpha\)B... (FIG. 5c). Therefore, with respect to the layer \(a\), the relative displacement of the adjacent layers (both cation and anion layers) should be around \(a_{0}/\sqrt{6}\), and it diminishes as the distance increases; this is confirmed by the atom displacement field after unfaulting (FIG. 5d shows the displacement along the \(y\)-axis).

Figure 3: Loop unfaulting process with single Shockley partial nucleation site. Top row: dislocation types, middle row: Th stacking fault, bottom row: O stacking fault.

Figure 4: Loop unfaulting process with multiple Shockley partial nucleation sites. Top row: dislocation types, middle row: Th stacking fault, bottom row: O stacking fault.
The O layers are generally commensurate with the Th layers in displacement; nevertheless, large displacements of O atoms close to the whole loop plane are also observed (not shown in FIG. 5d). This is attributed to the pipe diffusion mechanism as the Shockley partial sweeps across the plane. Similar to dislocation nucleation, the high O diffusivity may reduce the energy barrier of dislocation migration, as it can induce local off-stoichiometry, causing a metastable state of the dislocation segments. After loop unfaulting, the perfect loop shows high mobility along the Burgers vector direction (FIG. 5e) given the applied shear strain. Under irradiation conditions, these perfect loops can react with other extended defects to become dislocation networks. Importantly, defective cation and anion atoms are found to exist only at the dislocation core of the perfect dislocation loops. Hence, unfaulting can be of significance for controlling the total defect accumulation in ThO\({}_{2}\). Similar to FCC metals, a thermodynamic critical size of faulted loops should exist in fluorite oxides, on the order of a few nanometers based on the formation energy of loops [3]. Such a critical size is generally smaller than that in metals due to the high stacking fault energy of fluorite oxides (e.g., 1.76 J/m\({}^{2}\) in ThO\({}_{2}\)[3] and 1.808 J/m\({}^{2}\) in UO\({}_{2}\)[18], versus 78.5 mJ/m\({}^{2}\) in Cu [28]). However, due to the nucleation barrier of Shockley partials and interactions with other radiation-induced defects, this thermodynamic critical size may not be particularly meaningful. For example, in heavy-ion-irradiated CeO\({}_{2}\), no perfect loop was observed for loops up to 20-30 nm in diameter, and in ThO\({}_{2}\) and UO\({}_{2}\) under various irradiation types, faulted loops are frequently observed beyond a few nanometers in size [3; 20; 29; 30]. Finally, it is worth noting that the unfaulting, specifically the partial nucleation, can be strongly affected by the radiation condition.
For example, only faulted loops are identified in proton-irradiated ThO\({}_{2}\)[31] while both types of loops are found in Kr-irradiated ThO\({}_{2}\)[3]; this contrast could suggest that pronounced damage cascade disturbance assists in the partial nucleation, which warrants further investigation.
In summary, with large-size stoichiometric circular Frank loops, we have identified two unfaulting patterns, involving single and multiple Shockley partial pairs under combined shear strain and high temperature, to explain Frank loop unfaulting in fluorite ThO\({}_{2}\). Each Shockley pair moves jointly in a synchronous manner to transform the stacking faults into perfect stacking. The cation and anion layers are commensurate during loop unfaulting and perfect loop migration. The high mobility of anions along the dislocation line likely reduces the activation barriers of dislocation activities, which remains to be confirmed.
###### Acknowledgements.
This work is supported by the Center for Thermal Energy Transport under Irradiation, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, United States, Office of Basic Energy Sciences, and The Pennsylvania State University.
2306.07215 | Efficient and Robust Quantization-aware Training via Adaptive Coreset Selection | Xijie Huang, Zechun Liu, Shih-Yang Liu, Kwang-Ting Cheng | 2023-06-12T16:20:36Z | [http://arxiv.org/abs/2306.07215v3](http://arxiv.org/abs/2306.07215v3)

# Efficient Quantization-aware Training with Adaptive Coreset Selection
###### Abstract
The expanding model size and computation of deep neural networks (DNNs) have increased the demand for efficient model deployment methods. Quantization-aware training (QAT) is a representative model compression method to leverage redundancy in weights and activations. However, most existing QAT methods require end-to-end training on the entire dataset, which suffers from long training time and high energy costs. Coreset selection, aiming to improve data efficiency utilizing the redundancy of training data, has also been widely used for efficient training. In this work, we propose a new angle through the coreset selection to improve the training efficiency of quantization-aware training. Based on the characteristics of QAT, we propose two metrics: error vector score and disagreement score, to quantify the importance of each sample during training. Guided by these two metrics of importance, we proposed a quantization-aware adaptive coreset selection (ACS) method to select the data for the current training epoch. We evaluate our method on various networks (ResNet-18, MobileNetV2), datasets(CIFAR-100, ImageNet-1K), and under different quantization settings. Compared with previous coreset selection methods, our method significantly improves QAT performance with different dataset fractions. Our method can achieve an accuracy of 68.39% of 4-bit quantized ResNet-18 on the ImageNet-1K dataset with only a 10% subset, which has an absolute gain of 4.24% compared to the baseline.1
Footnote 1: [https://github.com/HuangOwen/QAT-ACS](https://github.com/HuangOwen/QAT-ACS)
## 1 Introduction
Deep learning models have achieved remarkable success across various applications, including computer vision [1; 2; 3] and natural language processing [4; 5; 6]. The outstanding performance of these models can be attributed to their large number of parameters and the availability of large-scale training datasets. For example, Swin-L [7] with input size \(224^{2}\) has 197M parameters and 34.5G FLOPs. The language model GPT-3 [8] boasts an astonishing 175 billion parameters, and its pre-training is performed on a dataset with more than 410 billion tokens. High latency, large model size, and large training-set scale have become the most significant challenges for training and deploying deep learning models, especially on edge devices with computation and storage constraints.
To address these challenges and enable the effective deployment of deep learning models, many model compression methods have been proposed recently to enhance the computational efficiency of deep learning models. These techniques include quantization [9; 10; 11; 12], pruning [13; 14; 15; 16], knowledge distillation [17; 18; 19], and compact network design [20; 21]. Among them, quantization has been the most widely adopted because of its promising hardware affinity across different architectures [22; 23; 24]. To minimize the performance gap between the quantized and full-precision models, quantization-aware training (QAT) is often utilized. Although QAT improves the inference efficiency of the target model, it is computation-intensive and requires more time than full-precision training. Training efficiency is therefore important when QAT is applied to different networks and datasets.
Coreset selection techniques aim to mitigate the high training cost and improve data efficiency. Specifically, coreset selection methods leverage the redundancy in training datasets and select the most informative data to build a coreset for training. Previous methods select data based on features [25; 26], error [27], gradients [28], and decision boundaries [29], and can achieve notable efficiency improvements for full-precision training. Motivated by this success, we aim to apply coreset selection to QAT, improving its training efficiency by selecting informative samples and pruning redundant ones. However, existing coreset selection methods are not designed for quantization, and their severe computation overhead during selection may hinder their application to QAT. A coreset selection method tailored for QAT is therefore needed for efficient training.
In this paper, we start by analyzing the impact of removing one specific sample from the training set and identify the error vector score (EVS) as a good theoretical approximation of the importance of each sample. As knowledge distillation is involved during QAT, we also propose the disagreement score (DS), which measures the prediction gap between the quantized and full-precision models. Based on these two metrics, we propose an adaptive coreset selection (ACS) method to select training samples that fit the current training objective, considering the current epoch, EVS, and DS. An overview of the proposed coreset selection method tailored for QAT is shown in Fig. 1.
We demonstrate the superiority of our ACS in terms of effectiveness and efficiency on different networks (ResNet-18, MobileNetV2), datasets (CIFAR-100, ImageNet-1K), and quantization settings. For 2-bit weight-only quantization of MobileNetV2 on the CIFAR-100 dataset, QAT based on our ACS can achieve a mean accuracy of 67.19% with only 50% of the training data used per epoch. For 4-bit quantization of ResNet-18 on the ImageNet-1K dataset, QAT based on our ACS achieves a top-1 accuracy of 68.39%, compared to 64.15% for random selection, when only 10% of the training data is selected per epoch. Our proposed ACS thus helps accelerate QAT without substantial accuracy degradation. Our contributions can be summarized as follows:
* We are the first to investigate the training data redundancy of Quantization-aware training from the perspective of coreset selection and find that importance varies among samples.
* We propose two metrics, error vector score (EVS) and disagreement score (DS), to quantify the importance of each sample to the QAT based on theoretical analysis of loss gradient.
* We propose a quantization-aware Adaptive Coreset Selection (ACS) method, which adaptively selects informative samples that fit current training stages based on EVS and DS.
* We verify our method ACS on different network architectures, datasets, and quantization settings. Compared with previous coreset selection methods, our ACS for QAT can significantly improve training efficiency with lower performance degradation.
Figure 1: An overview of the Adaptive Coreset Selection (ACS) method for quantization-aware training.
Related Work
**Quantization.** Quantization methods are a powerful tool to improve the efficiency of model inference. The core idea is to replace full-precision weights and activations with lower-precision representations. Based on the characteristics of the quantization intervals, these methods can be categorized into uniform and non-uniform quantization. While uniform quantization [9; 10; 11], with its uniform intervals, is more hardware-friendly and efficient, non-uniform quantization [30; 31; 32] can, thanks to its more flexible representation, minimize the quantization error and achieve better performance than uniform schemes. In addition, quantization methods can be classified as quantization-aware training (QAT) [9; 11; 12] or post-training quantization (PTQ) [33; 34; 35], based on whether the model is retrained with quantized weights and activations or a pre-trained model is quantized directly without extensive training. QAT better retains performance and works with extremely low-precision quantization such as 2-bit, while PTQ can only achieve 8-bit and 6-bit quantization. However, QAT is more computation-intensive due to the extra training. In this work, we focus on the efficiency of QAT methods.
**Coreset Selection.** Coreset selection targets improving data efficiency by identifying the informative training samples to retain and pruning the redundant ones. The essential part of coreset selection methods is quantifying each sample's importance. Previous works can mainly be classified into the following categories. **Geometry-based methods:** these methods assume that samples close to each other in the feature space influence training similarly, so such redundant data points should be removed to improve efficiency. Representative works include Contextual Diversity (CD) [25], k-Center-Greedy [26], Sorscher et al. [36], Moderate Coreset [37], and Herding [38]. **Decision-boundary-based methods:** these methods select samples distributed near the decision boundary, which are difficult for the model to separate. Works in this category include Adversarial DeepFool [39] and Contrastive Active Learning (CAL) [29]. **Gradient/error-based methods:** these methods assume training samples are more informative if they contribute more to the error or loss during training; the contribution to the loss can be approximated by the gradient on the sample, and forgetting events [40] are also an indicator of error during training. This category includes EL2N [27], importance sampling [41; 42], CRAIG [43], GradMatch [28], and AdaCore [44]. **Optimization-based methods:** these methods formulate coreset selection as a bilevel optimization problem, where the outer objective is the selection of samples and the inner objective is the optimization of model parameters. Representative works include Borsos et al. [45], Glister [46], Zhou et al. [47], and Retrieve [48].
These methods perform well in certain scenarios (specific subset fractions, specific outlier distributions, etc.). However, none of them is designed for quantization-aware training or considers its requirements. We will further show that the majority of these methods cannot outperform random sampling during QAT. Moreover, these methods either require early training of the target model for several epochs [40; 27] or time-consuming search [26] and optimization [46; 48], which leads to heavy computation overhead when applied to QAT.
## 3 Importance of Each Sample in QAT
In this section, we will first introduce quantization-aware training (QAT) and derive the gradient of real-value weights under cross-entropy and SGD in Sec. 3.1. We then analyze the change of gradient when a specific sample is removed from a training batch to investigate the importance of each training sample in Sec. 3.2. We propose an approximation of this gradient change considering the prediction error without introducing any memory overhead. We further prove that less training data is required when knowledge distillation is applied to QAT in Sec. 3.3. Another importance metric based on the prediction gap between the quantized student and full-precision teacher model is introduced.
### Preliminaries of QAT
During QAT, the real-value data \(x^{r}\) is converted to \(b\)-bit quantized representation \(x^{q}=q_{b}(x^{r})\) by the quantizer \(q_{b}\). Given the scale factor \(s\) of the quantizer, the number of positive quantization levels \(Q_{P}\), and the number of negative quantization levels \(Q_{N}\), we can have the quantizer \(q_{b}\) as:
\[x^{q}=q_{b}(x^{r})=s\times\lfloor\text{clip}(x^{r}/s,-Q_{N},Q_{P})\rceil, \tag{1}\]
where \(\lfloor\cdot\rceil\) is the rounding function that rounds the input to the nearest integer, and \(\text{clip}(x,r_{\text{low}},r_{\text{high}})\) returns \(x\) with all values below \(r_{\text{low}}\) set to \(r_{\text{low}}\) and all values above \(r_{\text{high}}\) set to \(r_{\text{high}}\). For unsigned quantization, \(Q_{N}=0,Q_{P}=2^{b}-1\); for signed data, \(Q_{N}=2^{b-1},Q_{P}=2^{b-1}-1\). To solve the problem that the gradient cannot back-propagate through Equation 1 during QAT, the straight-through estimator (STE) [49] is utilized to approximate the gradient. In the back-propagation of QAT with STE, the gradient of the loss \(\mathcal{L}\) with respect to the real-value data \(x^{r}\) is set to be:
\[\frac{\partial\mathcal{L}}{\partial x^{r}}=\frac{\partial\mathcal{L}}{ \partial x^{q}}\cdot\textbf{1}_{-Q_{N}\leq x^{r}/s\leq Q_{P}}, \tag{2}\]
where **1** is the indicator function that outputs 1 within the quantization limit and 0 otherwise.
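For concreteness, a minimal plain-Python sketch of the quantizer in Eq. 1 and the STE mask in Eq. 2 follows (scalar inputs with an illustrative scale \(s\); a real QAT implementation would operate on tensors in a framework such as PyTorch):

```python
def quantize(x_r, s, q_n, q_p):
    """Eq. 1: scale, clip to [-q_n, q_p], round to nearest level, rescale."""
    clipped = min(max(x_r / s, -q_n), q_p)
    return s * round(clipped)

def ste_mask(x_r, s, q_n, q_p):
    """Eq. 2: indicator that lets gradients pass only inside the clip range."""
    return 1.0 if -q_n <= x_r / s <= q_p else 0.0

# signed 4-bit quantization: Q_N = 2^(b-1) = 8, Q_P = 2^(b-1) - 1 = 7
b, s = 4, 0.1
q_n, q_p = 2 ** (b - 1), 2 ** (b - 1) - 1
print(quantize(0.27, s, q_n, q_p))  # ~0.3 (2.7 rounds to level 3)
print(quantize(5.00, s, q_n, q_p))  # ~0.7 (level 50 clipped to Q_P = 7)
print(ste_mask(5.00, s, q_n, q_p))  # 0.0: outside the range, gradient is zeroed
```

The second call illustrates why the STE mask matters: once a value falls outside the clipping range, the quantized output saturates and its gradient is suppressed.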
The training set of QAT is denoted as \(\mathcal{T}=\{(x_{i},y_{i})\}_{i=1}^{N}\), where the inputs \(x_{i}\in\mathbb{R}^{d}\) are vectors of length \(d\) and \(y_{i}\in\{0,1\}^{M}\) are one-hot vectors encoding labels. The neural network to be trained is denoted as \(f(w^{r},x)\), with real-value weights \(w^{r}\). We use the cross-entropy loss \(\mathcal{L}(\hat{p},y)=-\sum_{m=1}^{M}y^{(m)}\log p^{(m)}\) in QAT, and \(p(w^{q},x)\) is the probability vector output by the quantized neural network \(f(w^{q},x)\). Stochastic gradient descent (SGD) is used for the optimization. Suppose the real-value weights at iteration \(t\) are \(w^{r}_{t}\) and the batch of input samples is \(\mathcal{T}_{t-1}\subseteq\mathcal{T}\); the weights are updated following
\[w^{r}_{t}=w^{r}_{t-1}-\eta\sum_{(x,y)\in\mathcal{T}_{t-1}}g_{t-1}(x,y), \tag{3}\]
where \(\eta\) denotes the learning rate and \(g_{t-1}(x,y)=\nabla\mathcal{L}_{t-1}(p(w^{r}_{t-1},x),y)\) is the gradient of cross entropy loss.
### Error Vector Score
To look into the importance of each sample \(\{(x_{i},y_{i})\}\), we measure the difference of the expected magnitude of the loss vector on the training set \(\mathcal{T}\) and another training set which only removes one specific sample \(\mathcal{T}^{\prime}=\mathcal{T}\setminus\{(x_{i},y_{i})\}\). For simplicity, we approximate all the training dynamics in continuous time. Based on the chain rule and STE of quantization, we have the change of loss \(\mathcal{L}\) at time \(t\) on sample \((x,y)\) of batch \(\mathcal{T}\) as:
\[\frac{d\mathcal{L}}{dt}\bigg{|}_{(x,y),\mathcal{T}_{t}}=g_{t}(x,y)\frac{dw^{q }_{t}}{dt}=g_{t}(x,y)\frac{dw^{r}_{t}}{dt}. \tag{4}\]
According to the discrete-time dynamic of real-value weights in Eq. 3, we have \(\frac{dw^{q}_{t}}{dt}\approx w^{r}_{t}-w^{r}_{t-1}=-\eta\sum_{(x,y)\in\mathcal{ T}_{t-1}}g_{t-1}(x,y)\). To measure the contribution of a specific sample \((x_{i},y_{i})\), we measure the change of loss with and without the sample. For a given data batch \(\mathcal{T}\), if the sample \((x_{i},y_{i})\notin\mathcal{T}\), we can ignore the change in \(\frac{d\mathcal{L}}{dt}|_{(x,y),\mathcal{T}_{t}}\). For any sample \((x_{j},y_{j})\in\mathcal{T},j\neq i\) in the same batch, the importance \(\mathcal{I}(x_{i},y_{i})\) is measured as:
\[\mathcal{I}(x_{i},y_{i})=\left\|\left.\frac{d\mathcal{L}}{dt}\right|_{(x_{j},y _{j}),\mathcal{T}}-\left.\frac{d\mathcal{L}}{dt}\right|_{(x_{j},y_{j}), \mathcal{T}^{\prime}}\right\|. \tag{5}\]
According to the chain rule, we have
\[\frac{d\mathcal{L}}{dt}\bigg{|}_{(x_{j},y_{j}),\mathcal{T}}=\frac{d\mathcal{L }(p(w^{q}_{t},x_{j}),y_{j})}{dw^{q}_{t}}\frac{dw^{q}_{t}}{dw^{r}_{t}}\frac{dw^ {r}_{t}}{dt} \tag{6}\]
According to the characteristics of STE shown in Eq. 2, \(\frac{dw^{q}_{t}}{dw^{r}_{t}}=1\) holds for all input within the clipping range. Following the updating rule of weights in Eq. 3, the gradient \(\frac{dw^{r}_{t}}{dt}=-\eta\sum_{(x^{*},y^{*})\in\mathcal{T}}g_{t-1}(x^{*},y^{*})\). The only difference between the training set \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) is the existence of sample \((x_{i},y_{i})\). Thus, we have
\[\mathcal{I}(x_{i},y_{i})=\left\|\left.\frac{d\mathcal{L}}{dw^{q}_{t}}(\left. \frac{dw^{r}_{t}}{dt}\right|_{(x_{j},y_{j}),\mathcal{T}}-\left.\frac{dw^{r}_{ t}}{dt}\right|_{(x_{j},y_{j}),\mathcal{T}^{\prime}})\right\|=\eta\left\|\frac{d \mathcal{L}}{dw^{q}_{t}}\cdot g_{t-1}(x_{i},y_{i})\right\|. \tag{7}\]
We write \(\frac{d\mathcal{L}}{dw_{t}^{q}}\) as a shorthand for \(\frac{d\mathcal{L}(p(w_{t}^{q},x_{j}),y_{j})}{dw_{t}^{q}}\), which depends only on the current training sample \((x_{j},y_{j})\) and not on the sample \((x_{i},y_{i})\) removed from batch \(\mathcal{T}\). Since the learning rate \(\eta\) and the gradient of the loss w.r.t. the quantized weights \(\frac{d\mathcal{L}}{dw_{t}^{q}}\) are sample-agnostic, the importance of the sample \((x_{i},y_{i})\) in batch \(\mathcal{T}\) is determined only by the gradient norm of the cross-entropy loss on this sample, \(||g_{t-1}(x_{i},y_{i})||\). Examples with a larger expected gradient norm have more influence on the supervised training of the other data, which means they are important for QAT and should be included in the coreset. We can select data with high importance by sorting by the gradient norm, as is also done in previous work [27]. However, storing the loss gradient for comparison requires extra memory, and the gradients are hard to transfer between network architectures. We instead approximate the gradient norm with the norm of the error vector, defined as follows.
**Definition 1** (Error Vector Score): _The error vector score of a training sample \((x,y)\) at iteration \(t\) is defined to be \(d_{\text{EVS}}=\|p(w_{t}^{q},x)-y\|\)._
For any input \(x\in\mathbb{R}^{d}\), the gradient norm \(||g_{t}(x,y)||\) depends on the random minibatch sequence and random initialization. We take its expectation over both to obtain the expected gradient norm \(\mathbb{E}||g_{t}(x,y)||\), which can be written as
\[\mathbb{E}\left\|g_{t}(x,y)\right\|=\mathbb{E}\left\|\sum_{m=1}^{M}\frac{d \mathcal{L}_{t}(p(w_{t}^{q},x),y)^{T}}{df_{t}^{(m)}}\frac{df_{t}^{(m)}}{dw_{t }^{q}}\right\|, \tag{8}\]
where \(\frac{df_{t}^{(m)}}{dw_{t}^{q}}\) denotes the gradient of the \(m\)-th logit with respect to the weights. Since we use cross-entropy as the loss function, the gradient of the loss \(\mathcal{L}\) on the \(m\)-th logit output \(f_{t}^{(m)}\) follows \(\frac{d\mathcal{L}_{t}(p(w_{t}^{q},x),y)^{T}}{df_{t}^{(m)}}=p(w_{t}^{q},x)^{(m)}-y^{(m)}\). Previous works [50; 51] empirically observe that \(\frac{df_{t}^{(m)}}{dw_{t}^{q}}\) is similar across different logits \(m\in M\) and training samples \((x,y)\). Thus, the gradient norm can be approximated by the error vector score \(d_{\text{EVS}}\) during important-sample selection. Unlike the previous method [27] that also leverages error metrics, no early training is required: we only use the current quantized model's prediction \(p(w_{t}^{q},x)\) during QAT.
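As a minimal sketch, \(d_{\text{EVS}}\) needs only a forward pass: the softmax prediction and the one-hot label (plain Python with hypothetical logit values; in practice the logits come from the quantized model):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def error_vector_score(logits, label):
    """d_EVS = ||p(w_q, x) - y||_2, with y the one-hot vector for `label`."""
    p = softmax(logits)
    return math.sqrt(sum((p_m - (1.0 if m == label else 0.0)) ** 2
                         for m, p_m in enumerate(p)))

# a confidently correct sample scores near 0; an uncertain one scores high
print(error_vector_score([8.0, 0.0, 0.0], 0))  # close to 0
print(error_vector_score([0.0, 0.0, 0.0], 0))  # ~0.816 for a uniform prediction
```

Sorting the training set by this score and keeping the highest-scoring samples implements the hard-label side of the selection described above.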
### Disagreement Score and Knowledge Distillation
Intrinsically, a quantized classification network should learn a mapping \(f\) from an input sample \(x\) to the output logits \(f(w,x)\) similar to that of a full-precision network, and the gap between the quantized prediction \(p(w^{q},x)\) of the student model and the real-value prediction \(p_{\mathbf{T}}(w^{r},x)\) of the teacher model \(\mathbf{T}\) needs to be minimized. Based on this insight, knowledge distillation (KD) is widely used during QAT with a full-precision model as the teacher, as in previous works [52; 53; 54; 55]. The loss function is designed to enforce the similarity between the output distributions of the full-precision teacher and the quantized student model:
\[\mathcal{L}_{KD}=-\frac{1}{N}\sum_{m}^{M}\sum_{i=1}^{N}p_{\mathbf{T}}^{(m)}(w ^{r},x_{i})\log(p^{(m)}(w^{q},x_{i})) \tag{9}\]
where the KD loss is defined as the cross-entropy between the output distributions \(p_{\mathbf{T}}\) of a full-precision teacher and \(p\) of a quantized student on the same input \(x\) but different weight representations \(w^{r}\) and \(w^{q}\). \(x_{i}\) is one of the input samples from the training set, and \(M\) and \(N\) denote the number of classes and the total number of training samples, respectively.
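As a sketch, Eq. 9 can be computed directly from the two models' output distributions (plain Python; the probability vectors below are illustrative stand-ins for the teacher's \(p_{\mathbf{T}}\) and the student's \(p\)):

```python
import math

def kd_loss(teacher_probs, student_probs, eps=1e-12):
    """Eq. 9: mean over samples of the cross-entropy between the
    full-precision teacher's and quantized student's distributions."""
    n = len(teacher_probs)
    total = 0.0
    for p_t, p_s in zip(teacher_probs, student_probs):
        total -= sum(pt * math.log(ps + eps) for pt, ps in zip(p_t, p_s))
    return total / n

# one sample, two classes: confident teacher vs. maximally uncertain student
print(kd_loss([[1.0, 0.0]], [[0.5, 0.5]]))  # -log(0.5) ~ 0.693
```

The `eps` term is a standard numerical guard against `log(0)`; it is an implementation detail, not part of Eq. 9.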
Note that this process can be regarded as the distribution calibration for the student network and one-hot label is not involved during QAT. Since the loss function used for knowledge distillation is still cross-entropy, and we still assume we use SGD for the optimization, most conclusions in Sec. 3.2 still hold by replacing the one-hot ground truth label \(y\) with full-precision teacher's prediction \(p_{\mathbf{T}}(w_{t}^{r},x)\). Thus, we propose the disagreement score as follows.
**Definition 2** (Disagreement Score): _The disagreement score of a training sample \((x,y)\) at iteration \(t\) is defined to be \(d_{\text{DS}}=\|p(w_{t}^{q},x)-p_{\mathbf{T}}(w_{t}^{r},x)\|_{2}\)._
The core difference between the error vector score \(d_{\text{EVS}}\) and the disagreement score \(d_{\text{DS}}\) is the target label: while \(d_{\text{EVS}}\) uses one-hot hard labels, \(d_{\text{DS}}\) uses distilled soft labels. We empirically notice that the amount of data needed for training is reduced when knowledge distillation is applied, which is helpful for our coreset selection at small data fractions. We further demonstrate the advantage of soft labels in terms of training-data requirements using Vapnik-Chervonenkis (VC) theory [56], which bounds the classification error \(R(f_{s})\) of a classifier \(f_{s}\in\mathcal{F}_{s}\) as
\[R(f_{s})-R(f)\leq O\left(\frac{|\mathcal{F}_{s}|_{\mathrm{C}}}{n^{\alpha_{s}}}\right)+\varepsilon_{s}, \tag{10}\]
where \(O(\cdot)\) denotes the asymptotic approximation and \(\varepsilon_{s}\) is the approximation error of \(\mathcal{F}_{s}\) with respect to \(\mathcal{F}\). \(f\in\mathcal{F}\) denotes the real target function. \(|\cdot|_{\mathrm{C}}\) is the VC-Dimension of the function class measuring its capacity. \(n\) is the number of total training data. \(\frac{1}{2}\leq\alpha_{s}\leq 1\) is an indicator measuring the difficulty of the problems. For non-separable and difficult problems, \(\alpha_{s}=\frac{1}{2}\), which means the classifier learns at a slow rate of \(O(n^{-\frac{1}{2}})\). While for separable and easy problems, \(\alpha_{s}=1\), indicating the classifier learns at a fast rate. In our setting, if the quantized model \(f_{q}\in\mathcal{F}_{q}\) directly learns from the hard labels, the difficulty of the problem is high, and we assume \(\alpha_{q}=\frac{1}{2}\), we have
\[R(f_{q})-R(f)\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{\sqrt{n}}\right)+\varepsilon_{q}, \tag{11}\]
where \(\varepsilon_{q}\) is the approximation error of the quantized model. However, if we first train the real-value full-precision teacher model \(f_{r}\in\mathcal{F}_{r}\) and then utilize knowledge distillation to learn the representation from the teacher, the difficulty of the learning becomes easier, and we assume \(\alpha_{r}=1\), we have
\[R(f_{r})-R(f)\leq O\left(\frac{|\mathcal{F}_{r}|_{\mathrm{C}}}{n}\right)+\varepsilon_{r},\qquad R(f_{q})-R(f_{r})\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{qr}, \tag{12}\]
where the \(\varepsilon_{r},\varepsilon_{qr}\) denotes the approximation error of the \(\mathcal{F}_{r}\) with respect to \(\mathcal{F}\) and approximation error of the \(\mathcal{F}_{q}\) with respect to \(\mathcal{F}_{r}\), respectively. Compared to the direct learning quantized model from the hard label shown in Eq. 11, the knowledge distillation with real-value teacher \(f_{r}\) yields the classification error as follows:
\[R(f_{q})-R(f)\leq O\left(\frac{|\mathcal{F}_{r}|_{\mathrm{C}}}{n}\right)+\varepsilon_{r}+O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{qr}\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}+|\mathcal{F}_{r}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{r}+\varepsilon_{qr}. \tag{13}\]
Following previous studies on knowledge distillation [57; 58], soft labels contain more information than hard labels for each sample. Thus, we have \(\varepsilon_{r}+\varepsilon_{qr}\leq\varepsilon_{q}\) and \(O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}+|\mathcal{F}_{r}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{\sqrt{n}}\right)\). Combining these two inequalities, we have the inequality

\[O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}+|\mathcal{F}_{r}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{r}+\varepsilon_{qr}\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{\sqrt{n}}\right)+\varepsilon_{q}, \tag{14}\]
which means when the number of training samples \(n\) is the same, the upper bound of classification error based on the soft label is lower. When we want to achieve the same upper bound of classification error \(R(f_{q})-R(f)\) using these two techniques, learning from soft labels requires fewer data. This is the core reason we use knowledge distillation and disagreement score \(d_{\text{DS}}\) to select the coreset.
## 4 Adaptive Coreset Selection for QAT
In Sec. 3 we proposed using \(d_{\text{EVS}}\) and \(d_{\text{DS}}\) to select the coreset for QAT. While \(d_{\text{DS}}\) helps select samples that produce large prediction gaps between the quantized and full-precision models, \(d_{\text{EVS}}\) targets the error of the quantized prediction. These two metrics cover different characteristics of the training data, and we need both to improve the diversity of our coreset. At different stages of QAT, different metrics should be considered to select samples that fit the current training objective. Previous research [59] has shown that QAT should start with hard labels to provide a better initialization for the quantized model and then use soft labels to guide it to a better local minimum. In light of this, we propose Adaptive Coreset Selection (ACS) for QAT to select important samples adaptively, considering the current training epoch \(t\), \(d_{\text{EVS}}\), and \(d_{\text{DS}}\).
For the given current training epoch \(t\) and the total training epoch \(E\), we propose a cosine annealing weights coefficient \(\beta(t)=\cos(\frac{t}{2E}\pi)\) to consider two metrics simultaneously and balance between them. The final selection metric is a linear combination of \(d_{\text{EVS}}\) and \(d_{\text{DS}}\) as follows:
\[d_{\text{ACS}}(t)=\beta(t)d_{\text{EVS}}(t)+(1-\beta(t))d_{\text{DS}}(t) \tag{15}\]
Since \(\beta(0)=1\) and \(\beta(E)=0\), in the early stage the selection is mainly based on the error vector score \(d_{\text{EVS}}\). As the quantized model converges, the selection focuses more on the disagreement score \(d_{\text{DS}}\) in later epochs. We perform coreset selection every \(R\) epochs, where \(R\) is determined before training. The pseudo-code of our ACS algorithm is shown in Alg. 1.
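The per-round selection step of Eq. 15 can be sketched as follows (plain Python with hypothetical score lists; in Alg. 1 the scores come from the models' forward passes):

```python
import math

def acs_select(evs, ds, t, total_epochs, fraction):
    """Eq. 15: combine d_EVS and d_DS with the cosine-annealed beta(t)
    and return the indices of the top `fraction` of samples."""
    beta = math.cos(math.pi * t / (2 * total_epochs))
    scores = [beta * e + (1.0 - beta) * d for e, d in zip(evs, ds)]
    k = max(1, round(len(scores) * fraction))
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

evs = [0.1, 0.9, 0.5]  # hypothetical error vector scores
ds  = [0.9, 0.1, 0.5]  # hypothetical disagreement scores

print(acs_select(evs, ds, t=0,   total_epochs=100, fraction=1/3))  # beta = 1: EVS dominates
print(acs_select(evs, ds, t=100, total_epochs=100, fraction=1/3))  # beta ~ 0: DS dominates
```

At \(t=0\) the hard-label error drives selection, while near the final epoch the teacher-student disagreement does, matching the annealing behavior described above.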
There are two major advantages of our proposed quantization-aware ACS. The first lies in the adaptation to the training phase when knowledge distillation is applied. As soft labels retain more information about the target than hard labels, we should encourage the quantized student model to learn first on hard labels and then on soft labels. This implicit learning hierarchy is observed in QKD [59] and is named "self-learning" and "tutoring". With the proposed ACS fully aware of this hierarchy, the selected coreset helps stabilize the training and ensures faster convergence. The second advantage is the diversity of the training data: more training samples can be covered by the coresets of different epochs, and this coverage of the original full dataset contributes to a more robust model. Note that QAT performance improves significantly only when a suitable data ordering and high training-sample diversity are achieved simultaneously. We demonstrate in the appendix that even when all data are covered but the order is random, the accuracy of our quantized model is negatively affected.
## 5 Experiments
### Experiment Settings
**Datasets and networks.** The experiments are conducted on the CIFAR-100 [60] and ImageNet-1K [61] datasets. We only perform basic data augmentation in PyTorch [62], namely _RandomResizedCrop_ and _RandomHorizontalFlip_ during training and a single-crop operation during evaluation. We evaluate MobileNetV2 [20] on CIFAR-100 and ResNet-18 [2] on ImageNet-1K. The _width multiplier_ is set to 0.5 for MobileNetV2.
**Baselines.** For a fair comparison, we choose multiple coreset selection methods from different categories as our baselines: Random Sampling, EL2N Score [27], Forgetting [40], Glister [46], kCenterGreedy [26], Contextual Diversity (CD) [25], and Moderate Coreset [37]. For methods that involve early training, we set the number of early-training epochs to 5. A more detailed introduction to these methods is given in the appendix.
**Training and selection details.** For MobileNetV2 on CIFAR-100, we train the network for 200 epochs with a learning rate of 0.01, weight decay of 5e-4, batch size of 512, and SGD as the optimizer. For ResNet-18 on ImageNet-1K, we train the network for 120 epochs with a learning rate of 1.25e-3, no weight decay, batch size of 512, and Adam as the optimizer. We use the quantization method and full-precision pre-trained models following LSQ+ [12]. All experiments are carried out on 2 NVIDIA RTX 3090 GPUs. For the CIFAR-100 experiments, we choose \(R=20\) and report results for \(\mathcal{S}\in\{10\%,20\%,30\%,40\%,50\%\}\); each setting is repeated 5 times. For the ImageNet-1K experiments, we choose \(R=10\) and report results for \(\mathcal{S}\in\{10\%,30\%,50\%,60\%,70\%,80\%\}\); each setting is run once.
```
Input:  training set T = {(x_i, y_i)}_{i=1}^{n}; real-value network weights W^r;
        coreset fraction S; total training epochs E; selection interval R.
Output: quantized network weights W^q.
Initialize coreset T_ACS(0) = ∅; initialize W^q from W^r following Eq. 1
for t = 0, ..., E - 1 do
    if t mod R = 0 then
        β(t) = cos(tπ / 2E)
        for (x_i, y_i) ∈ T do
            d_EVS(x_i, t) = ||p(W^q_t, x_i) - y_i||_2
            d_DS(x_i, t)  = ||p(W^q_t, x_i) - p_T(W^r_t, x_i)||_2
            d_ACS(x_i, t) = β(t) · d_EVS(x_i, t) + (1 - β(t)) · d_DS(x_i, t)
        end for
        Sort by d_ACS(x_i, t); select the top S% of samples to form T_ACS(t)
    else
        T_ACS(t) ← T_ACS(t - 1)
    end if
    Train W^q on T_ACS(t) following Eq. 9
end for
```
**Algorithm 1** Adaptive Coreset Selection for QAT
### Benchmarking Previous Coreset Methods
The comparison of QAT Top-1 accuracy for MobileNetV2 on CIFAR-100 and ResNet-18 on ImageNet-1K is shown in Tab. 1 and Tab. 2. We note that most previous methods cannot exceed random selection in our QAT setting, and the few that do surpass the baseline do so only at specific data fractions. For example, kCenterGreedy [26] shows satisfying performance when the subset size is large (50% for CIFAR-100, 70%/80% for ImageNet-1K) but fails to demonstrate effectiveness at small coreset sizes. Our method outperforms previous state-of-the-art selection methods at all subset fractions \(\mathcal{S}\) by a large margin. Specifically, the ResNet-18 accuracy at a 10% subset fraction on ImageNet-1K using our method is 68.39%, an absolute gain of 4.24% over the baseline method.
**Efficiency analysis and effect of adaptive epochs.** The choice of the selection interval \(R\) is vital in our algorithm: too large an \(R\) fails to support adaptation and data diversity, while too small an \(R\) introduces excessive computation overhead and undermines efficiency. We apply a grid search on \(R\) and empirically find that \(R=10\) is optimal for our ImageNet-1K training. The accuracy and training-time results are shown in Tab. 3, from which we observe that \(R=10\) achieves performance similar to \(R=5\) with a shorter training time. In fact, as no gradient back-propagation is involved in computing \(d_{\text{EVS}}\) and \(d_{\text{DS}}\), the computation overhead is acceptable under most settings with \(R>5\), compared with previous methods that involve training. The only baseline with efficiency similar to ours is Moderate Coreset [37], which likewise only performs forward passes on samples and sorts distance-based metrics. We report the detailed training time of all previous methods in the appendix.
**Ablation study and effect of annealing strategy.** We propose two metrics for coreset selection, \(d_{\text{EVS}}\) and \(d_{\text{DS}}\); it is therefore important to analyze how to balance them and the contribution of each. We try the following annealing strategies and settings: (1) fixed (\(\beta(t)=0.5\)); (2) linear (\(\beta(t)=1-\frac{t}{E}\)); (3) square root (\(\beta(t)=1-\sqrt{\frac{t}{E}}\)); (4) quadratic (\(\beta(t)=1-(\frac{t}{E})^{2}\)); (5) cosine (\(\beta(t)=\cos(\frac{t}{2E}\pi)\)); (6) \(d_{\text{EVS}}\) only (\(\beta(t)=1\)); (7) \(d_{\text{DS}}\) only (\(\beta(t)=0\)). The results of coreset selection for 4-bit quantized ResNet-18 on ImageNet-1K are listed in Tab. 4. The performance gap with different
| Method/Fraction | 10% | 20% | 30% | 40% | 50% |
| --- | --- | --- | --- | --- | --- |
| Random | 62.25±0.91 | 64.07±0.65 | 65.22±0.33 | 65.55±0.64 | 66.24±0.94 |
| EL2N Score [27] | 63.02±0.62 | 64.04±1.56 | 65.56±0.75 | 64.55±0.81 | 65.31±0.37 |
| Forgetting [40] | 60.17±0.33 | 63.07±0.59 | 65.04±0.80 | 65.26±0.84 | 65.96±1.24 |
| Glister [46] | 56.54±1.16 | 60.37±1.05 | 61.32±0.55 | 63.03±0.98 | 64.78±0.91 |
| kCenterGreedy [26] | 60.15±0.83 | 62.65±0.51 | 63.78±0.65 | 64.83±0.59 | 66.27±1.10 |
| CD [25] | 60.33±0.78 | 62.62±0.90 | 64.04±1.03 | 64.78±0.38 | 65.31±0.37 |
| Moderate [37] | 75.92±0.27 | 60.43±0.66 | 62.83±0.61 | 63.56±0.75 | 64.45±1.55 |
| Ours | **63.67±0.77** | **65.91±0.72** | **66.41±0.75** | **66.85±0.49** | **67.19±0.54** |

Table 1: Comparison of Top-1 accuracy of different coreset selection methods for QAT of quantized MobileNetV2 on the CIFAR-100 dataset with different subset fractions. The bit-width is 2/32 for weights/activations. Each experiment is repeated 5 times with random seeds; mean accuracy and standard deviation are reported. When the full data is used (\(\mathcal{S}=100\%\)), the accuracy is 68.07±0.9.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method/Fraction (\%) & 10\% & 30\% & 50\% & 60\% & 70\% & 80\% \\ \hline Random & 64.15 & 68.53 & 70.49 & 70.94 & 71.06 & 71.96 \\ EL2N Score [27] & 61.71 & 67.31 & 70.14 & 70.89 & 71.54 & 71.88 \\ Forgetting [40] & 63.09 & 67.77 & 70.14 & 71.00 & 71.36 & 71.82 \\ Glister [46] & 63.25 & 68.94 & 70.92 & 71.39 & 71.93 & 72.22 \\ kCenterGreedy [26] & 62.98 & 68.56 & 70.35 & 71.13 & 71.59 & 71.96 \\ CD [25] & 63.22 & 68.74 & 70.90 & 71.27 & 71.78 & 72.11 \\ Moderate [37] & 62.39 & 68.06 & 70.43 & 70.62 & 71.56 & 71.99 \\ \hline Ours & **68.39** & **71.09** & **71.59** & **72.00** & **72.19** & **72.31** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of Top-1 Accuracy of different coreset selection methods on QAT of quantized ResNet-18 on ImageNet-1K with different subset fractions. The bitwidth for quantized ResNet-18 is 4/4 for weights/activations. When full data are selected (\(\mathcal{S}=100\%\)), the accuracy is 72.46%.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Fraction (\%) & 10\% & 30\% & 50\% & 70\% \\ \hline fixed & 67.95 & 70.83 & 71.21 & 72.01 \\ linear & 68.37 & **71.11** & 71.46 & 72.18 \\ sqrt & 68.15 & 71.03 & 71.44 & 72.15 \\ quadratic & 68.11 & 70.95 & 71.35 & 72.10 \\ \hline \hline \(d_{\text{EVS}}\) only & 68.06 & 70.63 & 71.41 & 71.96 \\ \(d_{\text{DS}}\) only & 67.07 & 70.24 & 71.43 & 72.08 \\ \hline cosine & **68.39** & 71.09 & **71.59** & **72.19** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Analysis of the annealing strategy \(\beta(t)\).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method/Fraction (\%) & 10\% & 30\% & 50\% & 60\% & 70\% & 80\% \\ \hline Random & 64.15 & 68.53 & 70.49 & 70.94 & 71.06 & 71.96 \\ EL2N Score [27] & 61.71 & 67.31 & 70.14 & 70.89 & 71.54 & 71.88 \\ Forgetting [40] & 63.09 & 67.77 & 70.14 & 71.00 & 71.36 & 71.82 \\ Glister [46] & 63.25 & 68.94 & 70.92 & 71.39 & 71.93 & 72.22 \\ kCenterGreedy [26] & 62.98 & 68.56 & 70.35 & 71.13 & 71.59 & 71.96 \\ CD [25] & 63.22 & 68.74 & 70.90 & 71.27 & 71.78 & 72.11 \\ Moderate [37] & 62.39 & 68.06 & 70.43 & 70.62 & 71.56 & 71.99 \\ \hline Ours & **68.39** & **71.09** & **71.59** & **72.00** & **72.19** & **72.31** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Analysis of adaptive epochs \(R\).
annealing strategies is small, as they all follow the trend of using \(d_{\text{EVS}}\) in the early epochs of training and \(d_{\text{DS}}\) in the late epochs. Among these annealing strategies, cosine annealing is slightly better. When only one metric is used for selection, performance drops. We also note that \(d_{\text{EVS}}\) and \(d_{\text{DS}}\) are complementary: \(d_{\text{EVS}}\) works well with small data fractions, while \(d_{\text{DS}}\) performs better with large data fractions.
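The annealing schedules compared above can be sketched directly; the convex combination \(\beta\,d_{\text{EVS}}+(1-\beta)\,d_{\text{DS}}\) used here is an illustrative assumption about how the two scores are balanced, not necessarily the paper's exact rule.

```python
import math

def beta(t, E, schedule="cosine"):
    """Annealing weight on d_EVS; (1 - beta) weights d_DS.
    All schedules start near 1 (favor d_EVS early) and decay
    toward 0 (favor d_DS late) as epoch t approaches E."""
    s = t / E
    return {
        "fixed":     0.5,
        "linear":    1.0 - s,
        "sqrt":      1.0 - math.sqrt(s),
        "quadratic": 1.0 - s ** 2,
        "cosine":    math.cos(0.5 * math.pi * s),
    }[schedule]

def combined_score(d_evs, d_ds, t, E, schedule="cosine"):
    """Convex combination of the two importance metrics."""
    b = beta(t, E, schedule)
    return b * d_evs + (1.0 - b) * d_ds
```

At \(t=0\) the combined score equals \(d_{\text{EVS}}\), and at \(t=E\) (under the cosine schedule) it reduces to \(d_{\text{DS}}\), matching the trend described in the ablation.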
Visualization and AnalysisWe visualize the loss landscape [63] of MobileNetV2 trained on the full CIFAR-100 dataset, a 10% random subset of CIFAR-100, and a 10% coreset of CIFAR-100 selected with our method, as shown in Fig. 2. The results show that QAT on our coreset yields a more centralized and smoother loss landscape than the baselines, which reflects that our method improves the training stability of QAT.
The Top-1 Accuracy comparison across different methods, networks, and datasets is visualized in Fig. 3(a) and Fig. 3(b), which directly demonstrates the selection quality of our method ACS. We also visualize the distributions of the disagreement score \(d_{\text{DS}}\) and the error vector score \(d_{\text{EVS}}\) in Fig. 3(c) and Fig. 3(d). The setting is the same as the MobileNetV2 experiment in Tab. 1. We can see from the results that the mean of \(d_{\text{DS}}\) shifts toward zero during QAT, indicating that \(d_{\text{DS}}\) is a useful metric for quantifying the importance of each sample. The distribution discrepancy between \(d_{\text{EVS}}\) and \(d_{\text{DS}}\) demonstrates the necessity of considering both metrics to select diverse data for our coreset.
## 6 Conclusion
This is the first work to improve the training and data efficiency of quantization-aware training (QAT) from the perspective of coreset selection. By removing samples from training batches and analyzing the loss gradient, we show theoretically that the importance of each sample varies significantly during QAT. We propose the error vector score and the disagreement score to quantify this importance. Considering the training characteristics of QAT, we propose an Adaptive Coreset Selection (ACS) method to better adapt to different training phases and improve training-data diversity. Extensive experiments on various datasets, networks, and quantization settings further demonstrate the effectiveness of our method.
Figure 3: (a) The Top-1 Accuracy of the 2/32(W/A)-bit quantized MobileNetV2 on the CIFAR-100 with different data fractions. (b) The Top-1 Accuracy of the 4/4(W/A)-bit quantized ResNet-18 on the ImageNet-1K with different data fractions. (c) Disagreement scores \(d_{\text{DS}}\) Distribution of MobileNetV2 for different epochs (epochs 10, 50, and 80). (d) Comparison of the distribution of disagreement scores \(d_{\text{DS}}\) and error vector score \(d_{\text{EVS}}\) of MobileNetV2 in the same epoch (epoch 10).
Figure 2: Visualization of the loss landscape [63] of 2-bit quantized MobileNetV2 trained on the CIFAR-100 (a) full dataset, (b) random subset, and (c) our coreset.
# Valuation of a Financial Claim Contingent on the Outcome of a Quantum Measurement

Lane P. Hughston, Leandro Sánchez-Betancourt

arXiv:2305.10239v2 (2023-05-17), http://arxiv.org/abs/2305.10239v2
###### Abstract
We consider a rational agent who at time \(0\) enters into a financial contract for which the payout is determined by a quantum measurement at some time \(T>0\). The state of the quantum system is given by a known density matrix \(\hat{p}\). How much will the agent be willing to pay at time \(0\) to enter into such a contract? In the case of a finite dimensional Hilbert space, each such claim is represented by an observable \(\hat{X}_{T}\) where the eigenvalues of \(\hat{X}_{T}\) determine the amount paid if the corresponding outcome is obtained in the measurement. We prove, under reasonable axioms, that there exists a pricing state \(\hat{q}\) which is equivalent to the physical state \(\hat{p}\) on null spaces such that the pricing function \(\Pi_{0T}\) takes the form \(\Pi_{0T}(\hat{X}_{T})=P_{0T}\operatorname{tr}(\hat{q}\hat{X}_{T})\) for any claim \(\hat{X}_{T}\), where \(P_{0T}\) is the one-period discount factor. By "equivalent" we mean that \(\hat{p}\) and \(\hat{q}\) share the same null space: thus, for any \(|\xi\rangle\in\mathcal{H}\) one has \(\langle\bar{\xi}|\hat{p}|\xi\rangle=0\) if and only if \(\langle\bar{\xi}|\hat{q}|\xi\rangle=0\). We introduce a class of optimization problems and solve for the optimal contract payout structure for a claim based on a given measurement. Then we consider the implications of the Kochen-Specker theorem in such a setting and we look at the problem of forming portfolios of such contracts. Finally, we consider multi-period contracts.
**Key words: Quantum mechanics, quantum measurement, contingent claims, discount bonds, absence of arbitrage, rate of return, density matrices, Gleason's theorem, Kochen-Specker theorem.**
## I Introduction
By "quantum finance" we mean the valuation, optimization and risk management of financial contracts for which the outcomes (in the form of one or more payments made between the various parties involved) are contingent on the results of one or more quantum measurements. The financial contracts that we consider can be easily implemented, at least in principle, in a suitable laboratory. Our investigations are entirely within the scope of standard quantum mechanics and we are not concerned here with modifications of the standard framework or with interpretive issues. The resulting theory of quantum finance is distinctly non-Kolmogorovian, inheriting as it does the full generality of quantum probability.
Let time \(0\) be the present and \(T\) a fixed time in the future. We consider the situation where an agent \(A\) enters into a contract with another agent \(B\) in accordance with which \(A\) pays \(B\) an amount \(H_{0}\) ("the price") at time \(0\) and then \(B\) pays \(A\) an amount \(H_{T}\) (the "payout") at time \(T\), where \(H_{T}\) is contingent in some specified way on the outcome of a quantum measurement. More elaborate setups can be considered, with multiple payments as time progresses and multiple agents; but for simplicity we look at a one-period market involving two agents.
As an example, suppose the payout is determined by a measurement of the spin of a spin one-half particle along the \(z\)-axis. The outcome of such a measurement either gives \(+\frac{1}{2}\hbar\), corresponding to spin up along that axis, or \(-\frac{1}{2}\hbar\), corresponding to spin down. Henceforth, we work with physical units such that \(\hbar=1\). For the basics of quantum theory, see, for instance, reference [28]. We fix a two-dimensional Hilbert space \({\cal H}^{2}\) and on it we introduce the usual observable for the spin along the \(z\)-axis, given by the Hermitian operator
\[\hat{S}_{z}=\tfrac{1}{2}|z_{1}\rangle\langle\bar{z}_{1}|-\tfrac{1}{2}|z_{2} \rangle\langle\bar{z}_{2}|, \tag{1}\]
where \(|z_{1}\rangle\) is a unit Hilbert space vector corresponding to the upward direction along the \(z\)-axis and \(|z_{2}\rangle\) denotes an orthogonal unit Hilbert space vector corresponding to the downward direction along the \(z\)-axis. Thus, \(\langle\bar{z}_{1}|z_{1}\rangle=1\), \(\langle\bar{z}_{2}|z_{2}\rangle=1\), \(\langle\bar{z}_{1}|z_{2}\rangle=0\), \(\langle\bar{z}_{2}|z_{1}\rangle=0\), and the possible outcomes of the measurement are the eigenvalues of \(\hat{S}_{z}\), which are \(+\frac{1}{2}\) and \(-\frac{1}{2}\).
The probabilities of these outcomes are determined by the _state_ of the system, which is represented by a density matrix \(\hat{p}\). The density matrix in quantum theory has a status that is analogous in certain respects to that of the probability measure in classical probability theory. The density matrix is assumed to be a positive Hermitian operator with trace unity, which in the case of a two-dimensional Hilbert space takes the form
\[\hat{p}=p_{1}|\psi_{1}\rangle\langle\bar{\psi}_{1}|+p_{2}|\psi_{2}\rangle \langle\bar{\psi}_{2}|, \tag{2}\]
for some orthonormal basis \(\{|\psi_{1}\rangle,|\psi_{2}\rangle\}\) in \({\cal H}^{2}\), where \(p_{1}\geq 0,\,p_{2}\geq 0\), and \(p_{1}+p_{2}=1\). In general, such a matrix will have rank two, but if \(p_{1}=0\) or \(p_{2}=0\) then it will have rank one. A state with rank one is called a "pure" state. The probability for a given outcome is the trace of the product of the state and the projection operator onto the Hilbert subspace associated to the eigenvalue corresponding to that outcome (the "Born rule"). Thus we have
\[\mbox{Prob }\big{(}S_{z}=\tfrac{1}{2}\big{)}=\langle\bar{z}_{1}|\hat{p}|z_{1} \rangle,\quad\mbox{Prob }\big{(}S_{z}=-\tfrac{1}{2}\big{)}=\langle\bar{z}_{2}|\hat{p}|z_{2}\rangle. \tag{3}\]
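The Born-rule computation in equations (1)-(3) is easy to check numerically; the rotated basis and the weights \(p_{1}=0.7\), \(p_{2}=0.3\) below are arbitrary choices for illustration.

```python
import numpy as np

# Basis vectors |z1>, |z2> for spin up/down along the z-axis.
z1 = np.array([1.0, 0.0])
z2 = np.array([0.0, 1.0])

# S_z = (1/2)|z1><z1| - (1/2)|z2><z2|, with eigenvalues +1/2 and -1/2.
S_z = 0.5 * np.outer(z1, z1) - 0.5 * np.outer(z2, z2)

# A density matrix p = p1|psi1><psi1| + p2|psi2><psi2| built from an
# orthonormal basis rotated by an arbitrary angle theta.
theta = 0.3
psi1 = np.array([np.cos(theta), np.sin(theta)])
psi2 = np.array([-np.sin(theta), np.cos(theta)])
p_hat = 0.7 * np.outer(psi1, psi1) + 0.3 * np.outer(psi2, psi2)

# Born rule: Prob(S_z = +1/2) = <z1|p|z1>, Prob(S_z = -1/2) = <z2|p|z2>.
prob_up = z1 @ p_hat @ z1
prob_down = z2 @ p_hat @ z2
```

Since the projectors onto \(|z_{1}\rangle\) and \(|z_{2}\rangle\) sum to the identity and \(\hat{p}\) has unit trace, the two probabilities necessarily sum to one.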
In the case of a contingent claim where the payout is determined by the result of such a spin measurement, it should be clear that the claim itself can also be represented by a Hermitian operator on \({\cal H}^{2}\), in this case, an operator of the form
\[\hat{Z}_{T}=z_{1}|z_{1}\rangle\langle\bar{z}_{1}|+z_{2}|z_{2}\rangle\langle \bar{z}_{2}|, \tag{4}\]
where \(z_{1}\) denotes the payment made to agent \(A\) in the case the measurement outcome is spin \(+\frac{1}{2}\) and \(z_{2}\) is the payment made to \(A\) when the measurement outcome is spin \(-\frac{1}{2}\). One can think of such a contract as being an example of a so-called real option [15, 27, 44]. Payments are understood to be made in some fixed numeraire or unit of account. Thus, we conclude that _a contingent claim for which the payouts are determined by the result of a quantum measurement can be represented by an observable, in the usual quantum mechanical sense, whose eigenvalues correspond to the possible cash flows at time \(T\)_.
Among the various observables that can be represented in the form (4) there is a special observable that takes the form
\[\hat{P}_{0T}=1|z_{1}\rangle\langle\bar{z}_{1}|+1|z_{2}\rangle\langle\bar{z}_ {2}|, \tag{5}\]
which pays one unit of account at time \(T\), regardless of the outcome of the spin measurement. This is evidently a "risk-free" asset, since the payout is fixed and guaranteed, and we write
\[\hat{P}_{0T}=\hat{\bf 1}. \tag{6}\]
Here \(\hat{\bf 1}\) denotes the identity operator on \({\cal H}^{2}\). The risk-free asset \(\hat{P}_{0T}\) can be thought of as a discount bond that pays one unit of account (e.g., one "dollar") at maturity \(T\). It has the interesting property that it does not depend on the specific choice of axis along which the spin measurement is taken.
In addition to contracts of the form (4), we can more generally consider contracts of the same type, but where the measurement of the spin is taken along some other axis. Each such contract is characterized by (a) the choice of a basis in Hilbert space along which the spin measurement is made, together with (b) the payouts that take place as a consequence of the results of the measurement. Indeed, it is a theorem that any positive Hermitian operator \(\hat{Z}_{T}\) on \({\cal H}^{2}\) other than multiples of the identity can be expressed uniquely in the form (4) for some choice of the orthonormal basis \(\{|z_{1}\rangle,|z_{2}\rangle\}\) in \({\cal H}^{2}\), modulo multiplicative phase factors.
To complete the discussion we need to determine the _price_ paid by agent \(A\) to agent \(B\) in exchange for the payout corresponding to a claim \(\hat{X}_{T}\). In short, we need a _pricing function_ that maps each observable \(\hat{X}_{T}\) to a corresponding price \(X_{0}\).
## II Existence of pricing operator
It will be useful going forward to generalize our considerations to the case of a Hilbert space \({\cal H}\) of arbitrary finite dimension \(n\). As usual, we can write \(|\xi\rangle\) for a typical element of \({\cal H}\) and \(\langle\bar{\xi}|\) for its complex conjugate. The observable that determines the payout will in the generic situation be a non-degenerate Hermitian operator \(\hat{X}_{T}\) on this space and hence admit \(n\) distinct real eigenvalues, each corresponding to a distinct cash flow.
For example, if the quantum system admits \(n\) different energy levels, and the underlying physical observable being measured is the energy of the system, then the contract will in general result in a different cash flow \(\{x_{j}\}_{j=1,2,\ldots,n}\) for each of the possible energy outcomes. For the financial observable representing such a contract we can write
\[\hat{X}_{T}=\sum_{j=1}^{n}x_{j}|x_{j}\rangle\langle\bar{x}_{j}|,\quad\langle \bar{x}_{j}|x_{k}\rangle=\delta_{jk} \tag{7}\]
for some orthonormal basis \(\{|x_{j}\rangle\}_{j=1,2,\ldots,n}\) in Hilbert space. More generally, the set of all financial observables associated with a given Hilbert space will include some that are degenerate in the sense that the same payout will result for two or more distinct values of the outcome \(j\). Such a degeneracy can result either because there is a degeneracy in the spectrum of the underlying physical observable, or because two or more distinct eigenvalues of the physical observable are assigned the same cash flow. An example of the latter is a unit discount bond, for which \(x_{j}=1\) for all \(j=1,2,\ldots,n\) even though the underlying energy levels may be distinct. Then the identity operator on \({\cal H}\) represents such a bond and again we write (6) in that case.
Another example of a degenerate observable is the analogue of a so-called Arrow-Debreu (A-D) security [1], which for each value of \(j\) has the payout \(x_{j}=\mathds{1}\{j=k\}\) for some fixed value of \(k\). Here \(\mathds{1}\{E\}\) denotes the indicator function for the event \(E\). Thus \(x_{j}=1\) if \(j=k\), and \(x_{j}=0\) if \(j\neq k\). The A-D securities are represented by pure projection operators, each with payout unity or zero, depending on the result of the underlying quantum measurement, whose outcome is also unity or zero. Thus the set of all Arrow-Debreu type contracts is precisely the set of all pure projection operators on \({\cal H}\).
The state of a quantum system in \(n\) dimensions is represented by a positive semidefinite Hermitian matrix with trace unity. Such a matrix can be put in the form
\[\hat{p}=\sum_{j=1}^{n}p_{j}|\psi_{j}\rangle\langle\bar{\psi}_{j}|, \tag{8}\]
for some orthonormal basis \(\{|\psi_{j}\rangle\}_{j=1,2,\ldots,n}\), with \(p_{j}\geq 0\) for \(j=1,2,\ldots,n\) and \(\sum_{j=1}^{n}p_{j}=1\).
In the case of a density matrix of maximal rank with distinct eigenvalues, this basis is uniquely determined up to phase factors. If the density matrix is of maximal rank but with a degenerate spectrum, the basis is determined modulo unitary transformations on the degenerate subspaces. In particular, in the case of a density matrix of lower rank, the basis is determined at best only up to an arbitrary unitary transformation of the basis vectors that span the null space of the density matrix.
Given two density matrices \(\hat{p}\) and \(\hat{q}\), we say that \(\hat{q}\) is _absolutely continuous_ with respect to \(\hat{p}\) if the null space of \(\hat{p}\) is a subspace of the null space of \(\hat{q}\). Thus, \(\hat{q}\) is absolutely continuous with respect to \(\hat{p}\) if and only if for all \(|\psi\rangle\in{\cal H}\) such that \(\hat{p}|\psi\rangle=0\) it holds that \(\hat{q}|\psi\rangle=0\). We say that \(\hat{p}\) and \(\hat{q}\) are _equivalent_ if each is absolutely continuous with respect to the other, that is to say, if they share the same null space. For example, in four dimensions the operators
\[\hat{p}=p_{1}|\psi_{1}\rangle\langle\bar{\psi}_{1}|+p_{2}|\psi_{2}\rangle \langle\bar{\psi}_{2}|+p_{3}|\psi_{3}\rangle\langle\bar{\psi}_{3}|+p_{4}|\psi _{4}\rangle\langle\bar{\psi}_{4}| \tag{9}\]
and
\[\hat{q}=q_{1}|\psi_{1}^{\prime}\rangle\langle\bar{\psi}_{1}^{\prime}|+q_{2}| \psi_{2}^{\prime}\rangle\langle\bar{\psi}_{2}^{\prime}|+q_{3}|\psi_{3}\rangle \langle\bar{\psi}_{3}|+q_{4}|\psi_{4}\rangle\langle\bar{\psi}_{4}| \tag{10}\]
are equivalent when \(p_{1}>0,p_{2}>0,p_{3}=0,p_{4}=0\), \(q_{1}>0,q_{2}>0,q_{3}=0,q_{4}=0\), if \(\{|\psi_{1}\rangle,|\psi_{2}\rangle\}\) and \(\{|\psi_{1}^{\prime}\rangle,|\psi_{2}^{\prime}\rangle\}\) are related by a unitary transformation in the Hilbert subspace orthogonal to the subspace spanned by \(|\psi_{3}\rangle\) and \(|\psi_{4}\rangle\). This is because the two density matrices share a common null space, spanned by the orthogonal vectors \(|\psi_{3}\rangle\) and \(|\psi_{4}\rangle\). It is easy to see that "equivalence" in this sense is an equivalence relation in the usual mathematical sense, and it follows that all density matrices of maximal rank are equivalent.
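The null-space notion of equivalence can be tested numerically. This sketch mirrors the four-dimensional example of equations (9)-(10): both states are supported on the span of the first two basis vectors, with arbitrary weights chosen for illustration.

```python
import numpy as np

def null_space_projector(rho, tol=1e-10):
    """Orthogonal projector onto the null space of a density matrix."""
    vals, vecs = np.linalg.eigh(rho)
    null_vecs = vecs[:, vals < tol]
    return null_vecs @ null_vecs.conj().T

def equivalent(p, q, tol=1e-10):
    """Density matrices are equivalent iff they share the same null space."""
    return np.allclose(null_space_projector(p, tol),
                       null_space_projector(q, tol), atol=1e-8)

# p and q are supported on span{e1, e2}; q uses a rotated basis of
# that subspace.  Both annihilate e3 and e4, so they are equivalent.
p = np.diag([0.5, 0.5, 0.0, 0.0])
u = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
v = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)
q = 0.8 * np.outer(u, u) + 0.2 * np.outer(v, v)

# r has a three-dimensional null space, so it is not equivalent to p.
r = np.diag([1.0, 0.0, 0.0, 0.0])
```

Comparing null-space projectors rather than lists of eigenvectors avoids the basis ambiguity noted above for degenerate spectra.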
We shall say that a claim \(\hat{X}_{T}\) is _positive_ if \(\langle\bar{\psi}|\hat{X}_{T}|\psi\rangle\geq 0\) for all \(|\psi\rangle\in{\cal H}\) and _strictly positive_ if \(\langle\bar{\psi}|\hat{X}_{T}|\psi\rangle>0\) for all \(|\psi\rangle\in{\cal H}\). By (7) one sees that \(\hat{X}_{T}\) is positive if and only if \(x_{j}\geq 0\) for all \(j\) and strictly positive if and only if \(x_{j}>0\) for all \(j\). It is legitimate to consider financial contracts resulting in negative cash flows as well, but it will suffice for our purpose to look at financial contracts with positive cash flows.
Let us now consider a one-period market represented by the set of all positive claims on an \(n\)-dimensional Hilbert space. If \(\hat{X}_{T}\) and \(\hat{Y}_{T}\) are two claims, then so is the linear combination
\[\hat{Z}_{T}=a\hat{X}_{T}+b\hat{Y}_{T} \tag{11}\]
for \(a,b\geq 0\). Hence the space of positive claims has a natural convex structure. It should be evident that the experiments underlying the claims \(\hat{X}_{T}\) and \(\hat{Y}_{T}\) are in general different and that the experiment underlying \(\hat{Z}_{T}\) is different yet again. If we write these claims in their diagonalized forms
\[\hat{X}_{T}=\sum_{j=1}^{n}x_{j}|x_{j}\rangle\langle\bar{x}_{j}|,\quad\hat{Y}_{ T}=\sum_{j=1}^{n}y_{j}|y_{j}\rangle\langle\bar{y}_{j}|, \tag{12}\]
with respect to the relevant basis vectors, one sees that the payouts and basis vectors associated with these claims are uniquely determined, up to the usual ambiguities associated with degeneracies and null spaces, and at the same time the payouts and basis vectors of (11) are represented by the decomposition
\[\hat{Z}_{T}=\sum_{j=1}^{n}z_{j}|z_{j}\rangle\langle\bar{z}_{j}|. \tag{13}\]
Here we see a curious feature arising in the analysis of such financial instruments: if we have two contracts, each with positive payouts, depending on separate measurements, then any linear combination of the operators corresponding to the two contracts, with positive coefficients, will give rise to the operator corresponding to yet another contract, with a different set of payouts, depending on still another measurement. Thus, a linear combination (11) is _not_, generally, to be understood as representing a "portfolio" of its constituents (we consider how to model portfolios in Section VI). One is tempted, nonetheless, to conjecture that the price of the contract represented by a linear combination of two contracts should equal the corresponding linear combination of the prices of the constituents. But it is not obvious that this will be the case, since the new contract involves a different payout structure and a different experiment, so we do not simply wish to _assume_ linearity in general.
We can, however, quite reasonably assume that such a linear relationship holds in certain special situations. In particular, if two contracts \(\hat{U}_{T}\) and \(\hat{V}_{T}\) depend on the outcome of _the same experiment_, and differ from one another only in the amounts paid for the various outcomes of the experiment, then the price of the contract
\[\hat{W}_{T}=a\hat{U}_{T}+b\hat{V}_{T} \tag{14}\]
should indeed be equal to the corresponding linear combination of the prices of \(\hat{U}_{T}\) and \(\hat{V}_{T}\). More precisely, if the operators \(\hat{U}_{T}\) and \(\hat{V}_{T}\)_commute_, then the prices should be additive. The reasoning is as follows. If \(\hat{U}_{T}\) and \(\hat{V}_{T}\) commute, we can find a common orthogonal basis \(\{|w_{j}\rangle\}_{j=1,2,\ldots,n}\) in which both are diagonalized:
\[\hat{U}_{T}=\sum_{j=1}^{n}u_{j}|w_{j}\rangle\langle\bar{w}_{j}|,\quad\hat{V}_{ T}=\sum_{j=1}^{n}v_{j}|w_{j}\rangle\langle\bar{w}_{j}|. \tag{15}\]
Then if we form the linear combination (14) we obtain
\[\hat{W}_{T}=\sum_{j=1}^{n}(au_{j}+bv_{j})|w_{j}\rangle\langle\bar{w}_{j}|, \tag{16}\]
showing that the payouts for \(\hat{W}_{T}\) are given by linear combinations of the payouts of the constituents. Thus, for commuting observables it is obvious that the price of a linear combination of contracts should be the corresponding linear combination of the prices of the individual contracts; but it is not obvious that linearity extends to non-commuting contracts.
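A quick numerical check of (14)-(16): two claims diagonal in a common orthonormal basis commute, and the payouts of their linear combination are the corresponding combinations of the constituent payouts. The basis and payout vectors below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A common orthonormal basis {|w_j>} (columns of an orthogonal matrix).
basis, _ = np.linalg.qr(rng.normal(size=(4, 4)))
u = np.array([1.0, 2.0, 3.0, 4.0])   # payouts of U_T
v = np.array([4.0, 3.0, 2.0, 1.0])   # payouts of V_T

U = basis @ np.diag(u) @ basis.T
V = basis @ np.diag(v) @ basis.T

a, b = 2.0, 0.5
W = a * U + b * V   # payouts of W_T are a*u_j + b*v_j in the same basis
```

Because \(\hat{U}_{T}\) and \(\hat{V}_{T}\) share the eigenbasis, the spectrum of \(\hat{W}_{T}\) is exactly \(\{au_{j}+bv_{j}\}\), which is what makes price additivity compelling for commuting claims.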
At this point, it may be helpful if we codify our assumptions somewhat more explicitly. As usual, we write \(\mathbb{R}^{+}=\{x\in\mathbb{R}:x\geq 0\}\). We fix a quantum system with state \(\hat{p}\) on an \(n\)-dimensional Hilbert space \(\mathcal{H}\) and write \(\mathcal{V}^{+}\) for the cone of positive contracts on \(\mathcal{H}\). Thus our market is characterized by the triple \(\{\mathcal{H},\hat{p},\mathcal{V}^{+}\}\). Let us write \(P_{0T}\) for the price of a unit discount bond. Our goal is to assign a price to each contract \(\hat{X}_{T}\in\mathcal{V}^{+}\).
By a _pricing function_ on the market \(\{{\cal H},\hat{p},{\cal V}^{+}\}\) in a one-period setting we mean a mapping \(\Pi_{0T}:{\cal V}^{+}\to{\mathbb{R}}^{+}\) satisfying the following:
**Axiom (1).** For all \(\hat{X}_{T}\in{\cal V}^{+}\) it holds that \(\Pi_{0T}[\hat{X}_{T}]=0\) if and only if \({\rm tr}(\hat{p}\hat{X}_{T})=0\).
**Axiom (2).** If the \(m\) contracts represented by the Hermitian matrices \(\{\hat{X}_{T}^{k}\}_{k=1,2,\,\ldots,m}\) commute, then for all \(\{a_{k}\geq 0\}_{k=1,2,\,\ldots,m}\) one has
\[\Pi_{0T}\biggl{[}\sum_{k=1}^{m}a_{k}\,\hat{X}_{T}^{k}\biggr{]}=\sum_{k=1}^{m}a _{k}\,\Pi_{0T}\Bigl{[}\hat{X}_{T}^{k}\Bigr{]}. \tag{17}\]
**Axiom (3).**\(\Pi_{0T}[\,\hat{\bf 1}\,]=P_{0T}\).
The axioms can be interpreted as follows. Axiom (1) ensures the absence of arbitrage: the price of a positive contract vanishes if and only if the expected payout vanishes. Axiom (2) ensures that the pricing function is linear when it acts on a collection of contracts represented by commuting observables. Axiom (3) fixes the price of the risk-free asset. Then we obtain the following general characterization of the price of a contract:
**Proposition 1**.: _If \(n\geq 3\) then there exists a state \(\hat{q}\) on \(\{{\cal H},\hat{p},{\cal V}^{+}\}\) that is equivalent to \(\hat{p}\) such that for any contract \(\hat{X}_{T}\in{\cal V}^{+}\) the price of \(\hat{X}_{T}\) is given by_
\[\Pi_{0T}[\hat{X}_{T}]=P_{0T}\,{\rm tr}(\hat{q}\hat{X}_{T}). \tag{18}\]
_Proof_. It suffices to consider the pricing of A-D securities. For each such contract, the underlying measurement takes the form of a projection operator \(\hat{\Lambda}=|\lambda\rangle\langle\bar{\lambda}|\) for some normalized vector \(|\lambda\rangle\in{\cal H}\). The pricing function gives a map from the space of pure projection operators on \({\cal H}\) to \({\mathbb{R}}^{+}\). Now, it is well known that the space of pure projection operators on a Hilbert space of dimension \(n\) is isomorphic to the complex projective space \({\mathbb{C}}{\mathbb{P}}^{n-1}\). Thus we obtain a real function \(\Pi_{0T}:{\mathbb{C}}{\mathbb{P}}^{n-1}\to{\mathbb{R}}^{+}\) with the property that for any set of \(n\) points \(\{\lambda_{j}\in{\mathbb{C}}{\mathbb{P}}^{n-1}\}_{j=1,2,\ldots,n}\) corresponding to an orthogonal basis in \({\cal H}\) one has
\[\sum_{j=1}^{n}\Pi_{0T}(\lambda_{j})=P_{0T}. \tag{19}\]
This is because the projection operators associated with an orthonormal basis commute and hence by Axiom (2) the sum of the prices of the projection operators must equal the price of the sum of the projection operators. But the latter sum gives the identity operator, which offers a risk-free payout of unity. Thus we obtain a unit discount bond, for which the price is \(P_{0T}\) by Axiom (3). Gleason's theorem [21] can now be applied to the problem and it follows that there exists a state \(\hat{q}\) such that the price of any claim of the form \(\hat{\Lambda}\) is given by
\[\Pi_{0T}[\hat{\Lambda}]=P_{0T}\,{\rm tr}(\hat{q}\hat{\Lambda}). \tag{20}\]
Now, any contract \(\hat{X}_{T}\) can be constructed as a linear combination of orthogonal pure projection operators with positive coefficients. Since these operators commute, Axiom (2) implies that the price of such a contract will be given by the sum of the prices of its elements, and this gives us (18). The fact that the "pricing" operator \(\hat{q}\) must be equivalent to the "physical" state \(\hat{p}\) follows as a consequence of Axiom (1), which ensures that the price of a positive contract \(\hat{X}_{T}\) vanishes if and only if \(\hat{X}_{T}\) is concentrated on the null space of \(\hat{p}\).
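The accounting step in the proof can be verified numerically: for any orthonormal basis, the Arrow-Debreu prices \(P_{0T}\operatorname{tr}(\hat{q}\hat{\Lambda}_{j})\) sum to the discount bond price \(P_{0T}\). The pricing state and basis below are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, P0T = 3, 0.95

# A full-rank pricing state q: positive definite, Hermitian, trace one.
A = rng.normal(size=(n, n))
q = A @ A.T
q = q / np.trace(q)

# An arbitrary orthonormal basis {|x_j>}; its Arrow-Debreu claims are
# the rank-one projectors Lambda_j = |x_j><x_j|.
basis, _ = np.linalg.qr(rng.normal(size=(n, n)))
projectors = [np.outer(basis[:, j], basis[:, j]) for j in range(n)]
prices = [P0T * np.trace(q @ L) for L in projectors]

# The projectors commute and sum to the identity, so by Axioms (2)
# and (3) the prices must sum to the bond price P0T.
```

Since \(\hat{q}\) here has full rank, every Arrow-Debreu price is strictly positive, consistent with Axiom (1) when \(\hat{p}\) also has full rank.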
The key point here is that we do not assume _a priori_ the existence of a pricing state. The idea rather is to prove the existence of such a state under the _prima facie_ much weaker assumptions implicit in our axioms. The requirement that the pricing function is linear when it is applied to any commuting family of A-D securities coupled with the assumption that the price of a one-period discount bond is known allows us to deduce that the pricing function takes the form (18). In the case of a finite-dimensional Hilbert space, the associated projective Hilbert space takes the form of a complex projective space \(\mathbb{CP}^{n-1}\) equipped with the Fubini-Study metric [8]. Gleason's theorem shows that any map \(f:\mathbb{CP}^{n-1}\to[0,1]\) with the property that \(\sum_{j=1}^{n}f(\lambda_{j})=1\) for any set of \(n\) points \(\{\lambda_{j}\}_{j=1,2,\ldots,n}\in\mathbb{CP}^{n-1}\) that are maximally distant from each other under the Fubini-Study metric necessarily takes the form \(f(\lambda)=\langle\bar{\lambda}|\hat{q}|\lambda\rangle/\langle\bar{\lambda}| \lambda\rangle\) for some positive operator \(\hat{q}\) with trace unity. The principle of no arbitrage ("no free lunch") then implies that \(\hat{q}\) is equivalent to \(\hat{p}\).
It should be noted that the physical state \(\hat{p}\) refers to the state of the quantum system upon which measurement of a given physical observable determines the payment made under the terms of the financial contract. Thus \(\hat{p}\) can be used to calculate the probability distribution of the payout, but gives no information about the price, except that minimal statement which is mandated by the absence of arbitrage - namely, that the price should be zero if and only if the probability of a payout greater than zero is zero.
The operator \(P_{0T}\,\hat{q}\) plays the role of a pricing kernel in our theory. In the case of an \(n\)-dimensional Hilbert space the prices of any \(n^{2}-1\) linearly independent financial contracts, alongside the price of the unit discount bond, will be sufficient to completely calibrate the pricing kernel, which can then be used to price other contracts. It may seem surprising that the knowledge of such a system of prices gives no information about the physical state \(\hat{p}\), except to determine its null space, but the analogue of this phenomenon is well known in the classical theory of finance [16; 17; 18; 7; 19]. At first glance, one might think that Proposition 1 is rather weak, since the pricing operator \(\hat{q}\) is completely arbitrary apart from its having the same null space as \(\hat{p}\); but such a conclusion would be incorrect - the point is that the existence of a pricing operator is not assumed but rather is _deduced_ from the minimal axioms we have chosen to characterize a pricing function. Thus, beginning with only the assumed existence of a pricing function, which might in principle be nonlinear, one can whittle the candidates for such a map down to a linear function of the specific form (18).
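Calibration of the pricing kernel can be illustrated in two dimensions with the Pauli basis, where \(n^{2}-1=3\) traceless claims plus the bond determine \(\hat{q}\). The Pauli claims have negative eigenvalues, so in the positive-claim market one would quote shifted claims \(\hat{B}_{k}+c\hat{\mathbf{1}}\) instead; by linearity on commuting claims the shift is harmless, and we work with the unshifted basis for simplicity. The hidden state `q_true` is an arbitrary example.

```python
import numpy as np

P0T = 0.95

# Hidden pricing state; in practice it is recovered from quoted prices.
q_true = np.array([[0.6, 0.2], [0.2, 0.4]])

# n^2 Hermitian claims: the identity (a unit discount bond) plus the
# three Pauli matrices, which satisfy tr(B_j B_k) = 2 delta_jk.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
claims = [I2, sx, sy, sz]

# Quoted prices Pi[X] = P0T * tr(q X).
prices = [(P0T * np.trace(q_true @ X)).real for X in claims]

# Invert via the Pauli decomposition q = (1/2) sum_k tr(q B_k) B_k,
# where tr(q B_k) = price_k / P0T.
q_rec = sum((p / P0T) * B for p, B in zip(prices, claims)) / 2
```

The recovered `q_rec` matches `q_true` exactly, showing that four quoted prices suffice to pin down the pricing kernel in two dimensions, while revealing nothing about \(\hat{p}\) beyond its null space.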
## III Optimal Investment
A typical problem in classical finance theory is to determine, given a budget \(X_{0}\), the investment that maximizes the expectation of the utility gained by the investor at time \(T\) when the proceeds of the investment are liquidated. It is reasonable then to pose a similar problem in quantum finance. Let us assume (a) that agent \(A\)'s attitudes towards risk and return can be expressed by a standard utility function \(\{U(x)\}_{x>0}\), (b) that the physical state \(\hat{p}\) of the quantum system is known, (c) that the basis under which the physical measurement is being made is known, and (d) that the pricing operator is known. Thus the investment will be characterized by an observable of the form (7), where the basis \(\{|x_{j}\rangle\}_{j=1,\ldots,n}\) is fixed, and the cash flows \(\{x_{j}\}_{j=1,\ldots,n}\) must be determined in such a way that the budget is saturated and the expected utility is maximized. What makes the problem interesting in the present setting is that the expected utility of the payout is calculated by use of the physical state \(\hat{p}\) whereas the budget constraint involves the pricing state \(\hat{q}\), and that neither \(\hat{p}\) nor \(\hat{q}\) necessarily has any special relation to the measurement basis.
**Definition 1**.: _By a standard utility function we mean a map \(U:\mathbb{R}^{+}\backslash\{0\}\to\mathbb{R}\) that satisfies the following conditions:_ (i)_\(U\in\mathrm{C}^{2}(\mathbb{R}^{+}\backslash\{0\})\),_ (ii)_\(U^{\prime}(x)>0\) for all \(x>0\),_ (iii)_\(U^{\prime\prime}(x)<0\) for all \(x>0\),_ (iv)_\(\lim_{x\to\infty}U^{\prime}(x)=0\), and_ (v)_\(\lim_{x\to 0}U^{\prime}(x)=\infty\)._
These requirements can be relaxed somewhat, but the "standard" conditions typically lead to well-posed problems for which solutions can be shown to exist and hence prove to be natural as a basis for modelling. Note that a standard utility function is a strictly concave, strictly increasing function defined for all strictly positive values of its argument. We refer to the function \(U^{\prime}:\mathbb{R}^{+}\backslash\{0\}\to\mathbb{R}^{+}\backslash\{0\}\) as the _marginal utility_. The final two conditions of the definition ensure that a standard utility function has the property that there exists an inverse marginal utility function \(\{I(y)\}_{y>0}\) such that \(I(U^{\prime}(x))=x\) for all \(x>0\). Examples of standard utility functions are (a) logarithmic utility, for which \(U(x)=\log(x)\) for \(x>0\), and (b) power utility with index \(p\in(-\infty,1)\backslash\{0\}\), for which \(U(x)=p^{-1}x^{p}\) for \(x>0\). For logarithmic utility one finds that \(I(y)=1/y\) and for power utility \(I(y)=y^{1/(p-1)}\).
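As a quick numerical illustration (a sketch with hypothetical helper names, not part of the text above), one can verify the stated inverse marginal utilities for the logarithmic and power cases:

```python
import numpy as np

# Check that I(U'(x)) = x for the two standard utility examples.
# The helper names log_utility / power_utility are illustrative only.

def log_utility():
    u_prime = lambda x: 1.0 / x           # U(x) = log x, so U'(x) = 1/x
    inv = lambda y: 1.0 / y               # I(y) = 1/y
    return u_prime, inv

def power_utility(p):
    u_prime = lambda x: x ** (p - 1)      # U(x) = x^p / p, so U'(x) = x^(p-1)
    inv = lambda y: y ** (1.0 / (p - 1))  # I(y) = y^(1/(p-1))
    return u_prime, inv

xs = np.linspace(0.1, 10.0, 50)
for u_prime, inv in (log_utility(), power_utility(0.5)):
    assert np.allclose(inv(u_prime(xs)), xs)
```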
The goal of agent \(A\)'s optimization problem is to determine the cash flows \(\{x_{j}\}_{j=1,2,\ldots,n}\) that maximize the expected value of the utility, provided that these cash flows can be realized with the specified budget. Thus, given a standard utility function \(\{U(x)\}_{x>0}\) we set
\[\{x_{j}^{*}\}_{j=1,2,\ldots,n}=\underset{\{x_{j}\}}{\mathrm{argmax}}\,\, \mathrm{tr}\left[\hat{p}\,\hat{U}(\{x_{j}\})\right] \tag{21}\]
where \(\hat{U}(\{x_{j}\})=\sum_{j=1}^{n}U(x_{j})|x_{j}\rangle\langle\bar{x}_{j}|\) and the \(\mathrm{argmax}\) is subject to the budget constraint
\[X_{0}=P_{0T}\,\mathrm{tr}(\hat{q}\,\hat{X}_{T}),\quad\hat{X}_{T}=\sum_{j=1}^{ n}x_{j}|x_{j}\rangle\langle\bar{x}_{j}|. \tag{22}\]
**Proposition 2**.: _Let the physical state of a quantum system on an \(n\)-dimensional Hilbert space be \(\hat{p}\). Let the pricing state for a financial market based on measurements of the system be \(\hat{q}\), with one-period discount factor \(P_{0T}\). Let the risk preferences of the investor be represented by a standard utility function \(U:\mathbb{R}^{+}\backslash\{0\}\to\mathbb{R}\) and write \(I\) for the associated inverse marginal utility function. Then the optimal cash flow structure \(\{x_{j}^{*}\}\) for an investment with budget \(X_{0}\) paying out according to the measurement of a financial observable of the form_
\[\hat{X}=\sum_{j=1}^{n}x_{j}|x_{j}\rangle\langle\bar{x}_{j}|, \tag{23}\]
_for some fixed orthonormal basis \(\{|x_{j}\rangle\}_{j=1,\ldots,n}\), is given by_
\[x_{j}^{*}=I\bigg{[}\lambda\frac{\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}{ \langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}\bigg{]}, \tag{24}\]
_where for any choice of \(X_{0}>0\) the parameter \(\lambda\) is uniquely determined by the relation_
\[P_{0T}\sum_{j=1}^{n}I\bigg{[}\lambda\frac{\langle\bar{x}_{j}|\hat{q}|x_{j} \rangle}{\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}\bigg{]}\,\langle x_{j}| \hat{q}|\bar{x}_{j}\rangle=X_{0}. \tag{25}\]
Proof.: The method of Lagrange multipliers can be used to obtain a candidate for the \(\mathrm{argmax}\). We introduce a Lagrange multiplier \(\lambda\) and seek a solution to the unconstrained problem
\[\{x_{j}^{*}\}=\underset{\{x_{j}\}}{\mathrm{argmax}}\,\,\Big{(}\mathrm{tr} \left[\hat{p}\,\hat{U}(\{x_{j}\})\right]-\lambda P_{0T}\,\mathrm{tr}(\hat{q} \,\hat{X}_{T})\Big{)}, \tag{26}\]
or equivalently
\[\{x_{j}^{*}\}=\underset{\{x_{j}\}}{\mathrm{argmax}}\ \Bigg{(}\sum_{j=1}^{n}U(x_{j}) \langle\bar{x}_{j}|\hat{p}|x_{j}\rangle-\lambda P_{0T}\sum_{j=1}^{n}x_{j}\langle \bar{x}_{j}|\hat{q}|x_{j}\rangle\Bigg{)}. \tag{27}\]
Differentiating with respect to \(x_{j}\) and setting the results to zero, we find that
\[U^{\prime}(x_{j})=\lambda\frac{\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}{ \langle\bar{x}_{j}|\hat{p}|x_{j}\rangle} \tag{28}\]
for each value of \(j\). Applying the inverse marginal utility function to each side of this equation, we are then led to (24) and the budget constraint (22) gives (25). That (25) admits a unique solution for \(\lambda\) for any \(X_{0}>0\) follows from the fact that the monotonic decreasing map \(I:\mathbb{R}^{+}\backslash\{0\}\rightarrow\mathbb{R}^{+}\backslash\{0\}\) is surjective, which is a consequence of the conditions (iv) and (v) satisfied by a standard utility function. That the candidate solution is indeed a true solution can be checked by use of the fundamental inequality \(U(I(y))-I(y)y\geq U(x)-xy\), which holds for all \(x>0\) and \(y>0\) in the case of a standard utility function. It follows then from (24) that for any _alternative_ choice of payout structure \(\{x_{j}\}\) we have
\[U(x_{j}^{*})-x_{j}^{*}\lambda\frac{\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}{ \langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}\geq U(x_{j})-x_{j}\lambda\frac{ \langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}{\langle\bar{x}_{j}|\hat{p}|x_{j} \rangle}, \tag{29}\]
for each \(j=1,2,\ldots,n.\) Multiplying by \(p_{j}\) and summing we obtain
\[\sum_{j=1}^{n}p_{j}U(x_{j}^{*})-\sum_{j=1}^{n}p_{j}U(x_{j})\geq\sum_{j=1}^{n}p _{j}x_{j}^{*}\lambda\frac{\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}{\langle\bar {x}_{j}|\hat{p}|x_{j}\rangle}-\sum_{j=1}^{n}p_{j}x_{j}\lambda\frac{\langle\bar {x}_{j}|\hat{q}|x_{j}\rangle}{\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}. \tag{30}\]
Then since \(p_{j}=\langle x_{j}|\hat{p}|\bar{x}_{j}\rangle\) we have
\[\sum_{j=1}^{n}p_{j}U(x_{j}^{*})-\sum_{j=1}^{n}p_{j}U(x_{j})\geq\lambda\left[ \sum_{j=1}^{n}x_{j}^{*}\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle-\sum_{j=1}^{n}x _{j}\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle\right]\!. \tag{31}\]
Now, we know by (25) that \(\lambda\) has been chosen to ensure that the candidate solution \(\{x_{j}^{*}\}\) satisfies the budget constraint
\[P_{0T}\sum_{j=1}^{n}x_{j}^{*}\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle=X_{0}. \tag{32}\]
If we require that the alternative choice of payout structure should also satisfy the budget constraint, or perhaps operate under budget, so
\[P_{0T}\sum_{j=1}^{n}x_{j}\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle\leq X_{0}, \tag{33}\]
then the two terms on the right-hand side of (31) cancel, or else leave a difference that is positive (if the alternative choice is under budget), which gives
\[\sum_{j=1}^{n}p_{j}U(x_{j}^{*})\geq\sum_{j=1}^{n}p_{j}U(x_{j}), \tag{34}\]
showing that the candidate solution for the optimal payout gives an expected utility that is no less than that of any alternative choice of payout structure with a budget no greater than that of the candidate solution.
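To make the proposition concrete, here is a minimal numerical sketch (toy data and parameter values are assumptions for illustration, not from the text): fix a basis, take the diagonal matrix elements \(p_j\), \(q_j\) of the physical and pricing states, and solve the budget constraint (25) for \(\lambda\) by bisection under power utility with index \(p=1/2\), recovering the optimal cash flows (24).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(n):
    # random positive matrix of unit trace, standing in for a density matrix
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

n, X0, P0T, p_index = 4, 1.0, 0.95, 0.5
p_hat, q_hat = random_density(n), random_density(n)
p = np.diag(p_hat).real   # p_j: diagonal elements of p in the fixed basis
q = np.diag(q_hat).real   # q_j: diagonal elements of q in the fixed basis

inv = lambda y: y ** (1.0 / (p_index - 1))      # inverse marginal utility
budget = lambda lam: P0T * np.sum(inv(lam * q / p) * q)

# budget(lam) is monotone decreasing in lam; bisect geometrically to hit X0
lo, hi = 1e-9, 1e9
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if budget(mid) > X0 else (lo, mid)
lam = np.sqrt(lo * hi)

x_star = inv(lam * q / p)                        # optimal cash flows, eq. (24)
assert abs(P0T * np.sum(x_star * q) - X0) < 1e-6  # budget (25) is saturated
```

The bisection stands in for the general existence argument in the proof: since \(I\) is monotone decreasing and surjective, the budget as a function of \(\lambda\) crosses any \(X_0>0\) exactly once.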
## IV Rate of Return
As an example, we can look in detail at the case of logarithmic utility. Suppose we set \(U(x)=\log x\) for \(x>0\). Then the inverse marginal utility function is given by \(I(y)=1/y\) for \(y>0\). It follows that for log utility the optimal payout structure takes the form
\[x_{j}^{*}=\lambda^{-1}\frac{\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}{\langle \bar{x}_{j}|\hat{q}|x_{j}\rangle}. \tag{35}\]
Inserting this expression into the budget constraint (32) we obtain
\[P_{0T}\lambda^{-1}\sum_{j=1}^{n}\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle=X_{0}. \tag{36}\]
But the sum appearing in the expression above is unity since \(\sum_{j=1}^{n}|x_{j}\rangle\langle\bar{x}_{j}|=\hat{\mathbf{1}}\) and the trace of \(\hat{p}\) is one. Thus for log utility we deduce that \(P_{0T}\lambda^{-1}=X_{0}\) and hence
\[x_{j}^{*}=(P_{0T})^{-1}X_{0}\frac{\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}{ \langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}. \tag{37}\]
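The closed-form solution (37) can be checked directly: with \(x_j^*=(P_{0T})^{-1}X_0\,p_j/q_j\) the budget constraint (32) is saturated identically, since the \(p_j\) sum to one. A sketch with assumed toy distributions:

```python
import numpy as np

# Toy check of the log-utility optimum (37); the Dirichlet draws are
# illustrative stand-ins for the diagonal elements of p-hat and q-hat.
rng = np.random.default_rng(1)
n, X0, P0T = 5, 1.0, 0.97
p = rng.dirichlet(np.ones(n))   # p_j, summing to 1
q = rng.dirichlet(np.ones(n))   # q_j, summing to 1

x_star = X0 / P0T * p / q
# budget: P0T * sum_j x_j* q_j = X0 * sum_j p_j = X0
assert np.isclose(P0T * np.sum(x_star * q), X0)
```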
We observe that when the physical state and the pricing state are one and the same, the payouts of the optimal investment are identical for each outcome of chance, each giving \((P_{0T})^{-1}X_{0}\), the usual "future value" of the initial investment. In that case, the optimal investment is to put the initial endowment into unit discount bonds, totalling \(X_{0}\) in value. Then we have \(\hat{X}_{T}=(P_{0T})^{-1}X_{0}\,\hat{\mathbf{1}}\) and one sees that if the pricing state is the physical state, the market assigns no premium to the return on a risky investment, ensuring that the optimal investment is in a discount bond and the rate of return is the interest rate.
The same conclusion applies, more generally, for any choice of the utility. This follows from (24) and (25), from which one concludes that if \(\hat{p}=\hat{q}\) then \(x_{j}^{*}=(P_{0T})^{-1}X_{0}\) for all \(j\). It is interesting therefore to enquire what happens when the pricing state is different from the physical state. The return \(R_{0T}\) on an investment \(\hat{X}_{T}\) is given by the ratio of the expectation of \(\hat{X}_{T}\) under \(\hat{p}\) to the amount initially invested, namely \(X_{0}\). Thus, quite generally, we have
\[R_{0T}=(X_{0})^{-1}\mathrm{tr}(\hat{p}\hat{X}_{T}). \tag{38}\]
But \(X_{0}=P_{0T}\,\mathrm{tr}(\hat{q}\hat{X}_{T})\) by (18), so we deduce that
\[R_{0T}=(P_{0T})^{-1}\frac{\mathrm{tr}(\hat{p}\hat{X}_{T})}{\mathrm{tr}(\hat{q }\hat{X}_{T})}, \tag{39}\]
and it should be clear that if \(\hat{p}=\hat{q}\), except possibly on the null space of \(\hat{X}_{T}\), then the rate of return on the investment is the one-period interest rate.
Specializing now to the case of an optimal investment for an agent with logarithmic utility, let us calculate the rate of return. We have
\[\hat{X}_{T}=\sum_{j=1}^{n}x_{j}^{*}\,|x_{j}\rangle\langle\bar{x}_{j}|, \tag{40}\]
where the optimal payout structure \(\{x_{j}^{*}\}\) is given by (37). It follows then that
\[R_{0T} = (X_{0})^{-1}\mathrm{tr}(\hat{p}\hat{X}_{T}) \tag{41}\] \[= (X_{0})^{-1}\mathrm{tr}\left(\hat{p}\,\sum_{j=1}^{n}x_{j}^{*}\,|x_ {j}\rangle\langle\bar{x}_{j}|\right)\] \[= (X_{0})^{-1}\sum_{j=1}^{n}x_{j}^{*}\,\langle\bar{x}_{j}|\hat{p}|x _{j}\rangle\] \[= (P_{0T})^{-1}\sum_{j=1}^{n}\frac{\langle\bar{x}_{j}|\hat{p}|x_{j} \rangle^{2}}{\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}.\]
If we set \(R_{0T}=\mathrm{e}^{\mu T}\) then the rate of return \(\mu\) can be split into two parts, namely a risk-free one-period interest rate and a so-called excess rate of return, which is the part of the rate of return that exceeds the interest rate. We can represent this in a standard way by writing \(R_{0T}=\mathrm{e}^{(r+\beta)T}\) where \(r\) is the interest rate and \(\beta\) denotes the excess rate of return. The interest rate is determined by the relation \(\mathrm{e}^{rT}=(P_{0T})^{-1}\) and the excess rate of return is determined by the relation
\[\mathrm{e}^{\beta T}=\sum_{j=1}^{n}\frac{\langle\bar{x}_{j}|\hat{p}|x_{j} \rangle^{2}}{\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle}. \tag{42}\]
**Proposition 3**.: _In a market where the physical state and the pricing state differ, the optimal investment for an investor with logarithmic utility has a positive excess rate of return, and the expected utility gained from such an investment is greater than or equal to that gained from an investment in a risk-free bond._
Proof.: For general utility, the expected utility gained from the payout of an optimal investment is given by
\[\mathrm{tr}\left[\hat{p}\,\hat{U}\right]=\sum_{j=1}^{n}U(x_{j}^{*})\langle \bar{x}_{j}|\hat{p}|x_{j}\rangle. \tag{43}\]
Let us set \(U(x_{j}^{*})=\log x_{j}^{*}\) for logarithmic utility and insert (37). The result is
\[\sum_{j=1}^{n}U(x_{j}^{*})\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle=\log\big{[}( P_{0T})^{-1}X_{0}\big{]}+\sum_{j=1}^{n}\bigg{[}\langle\bar{x}_{j}|\hat{p}|x_{j} \rangle\log\frac{\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle}{\langle\bar{x}_{j}| \hat{q}|x_{j}\rangle}\bigg{]}. \tag{44}\]
The first term on the right-hand side of this equation isolates the part of the utility gain due to the interest rate. The second term can be interpreted as a _relative entropy_. In particular, if we set \(p_{j}=\langle\bar{x}_{j}|\hat{p}|x_{j}\rangle\) and \(q_{j}=\langle\bar{x}_{j}|\hat{q}|x_{j}\rangle\) then it is evident that \(\{p_{j}\}_{j=1,2,\ldots,n}\) and \(\{q_{j}\}_{j=1,2,\ldots,n}\) constitute a pair of mutually absolutely continuous probability distributions. The second term on the right then takes the form of a Kullback-Leibler divergence [35]:
\[D_{KL}(p,q)=\sum_{j=1}^{n}p_{j}\log\bigg{(}\frac{p_{j}}{q_{j}}\bigg{)}. \tag{45}\]
The utility gained thus provides a measure of the divergence between the physical density matrix and the pricing density matrix. Now, it is well known that the Kullback-Leibler divergence is _non-negative_, and strictly positive when the two distributions differ. It follows, then, that the utility gained from an optimal risky investment in a market where \(\hat{p}\) and \(\hat{q}\) are distinct will be greater than or equal to the utility gained from a risk-free bond investment, as claimed. Moreover, we have the following. The standard logarithmic inequality \(\log z\leq z-1\), which holds for \(z>0\), implies that
\[\log\left(\frac{p_{j}}{q_{j}}\right)\leq\left(\frac{p_{j}}{q_{j}} \right)-1 \tag{46}\]
for each \(j\). Hence, multiplying by \(p_{j}\) and summing we obtain
\[\sum_{j=1}^{n}p_{j}\log\left(\frac{p_{j}}{q_{j}}\right)\leq\sum_{ j=1}^{n}\left(\frac{p_{j}^{2}}{q_{j}}\right)-1. \tag{47}\]
Thus, we have
\[\mathrm{e}^{\beta T}=\sum_{j=1}^{n}\left(\frac{p_{j}^{2}}{q_{j}} \right)\geq 1+D_{KL}(p,q), \tag{48}\]
and by the strict positivity of the Kullback-Leibler divergence when the distributions differ we deduce that the excess rate of return \(\beta\) is positive for an optimal investment under logarithmic utility, as claimed. \(\square\)
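The chain of inequalities in the proof can be exercised numerically. In the sketch below (randomly generated toy distributions, an assumption for illustration), the growth factor \(\sum_j p_j^2/q_j\) of (42) always dominates \(1+D_{KL}(p,q)\), in line with (48):

```python
import numpy as np

# Toy verification of inequality (48): e^{beta T} >= 1 + D_KL(p, q),
# so beta > 0 whenever the distributions differ.
rng = np.random.default_rng(2)
for _ in range(100):
    p = rng.dirichlet(np.ones(6))
    q = rng.dirichlet(np.ones(6))
    growth = np.sum(p ** 2 / q)          # e^{beta T}, eq. (42)
    d_kl = np.sum(p * np.log(p / q))     # Kullback-Leibler divergence (45)
    assert growth >= 1.0 + d_kl - 1e-12  # inequality (48)
    assert growth > 1.0                  # positive excess return for p != q
```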
## V Kolmogorov vs Bell-Kochen-Specker
It is often maintained by physicists that quantum probability is more general than Kolmogorov's well-established "classical" theory of probability [33] and that the latter is contained as a special case of the former. There is no doubt that quantum probability, when laid out as a mathematical theory, does have a different look and feel when it is compared to Kolmogorov's theory; but despite the fact that numerous well-argued accounts of quantum probability can be found in the literature [13; 14; 22; 25; 34; 36; 42; 43] (see also [3; 14; 45]), some even taking an axiomatic approach, it is not that easy to pinpoint the exact sense in which quantum theory is _essentially_ non-Kolmogorovian - rather than, say, merely an intricate reworking of Kolmogorov's theory in a radically different form. This issue is compounded by the fact that, except in the most loose terms, it is difficult to say what one means by "probability" without embedding the concept in a rigorous framework; and it seems that there are at least two such frameworks available to us at the moment - namely, classical probability and quantum probability, each with a host of applications.
It is fortunate then that we have the results of Gleason [21], Bell [4; 5; 6], Kochen & Specker [24; 32], and others following in their footsteps, which add clarity to the matter. The point is that one has to work rather hard to come up with examples of situations in quantum probability that cannot be reduced to a classical probability model. But a number of such examples have been worked out involving finite-dimensional Hilbert spaces, so this creates the prospect of constructing financial models for claims based on the results of quantum measurements, in settings for which quantum probability is required in their analysis.
Since most of what we know of modern finance theory is based quite explicitly on Kolmogorov's framework, both for the derivation of rigorous theorems concerning the mathematics of financial markets as well as for the implementation of the large-scale computer models used for trading and risk management at financial institutions, it may be worthwhile to take note of a few examples of situations where quantum probability comes into play.
Numerous attempts have been made to generalize or extend the Kochen-Specker construction [11; 12; 29; 30; 37; 38; 39]. Perhaps the simplest of these yet put forward is that of Cabello et al., which entails the specification of nine different non-commuting observables on a four-dimensional Hilbert space, each admitting four distinct eigenvalues. In a financial context, one can think of the setup as involving a single quantum system being prepared in a state \(\hat{p}\) with nine different "draft financial contracts" drawn up, each requiring measurement of one of the nine observables. The contracts specify the payments that will be made in the event that one of the four possible results occurs for the measurement associated with a specific contract. It is of the nature of quantum probability that only one of the nine contracts can be implemented, so we can envisage a rational investor being presented with the alternatives and choosing one optimally in accordance with their needs.
Figure 1: Diagram illustrating a result of the Kochen-Specker type in a 4-dimensional Hilbert space. Each of the 9 vertices are met by 4 lines and each of the 18 lines join 2 vertices. The 18 lines represent a set of normalized projection operators with the property that the 4 projection operators meeting a given vertex are mutually orthogonal and sum to the identity operator. It is easy to see that it is impossible to “colour” the lines so that one blue line meets each vertex and 3 red lines meet each vertex. This illustrates the fact that in the standard Kolmogorov setup one cannot find a set of 18 random variables on a probability space \((\Omega,{\cal F},\mathbb{P})\), each taking values in the set \(\{0,1\}\), such that when the 18 random variables are assigned to the 18 lines, the sum of the 4 random variables meeting any given vertex will be one for all \(\omega\in\Omega\).
The setup is an elaborate although feasible one, and we can use the methods discussed to calculate the probabilities of the results for the nine different measurements and hence the expected utility gained from each choice. Each observable has four possible outcomes, thus determining an orthonormal tetrad in Hilbert space. These are the four eigenvectors of the Hermitian matrix corresponding to a given observable. The result of the measurement is to select one of these eigenvectors. Equivalently, each measurement amounts to the simultaneous measurement of four commuting projection operators, namely the projection operators associated with the four legs of the tetrad. The outcome of one of these four measurements will be unity and the rest nil.
The clever idea behind results of the Kochen-Specker type is to choose the observables so that some of the tetrad legs overlap when one moves from one observable to another. In the present situation, involving nine observables, the overlap structure is shown in Figure 1. Alongside each vertex of the enneagon one sees the corresponding tetrad, where to ease the typography we write \(\bar{1}\) for \(-1\). When two vertices are connected by a dotted line, this means that the associated tetrads share a vector in common. The analysis is simplified somewhat by the fact that the tetrads in this example can all be taken to be real.
If we label the nine observables \(\{\hat{X}_{r}\}_{r=1,2,\ldots,9}\) and if for each value of \(r\) the four projection operators associated with \(\hat{X}_{r}\) are denoted \(\{\hat{\pi}_{rj}\}_{j=1,2,3,4}\,,\) then the probability that outcome \(j\) will result, if contract \(r\) is chosen, is given by \(\operatorname{tr}(\hat{\pi}_{rj}\,\hat{p})\). The construction of an analogous setup within Kolmogorov's system turns out to be impossible. Since this is a rather sweeping statement, let us be a little more precise about what is being claimed. The point is that in Kolmogorov's theory, one would have to model the setup with 36 random variables on a single probability space. The 36 random variables are grouped into nine sets of four. Let's call these hypothetical random variables \(\{X_{rj}\}_{r=1,2,\ldots,9,j=1,2,3,4}\) (with no hats). Each random variable can take the value zero or one. Thus we have a total of 36 maps of the form
\[X_{rj}:\Omega\to\{0,1\}. \tag{49}\]
There are two requirements that have to be satisfied to match the layout of the quantum setup. First, the sum of the four random variables for a given value of \(r\) must be unity. This means that one of them must be equal to one and the other three must be equal to zero for any given outcome of chance \(\omega\in\Omega\). Secondly (this is where the rabbit goes into the hat) the 36 random variables have to be equal in pairs, in conformity with the structure of the diagram in Figure 1. Thus, the 36 random variables are cut down in effect to 18 by the requirement that they must match in pairs.
Can one find such a set of 18 random variables? The answer, perhaps surprisingly, is no. This can be checked by a colouring argument. Given Figure 1, can one colour each line red or blue in such a way that exactly one blue line meets each vertex? Suppose one finds a way of colouring four of the lines blue, no vertex being hit by more than one blue line. Since four blue lines meet at most eight vertices, that would leave at least one vertex unmet by a blue line. Suppose then one tried to colour five lines blue. Since five lines supply ten incidences among only nine vertices, at least one vertex would be hit by more than one blue line. This shows that it is impossible to construct a set of 18 random variables on a probability space in such a way that the required properties are satisfied.
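The counting argument can be confirmed by brute force. The sketch below uses a stand-in 4-regular graph on 9 vertices with 18 edges (an illustrative incidence structure with the same counts as Figure 1, not the actual configuration): no subset of lines meets each vertex exactly once, since such a subset would have to perfectly match an odd number of vertices.

```python
from itertools import combinations

# Illustrative 4-regular graph on 9 vertices: each vertex i is joined to
# i +/- 1 and i +/- 2 (mod 9), giving 18 edges, each joining 2 vertices.
V = range(9)
edges = [(i, (i + 1) % 9) for i in V] + [(i, (i + 2) % 9) for i in V]

def covers_each_vertex_once(subset):
    count = [0] * 9
    for a, b in subset:
        count[a] += 1
        count[b] += 1
    return all(c == 1 for c in count)

# A valid "blue" set would cover 9 vertices in disjoint pairs, i.e. would
# need 4.5 lines; exhaustive search confirms none exists.
found = any(covers_each_vertex_once(s)
            for k in range(len(edges) + 1)
            for s in combinations(edges, k))
assert not found
```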
In financial terms, this means that we cannot model the payouts of the nine contracts as random variables on a probability space in such a way that the outcome of chance determines the payouts of all nine. A sceptic might ask, "Isn't it unlikely in practice that one will come up against such a configuration of contracts?" Well, that may be so, but the point is that quantum finance can handle such configurations whereas classical finance can't.
## VI Portfolios as Multi-Particle Systems
Let's return now to the matter of portfolios. There are two rather distinct notions of portfolio that arise in quantum finance. The first notion involves a portfolio of contracts all depending in their payouts on the same experiment. In that case, we can fix the \(n\) axes of the \(n\)-dimensional Hilbert space determining the frame of the measurement and write \(\{\hat{\pi}_{j}\}_{j=1,2,\dots n}\) for the associated projection operators. Then for a given outcome of the experiment one of these projection operators will give the result unity and the rest zero. The projection operators can be regarded as the A-D securities for that experiment and it should be evident that any contingent claim based on the outcome of the given experiment can be written as a portfolio of \(n\) such A-D securities. Thus, for such claims we can write
\[\hat{X}=\sum_{j=1}^{n}\theta_{j}\,\hat{\pi}_{j}, \tag{50}\]
where the \(\{\theta_{j}\}_{j=1,2,\dots n}\) represent the holdings in the various A-D securities. More generally, if we allow short positions in the A-D securities, then the resulting overall position can be expressed uniquely as the difference between two positive claims, with the understanding that we net claims involving long and short positions in the same A-D security.
Clearly, a linear combination of two portfolios in this setting gives another portfolio. Furthermore, it should be evident that the operator corresponding to the portfolio can be represented as the sum of a trace part, proportional to the identity operator, and a trace-free part. The trace part represents a position (long or short) in the risk-free asset, and the remainder consists of investments in risky assets. For example, in two dimensions, a portfolio of the form \(2|z_{1}\rangle\langle\bar{z}_{1}|+|z_{2}\rangle\langle\bar{z}_{2}|\) consists of a long position of three-halves of one unit of the risk-free asset, a long position of one-half of a unit in the A-D security \(\hat{\pi}_{1}\) and a short position of one-half of a unit in the A-D security \(\hat{\pi}_{2}\), since we have \(2\hat{\pi}_{1}+\hat{\pi}_{2}=\frac{3}{2}(\hat{\pi}_{1}+\hat{\pi}_{2})+\frac{1 }{2}(\hat{\pi}_{1}-\hat{\pi}_{2})\). In this way we can isolate the risk-free part of a portfolio. This first notion of a portfolio corresponds rather closely to the notion of a portfolio in a one-period market that arises in classical finance theory [1; 7; 16; 17; 18; 19] and can be pursued further in that spirit. The point is that once the measurement basis for the underlying experiment has been fixed, the various associated operators arising for positions with different portfolio weightings commute.
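The two-dimensional decomposition just described can be written out explicitly. A minimal sketch (with the projectors taken diagonal in a fixed basis, an assumption for illustration):

```python
import numpy as np

# Split the portfolio 2*pi1 + pi2 into a trace part (risk-free position)
# and a trace-free part (risky position), as in the text.
pi1 = np.diag([1.0, 0.0])   # A-D security projectors in a fixed basis
pi2 = np.diag([0.0, 1.0])

X = 2 * pi1 + pi2
risk_free = (np.trace(X) / 2) * np.eye(2)   # trace part: (3/2) * identity
risky = X - risk_free                        # trace-free remainder

assert np.allclose(risk_free, 1.5 * np.eye(2))
assert np.isclose(np.trace(risky), 0.0)
assert np.allclose(risky, 0.5 * (pi1 - pi2))
```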
As we pointed out in Section II, it does not make sense to form a portfolio of several contracts each based on the same quantum system but with different measurement frames, since such measurements will in general be incompatible and cannot be simultaneously realized. In this respect, our approach differs completely from that of reference [2], whose authors attempt to set up a portfolio theory on exactly that basis, where all of the assets are associated with the same Hilbert space. This leads us to conclude that their main result (an analogue of the fundamental theorem of asset pricing) is in some respects ill-posed. The authors of [2] give no clear statement (let alone a definition) of what is meant in real terms by a "quantum asset" and merely assume the existence of a pricing state rather than deducing its existence. That said, there is some interesting overlap with the present work.
In our approach to the problem we consider portfolios of assets for which the payouts are based on separate measurements being made on two or more distinct quantum systems. Imagine, for example, a financial institution where in one room an experiment is carried out on Quantum System I, with certain results obtained, and another experiment is carried out in another room on Quantum System II, with certain results obtained. In each case, there are contracts leading to payouts depending on the results obtained.
Since the measurements do not interfere with one another (after all, they are carried out in different rooms) they can be carried out simultaneously, each delivering a certain number of units of account, so it makes sense to speak of holding a portfolio in the two assets, for which the payout is simply the totality of the payouts of the constituents of the portfolio, with appropriate weightings.
Let's see how we model such a situation. To simplify the discussion, we stick with the case where there are just two quantum systems involved, with measurements made on each of them. The setup can be easily generalized to the case where there are \(N\) such systems. The key idea is that to model a portfolio of two such quantum assets, we need to consider the tensor product of the Hilbert spaces of the individual systems. In fact, the two Hilbert spaces might even be of different dimensions.
The usual Dirac notation does not hold up so well in such a setting, so we use an _index notation_ instead, which works quite smoothly [8; 20]. Thus, let \({\cal H}_{1}\) be a Hilbert space of dimension \(n\) and let \({\cal H}_{2}\) be a Hilbert space of dimension \(n^{\prime}\), where \(n\) and \(n^{\prime}\) are not necessarily the same. We write \(\xi^{a}\) and \(\xi^{a^{\prime}}\) for typical elements of \({\cal H}_{1}\) and \({\cal H}_{2}\) respectively, where \(a=1,2,\,\ldots n\) and \(a^{\prime}=1,2,\,\ldots n^{\prime}\). Thus indices without dashes refer to the first Hilbert space and indices with dashes refer to the second Hilbert space. We write \(\eta_{a}\) and \(\eta_{a^{\prime}}\) for typical elements of the corresponding dual spaces \({\cal H}_{1}^{*}\) and \({\cal H}_{2}^{*}\). The complex conjugates of \(\xi^{a}\) and \(\xi^{a^{\prime}}\) are denoted \(\bar{\xi}_{a}\) and \(\bar{\xi}_{a^{\prime}}\) respectively. Then for the inner product between \(\xi^{a}\) and \(\eta_{a}\) we write \(\xi^{a}\,\eta_{a}\) and for the inner product between \(\xi^{a^{\prime}}\) and \(\eta_{a^{\prime}}\) we write \(\xi^{a^{\prime}}\,\eta_{a^{\prime}}\), with the usual summation convention.
We are interested in the product Hilbert space \({\cal H}_{12}={\cal H}_{1}\otimes{\cal H}_{2}\), and we write \(\xi^{aa^{\prime}}\in{\cal H}_{12}\) for a typical element of this space. Then we write \(\eta_{aa^{\prime}}\) for a typical element of \({\cal H}_{12}^{*}\) and \(\bar{\xi}_{aa^{\prime}}\) for the complex conjugate of \(\xi^{aa^{\prime}}\), and for the inner product of \(\xi^{aa^{\prime}}\) and \(\eta_{aa^{\prime}}\) we write \(\xi^{aa^{\prime}}\,\eta_{aa^{\prime}}\). The state of a two-particle system takes the form of a density matrix \(p^{aa^{\prime}}_{bb^{\prime}}\). Thus we require that it should be Hermitian, of unit trace, and positive, so
\[p^{aa^{\prime}}_{bb^{\prime}}=\bar{p}_{bb^{\prime}}^{\;\,aa^{\prime}},\quad p^{ cc^{\prime}}_{cc^{\prime}}=1,\quad p^{aa^{\prime}}_{bb^{\prime}}\alpha^{b} \bar{\alpha}_{a}\beta^{b^{\prime}}\bar{\beta}_{a^{\prime}}\geq 0 \tag{51}\]
for all \(\alpha^{a},\beta^{a^{\prime}}\). A two-particle density matrix is _pure_ if \(p^{aa^{\prime}}_{bb^{\prime}}=\xi^{aa^{\prime}}\,\bar{\xi}_{bb^{\prime}}\) for some state vector \(\xi^{aa^{\prime}}\). We say that the particles are _independent_ if
\[p^{aa^{\prime}}_{bb^{\prime}}=p^{a}_{b}\,p^{a^{\prime}}_{b^{\prime}} \tag{52}\]
for some pair of one-particle states \(p^{a}_{b}\) and \(p^{a^{\prime}}_{b^{\prime}}\). The state is said to be _separable_ if it can be written in the form
\[p^{aa^{\prime}}_{bb^{\prime}}=\sum_{r=1}^{k}p^{a}_{b}(r)\,p^{a^{\prime}}_{b^ {\prime}}(r), \tag{53}\]
for some collection of \(2k\) one-particle states \(\{p^{a}_{b}(r)\}_{r=1,2,\ldots k}\) and \(\{p^{a^{\prime}}_{b^{\prime}}(r)\}_{r=1,2,\ldots k}\). But if the two-particle state is not separable then we say that the particles are _entangled_.
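The distinction between independent and entangled states can be illustrated with a toy two-qubit computation (an example of our own, not part of the text): for a product state the reduced density matrix of each subsystem is pure, whereas for an entangled pure state it is mixed.

```python
import numpy as np

def partial_trace_2(rho, keep):
    # rho is a 4x4 two-qubit density matrix; reshape to indices (a, a', b, b')
    rho = rho.reshape(2, 2, 2, 2)
    if keep == 0:
        return np.trace(rho, axis1=1, axis2=3)  # trace out system II
    return np.trace(rho, axis1=0, axis2=2)      # trace out system I

up = np.array([1.0, 0.0])
product = np.kron(np.outer(up, up), np.outer(up, up))    # independent systems
bell_vec = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
bell = np.outer(bell_vec, bell_vec)                       # entangled pure state

purity = lambda r: np.trace(r @ r).real
assert np.isclose(purity(partial_trace_2(product, 0)), 1.0)  # pure marginal
assert np.isclose(purity(partial_trace_2(bell, 0)), 0.5)     # mixed marginal
```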
Now we are in a position to discuss the idea of measurements on a two-particle system and the contracts one can associate with such measurements. A generic contract based on the outcome of a measurement made on a two-particle system is described by a Hermitian operator \(X^{aa^{\prime}}_{bb^{\prime}}\). We are interested in the case when the measurement splits into a measurement on System I and a measurement on System II and one adds the results. Such a contract takes the form
\[X^{aa^{\prime}}_{bb^{\prime}}=U^{a}_{b}\delta^{a^{\prime}}_{b^{\prime}}+\delta ^{a}_{b}V^{a^{\prime}}_{b^{\prime}}, \tag{54}\]
where \(\delta^{a}_{b}\) and \(\delta^{a^{\prime}}_{b^{\prime}}\) denote the identity operators on \({\cal H}_{1}\) and \({\cal H}_{2}\) respectively. The eigenstates of such an operator are of the form
\[p^{aa^{\prime}}_{bb^{\prime}}=\alpha^{a}\bar{\alpha}_{b}\,\beta^{a^{\prime}}\bar {\beta}_{b^{\prime}}, \tag{55}\]
where \(\alpha^{a}\) is an eigenvector of \(U^{a}_{b}\) and \(\beta^{a^{\prime}}\) is an eigenvector of \(V^{a^{\prime}}_{b^{\prime}}\). Thus \(U^{a}_{b}\alpha^{b}=u\alpha^{a}\) and \(V^{a^{\prime}}_{b^{\prime}}\beta^{b^{\prime}}=v\beta^{a^{\prime}}\) for some \(u,v\in\mathbb{R}^{+}\backslash\{0\}\) and the sum \(u+v\) gives the overall outcome of the measurement. Such a contract can be thought of as representing a portfolio consisting of one unit of a contract based on System I and one unit of a contract based on System II. More generally, for a portfolio consisting of \(\theta_{1}\) units of the first contract and \(\theta_{2}\) units of the second contract we have
\[X^{aa^{\prime}}_{bb^{\prime}}(\theta_{1},\theta_{2})=\theta_{1}\,U^{a}_{b} \delta^{a^{\prime}}_{b^{\prime}}+\theta_{2}\,\delta^{a}_{b}V^{a^{\prime}}_{b^ {\prime}} \tag{56}\]
and the payout will be of the form \(\theta_{1}u+\theta_{2}v\). The setup for a portfolio of arbitrary size can be constructed analogously. In particular, one can check that the expected payout of a portfolio is equal to the sum of the expectations of the constituents. This is because whenever the density matrix of the two-particle state hits one of the identity operators in the portfolio operator, all but one of the systems gets traced out and one is left with the trace of the product of a single particle density operator and the observable associated with that system. For example, in the case of a two-particle system one finds that
\[\begin{aligned} p^{bb^{\prime}}_{aa^{\prime}}X^{aa^{\prime}}_{bb^{\prime}}(\theta_{1},\theta_{2}) &= p^{bb^{\prime}}_{aa^{\prime}}\big{(}\theta_{1}\,U^{a}_{b}\delta^{a^{\prime}}_{b^{\prime}}+\theta_{2}\,\delta^{a}_{b}V^{a^{\prime}}_{b^{\prime}}\big{)}\\ &= \theta_{1}\,p^{bb^{\prime}}_{aa^{\prime}}\,U^{a}_{b}\,\delta^{a^{\prime}}_{b^{\prime}}+\theta_{2}\,p^{bb^{\prime}}_{aa^{\prime}}\,\delta^{a}_{b}\,V^{a^{\prime}}_{b^{\prime}}\\ &= \theta_{1}\,p^{b}_{a}\,U^{a}_{b}+\theta_{2}\,p^{b^{\prime}}_{a^{\prime}}\,V^{a^{\prime}}_{b^{\prime}}, \end{aligned}\tag{57}\]
where \(p^{b}_{a}=p^{bc^{\prime}}_{ac^{\prime}}\) and \(p^{b^{\prime}}_{a^{\prime}}=p^{cb^{\prime}}_{ca^{\prime}}\). Likewise one can check that the price of a portfolio is equal to the weighted sum of the prices of its constituents. This may not be obvious, but the point is that the two-particle system is itself a quantum system with a financial observable based on it, of the form (54), so by Proposition 1 there exists a pricing operator \(q^{bb^{\prime}}_{aa^{\prime}}\) such that
\[q^{bb^{\prime}}_{aa^{\prime}}X^{aa^{\prime}}_{bb^{\prime}}(\theta_{1},\theta _{2})=\theta_{1}\,q^{b}_{a}\,U^{a}_{b}+\theta_{2}\,q^{b^{\prime}}_{a^{\prime} }V^{a^{\prime}}_{b^{\prime}}, \tag{58}\]
where the traced-out operators \(q^{b}_{a}=q^{bc^{\prime}}_{ac^{\prime}}\) and \(q^{b^{\prime}}_{a^{\prime}}=q^{cb^{\prime}}_{ca^{\prime}}\) are the pricing operators associated with the respective individual systems.
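Both facts - that the spectrum of the portfolio operator (56) consists of the sums \(\theta_{1}u+\theta_{2}v\) of individual eigenvalues, and that expectations against a two-particle state split as in (57) - are easy to confirm numerically. The following sketch uses randomly generated Hermitian observables and density matrices with illustrative dimensions 2 and 3 (example data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def random_density(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

n1, n2 = 2, 3
U, V = random_hermitian(n1), random_hermitian(n2)
th1, th2 = 0.7, 1.3

# Portfolio operator (56): X = th1 * U (x) I + th2 * I (x) V
X = th1 * np.kron(U, np.eye(n2)) + th2 * np.kron(np.eye(n1), V)

# Its spectrum consists of the weighted sums of individual eigenvalues
sums = np.sort([th1 * u + th2 * v
                for u in np.linalg.eigvalsh(U) for v in np.linalg.eigvalsh(V)])
spec_ok = bool(np.allclose(np.sort(np.linalg.eigvalsh(X)), sums))

# Expectation against any two-particle state splits as in Eq. (57)
p = random_density(n1 * n2)
p4 = p.reshape(n1, n2, n1, n2)            # p[a, a', b, b']
p1 = np.einsum('acbc->ab', p4)            # trace out System II
p2 = np.einsum('cacb->ab', p4)            # trace out System I
lhs = np.trace(p @ X).real
rhs = (th1 * np.trace(p1 @ U) + th2 * np.trace(p2 @ V)).real
split_ok = bool(np.isclose(lhs, rhs))

print(spec_ok, split_ok)  # True True
```

The same splitting computation applies verbatim to the pricing identity (58), with the pricing operator in place of the physical density matrix.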
There is one further aspect of the portfolio problem that can be analyzed and this concerns the matter of correlations. If the state of the two-particle system is of the form (52), so the two particles are independent, then the outcomes of the experiments on the two systems will be uncorrelated. But if the systems are entangled, then the correlation will in general be non-vanishing, leading to relations such as
\[p^{bb^{\prime}}_{aa^{\prime}}[U^{a}_{b}-\delta^{a}_{b}\,p^{d}_{c}\,U^{c}_{d}] \big{[}V^{a^{\prime}}_{b^{\prime}}-\delta^{a^{\prime}}_{b^{\prime}}\,p^{d^{ \prime}}_{c^{\prime}}\,V^{c^{\prime}}_{d^{\prime}}\big{]}\neq 0. \tag{59}\]
The point about entanglement is that even if the two systems are in separate rooms (or even different cities) the outcomes may be correlated, owing to the original construction of the state of the two-particle system to which they belong. The same is true of the prices: if \(q^{aa^{\prime}}_{bb^{\prime}}\) is entangled, then there will be correlations in the prices, as shown in relations such as
\[q^{bb^{\prime}}_{aa^{\prime}}[U^{a}_{b}-\delta^{a}_{b}\,q^{d}_{c}\,U^{c}_{d}] \big{[}V^{a^{\prime}}_{b^{\prime}}-\delta^{a^{\prime}}_{b^{\prime}}\,q^{d^{ \prime}}_{c^{\prime}}\,V^{c^{\prime}}_{d^{\prime}}\big{]}\neq 0. \tag{60}\]
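As an illustration (not from the paper), the covariance appearing on the left-hand sides of (59) and (60) can be evaluated numerically. In the sketch below the Pauli-\(Z\) matrix plays the role of both \(U\) and \(V\) on a pair of two-level systems: a maximally entangled state gives a nonzero covariance, while a product state gives zero.

```python
import numpy as np

def covariance(p, U, V):
    """LHS of Eq. (59)/(60) for a state (or pricing) matrix p on H1 (x) H2."""
    n1, n2 = U.shape[0], V.shape[0]
    p4 = p.reshape(n1, n2, n1, n2)
    p1 = np.einsum('acbc->ab', p4)                    # System I marginal
    p2 = np.einsum('cacb->ab', p4)                    # System II marginal
    A = np.kron(U - np.trace(p1 @ U) * np.eye(n1), np.eye(n2))
    B = np.kron(np.eye(n1), V - np.trace(p2 @ V) * np.eye(n2))
    return np.trace(p @ A @ B).real

Z = np.diag([1.0, -1.0])                              # example observable on each system

xi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)      # entangled (Bell) state vector
p_ent = np.outer(xi, xi.conj())

plus = np.array([1.0, 1.0]) / np.sqrt(2)              # independent |+>|+> state
p_ind = np.kron(np.outer(plus, plus.conj()), np.outer(plus, plus.conj()))

print(covariance(p_ent, Z, Z))   # nonzero (close to 1): outcomes correlated
print(covariance(p_ind, Z, Z))   # close to 0: independent particles
```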
Thus, in the general situation we see that when there is a market based on contracts associated with measurements being made on a number of different quantum systems, there will be correlations between outcomes of measurements and correlations between prices, where the former are determined by the structure of the physical density operator for the market as a whole and the latter by the structure of the pricing operator for the market as a whole.
The physical density operator is objective in nature, the only limitations in its determination being in the usual practicalities of the laboratory settings where the states are manufactured. The pricing operator, on the other hand, if classical finance theory is any guide in the matter [16, 17, 18, 19, 7], will be determined by the collective appetite for risk and reward among market participants and also perhaps by supply and demand related considerations. Hence, as in all markets, prices will be subject to fluctuation and change over time and may even be amenable to a Bayesian treatment. In the one-period setting that we have investigated here, the pricing operator simply is what it is, and all we can say of a definite nature about it is that it exists and that the pricing operator and the physical density operator are equivalent, as we have seen in Proposition 1.
In the one-period version of the theory we have presented, one can be agnostic on the matter of dynamics. This is because \(\hat{p}\), \(\hat{q}\) and \(P_{0T}\) are specified at time \(0\) and no further data are required. By the time one reaches \(T\), however, the physical state will have evolved from the initial state \(\hat{p}\) to a new state \(\hat{p}_{T}\) given by
\[\hat{p}_{T}=\mathrm{e}^{\mathrm{i}\hat{H}T}\,\hat{p}\,\mathrm{e}^{-\mathrm{i }\hat{H}T}, \tag{61}\]
where \(\hat{H}\) denotes the Hamiltonian, assuming that there is no physical interaction with the environment. This new state can then be used for working out the probabilities of outcomes for the next period in a multi-period setting. One might conjecture that \(\hat{q}\) will undergo a similar unitary transformation, in the absence of any Bayesian updating. This will ensure that the physical state and the pricing state will continue to share the same null space. But one can also arrive at this conclusion if one works in the Heisenberg representation from the outset, in which case the sole function of the unitary operators is to rotate the measurement frames for each observable, leaving \(\hat{p}\) and \(\hat{q}\) fixed, in the absence of external interventions or state changes arising as a consequence of measurements having been made.
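The statement that the unitary evolution (61) preserves the trace and the spectrum of the state - and hence the null space shared by \(\hat{p}\) and \(\hat{q}\) - can be checked directly. The sketch below is illustrative only (a random 3-level Hamiltonian and state, \(\hbar=1\)), computing \(\mathrm{e}^{\mathrm{i}\hat{H}T}\) via the spectral decomposition of \(\hat{H}\):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Hamiltonian and physical state on a 3-level system (hbar = 1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
p0 = B @ B.conj().T
p0 /= np.trace(p0)

# e^{iHT} via the spectral decomposition of H
T = 1.7
w, S = np.linalg.eigh(H)
Ut = S @ np.diag(np.exp(1j * w * T)) @ S.conj().T
pT = Ut @ p0 @ Ut.conj().T                # Eq. (61)

# Unitary conjugation preserves the trace and the eigenvalues of the state
trace_ok = bool(np.isclose(np.trace(pT).real, 1.0))
spec_ok = bool(np.allclose(np.linalg.eigvalsh(pT), np.linalg.eigvalsh(p0)))
print(trace_ok, spec_ok)  # True True
```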
That the non-Kolmogorovian character of quantum probability may have implications for the development of quantum technologies is widely appreciated. See, e.g., [26] and references cited therein. And indeed, if quantum computers eventually replace the classical computers currently used for algorithmic trading by financial institutions, as they no doubt will, then the role of valuations of the type we have considered here may be important. There is also a widely held view that quantum probability may play a part in cognitive science and hence behavioural finance as well. See, e.g., [10, 23, 31, 40, 41, 46] and references cited therein.
It would be beyond the scope of the present discussion to look at either of these proposals in any detail here, but on the latter point one could well imagine that if judgements and decisions are made on the basis of quantum probability then in some situations these assessments would involve _valuations_, rather than probability estimates, and it would be the pricing operator, rather than the physical density operator, that would come into play in making these valuations. In such cases, external intervention in the form of Bayesian updating could be modelled, for example, as in reference [9]. This is consistent with the point we made earlier about the pricing operator being specific to the risk and reward profiles of market operatives and in a state of flux as new information arrives. These and other further developments of the theory we hope to explore elsewhere.
###### Acknowledgements.
The authors wish to thank D C Brody and B K Meister for helpful comments.
# Excited state preparation of trapped ultracold atoms via swept potentials

Daniel J. Bosworth, Maxim Pyzh, Peter Schmelcher (arXiv:2306.09238v2, 2023-06-15, http://arxiv.org/abs/2306.09238v2)
###### Abstract
We study the out-of-equilibrium dynamics of non-interacting atoms confined within a one-dimensional harmonic trap triggered by dragging an external long-range potential through the system. The symmetry-breaking nature of this moving potential leads to trap-induced shape resonances (TISR) between adjacent eigenstates in the atoms' effective potential. We propose to exploit the TISR to selectively excite the atoms into higher vibrational states of the harmonic trap by controlling the motion of the dragged potential. To this end, we consider two protocol designs: the first protocol strives to maintain adiabaticity at critical points during the atoms' dynamics, whilst the second protocol utilises the fast tunnelling of the atoms within their effective double-well potential. These protocols take place in the few to many millisecond regime and achieve high-fidelity excitation of the atoms into pure vibrational states and superpositions thereof. Overall, our study highlights the significance of TISR in controlling and manipulating atom dynamics and offers intuitive protocols for achieving desired excitations.
## I Introduction
The advent of Bose-Einstein condensation in dilute gases of alkali atoms [1, 2, 3] ushered in an era of pure quantum systems which continues to drive progress across atomic, molecular, optical and many-body physics nearly three decades later. The ubiquity of ultracold quantum gases in both fundamental research [4, 5] as well as quantum applications [6, 7, 8, 9] is due, in part, to the exceptional control over their inter-particle interactions which is enabled through the existence of tunable scattering resonances [10]. In particular, at sub-\(\mu\)K gas temperatures the collision energies of the particles are of a similar scale to molecular binding energies, leading to the emergence of Feshbach resonances (FBR) [10] whose presence can be controlled using either magnetic [11, 12] or optical fields [13]. As well as providing flexible control over intra- and inter-species interactions in binary [14] and triple [15, 16] mixtures, FBR can be used to magnetoassociate cold diatomic molecules [17]. Recently, FBR have been observed for the first time in atom-ion collisions [18], which marks an important milestone toward realising ultracold hybrid atom-ion systems [19, 20, 21].
Another decisive property of ultracold quantum gas experiments is their ability to prepare ensembles with a well-defined number of particles [22, 23] and the adaptability of their trapping geometries in terms of shape [24, 25], periodicity [26] and dimensionality [27]. Employing quasi-1D and -2D traps enables the utilisation of a further class of scattering resonances, known as confinement-induced resonances (CIR) [28, 29, 30]. In a quasi-1D trap for example, these occur when a scattering state along the longitudinal trap axis becomes degenerate with a transversally-excited molecular bound state. CIR may thus be controlled through varying the longitudinal and transversal trap frequencies and have been used to associate diatomic molecules [31].
Both CIR and FBR arise from couplings between discrete and continuous states in separate scattering channels. In contrast, single-channel resonances - also known as shape resonances - arise when a scattering state becomes degenerate with a quasi-bound state within the same channel, e.g. due to the presence of a centrifugal barrier [32]. Further examples are trap-induced shape resonances (TISR), a term first coined by Stock _et al._[33] in a theoretical study of colliding pairs of particles confined in separate traps. At specific separations between the traps, unbound pair states are coupled to molecular pair states through shape resonances appearing in the effective potential of the reduced single-particle Hamiltonian describing the pair's relative coordinate. Subsequent theoretical works have uncovered TISR for a colliding atom-ion pair in separate traps [34] and proposed using these to realise two-qubit quantum gates [35] as well as to excite atoms into higher Bloch bands of an optical lattice [36]. More recently, TISR were studied theoretically in the context of a trapped atom interacting with multiple static impurities [37] and a landmark experiment carried out by Ruttley _et al._[38] used TISR to facilitate the 'mergoassociation' of single RbCs molecules starting from the two constituent atoms confined initially in separate optical tweezers.
In this work, we propose protocols that exploit TISR arising in the collision between trapped non-interacting atoms and an external potential in order to excite the atoms into higher vibrational states of the trap. We consider the dynamics of the atoms in a quasi-1D trap
subject to a dynamically-swept external potential. The form of the potential is chosen to be repulsive at short range and attractive at long-range, such that the dragged potential supports bound states which offers additional flexibility and a more diverse dynamical response of the system. We explore how tuning the external potential's shape and drag speed can be exploited to excite the atom from the ground state into pure excited vibrational states or superpositions of vibrational states. We propose two different types of protocols for achieving this goal which rely on avoided crossings arising in the atoms' discrete energy spectrum due to the TISR. The first protocol, slow yet robust, relies on dragging the potential adiabatically around certain critical TISR in the energy spectrum. The second protocol, significantly faster yet requiring precise control over the external potential's position, exploits the ability of the atom to undergo relatively fast tunnelling at the TISR.
Our work is laid out as follows. In Section II, we introduce the setup and discuss the emergence of the TISR in our system and how these couple the atoms to higher excited states. Section III and section IV focus on the two different state preparation protocols and include proof-of-principle demonstrations for both as well as a discussion of their limitations. Section V summarises the present study and discusses directions for future work.
## II Time-dependent model and emergence of TISR
We begin in section II.1 by introducing the time-dependent Hamiltonian which models the collision between a dragged external potential and trapped atoms in one spatial dimension. Section II.2 considers the scenario in which the external potential is swept through the trap at a constant velocity, highlighting the emergence of TISR during the collision and the role played by the potential's profile and speed. This motivates the discussion of the state preparation protocols which are the focus of Sections III and IV.
### Model: collision of trapped atoms with a dragged potential
Our system is comprised of atoms of mass \(m\) confined within a quasi-1D harmonic trap centred at the origin. The quasi-1D confinement requires \(\omega_{\parallel}\ll\omega_{\perp}\), where \(\omega_{\parallel}\) and \(\omega_{\perp}\) are the longitudinal and transverse trapping frequencies, respectively. The longitudinal axis is chosen to be parallel with the \(z\)-axis and the corresponding longitudinal eigenstates and associated eigenenergies are denoted by \(\{\phi_{n}(z)\}\) and \(\{\epsilon_{n}=n+1/2\},\ n\in\mathbb{N}\). Transverse excitations are neglected throughout this paper, such that we restrict ourselves to a one-dimensional problem. Finally, unless stated otherwise all quantities are given in units defined by the oscillator length \(a_{\rm HO}=\sqrt{\hbar/m\omega_{z}}\) and the energy spacing \(\varepsilon_{\rm HO}=\hbar\omega_{z}\) of the longitudinal eigenstates.
At \(t=0\), the atom occupies the trap's vibrational ground state \(\phi_{0}(z)\). For \(t>0\), it experiences an additional time-dependent potential \(V_{o}(z,t)\) which is swept from one side of the system to the other along the \(z\)-axis. The dragged potential's profile is comprised of a short-range repulsive barrier with an attractive long-range tail and takes the form
\[V_{o}(z,z_{o}(t))=ae^{-b(z-z_{o}(t))^{2}}-\frac{1}{2(z-z_{o}(t))^{4}+1/c}, \tag{1}\]
Figure 1: **Trap-induced shape resonances.** (a) Discrete atomic energy spectrum as a function of the position \(z_{o}\) of the external potential (see inset) relative to the trap centre. The energies of the lowest few harmonic trap eigenstates are labelled \(\epsilon_{i}\). The dashed lines show the approximate energy shift of the external potential’s bound states \(\bar{\epsilon}_{i}+z_{o}^{2}/2\)[34] as a function of \(z_{o}\), where \(\bar{\epsilon}_{i}\) are the energies of the bound states without the harmonic trap. The solid circles highlight examples of trap-induced shape resonances (TISR). (b) and (c) show plots of the instantaneous eigenstates \(\{\varphi_{i}(z;z_{o})\}\) near a TISR between (b) a bound state of the dragged potential and a trap state and (c) two trap states. The solid blue line shows the effective potential experienced by the atoms and the dashed gray line is the harmonic trap potential. Eigenstates are vertically offset by their energy. Here, we have used the parameters \(a=120\), \(b=4\sqrt{10\ c}\) and \(c=40\) for the external potential (1).
where \(z_{o}(t)\) is the displacement of the repulsive barrier from the centre of the trap. The model parameters \(a\), \(b\) and \(c\) set the height and width of the barrier as well as the depth of the wells formed by the attractive tail, respectively. A plot of the potential is provided in the inset of Fig. 1 (a). This potential could be created in an experiment using, for example, a tightly-trapped ion [20, 39, 40] or a shaped optical potential [24].
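For reference, Eq. (1) with the parameter values used in Fig. 1 can be evaluated directly. A minimal sketch (assuming NumPy; the function name `V_o` mirrors the text's notation):

```python
import numpy as np

def V_o(z, z_o, a=120.0, c=40.0):
    """Dragged potential of Eq. (1): Gaussian barrier plus attractive tail.
    Defaults are the Fig. 1 parameters a = 120, b = 4*sqrt(10 c), c = 40."""
    b = 4.0 * np.sqrt(10.0 * c)
    dz = z - z_o
    return a * np.exp(-b * dz**2) - 1.0 / (2.0 * dz**4 + 1.0 / c)

# At the barrier top (z = z_o) the Gaussian contributes a and the tail -c
print(V_o(0.0, 0.0))   # close to a - c = 80.0
print(V_o(2.0, 0.0))   # negative: the attractive tail dominates away from the barrier
```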
Summarising the above considerations, we write the single atom Hamiltonian as
\[\hat{H}(z_{o}(t))=-\frac{1}{2}\frac{d^{2}}{dz^{2}}+V_{\text{trap}}(z)+V_{o}(z,z _{o}(t)), \tag{2}\]
where \(V_{\text{trap}}(z)=z^{2}/2\) describes the time-independent harmonic trap. Eq. (2) is parametrically-dependent on the position of the dragged potential \(z_{o}\) and we denote its eigenstates and eigenvalues with \(\{\varphi_{n}(z;z_{o})\}\) and \(\{\varepsilon_{n}(z_{o})\}\), respectively, to contrast them with those of the pure harmonic trap (\(\phi_{n}(z)\) and \(\epsilon_{n}\)).
### Emergence of trap-induced shape resonances (TISR)
Let us first solve the time-independent problem to clarify the \(z_{o}\)-dependence of the atoms' discrete energy spectrum \(\{\varepsilon_{n}(z_{o})\}\). We choose the following model parameters for the external potential (1): \(a=120\), \(b=4\sqrt{10\ c}\) and \(c=40\). These values are similar to those used in related works in which (1) was employed as a model for atom-ion interactions [39, 40, 41, 42, 43]. For this choice of parameters, the potential supports two bound states with energies \(\bar{\epsilon}_{0}=-12.2\) and \(\bar{\epsilon}_{1}=-10.4\) which are shown in the inset of Fig. 1 (a).
Fig. 1 (a) shows the evolution of the lowest nine eigenvalues with \(z_{o}\), obtained using exact diagonalisation of the Hamiltonian (2). For \(|z_{o}|>6\), the lowest eigenstates have a regular energy-spacing \(\hbar\omega_{z}\) and describe states of the unperturbed harmonic trap. Closer to the trap centre (\(4<|z_{o}(t)|<6\)), the energies of the external potential's bound states are reduced, which leads to level repulsions between the bound states and the trap eigenstates, generating two chains of avoided crossings. Each avoided crossing is an example of a _trap-induced shape resonance_ (TISR), first predicted by Stock _et al._ for colliding pairs of trapped atoms [33]. That TISR are indeed a form of shape resonance can be seen in Fig. 1 (b), which shows the trap's ground-state near its avoided crossing with the lower bound state of the external potential at \(z_{o}=-5.25\). Here, these near-degenerate eigenstates are separated by a barrier that forms in the atom's effective potential created by the sum of \(V_{o}(z,z_{o})\) and \(V_{\text{trap}}(z)\). In addition, a second variety of TISR manifests in this system, this time due to the repulsive barrier in Eq. (1). One such example is shown in Fig. 1 (c), where two (perturbed) trap states are separated on either side of the external potential's Gaussian barrier at \(z_{o}=1.48\). We see therefore that the repulsive and attractive components of (1) each create their own class of TISR. Crucially, both kinds of shape resonances present in Fig. 1 would not appear in the absence of the trap's discrete energy spectrum.
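A \(z_{o}\)-dependent spectrum of this kind can be reproduced with a simple grid-based diagonalisation. The sketch below is an illustration with arbitrary grid parameters (not the paper's numerical setup): it builds Hamiltonian (2) with a three-point finite-difference Laplacian and checks that, with the external potential far from the trap centre, the lowest levels reduce to the bare oscillator values \(n+1/2\).

```python
import numpy as np

def spectrum(z_o, n_levels=9, L=12.0, N=600, a=120.0, c=40.0):
    """Lowest eigenvalues of Hamiltonian (2) on a uniform grid [-L, L]."""
    b = 4.0 * np.sqrt(10.0 * c)
    z, dz = np.linspace(-L, L, N, retstep=True)
    V = 0.5 * z**2 + a * np.exp(-b * (z - z_o)**2) \
        - 1.0 / (2.0 * (z - z_o)**4 + 1.0 / c)
    # Kinetic energy -1/2 d^2/dz^2 via a three-point finite-difference stencil
    T = (np.diag(np.full(N, 1.0 / dz**2))
         - np.diag(np.full(N - 1, 0.5 / dz**2), 1)
         - np.diag(np.full(N - 1, 0.5 / dz**2), -1))
    return np.linalg.eigvalsh(T + np.diag(V))[:n_levels]

# Far from the trap centre the low-lying levels are those of the bare oscillator
eps = spectrum(z_o=-10.0)
print(np.round(eps[:4], 2))  # approximately [0.5, 1.5, 2.5, 3.5]
```

Scanning `z_o` over, e.g., `np.linspace(-8, 8, 200)` and stacking the returned levels reproduces the chains of avoided crossings visible in Fig. 1 (a).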
Let us now turn to the time-dependent solution of the Hamiltonian (2). In the remainder of this section, we examine the simplest case of the external potential Eq. (1) moving at a constant velocity \(\dot{z_{o}}\) from one side of the system to the other. We are interested in the state of the atoms at long times, i.e. after the external potential has passed into and through the system and exited on the other side, and in which factors influence it.
At \(t=0\), the atoms occupy the ground-state of the trap \(\psi(z,0)=\phi_{0}(z)\). We choose the same model parameters for the external potential as before. For numerical purposes, we set the external potential's position at \(t=0\) to be \(z_{o}(0)=-6\), which is sufficiently far-removed from the trap centre to prevent an immediate quench of the initial atomic state. We determine the atomic dynamics \(\psi=\psi(t)\) by solving the time-dependent Schrödinger equation via wavepacket propagation using a dynamically-optimised truncated basis
Figure 2: **Path of atomic state along the energy curves.** (a)-(f) The atomic energy spectrum (grey) weighted by the overlap of the atomic state with the instantaneous eigenstates \(|\langle\psi(t)|\varphi_{n}(z;z_{o})\rangle|^{2}\) of the Hamiltonian (2) as a function of \(z_{o}(t)\) for constant drag speeds (a) \(\dot{z}_{o}=0.01\), (b) \(\dot{z}_{o}=0.10\) and (c) \(\dot{z}_{o}=1.00\). The model parameters are the same as those given in Fig. 1. (d)-(f) show the same as (a)-(c) for a barrier height \(a=320\). (g),(h) Open circles show the overlap of the final state \(|\langle\psi_{f}(z)|\phi_{n}(z)\rangle|^{2}\) with the first (g) 11 (h) 101 harmonic trap eigenstates \(\{\phi_{n}\}\) (\(n=0,1,2,\ldots\)) for various drag speeds \(\dot{z}_{o}\). Filled circles in (g) are values obtained using the Landau-Zener formula (3).
representation [44].
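The paper propagates the wavepacket in a dynamically-optimised truncated basis [44]. As a rough stand-in, the sketch below uses a standard split-step Fourier method (a different, generic technique; grid, time step and drag parameters are illustrative) to evolve the trap ground state while the external potential moves at constant speed:

```python
import numpy as np

def propagate(psi, z, dt, steps, z_o0, v, a=120.0, c=40.0):
    """Split-step Fourier propagation of psi under Hamiltonian (2)
    while the external potential moves at constant speed v."""
    b = 4.0 * np.sqrt(10.0 * c)
    dz = z[1] - z[0]
    k = 2.0 * np.pi * np.fft.fftfreq(z.size, d=dz)
    kinetic = np.exp(-0.5j * k**2 * dt)                # full kinetic step
    for n in range(steps):
        z_o = z_o0 + v * n * dt
        V = 0.5 * z**2 + a * np.exp(-b * (z - z_o)**2) \
            - 1.0 / (2.0 * (z - z_o)**4 + 1.0 / c)
        psi = np.exp(-0.5j * V * dt) * psi             # half potential step
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = np.exp(-0.5j * V * dt) * psi             # half potential step
    return psi

z = np.linspace(-20, 20, 1024)
dz = z[1] - z[0]
psi0 = np.pi**-0.25 * np.exp(-z**2 / 2)                # trap ground state phi_0
psi = propagate(psi0.astype(complex), z, dt=1e-3, steps=2000, z_o0=-8.0, v=0.1)
print(np.sum(np.abs(psi)**2) * dz)                     # norm is conserved (close to 1)
```

Projecting the final `psi` onto the harmonic-oscillator eigenfunctions yields the overlaps plotted in Fig. 2 (g),(h).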
We first consider the way in which the dragged potential couples the initial atomic state with other eigenstates during the course of the dynamics. For this purpose, we determine the overlap of the atomic state with the instantaneous eigenstates of the Hamiltonian (2) as a function of \(z_{o}(t)\). Fig. 2 (a)-(f) show plots of the energy spectrum (cf. Fig. 1 (a)) in which the curves \(\{\varepsilon_{n}(z_{o})\}\) are weighted by the overlap integrals \(|\,\langle\psi(t)|\varphi_{n}(z_{o})\rangle\,|^{2}\) for different drag speeds \(\dot{z_{o}}\) and heights \(a\) of the repulsive barrier. These plots effectively describe how \(\psi(t)\) evolves within the Hilbert space of the Hamiltonian (2). We see in Fig. 2 (a) that for a sufficiently slow drag speed and small barrier height, the state \(\psi(t)\) initially evolves along a single energy curve, with only minor population of neighbouring curves occurring after the dragged potential passes through the trap centre. For faster drag speeds and a greater barrier height, the atomic state follows an increasingly diabatic path to higher energy curves. Fig. 2 shows that for \(\dot{z_{o}}=0.01\) and \(\dot{z_{o}}=0.10\), diabatic transitions between energy curves take place exclusively at the avoided crossings, since there the coupling between energy curves is greatest and the energy gap smallest. However, this simple picture breaks down at sufficiently fast drag speeds, such as at \(\dot{z_{o}}=1.00\) which is shown in Fig. 2 (c) and (f). In both of these cases, the coupling between curves becomes strong enough that additional transitions take place at positions \(z_{o}\) away from the immediate vicinity of the avoided crossings, where the curves have relatively large energy separations. For our purposes, these additional transitions are undesirable since they constitute an additional form of 'leakage' between energy curves which hinders the controlled preparation of a well-defined final atomic state.
A more quantitative understanding of the influence of the drag speed and barrier height on the path of the atomic state in Fig. 2 is provided by the semi-classical Landau-Zener formula [45; 46]. This determines the probability \(P_{ij}\) for a diabatic transition at an avoided crossing between the energy curves of the eigenstates \(\varphi_{i}(z_{o})\) and \(\varphi_{j}(z_{o})\):
\[P_{ij}=\exp\bigg{(}-2\pi\frac{\Delta_{ij}^{2}}{\dot{z}_{o}\,\alpha_{ij}} \bigg{)}. \tag{3}\]
Here, \(\Delta_{ij}=\min(|\varepsilon_{i}-\varepsilon_{j}|)/2\) is half the minimum energy gap at the avoided crossing and \(\alpha_{ij}=|\frac{d}{dz_{o}}(\varepsilon_{i}-\varepsilon_{j})|\). For \(P_{ij}\to 0\), transitions between the states are suppressed, i.e. the dynamics is adiabatic; this holds under the condition \(\Delta_{ij}^{2}\gg\dot{z}_{o}\alpha_{ij}\). Conversely, for \(\Delta_{ij}^{2}\ll\dot{z}_{o}\alpha_{ij}\) we have \(P_{ij}\to 1\) and the dynamics is maximally diabatic.
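In code, Eq. (3) is a one-liner. The sketch below uses hypothetical crossing parameters \(\Delta_{ij}\) and \(\alpha_{ij}\) (not values extracted from the spectrum) purely to illustrate the two limits:

```python
import numpy as np

def landau_zener(delta, alpha, v):
    """Diabatic transition probability of Eq. (3)."""
    return np.exp(-2.0 * np.pi * delta**2 / (v * alpha))

# Hypothetical half-gap and slope at an avoided crossing
delta, alpha = 0.05, 1.0
print(landau_zener(delta, alpha, v=0.001))  # close to 0: slow drag, adiabatic
print(landau_zener(delta, alpha, v=10.0))   # close to 1: fast drag, diabatic
```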
The filled circles in Fig. 2 (g) are predictions for the composition of the atomic state at long times determined by applying Eq. (3) at each crossing encountered by the state. The predictions are in good agreement with the results obtained from the solution of the time-dependent Schrodinger equation (open circles) over a wide range of drag speeds \(\dot{z}_{o}\). Thus, we see that the Landau-Zener formula (3) is a reasonable model for describing the state's path and we may use it to guide our intuition. Fig. 2 (h) extends the numerical results from Fig. 2 (g) up to the \(100^{\text{th}}\) excited trap state, highlighting that it is in principle possible to populate arbitrarily-highly excited states using the dragged potential. In an experimental setting however, the finite depth of the trapping potential imposes an upper energy limit and any atoms excited beyond this threshold would be lost from the system. This loss could be exploited to our advantage in the following way. We may design a state preparation protocol in which any atoms that do not reach the desired final state are lost from the system, thereby maximising the fidelity with the target state at the cost of particle number uncertainty. This could be used to circumvent the limitations of the adiabatic state preparation protocol which is the focus of Section III.
From Eq. (3), we see that we have three knobs at our disposal for controlling the atoms' path through the energy curves \(\{\varepsilon_{n}(z_{o})\}\): \(\Delta_{ij}\), \(\alpha_{ij}\) and \(\dot{z}_{o}\). The gap size \(\Delta_{ij}\) at each avoided crossing is determined by the size of the barrier at the shape resonance since taller, wider barriers lead to more narrowly-avoided crossings. Therefore, we can control \(\Delta_{ij}\) by tuning the model parameters in Eq. (1) as well as the longitudinal trapping frequency \(\omega_{z}\). These will also influence \(\alpha_{ij}\), however the quadratic dependence of \(\Delta_{ij}\) in Eq. (3) makes it a more sensitive and thus attractive control parameter. The speed of the dragged potential is also an attractive control parameter since it is a free parameter.
In the following sections, we develop protocols which exploit these control parameters in order to realise deterministic state preparation, such that the dragged potential shuttles the atoms into an excited trap state \(\phi_{n},\,n>0\) or a well-defined superposition of \(N\) trap states \(\sum_{n=0}^{N}c_{n}\phi_{n}\). We denote the target state by \(\psi_{t}\) and the goal of the following sections is to maximise the fidelity measure \(\mathcal{F}=|\,\langle\psi|\psi_{t}\rangle\,|^{2}\). We choose the following fixed set of model parameters: \(a=320,\,b=4\sqrt{10\,\,c}\) and \(c=40\). In particular, we choose \(a=320\), since from Fig. 2 (e) we see that for this barrier height - in combination with a drag speed of \(z_{o}=0.1\) - the state's path is predominantly diabatic and transitions between energy curves are to a large extent 'clean', by which we mean that the transitions occur chiefly at the avoided crossings and not, as is the case in Fig. 2 (e) and (f), also in-between avoided crossings. Both of these features are crucial for realising efficient, high-fidelity state preparation protocols.
## III Adiabatic protocol
This section introduces the first state preparation protocol, an adiabatic protocol, which seeks to control the path of the atomic state through the energy curves \(\{\varepsilon_{n}(z_{o})\}\) using only the intuition provided by
the Landau-Zener model (3) discussed in Section II. Specifically, we use the drag speed \(\dot{z}_{o}\) of the external potential to control whether the state traverses a given TISR adiabatically or diabatically in order to force it to follow a pre-determined path through the energy spectrum. In particular, we demonstrate preparation of the target states \(\psi_{t}^{(1)}(z)=\phi_{5}(z)\) and \(\psi_{t}^{(2)}(z,t)=(\phi_{4}(z)+e^{i\Phi(t)}\phi_{5}(z))/\sqrt{2}\), where we include the phase factor \(\Phi(t)=-\omega_{z}t\) to indicate that the latter target state is not a pure eigenstate of the harmonic trap and hence undergoes periodic dynamics.
The adiabatic protocol is outlined in Fig. 3. In particular, Fig. 3 (a) illustrates the ideal path through the energy spectrum from the ground state to the fifth excited state of the harmonic trap \(\phi_{5}(z)\). Ten narrowly-avoided crossings lie along this particular path, created by the TISR. Starting from \(t=0\), the state should evolve diabatically at speed \(v_{\rm d}\) until just before it reaches the \(8^{\rm th}\) avoided crossing (indicated by the box in Fig. 3 (a)), whereupon the dragged potential is decelerated linearly to the speed \(v_{\rm a}\), which should be sufficiently slow to fulfill the adiabatic condition \(\Delta_{ij}^{2}\gg\dot{z}_{o}\alpha_{ij}\) (see Fig. 3 (b)). If no deceleration occurs, the state will continue to populate higher trap eigenstates, similar to the path seen in Fig. 2 (e). After passing this critical \(8^{\rm th}\) avoided crossing, the potential is accelerated once again to \(v_{\rm d}\) and the state continues diabatically through the last two TISR, finally reaching the target state \(\psi_{t}^{(1)}(z)=\phi_{5}(z)\).
Equally, the target state \(\psi_{t}^{(2)}(z,t)=(\phi_{4}(z)+e^{i\Phi(t)}\phi_{5}(z))/\sqrt{2}\) may be achieved through a slight modification to the protocol for \(\psi_{t}^{(1)}(z)\). In particular, an additional deceleration step is required such that the state splits equally along the two energy curves at the \(7^{\rm th}\) avoided crossing, as depicted in Fig. 3 (c). The speed protocol is shown in Fig. 3 (d). The potential is first decelerated from \(v_{d}\) to \(v_{a}^{\prime}\), whose value is chosen such that an equal mixing between states at the \(7^{\rm th}\) TISR is achieved and can be estimated using Eq. (3).
The results of the simulations for \(\psi_{t}^{(1)}\) are summarised in the top row of Fig. 4. Fig. 4 (a) shows the actual path followed by the atomic state in each simulation, which agrees, as expected, with the ideal path given in Fig. 3 (a). The evolution of the atomic probability density \(\rho(z,t)=\psi^{*}(z,t)\psi(z,t)\) is provided in Fig. 4 (b)-(d) and the external potential's trajectory \(z_{o}(t)\) is indicated by the dashed line. As the potential enters the trap (Fig. 4 (b)), the atomic density is swept in the direction of motion of the potential and the dynamics of the state is diabatic. After the external potential is decelerated, the density begins to tunnel to the opposite side of the potential's barrier (Fig. 4 (c)). As the potential leaves the trap (Fig. 4 (d)), the atomic density re-centres on \(z=0\) and its profile matches approximately that of the fifth excited trap state (see comparison in Fig. 4 (e)). For this particular simulation, we obtain an overlap of \(97.4\%\) with the target state in a time of \(\sim 1.22\times 10^{4}\).
Similar results for the \(\psi_{t}^{(2)}(z,t)\) protocol are depicted in the bottom row of Fig. 4. Here, we obtain an overlap of \(92.6\%\) with the target state. The fidelity is smaller than that obtained for \(\psi_{t}^{(1)}(z)\) in part due to the larger value of \(v_{a}\) used in this example (see Fig. 4 caption). Consequently, the duration of this protocol is shorter at \(\sim 6.90\times 10^{3}\). The final atomic state exhibits regular density oscillations (Fig. 4 (i)) with a period matching the time scale set by the energy separation of the neighbouring trap states, namely \(2\pi/\omega_{z}\).
Adiabatic protocols are slow by nature. For a longitudinal trapping frequency of \(\omega_{z}=2\pi\times 300\) Hz, the examples shown in Fig. 4 would have a duration on the order of seconds. A tighter trapping potential would reduce this of course, since the time unit \(\tau\) is given by \(\tau=1/\omega_{z}\) in our unit system. In addition, using
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
\(v_{a}/v_{d}\) & \(|\langle\psi_{f}|\phi_{5}\rangle|^{2}\) (\%) & \(t_{\rm tot}\) \\ \hline
\(5.0\times 10^{-3}\) & 25.8 & \(3.20\times 10^{2}\) \\
\(5.0\times 10^{-4}\) & 89.1 & \(1.40\times 10^{3}\) \\
\(5.0\times 10^{-5}\) & 97.4 & \(1.22\times 10^{4}\) \\ \hline
\end{tabular}
\end{table}
Table 1: **Dependence of final fidelity on the adiabatic speed**. (I) Ratio of adiabatic \(v_{a}\) and diabatic \(v_{d}\) speeds, with \(v_{d}=0.1\) in all cases. (II) Fidelity (in \%) with the target state \(\phi_{5}\). (III) Protocol duration \(t_{\rm tot}\) in harmonic-oscillator time units.
Figure 3: **Schematic of the adiabatic protocol**. (a) The ideal state path (orange) through the atomic energy spectrum (grey) to excite the atom to the fifth excited trap state \(\phi_{5}(z)\). (b) Close-up of the critical region highlighted by the box in (a). The impurity’s drag speed is overlaid in blue, indicating the transition between the diabatic and adiabatic speeds (\(v_{d}\) and \(v_{a}\), respectively). (c) The ideal state path (green) through the atomic energy spectrum (grey) to excite the atom to the superposition state \((\phi_{4}+e^{i\Phi(t)}\phi_{5})/\sqrt{2}\), where \(\Phi(t)=-\omega_{z}t\). (d) Close-up of the critical region highlighted by the box in (c). Note that in both (b) and (d) \(\dot{z}_{o}\) is plotted on a log-scale for the sake of visibility.
larger values of \(v_{a}\) would further reduce the protocol duration, but would come at the cost of the fidelity (see Table 1). Additional improvements could be made by minimising the distance over which the potential moves adiabatically via standard optimisation techniques. The final fidelity achieved is strongly influenced by the value of \(v_{a}\). Nonetheless, there are additional sources of fidelity loss, accounting overall for \(\sim 1\%\) of the total probability. Firstly, the state's evolution whilst the potential is dragged at \(v_{d}\) is not perfectly diabatic, which leads to minor losses at each crossing. Diabatic transitions between energy curves away from the avoided crossings are a further source of loss, as we saw for fast drag speeds in Fig. 2 (c) and (f). No doubt a protocol could be devised to fine-tune the drag speed around particular regions where these transitions become significant. This would, however, make the overall protocol more complex for rather marginal improvements to the fidelity.
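The conversion behind these duration estimates is a one-liner in the harmonic-oscillator units used throughout (\(\tau=1/\omega_{z}\)); the sketch below reproduces the order-of-seconds estimate for \(\omega_{z}=2\pi\times 300\) Hz and illustrates the speed-up from a tighter trap (the 1 kHz value is an illustrative extra).

```python
import numpy as np

def to_seconds(t_sim, omega_z):
    """Convert a duration in harmonic-oscillator time units (tau = 1/omega_z)
    into seconds, for a longitudinal trapping frequency omega_z in rad/s."""
    return t_sim / omega_z

omega_300 = 2 * np.pi * 300.0               # trap frequency quoted in the text
print(to_seconds(1.22e4, omega_300))        # adiabatic protocol: ~6.5 s
print(to_seconds(6.90e3, omega_300))        # superposition protocol: ~3.7 s
print(to_seconds(1.22e4, 2 * np.pi * 1e3))  # tighter 1 kHz trap: ~1.9 s
```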
## IV Tunnelling protocol
The key limiting factor of the protocols described in Section III is their long duration: achieving fidelities with the target state greater than \(90\%\) requires \(10^{3}-10^{4}\) units of time, which translates to timescales on the order of seconds for trapping frequencies on the order of \(100\) Hz. Ideally, we want to be able to significantly reduce the duration of the protocols whilst still preserving their relative simplicity and high fidelity. This will be the focus of the following section.
\begin{table}
\begin{tabular}{|c|c|}
\hline
\(\%\) error in \(\bar{z}_{s}\) & \(|\langle\psi_{f}|\phi_{3}\rangle|^{2}\) (\%) \\ \hline
\(0.01\) & \(98.70\) \\
\(0.10\) & \(72.14\) \\
\(1.00\) & \(1.18\) \\ \hline
\end{tabular}
\end{table}
Table 2: **Robustness of tunnelling probability to error in stopping position**. (I) Percentage error in the stopping position \(\bar{z}_{s}\). (II) Fidelity (in \%) with the target state \(\phi_{3}\). The error-free fidelity amounts to \(99.02\%\) (see Fig. 6).
Figure 4: **Adiabatic protocol.** (a)-(e) Exciting atoms to the fifth excited trap state \(\phi_{5}(z)\) using the adiabatic protocol. (a) The instantaneous energy spectrum (grey) with the coloured line representing the overlap of the atomic state with the instantaneous eigenstates \(\left|\left\langle\psi(z,z_{o})|\varphi_{\nu}(z,z_{o})\right\rangle\right|^{2}\) of Eq. (2) as a function of \(z_{o}(t)\). (b)-(d) Atomic probability density \(|\psi(z,t)|^{2}\) for different time intervals during the protocol. The solid lines represent the harmonic trap (scaled for visibility) and dashed lines indicate \(z_{o}(t)\). (e) Comparison of the density of the final atomic state \(|\psi_{f}|^{2}\) to the target state \(|\psi_{t}=\phi_{5}|^{2}\). Here, we achieve a fidelity \(|\left\langle\psi_{f}|\psi_{t}\right\rangle|^{2}\) of \(97.4\%\). (f)-(j) Same as the top row for the target state \((\phi_{4}+e^{i\Phi}\phi_{5})/\sqrt{2}\), where \(\Phi(t)=-\omega_{z}t\). Here, we achieve a fidelity \(|\left\langle\psi_{f}|\psi_{t}\right\rangle|^{2}\) of \(92.6\%\). For both protocols, \(v_{d}=0.1\). For the top row, \(v_{a}=v_{d}/20,000\). For the bottom row, \(v_{a}=v_{d}/600\) and \(v_{a}^{\prime}=v_{d}/2000\) (\(v_{d}\), \(v_{a}\) and \(v_{a}^{\prime}\) are defined in Fig. 3). Wavefunctions are normalised such that \(\int dz|\psi(z)|^{2}=1\).
In Section IV.1, we show how more efficient protocols can be designed by drawing analogies between the dynamics of our system and the tunnelling of a particle in a double-well potential, and we arrive at a condition which enables tunnelling to be exploited usefully for state preparation in our system. In Section IV.2, we apply the knowledge from Section IV.1 to realise efficient protocols and present results for the preparation of pure and superposition excited trap states using the two varieties of TISR in our system that were introduced in Section II.
### Condition for complete tunnelling
The combination of the harmonic trap and the dragged potential (1) creates an effective potential for the atoms resembling an asymmetric double-well (cf. Fig. 1 (b) and (c)). For the sake of building intuition, let us first consider the case of noninteracting atoms confined within a symmetric double-well potential, which is realised in our system for \(z_{o}=0\). The energy spectrum of atoms in a double well is characterised by a series of near-degenerate doublets whose eigenstates have opposite parity. Assume that at \(t=0\) the atoms are in an equal superposition of the lowest two eigenstates: \(\psi(z,0)=(\varphi_{0}(z)+\varphi_{1}(z))/\sqrt{2}\). Owing to the near-degeneracy of the eigenstates \(\varphi_{0}(z)\) and \(\varphi_{1}(z)\) and their opposite parity, this wavepacket is localised solely within one of the wells. For \(t>0\), the state undergoes unitary time evolution and accumulates a phase \(\Phi\): \(\psi(z,t)=(\varphi_{0}(z)+\exp{(i\Phi)}\varphi_{1}(z))/\sqrt{2}\), where \(\Phi=-\Delta\varepsilon\,t\) is proportional to the energy gap between the eigenstates, \(\Delta\varepsilon=\varepsilon_{1}-\varepsilon_{0}\). After a time \(T=\pi/\Delta\varepsilon\), the state will have accumulated a phase \(\pi\), such that the wavepacket is now localised within the opposite well: \(\psi(T)=(\varphi_{0}-\varphi_{1})/\sqrt{2}\). For our purposes, we refer to \(T\) as the tunnelling time.
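This relocalisation can be checked with a minimal numerical sketch: build the doublet from left/right-localised Gaussians (illustrative widths, separation and splitting, not the actual double-well eigenstates), evolve the relative phase, and inspect the density at \(t=T\).

```python
import numpy as np

z = np.linspace(-6, 6, 2001)
dz = z[1] - z[0]

def gaussian(z0):
    """Normalised Gaussian packet localised around z0."""
    g = np.exp(-((z - z0) ** 2))
    return g / np.sqrt(np.sum(g**2) * dz)

L = gaussian(-2.0)   # packet localised in the left well
R = gaussian(+2.0)   # packet localised in the right well

# Doublet eigenstates of a symmetric double well (opposite parity)
phi0 = (L + R) / np.sqrt(2)
phi1 = (L - R) / np.sqrt(2)

d_eps = 1e-2                 # near-degenerate splitting (illustrative)
T = np.pi / d_eps            # tunnelling time T = pi / (eps1 - eps0)

def density(t):
    """|psi(z,t)|^2 for psi(0) = (phi0 + phi1)/sqrt(2)."""
    psi = (phi0 + np.exp(-1j * d_eps * t) * phi1) / np.sqrt(2)
    return np.abs(psi) ** 2

def left_fraction(rho):
    return np.sum(rho[z < 0]) * dz

print(left_fraction(density(0)))   # ~1: packet starts in the left well
print(left_fraction(density(T)))   # ~0: packet has tunnelled to the right
```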
Based on the size of the energy gaps at the avoided crossings in Fig. 1 (a), we can expect tunnelling times on the order of \(10^{2}\) in our system. This value is one to two orders of magnitude smaller than the time required for the adiabatic protocols discussed in Section III (see Fig. 4 (c) and (h)). In other words, our estimate of the effective double-well tunnelling time \(T\) for our system indicates that we could significantly lower the duration of our protocols by simply setting our adiabatic speed all the way to \(v_{a}=0\), i.e. stopping the potential in the vicinity of the TISR and allowing the state to tunnel freely on timescales set by the atomic energy spectrum.
To exploit tunnelling for the purpose of state preparation, we need to understand how to control it. In this regard, two related questions arise. Firstly, what conditions must be fulfilled in the asymmetric double-well system to realise 'perfect' tunnelling, namely where the atomic density tunnels completely from one side to the other without leaving behind any residue? Secondly, can we realise such tunnelling for arbitrary positions of the dragged potential? The remainder of this section provides concrete answers to these questions through some straightforward analytical considerations.
We assume that on the approach to the TISR between the instantaneous eigenstates \(\varphi_{A}\) and \(\varphi_{B}\), the atomic state is in a superposition of only these two eigenstates:
\[\begin{split}\psi(z;z_{o}(t))=& c_{A}(z_{o}(t)) \varphi_{A}(z;z_{o}(t))\\ &+c_{B}(z_{o}(t))\varphi_{B}(z;z_{o}(t)),\end{split} \tag{4}\]
which is valid assuming that the dynamics up to this point has been diabatic. The complex coefficients \(c_{A}(z_{o}(t))\) and \(c_{B}(z_{o}(t))\) satisfy \(|c_{A}(z_{o}(t))|^{2}+|c_{B}(z_{o}(t))|^{2}=1\) since the atomic wavefunction is normalised \(\langle\psi(z;z_{o}(t))|\psi(z;z_{o}(t))\rangle=1\). The TISR emerges due to a barrier created in the atoms' effective potential, centred at position \(z_{b}\). Depending on the type of TISR (see Section II for details), \(z_{b}\) may be equal to the position of the dragged potential \(z_{o}(t)\), yet this is not guaranteed. For example, the variety of TISR depicted in Fig. 1 (a) is not formed due to the external potential's Gaussian barrier but rather by its long-range attractive tail, hence in this case \(z_{b}\neq z_{o}(t)\).
At \(t=0\), the dragged external potential is suddenly halted at the position \(z_{o}(0)=z_{s}\) near the avoided crossing between \(\varphi_{A}\) and \(\varphi_{B}\). Thereafter, the atomic wavefunction undergoes unitary evolution. Since the Hamiltonian \(\hat{H}(z_{s})\) no longer has explicit time-dependence, the wavefunction for \(t\geq 0\) is given by \(\psi(z,t;z_{s})=e^{-i\hat{H}(z_{s})t}\psi(z;z_{s})\). In the interest of readability, we drop the \(z_{s}\) parameter notation in equations beyond this point. The atomic probability density \(\rho(z,t)=\psi^{*}(z,t)\psi(z,t)\) at time \(t\) is given by:
\[\begin{split}\rho(z,t)=&|c_{A}|^{2}|\varphi_{A}(z )|^{2}+|c_{B}|^{2}|\varphi_{B}(z)|^{2}\\ &+2c_{A}c_{B}\cos(\Delta\varepsilon\,t)\varphi_{A}(z)\varphi_{B}( z),\end{split} \tag{5}\]
where \(\Delta\varepsilon\) is the energy difference between the eigenstates at position \(z_{s}\) and we have assumed that the
eigenstates are real-valued. For brevity, we label the time-independent and time-dependent contributions to the density as \(\bar{\rho}(z)=|c_{A}|^{2}|\varphi_{A}(z)|^{2}+|c_{B}|^{2}|\varphi_{B}(z)|^{2}\) and \(\delta\rho(z,t)=2c_{A}c_{B}\cos(\Delta\varepsilon\,t)\varphi_{A}(z)\varphi_{B} (z)\), respectively. Note that \(\delta\rho(z,t)\) is periodic in time with period \(P=2\pi/\Delta\varepsilon\).
If the dynamics for \(t<0\) has been diabatic, the atoms' probability density at \(t=0\) will be localised on one side of the TISR barrier, for example \(z>z_{b}\) (see Fig. 4 (b)). Thus, the atomic density at \(t=0\) fulfils the condition:
\[\rho(z,0)=\bar{\rho}(z)+\delta\rho(z,0)=0,\ \ \forall\ z\leq z_{b}. \tag{6}\]
Using Eq. (5), we can rewrite the above condition as:
\[\bar{\rho}(z)=-\delta\rho(z,0)=-2c_{A}c_{B}\varphi_{A}(z)\varphi_{B}(z),\ \ \forall\ z\leq z_{b}. \tag{7}\]
We now seek the optimal value of the external potential's stopping position, denoted by \(\bar{z}_{s}\), such that the atoms undergo perfect tunnelling. This requires that at time \(t=P/2\) the atoms are localised on the opposite side of the TISR barrier. Hence, we demand that the atomic density fulfils the following condition:
\[\rho(z,P/2)=\bar{\rho}(z)+\delta\rho(z,P/2)\overset{!}{=}0,\ \ \forall\ z>z_{b}. \tag{8}\]
Making use of Eq. (5) and \(\cos(\Delta\varepsilon\,P/2)=-1\) yields:
\[\bar{\rho}(z)\overset{!}{=}2c_{A}c_{B}\varphi_{A}(z)\varphi_{B}(z),\ \ \forall\ z>z_{b}. \tag{9}\]
Finally, we make use of the conditions in Eq. (7) and Eq. (9) and the fact that \(\bar{\rho}(z)\) is normalised to derive the following
\[\begin{split} 1=\int dz\,|\bar{\rho}(z)|&=\int_{z\leq z _{b}}dz\,|\bar{\rho}(z)|+\int_{z>z_{b}}dz\,|\bar{\rho}(z)|\\ &\overset{!}{=}2|c_{A}||c_{B}|\int dz\,|\varphi_{A}(z)||\varphi_ {B}(z)|\end{split} \tag{10}\]
In the above, we have used the absolute value in order to write the final expression as a single integral. Eq. (10) provides us with a relation between the overlap coefficients \(c_{i}=\int dz\,\varphi_{i}(z)\psi(z)\) and the overlap of the eigenstates' absolute magnitudes \(\mathcal{I}=\int dz|\varphi_{A}(z)||\varphi_{B}(z)|\) which must be fulfilled in order for perfect tunnelling to
Figure 6: **Tunnelling protocol.** (a)-(c) Exciting atoms to the third excited trap state \(\phi_{3}(z)\) using the tunnelling protocol. (a) The atomic energy spectrum, weighted by the overlap of its state with the instantaneous eigenstates \(|\left<\psi(z,z_{o})|\varphi_{\nu}(z,z_{o})\right>|^{2}\) of Eq. (2) as a function of \(z_{o}(t)\). (b) Atomic probability density \(|\psi(z,t)|^{2}\) throughout the protocol. The solid line is the harmonic trap potential (scaled for visibility) and the dashed line indicates \(z_{o}(t)\). (c) Comparison of the density of the atom’s final state \(|\psi_{f}|^{2}\) to the target state \(|\phi_{3}(z)|^{2}\). Here, we achieve a fidelity \(|\left<\psi_{f}|\psi_{t}\right>|^{2}\) of 99.02%. (d)-(f) Same as for the top row for the target state \((\phi_{0}(z)+e^{i\Phi(t)}\phi_{4}(z))/\sqrt{2}\), where \(\Phi(t)=-4\omega_{z}t\). Here, we achieve a fidelity \(|\left<\psi_{f}|\psi_{t}\right>|^{2}\) of 99.7%. For both protocols, the speed of the ion between stops was \(v_{d}=0.1\). Wavefunctions are normalised such that \(\int dz|\psi(z)|^{2}=1\).
take place, namely \(|c_{A}||c_{B}|\,\mathcal{I}\overset{!}{=}1/2\).
Since \(0\leq|c_{A}||c_{B}|\leq 1/2\) and \(0\leq\mathcal{I}\leq 1\), the condition in Eq. (10) can only be fulfilled when \(|c_{A}||c_{B}|=1/2\) and \(\mathcal{I}=1\). This requires (i) the atomic state to be in an equal superposition of eigenstates \(\varphi_{A}(z)\) and \(\varphi_{B}(z)\) (i.e. \(|c_{A}|=|c_{B}|=1/\sqrt{2}\)) and (ii) that these eigenstates differ at most by the sign of their prefactors (\(|\varphi_{A}(z)|=|\varphi_{B}(z)|\quad\forall\ z\)). The former condition is rather loose, since it could be realised in general for arbitrary \(z_{s}\). However, the latter condition provides a strong indication that the optimal stopping position \(\bar{z}_{s}\) is located at the narrowest point of the avoided crossing between the eigenstates. Thus, we have shown that the requirements for perfect tunnelling in an asymmetric double well match those of the symmetric double well that we considered at the beginning of this section. We determine \(\bar{z}_{s}\) for a given crossing by evaluating the overlap integral of the eigenstates \(|\varphi_{A}(z)|\) and \(|\varphi_{B}(z)|\) for a range of \(z_{s}\) around their common TISR. Fig. 5 shows the results for \(|\varphi_{5}(z)|\) and \(|\varphi_{6}(z)|\). In this case, we confirm that the critical position \(\bar{z}_{s}\) occurs at the point of closest approach between the energy curves \(\varepsilon_{5}\) and \(\varepsilon_{6}\). In conclusion, the tunnelling protocol cannot be realised for arbitrary \(\bar{z}_{s}\). In fact, the ability to tunnel is highly sensitive to the choice of \(\bar{z}_{s}\), as shown by Fig. 5. Nonetheless, through the above analysis we have arrived at the condition \(\mathcal{I}(\bar{z}_{s})=1\) that must be fulfilled to achieve perfect tunnelling, providing us with a systematic method for determining the optimal stopping position \(\bar{z}_{s}\).
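The search for \(\bar{z}_{s}\) can be sketched numerically: diagonalise a simplified effective Hamiltonian for a range of stopping positions and locate the maximum of \(\mathcal{I}\). The potential below (harmonic trap plus a Gaussian barrier standing in for Eq. (1)) and the chosen eigenstate pair are illustrative, not the actual system parameters.

```python
import numpy as np

z = np.linspace(-8, 8, 600)
dz = z[1] - z[0]

def eigenstates(z_o, barrier=6.0, width=0.5, n_states=6):
    """Lowest eigenstates of a 1D harmonic trap plus a Gaussian barrier
    centred at z_o (finite differences, hbar = m = omega_z = 1)."""
    V = 0.5 * z**2 + barrier * np.exp(-((z - z_o) / width) ** 2)
    kin = (np.diag(np.full(z.size, 1.0))
           - 0.5 * np.diag(np.ones(z.size - 1), 1)
           - 0.5 * np.diag(np.ones(z.size - 1), -1)) / dz**2
    eps, phi = np.linalg.eigh(kin + np.diag(V))
    return eps[:n_states], phi[:, :n_states] / np.sqrt(dz)

def overlap_I(z_o, i, j):
    """I = integral of |phi_i||phi_j| dz for the barrier stopped at z_o."""
    _, phi = eigenstates(z_o)
    return np.sum(np.abs(phi[:, i]) * np.abs(phi[:, j])) * dz

# Sweep candidate stopping positions for the (illustrative) doublet
# formed by eigenstates 2 and 3 and pick the position where I peaks:
z_scan = np.linspace(-1.5, 1.5, 13)
I_vals = [overlap_I(z_o, 2, 3) for z_o in z_scan]
z_bar = z_scan[int(np.argmax(I_vals))]
print(z_bar, max(I_vals))   # optimal stop: where I is closest to 1
```

Away from the doublet configuration the eigenstates localise in separate wells and \(\mathcal{I}\) drops sharply, mirroring the sensitivity to \(\bar{z}_{s}\) discussed above.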
Furthermore, the size of the energy gaps at the avoided crossings means that the atoms will tunnel over one order of magnitude faster than in the adiabatic protocols discussed in Section III.
### Proof-of-principle
Using the knowledge about the conditions for perfect tunnelling gained from the previous section, we now perform state preparation with new protocols that execute sudden stops of the dragged potential at relevant avoided crossings in the atomic energy spectrum. The relevant avoided crossings are determined by the desired target state. The duration of each stop is set by the tunnelling time \(T=\pi/\Delta\varepsilon\) for the given avoided crossing. Between stops, the external potential moves at a constant speed \(\dot{z}_{o}=0.10\) and the change in its velocity is assumed to be sudden.
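The itinerary just described (constant-speed drag segments punctuated by stops of duration \(T=\pi/\Delta\varepsilon\)) can be sketched as a small scheduling helper; the crossing positions and gaps below are hypothetical placeholders, not values read off the atomic spectrum.

```python
import numpy as np

def stop_schedule(z_start, crossings, v_drag=0.1):
    """Build a piecewise itinerary for the dragged potential: move at
    v_drag between stops, then halt for T = pi / gap at each crossing.

    crossings : list of (z_stop, gap) pairs, in the order encountered.
    Returns (total_time, legs), where legs lists (kind, duration) tuples.
    """
    legs, total, z = [], 0.0, z_start
    for z_stop, gap in crossings:
        t_move = abs(z_stop - z) / v_drag
        t_stop = np.pi / gap            # tunnelling time at this crossing
        legs += [("drag", t_move), ("stop", t_stop)]
        total += t_move + t_stop
        z = z_stop
    return total, legs

# Hypothetical crossing positions and gaps (not taken from the paper):
total, legs = stop_schedule(z_start=-8.0,
                            crossings=[(-1.2, 0.04), (0.8, 0.06)])
print(total, legs)
```

With gaps of order \(10^{-2}\), each stop lasts of order \(10^{2}\) time units, consistent with the overall protocol durations quoted for the tunnelling approach.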
Fig. 6 summarises the results of tunnelling protocols for target states \(\psi_{t}^{(3)}(z)=\phi_{3}(z)\) and \(\psi_{t}^{(4)}(z,t)=(\phi_{0}(z)+e^{i\Phi(t)}\phi_{4}(z))/\sqrt{2}\), where \(\Phi(t)=-4\omega_{z}t\). In both cases, we achieve fidelities above 99% for durations of \(10^{2}\) time units. In order to prepare the superposition state \(\psi_{t}^{(4)}(z,t)\), we follow a slightly different approach by exploiting instead the TISR that arise between a bound state of the dragged potential (1) and a vibrational state (see e.g. Fig. 1 (b)). Using these TISR requires us to reverse the direction of motion of the dragged potential, which therefore requires stopping twice during the protocol as compared to only once in the protocol on the top row of Fig. 6. The advantage of this approach is however that there are overall fewer avoided crossings that the state has to traverse, which improves the overall fidelity at the cost of a slightly longer protocol. Finally, we note that by stopping the potential at \(\bar{z}_{s}\) for only half the tunnelling time \(T/2\) the state will split equally along both paths that meet at the crossing. Using this method, we achieve a fidelity of 99.7% with \(\psi_{t}^{(4)}(z,t)\) in a time of 650 (see bottom row of Fig. 6 for further details).
Whilst the tunnelling protocols have a distinct advantage in terms of speed, their major drawback is their sensitivity to errors in the stopping position \(\bar{z}_{s}\). In Table 2, we summarise data investigating the robustness of the protocol to errors in the critical position \(\bar{z}_{s}\). We find that deviations as small as 0.1% can lead to a sizeable decrease in the fidelity with \(\psi_{t}^{(3)}(z)\). The level of precision in the positioning of the potential might be challenging to meet by current experimental standards.
## V Summary and conclusions
In this work, we explored protocols for exciting individual trapped atoms into higher vibrational states by means of a dynamically-swept external potential. In particular, we employed an external potential possessing long-range attractive character and a repulsive barrier at its centre, which could be realised via a tightly-trapped ion or a shaped optical potential. Excitation of the atoms was facilitated by avoided crossings in the atomic energy spectrum, whose position and gap size may be tuned through the shape of the external potential. The presence of the avoided crossings is a consequence of trap-induced shape resonances (TISR) which emerge in the atomic effective potential, formed by the superposition of the harmonic trap and the external potential. The protocols proposed in our work selectively prepare the atoms in excited vibrational states through controlling the movement of the external potential in order to drive the state along a desired path through the atoms' discrete energy spectrum. The first protocol relies on adiabatic driving around a small number of critical TISR, which depend on the desired target state. The protocol's primary limitation is its duration: achieving fidelities higher than 90% requires durations of \(10^{3}-10^{4}\) in harmonic oscillator units. For a Rb atom with \(\omega_{z}=2\pi\cdot 1\) kHz, this would correspond to a protocol duration of approximately 0.1 s to 1.0 s.
In contrast, the second protocol brings the potential to a complete halt at the critical TISR, whereupon the atom undergoes unitary dynamics in its effective potential created by the harmonic trap and the now static external
potential. During this period, the atom tunnels through the barrier present at the shape resonance on timescales defined by the energy gap between the eigenstates at the avoided crossing. We found that tunnelling occurs over durations of \(10^{2}\), which is one to two orders of magnitude faster than the timescales for the adiabatic protocol. The tunnelling protocol achieved fidelities higher than 99% with protocol durations of 10 ms to 100 ms, assuming a Rb atom with \(\omega_{z}=2\pi\cdot 1\) kHz. However, the fidelity of this protocol is highly-sensitive to the external potential's stopping position.
Our work may be extended to weakly-interacting Bose or Fermi gases to investigate the role of interparticle interactions and particle statistics. Moreover, considering a binary mixture may be of particular interest. For instance, consider a mixture of two components A and B, where species A initially occupies an excited trap state and species B occupies the vibrational ground state. Introducing weak interspecies interactions would mean that species B experiences, in an effective picture, a lattice-like background potential created by the density of species A. Additionally, the lattice could be made to vibrate by preparing species A in a superposition of trap states, thus mimicking phononic excitations.
## Acknowledgements
This work is funded by the Cluster of Excellence "Advanced Imaging of Matter" of the Deutsche Forschungsgemeinschaft (DFG)-EXC 2056, Project ID No. 390715994.
# RDF Surfaces: Computer Says No

Patrick Hochstenbach, Jos De Roo, Ruben Verborgh
2023-05-15, arXiv:2305.08476v1 (http://arxiv.org/abs/2305.08476v1)
###### Abstract
Logic can define how agents are provided or denied access to resources, how to interlink resources using mining processes and provide users with choices for possible next steps in a workflow. These decisions are for the most part hidden, internal to machines processing data. In order to exchange this internal logic a portable Web logic is required which the Semantic Web could provide. Combining logic and data provides insights into the reasoning process and creates a new level of trust on the Semantic Web. Current Web logics carries only a fragment of first-order logic (FOL) to keep exchange languages decidable or easily processable. But, this is at a cost: the portability of logic. Machines require implicit agreements to know which fragment of logic is being exchanged and need a strategy for how to cope with the different fragments. These choices could obscure insights into the reasoning process. We created RDF Surfaces in order to express the full expressivity of FOL including saying explicitly 'no'. This vision paper provides basic principles and compares existing work. Even though support for FOL is semi-decidable, we argue these problems are surmountable. RDF Surfaces span many use cases, including describing misuse of information, adding explainability and trust to reasoning, and providing scope for reasoning over streams of data and queries. RDF Surfaces provide the direct translation of FOL for the Semantic Web. We hope this vision paper attracts new implementers and opens the discussion to its formal specification.
Keywords: Semantic Web, First-order Logic, Logical Reasoning
## 1 Introduction
RDF [1] is the standard for modeling data on the Semantic Web. From a syntactic viewpoint, RDF is a simple data model for expressing relations between resources as triples, where each triple is a first-class web object. From a semantic viewpoint, a triple is an assertion expressing what is believed to be _true_. Combinations of triples form a logical conjunction (and). Any combination of resources on the Semantic Web creates a universal and statement [2]. This is not unusual as the majority of database systems assert truth similarly.
Human and software agents interpret data and use internal logic to turn these assertions into decisions regarding what data to _trust_ or not (negation), provide _options_ to select data to ingest
or compute (disjunction) and make _inferences_ from this data (implication). Current Web logics in the form of rule and ontology languages provide insights into the reasoning process but carry only a fragment of first-order logic (FOL). These fragments of FOL are tuned to make the processing of Web logic decidable or easily processable, at the cost of portability. _Portability_ is the ability to represent data and logic independently of the processing environment. RDF is portable: any RDF triple expresses the same data and meaning irrespective of the source. Adding Web logic, this becomes much harder: machines must first agree on an entailment regime before they know what each triple represents. In this position paper, we present our case for a portable Web logic language extending RDF semantics, called _RDF Surfaces_, which is as powerful as FOL. RDF Surfaces is able to group RDF data within surfaces and provide semantics to say 'no': expressing an explicit scope and classical negation.
## 2 Background
We started our research into portable logic in relation to our involvement in the SolidLab Flanders [3] and Mellon-funded project "Scholarly Communications in the Decentralized Web" [4]. In both projects, a decentralized network of _data pods_ exists on which individuals store (personal) data. In the case of SolidLab, data are documents Flemish citizens share with the government or businesses. In the case of the Mellon project, data are research artifacts such as scholarly articles and research data that are shared within a research community. Both projects are investigating how automated systems can assist in managing these types of data, interlink data with existing (external) resources, and provide enforcement of data policies using logic-based rules. In each use case, automated systems can't assume that only one actor creates and manages these rules. In a decentralized world, many agents can set their own requirements of what should or shouldn't happen with data. For instance, in the case of data policies, different actors on a personal, institutional, and governmental level can define the permissions and prohibitions for (re)using data. These policies can overlap and possibly contradict themselves. Such clashes can benefit from being detected at an early stage and not at run time. Another challenge is the single language problem. Machines should be capable of implementing a potentially heterogeneous collection of policy languages with many possible dialects. The strength of RDF is that the data model for these variations is portable: there is a common format that can express the overlap between the different policy languages. But it is not the case that the logic - what all RDF triples entail - is portable. Without knowing which kind of inferences are possible (using entailment regimes such as RDFS, OWL-DL, OWL2, Notation3, ...), it is possible that policy designers reach different conclusions from the consumer (the policy enforcer on the pod).
For these use cases, we need a common portable logic, with a common syntax and semantics.
## 3 RDF Surfaces
Only two extensions are required to the semantics of RDF to make it expressable as FOL: a notion of a (possible nested) _surface_ to group zero or more RDF graphs with a truth-functional type, and a notion of _graffiti_ as marks on such as a surface with the function of existentially quantified variables.
**Surfaces** can be regarded as nestable virtual sheets of paper on which RDF graphs are written. A _positive surface_ asserts all triples written on it; as with RDF, multiple triples form a logical conjunction (and). The default surface is positive. All existing RDF graphs are regarded to be on the default positive surface. A _negative surface_ negates all triples written on it, and multiple triples form a negation (not) of a conjunction (and).1 With logical connectives and and not, any truth-functional statement can be created by composition, similar to how logical gates on a computer chip can be created by combining nand gates.
Footnote 1: Individual triples are not necessarily negated; only their conjunctions.
**Graffiti** are marks on a surface representing quantified logical variables. Graffiti marks on a positive surface are interpreted as _existentially_ quantified variables; this is how blank nodes are currently interpreted in RDF. The difference between blank nodes and graffiti marks is that blank nodes act as _coreferences_ to graffiti marks, and that every surface contains its own unique set of graffiti marks. Transporting marks to a new surface requires creating new graffiti marks (not relabeling or merging graffiti marks, as is the case with blank nodes). By De Morgan's duality2, graffiti marks on a negative surface are interpreted as _universally_ quantified variables.
Footnote 2: \(\neg\exists x\colon P(x)\equiv\forall x\colon\neg P(x)\)
Given the notions of a surface and graffiti, we define an **H-graph** as the combination of a (typed) surface \(S\), graffiti \(Gr\), and a graph \(H\) which is itself an H-graph. Each RDF graph is an H-graph on the default positive typed surface. The empty H-graph is regarded as a tautology on a positive-typed surface and a contradiction on a negative-typed surface. By combining and nesting H-graphs, any truth-functional statement can be created. H-graphs have semantics similar to the alpha and beta existential graphs of Peirce3, which are as expressive as FOL [5, 6]. Our position is that these two added primitives (surfaces and graffiti marks) are a much more economical way to express FOL than OWL Full, and that they are an immediate, natural extension of RDF semantics.
Footnote 3: [https://plato.stanford.edu/entries/peirce-logic/#AlphSyst](https://plato.stanford.edu/entries/peirce-logic/#AlphSyst)
We need a notation to transport the H-graphs over the Web and require an RDF syntax for that. This syntax should provide a way to scope RDF graphs and codify the graffiti marks. Our first choice was to use Notation3 [7], because it provides syntactic support for nested quoted graphs (surfaces), built-ins (to reason with surface types), and lists (codifying the graffiti marks). Using Notation3, an H-graph is expressed using graffiti marks as the subject list, a typed surface as a predicate, and an H-graph as the object. In our implementations, the surface predicate is implemented as a built-in to allow for reasoning with H-graphs.
Listing 1 provides an example of an H-graph with semantics \(\forall s\colon\mathit{learns}(s,Physics)\Rightarrow\mathit{reads}(s,Newton) \vee\mathit{reads}(s,Einstein)\) by stating it as \(\forall s\colon\neg(\mathit{learns}(s,Physics)\wedge\neg\mathit{reads}(s, Newton)\wedge\neg\mathit{reads}(s,Einstein))\) which means "Anyone that learns physics reads Newton or Einstein (or both)".
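Listing 1 itself does not survive in this excerpt. Based on the syntax described above and the conventions of the RDF Surfaces primer, it would look roughly like the following Notation3 sketch; treat the prefix IRIs and the `log:onNegativeSurface` built-in as assumptions rather than the original listing:

```
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix : <http://example.org/#> .

(_:s) log:onNegativeSurface {
    _:s :learns :Physics .
    () log:onNegativeSurface { _:s :reads :Newton . } .
    () log:onNegativeSurface { _:s :reads :Einstein . } .
} .
```

Here `_:s` is a graffito on the outer negative surface and is therefore read as a universally quantified variable; the two inner negative surfaces negate the individual conjuncts, yielding exactly \(\forall s\colon\neg(\mathit{learns}(s,Physics)\wedge\neg\mathit{reads}(s,Newton)\wedge\neg\mathit{reads}(s,Einstein))\).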
For a detailed introduction to RDF Surfaces we refer to our RDF Surface Primer [8].
## 4 Related Work
The case for portable Web logic is not new. Already in the 2000s Berners-Lee [9] called for the development of a language on top of RDF in his Semantic Web Logic Language (SWeLL) proposal.
SWeLL was imagined as a unifying language acting as a logical bus to "allow any web software to read and manipulate data published by any other web software". For all logical relations to be expressed, SWeLL had to extend RDF by including negation and explicit quantification.
A similar portability argument was given in 2009 by Hayes in his ISWC invited talk [10]. According to his argumentation, the idea of Web logic portability has not been achieved due to the layering of logic in the Semantic Web stack. Different layers of the stack cannot guess which entailment regimes are used when receiving data. Without this information, two independent machines cannot arrive at the same conclusions given a model. As a first step towards a solution for this conundrum, Hayes presented RDF Redux which provides a syntax for expressing Web logic with FOL semantics. His ideas were very influential for our work and led to our current research into RDF Surfaces.
There are two main reasons why classical negation has thus far been avoided. The first reason is that negation opens the door to creating _paradoxes_ on the Semantic Web. One of the main motivations of Berners-Lee et al. for N3Logic [11] was to introduce the ability to compare Web documents and make inferences about assertions in each document. For this reason, a quoting mechanism was introduced. Quoting in combination with classical negation can lead to paradoxes when assertions in resources are self-referential. This self-referential problem resembles the liar paradox in the philosophy of language: "This sentence is false". To avoid paradoxes, Scoped Negation As Failure (SNAF) was introduced in N3Logic; it acts as a monotonic version of Negation As Failure (NAF). SNAF disallows self-referential negative statements about resources and defines a scope for negation by the absence of information (Scope + NAF = SNAF). However, SNAF cannot avoid semantic inconsistencies, nor does it have a syntactic mechanism for expressing them. With RDF Surfaces we introduce a formalism that requires negative facts to be explicitly expressed as negative surfaces, without using (S)NAF. Inconsistencies can be expressed as an assertion plus a negative surface thereof. These inconsistencies still need to be detected, which leads to our second reason.
Classical negation is also avoided because combining the expressive power of logics with negation leads to _decidability_ problems. _Satisfiability_ is a decidability problem of finding inconsistencies. _Completeness_ is a related decidability problem of finding all valid statements entailed by a knowledge base. _Halting_ is a decidability problem requiring a machine to stop in finite time. Logics containing logical variables and negation can be as powerful as first-order logic (FOL), for which it is proven [12] that solving any of these problems is undecidable. Three options are available for dealing with undecidability, of which, alas, only two can be chosen: (P) be portable, i.e. allow any ontology as input; (C) be complete; (H) always halt [13]. OWL 2 Full is an example of a Web logic that can express any ontology but is undecidable. OWL 2 Description Logics drops option (P) and creates decidable fragments of FOL whose reasoners can always return all answers and always halt, i.e. (C+H). This choice raises the question of which fragment of FOL to choose, and it comes at the cost of Web portability. There are at least as many choices as there are OWL Description Logics. Reasoners that cannot guess which fragment to choose have to implement them all to provide portability, and should be as expressive as FOL in order to do so. In general, if we want a portable Web logic, option (P) seems to be unavoidable. We argue that each of the (P+C), (P+H), and (C+H) axes is valuable depending on the use case. A reasoner as powerful as (P+C) or (P+H) is required to implement a portable Web logic, but the logic could be 'tunable' for specific use cases. A 'tuned down' (C+H) reasoner could implement OWL Description Logics as syntactic sugar. (P+C) and (P+H) can provide a portable logic for reasoning over decentralized sources. Undecidability is a hard problem, but unavoidable in the latter case. The logic provided by RDF Surfaces, we argue, can be created with fewer primitives than OWL 2 Full and is in line with a large body of research on FOL. Undecidability itself does not stop the Web from evolving, as can be seen from the popularity of Web programming languages such as JavaScript.
## 5 Conclusion and future work
We believe that RDF Surfaces provides an expressivity comparable to Peirce's alpha and beta graphs and to FOL. We still need a proof that a complete mapping from RDF Surfaces to FOL (or Peirce's graphs) exists; this is part of our current research. Existential rules have already been shown to be sufficient to implement the inference rules of RDFS, OWL-RL, and OWL-EL [14, 15, 16]. These existential rules were implemented in the Notation3 language, but we are porting them to RDF Surfaces. As ongoing work, we have also started to port the Notation3 language itself to RDF Surfaces.4
Footnote 4: [https://github.com/eyereasoner/Notation3-By-Example/tree/main/examples/n3s](https://github.com/eyereasoner/Notation3-By-Example/tree/main/examples/n3s)
The expressivity of RDF Surfaces goes beyond what is provided by OWL Description Logics. It could be impractical to express every inference rule solely in the language of first-order logic. The Notation3 language provides built-in functions and relations for arbitrary operations on RDF graphs that are of practical assistance to the programmer. A pure subset of these functions (deterministic and without side effects) could form the basis for an assembly language of the Web. In addition to Web portability, use cases for RDF Surfaces can be found in implementing policy languages, where one would explicitly want to express what is regarded as misuse of information and reason over that. RDF Surfaces can provide the semantics to limit the scope of reasoning over RDF data streams, or to limit the scope of queries and provide alternatives to querying whatever is in reach [17].
The authors are currently experimenting with reasoners implementing RDF Surfaces along the (P+H) [18] and (C+H) [19] axis and formed a W3C Community Group 5 to define the semantics and find new implementers.
## Acknowledgments
This work is funded by the Andrew W. Mellon Foundation (grant number: 1903-06675) and supported by SolidLab Vlaanderen (Flemish Government, EWI, and RRF project VV023/10). The authors thank Dorthe Arndt and Ruben De Decker for several insightful discussions about logic and applications of RDF Surfaces.
# Cats: A Pragmatic Chinese Answer-to-Sequence Dataset with Large Scale and High Quality

Liang Li, Ruiying Geng, Chengyang Fang, Bing Li, Can Ma, Rongyu Cao, Binhua Li, Fei Huang, Yongbin Li

arXiv:2306.11477v1, 2023-06-20. [http://arxiv.org/abs/2306.11477v1](http://arxiv.org/abs/2306.11477v1)
###### Abstract
There are three problems with the popular data-to-text datasets. First, the large-scale datasets either contain noise or lack real application scenarios. Second, the datasets close to real applications are relatively small in size. Last, current datasets are biased toward the English language, leaving other languages underexplored. To alleviate these limitations, in this paper we present Cats, a pragmatic Chinese answer-to-sequence dataset with large scale and high quality. The dataset aims to generate textual descriptions for the answer in a practical TableQA system. Further, to bridge the structural gap between the input SQL and table and establish better semantic alignments, we propose a Unified Graph Transformation approach to establish a joint encoding space for the two hybrid knowledge resources and convert this task into a graph-to-text problem. The experimental results demonstrate the effectiveness of our proposed method. Further analysis on Cats\({}^{1}\) attests to both the high quality and the challenges of the dataset.
## 1 Introduction
Data-to-text (D2T) generation [12, 13] aims to generate a natural language description conditioned on structured or semi-structured data, such as graphs [20, 17] or tables [1, 16]. It helps people get the key points of the input data and makes the stored information accessible to a broader range of end-users. A large number of datasets have been proposed as the testbed for neural D2T models and are driving the domain.
However, as shown in Table 1, we note three problems with the popular datasets. First, the large-scale datasets either contain noise (e.g., WEATHERGOV (Liang et al., 2009)) or lack practical application scenarios (e.g., ToTTo (Parikh et al., 2020)). This shortcoming leads to a separation between research and application. Second, the datasets close to practical scenarios are relatively small in size. For example, ROTOWIRE (Wiseman et al., 2017) contains only 4.9K training examples, and CoSQL (Yu et al., 2019) consists of 7.8K training pairs. Such small training sizes can easily lead to overfitting and are not conducive to training a reliable neural network model. Lastly, most of the existing datasets are built for English, which has led advanced work on D2T generation to focus primarily on English, leaving other languages underexplored. These limitations hinder the progress of D2T generation. We therefore need to investigate possible remedies.
The crucial step to improving the above limitations is digging out a data-to-text task with a practical scenario. Recently, CoSQL [21] has proposed a practical controlled D2T task: answer-to-sequence. As shown in Figure 1, the task takes a SQL query generated by a semantic parsing module, i.e., text-to-SQL [22]
Figure 1: An example for a practical TableQA system. The red dotted lines denote the input data for answer-to-sequence.
2012), and its corresponding execution result (in the form of a table) as the model input, and aims to produce a natural language description as the response to users in a real-world TableQA system. The SQL gives models explicit signals on what to generate. The generated description can provide a concise and easy-to-understand summary of the result table and help users verify whether the queried result is consistent with the original question (Fang et al., 2022). Moreover, the task also contributes to more user-friendly human-computer interaction. Nevertheless, CoSQL contains only 7.8K answer-to-sequence examples for training. Additionally, it is a dataset centered on SQL-grounded dialogue state tracking, and its generation annotations are very rough. The scale and quality of CoSQL limit further exploration of the answer-to-sequence task.
In this paper, to bridge the gap between research and application of data-to-text datasets and enrich their language diversity, we follow the CoSQL setting and present Cats, a large-scale and high-quality **C**hinese **a**nswer-**t**o-**s**equence dataset. We manually annotate all collected SQL-table pairs to obtain their descriptions. We make two efforts to improve the quality and scale of the collected SQL-table pairs and to guarantee that they are close to practical scenarios. First, we annotate the SQL-table pairs from DuSQL (Wang et al., 2020), a large-scale Chinese text-to-SQL dataset with a SQL query distribution close to real applications. Data collected in this way are named Cats-D. Second, we adopt an automatic data construction pipeline to collect a large number of SQL-table pairs for annotation. The basic idea is to automatically crawl a large number of tables from the Internet to build multi-table databases and then automatically generate SQL queries based on the SQL grammar, constrained by the given database. Data collected with this method are referred to as Cats-S. Compared to Cats-D, Cats-S expands the data scale while reducing the share of easy SQLs to make the dataset more challenging. In total, Cats is made up of both Cats-D and Cats-S and contains 43,369 answer-to-sequence examples, an order of magnitude larger than CoSQL.
The input SQL and table in answer-to-sequence are heterogeneous, and there is a structural gap between them. To bridge the gap and establish better semantic alignments, we propose a Unified Graph Transformation approach (Ugt), which first converts the two sources into two undirected graphs and then builds connections between the nodes of the different graphs to obtain a unified graph. In this way, we convert the task into a graph-to-text problem (Gardent et al., 2017). Previous graph-to-text work (Ribeiro et al., 2021) transforms the input graph into a new token graph in order to apply pretrained language models, such as T5 (Raffel et al., 2020). We consider that this transformation breaks the original input graph structure and may introduce extra noise into graph encoding. Hence, we further introduce a Node Segment Embedding (Nse) to preserve the original structure information.
Our contributions are three-fold as follows:
* We present a large-scale and high-quality Chinese answer-to-sequence dataset (Cats), which narrows the gap between research and application of data-to-text generation datasets and enriches the language diversity.
* We propose Ugt and Nse to better model the input of two heterogeneous structured input data sources.
* Experiments and analysis on Cats attest to both the high quality and challenges of the dataset. The results also demonstrate the effectiveness of our proposed method.
## 2 Related Works
### Answer-to-Sequence Generation
In a real-world setting, a TableQA system comprises a table semantic parsing (text-to-SQL) component and an answer-to-sequence component. The semantic parsing component converts a natural language question into a SQL query (Guo et al., 2019; Wang et al., 2020; Hui et al., 2021), and the answer-to-sequence component aims at generating a natural language description of the SQL query and its execution result. CoSQL (Yu et al., 2019) first proposed this task and referred to it as response generation. Intuitively, response generation should encompass both answer acquisition and answer description, which could easily be confused with the role of the whole TableQA system. Therefore, to relate the task more clearly to its definition and function, we rename it answer-to-sequence generation. In this paper, the proposed Cats follows the same task setting as CoSQL. Specifically, the task's input consists of a SQL query and its corresponding execution result (in the form of a table), and the output is a natural language description.
In particular, using a SQL query as input rather than a natural language question is more practical in multi-turn TableQA scenarios because the SQL query can easily represent the context state (Yu et al., 2019).
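Concretely, one instance of this task pairs a SQL query and its executed result table with a target description. The following Python sketch shows a hypothetical instance (the values are invented, not taken from CATS) together with a naive linearization that a plain seq2seq baseline could consume:

```python
# Hypothetical answer-to-sequence instance (values invented, not from CATS):
# input = SQL query + its execution result (a table); output = a description.
instance = {
    "sql": "SELECT name, population FROM city WHERE population > 10000000",
    "table": {
        "header": ["name", "population"],
        "rows": [["Shanghai", 24870895], ["Beijing", 21893095]],
    },
    "target": "Two cities have more than ten million residents: "
              "Shanghai (24,870,895) and Beijing (21,893,095).",
}

def linearize(inst):
    """Naive serialization of both input sources for a plain seq2seq baseline."""
    t = inst["table"]
    cells = " | ".join(" ; ".join(str(c) for c in row) for row in t["rows"])
    return f"SQL: {inst['sql']} HEADER: {' ; '.join(t['header'])} ROWS: {cells}"

print(linearize(instance))
```

Such flat linearizations discard the structure of both sources, which is precisely the gap the graph-based encoding in Section 4 is designed to close.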
### Structure Modeling in Data-to-Text
Recently, some works in D2T generation have shown that structure modeling of the input data can dramatically improve model performance. For table data, Liu et al. (2019); Li et al. (2021) propose to utilize a hierarchical encoder to model the table's representation at the row and column levels. For graph structure modeling, early works (Song et al., 2018; Damonte and Cohen, 2019) introduce Graph Neural Networks as the structure encoder, which only consider the relations between neighboring nodes. Unlike these local encoding strategies, Zhu et al. (2019); Cai and Lam (2020) propose the Graph Transformer, which uses explicit relation encoding and allows direct communication between two distant nodes. More recently, some works equip pretrained language models with structure modeling capabilities and achieve SOTA results on many D2T tasks. In particular, Ribeiro et al. (2021) insert structural adapters into T5's encoder to model the graph structure, and Wang et al. (2022) modify T5's attention masking matrix to encode tables with a structure-aware self-attention mechanism. In this paper, we propose to utilize Ugt to convert the input SQL and table into a graph and utilize a graph-to-text model to encode it. Our model builds on the works of Ribeiro et al. (2020, 2021) and further improves them by introducing Nse to better preserve the graph structure.
## 3 Dataset Construction
Considering the limitations of existing D2T datasets, we present Cats, a massive and pragmatic Chinese answer-to-sequence dataset. Cats is constructed in two phases: SQL-table pair collection and manual data annotation. To balance data quality and scale and bring the dataset closer to practical scenarios, we collect the SQL-table pairs in two ways. First, we derive SQL-table pairs from DuSQL (Wang et al., 2020), a text-to-SQL dataset whose SQL queries are generated by referring to the SQL query distribution in real-life applications. The dataset obtained by annotating these pairs is referred to as Cats-D. Second, we implement an automatic data construction pipeline to collect a large number of high-quality SQL-table pairs. Data collected with this method are referred to as Cats-S, which increases the proportion of complicated SQL queries to make the dataset more challenging. Ultimately, both Cats-D and Cats-S make up Cats. We first describe how we obtain SQL-table pairs for subsequent annotation and then introduce the annotation details.
**Database Building** To mimic the practical TableQA system, we first follow Wang et al. (2020) to build a multi-table database \(D^{d}\) by collecting all databases in DuSQL. In addition, we build another multi-table database \(D^{s}\) to expand the size and domain of our dataset through a table collection pipeline. Specifically, 100,000 high-frequency words are first summarized from the CLUE (Xu et al., 2020) corpus. Then, we query these words in Google and download all the retrieved spreadsheet files. Subsequently, the available tables in these spreadsheets are extracted by a table parser that can identify the potential table in a worksheet. To protect personal privacy, we use predefined unique words to replace sensitive information in these tables, such as passwords, ID numbers, and credit card numbers. Finally, these tables are used to construct the database \(D^{s}\). Please refer to Appendix A.1 for more details.

| **Dataset** | **Train Size** | **Domain** | **Target** | **Application** | **Language** |
| --- | --- | --- | --- | --- | --- |
| WEATHERGOV (Liang et al., 2009) | 25K | Weather | Crawled | Weather Report | English |
| WikiBio (Lebret et al., 2016) | 583K | Wikipedia | Crawled | - | English |
| WebNLG (Gardent et al., 2017) | 25.3K | DBPedia | Annotated | - | English |
| LogExLG (Chen et al., 2020) | 25.8K | Wikipedia | Annotated | - | English |
| ToTTo (Parikh et al., 2020) | 120K | Wikipedia | Annotated | - | English |
| ROTOWIRE (Wiseman et al., 2017) | 4.9K | NBA | Annotated (Noisy) | NBA | English |
| AdverGeneration (Shao et al., 2019) | 115K | Chinese E-commerce | Crawled | Advertising Text Generation | Chinese |
| CoSQL (Yu et al., 2019) | 7.8K | Cross-Domain | Annotated | TableQA | English |
| Map2seq (Schumann and Riezler, 2021) | 7.6K | OpenStreetMap | Annotated | Navigation | English |
| Cats | 34.7K | Cross-Domain | Annotated | TableQA | Chinese |
| Cats-D | 6.7K | Cross-Domain | Annotated | TableQA | Chinese |
| Cats-S | 28.0K | Cross-Domain | Annotated | TableQA | Chinese |

Table 1: Comparison of popular data-to-text datasets in different aspects. **Application** represents the practical application scenario associated with the dataset.
**SQL and Table Collection** We execute all the SQL queries in DuSQL against the database \(D^{d}\) to get their corresponding tables. This is consistent with how a practical TableQA system answers user questions after parsing them to SQL. We then discard SQL-table pairs whose SQLs execute with empty results, obtaining a SQL-table pair set CATS-D\({}^{un}=\{s_{i}^{d},t_{i}^{d}\}_{i=1}^{n}\). DuSQL does not release the code for generating synthetic queries. Therefore, to increase the number of annotation examples, we reimplement a SQL generator similar to the one in DuSQL. Notably, the generated SQLs contain both single-table and multi-table queries. Please refer to Appendix A.2 for more detailed information on the SQL generator. Sampled SQLs that cannot execute in database \(D^{s}\) or that execute with empty results are discarded. In this way, we obtain another SQL-table pair set CATS-S\({}^{un}=\{s_{i}^{s},t_{i}^{s}\}_{i=1}^{m}\).
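The execute-and-filter step can be sketched as follows; the toy schema and queries are illustrative assumptions, not DuSQL or CATS content:

```python
import sqlite3

# Minimal sketch of the filtering step: execute each candidate SQL query
# against a database and keep only (SQL, table) pairs with non-empty results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [("Shanghai", 24870895), ("Lijiang", 1253878)])

candidates = [
    "SELECT name FROM city WHERE population > 2000000",   # non-empty result
    "SELECT name FROM city WHERE population > 99999999",  # empty: discarded
]

pairs = []
for sql in candidates:
    try:
        rows = conn.execute(sql).fetchall()
    except sqlite3.Error:
        continue  # queries that fail to execute are discarded as well
    if rows:  # discard SQLs that execute with empty results
        pairs.append((sql, rows))

print(len(pairs))  # 1
```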
**Data Annotation Process** We employ 20 well-educated crowd workers to annotate the SQL-table pairs in CATS-D\({}^{un}\) and CATS-S\({}^{un}\). In particular, the annotators are asked to write a description \(y\) given a SQL \(s\) and table \(t\) pair. They must follow these requirements: (1) avoid template-like language and write a natural, fluent, and grammatically correct description; (2) the description must summarize all the content in the table; (3) the description must be logically consistent with the input SQL; (4) filter out incomprehensible examples that are semantically unclear. Furthermore, to guarantee data quality, another 4 workers are asked to review the annotated data. Data with poor annotation quality are sent back to be relabeled. Finally, the annotated CATS-D\({}^{un}\) is named Cats-D. To guarantee data consistency, we sample a subset from the annotated CATS-S\({}^{un}\) following a complexity distribution similar to that of Cats-D. We name the sampled dataset Cats-S. However, we find that easy SQL queries account for a large proportion (**47.87%**) of Cats-D. Therefore, we reduce the proportion of easy SQLs (**14.50%**) in Cats-S to make it more challenging.
### Dataset Analysis
The final Cats contains 43,369 examples, including 8,350 examples in Cats-D and 35,019 examples in Cats-S. Each annotated example is a triple of a SQL query \(s\), a table \(t\), and descriptive sentences \(y\). We randomly split the data into training/development/test sets of 34,697/4,336/4,336 examples. To understand the characteristics of the data collected in Cats-D and Cats-S, we also split them accordingly. The training, development, and test sets of Cats-D and Cats-S contain 6,680/835/835 and 28,017/3,501/3,501 examples, respectively.
**Data Complexity** To better understand our dataset, we compare its complexity with CoSQL along four dimensions: the input tables' row and column numbers, SQL hardness, and the target length. Following Guo et al. (2021), we adopt SQL hardness to measure the complexity of SQL queries at four levels (easy, medium, hard, and extra hard), according to the number of components, selections, and conditions in a SQL query (Yu et al., 2018). Since CoSQL only releases its training and development sets, we compare the training sets only. The results are summarized in Table 2. First, we find that the tables in CoSQL are small: about 60% of them have only one row, and more than 80% have only one column. Second, we notice that most of the descriptions in CoSQL are less than 20 tokens long. The first reason is that most of the input tables are small.
| **Column Number** | 1 | 2 | 3 | >=4 |
| --- | --- | --- | --- | --- |
| CoSQL | 6,329 | 1,057 | 459 | 0 |
| Cats | 8,966 | 20,862 | 3,242 | 1,627 |
| Cats-D | 2,883 | 2,977 | 820 | 0 |
| Cats-S | 6,157 | 17,813 | 2,394 | 1,653 |

| **Row Number** | 1 | 2 | 3 | >=4 |
| --- | --- | --- | --- | --- |
| CoSQL | 4,740 | 610 | 2,495 | 0 |
| Cats | 14,909 | 6,158 | 3,671 | 9,959 |
| Cats-D | 2,123 | 656 | 1,129 | 2,772 |
| Cats-S | 12,754 | 5,538 | 2,510 | 7,215 |

| **SQL Hardness** | Easy | Medium | Hard | Extra Hard |
| --- | --- | --- | --- | --- |
| CoSQL | 2,788 | 1,826 | 1,717 | 1,514 |
| Cats | 7,223 | 13,000 | 12,016 | 2,458 |
| Cats-D | 3,198 | 1,709 | 1,264 | 509 |
| Cats-S | 4,063 | 11,214 | 10,787 | 1,953 |

| **Target Length** | <20 | <40 | <60 | >=60 |
| --- | --- | --- | --- | --- |
| CoSQL | 7,005 | 825 | 15 | 0 |
| Cats | 10,319 | 12,862 | 5,864 | 5,652 |
| Cats-D | 1,893 | 2,026 | 1,912 | 849 |
| Cats-S | 8,401 | 10,873 | 3,962 | 4,781 |

Table 2: Complexity distribution comparison between Cats and CoSQL.
By manually checking the data in CoSQL, we find that the second reason is that CoSQL describes tables with more than two rows through a generic template, such as "Here are the...". Last, we observe that easy SQL queries account for **35.54%** of CoSQL, far more than the **20.84%** in Cats. These features make CoSQL suitable only for simple scenarios and less challenging. By contrast, Cats has a broader distribution than CoSQL, which is more in line with real TableQA applications.
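The actual hardness criteria of Yu et al. (2018) operate on the parsed SQL; purely as a rough illustration of the four-level scheme, a keyword-counting stand-in might look like this (the keyword list and thresholds are assumptions, not the real rules):

```python
# Rough illustration of four-level SQL hardness in the spirit of Spider
# (Yu et al., 2018). The real criteria count components of the parsed query;
# this keyword-counting heuristic is only a simplified stand-in.
HARD_KEYWORDS = ("JOIN", "GROUP BY", "ORDER BY", "HAVING",
                 "INTERSECT", "UNION", "EXCEPT", "LIKE")

def hardness(sql: str) -> str:
    s = sql.upper()
    score = sum(s.count(k) for k in HARD_KEYWORDS)
    score += max(s.count("SELECT") - 1, 0) * 2   # nested / set-op queries
    score += s.count(" AND ") + s.count(" OR ")  # extra conditions
    if score == 0:
        return "easy"
    if score <= 2:
        return "medium"
    if score <= 4:
        return "hard"
    return "extra hard"

print(hardness("SELECT name FROM city"))                # easy
print(hardness("SELECT name FROM city GROUP BY name"))  # medium
```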
## 4 Structure-Aware Approach
Given an input SQL \(s\) and a table \(t\), the model aims to generate a response \(\tilde{y}\). To bridge the gap between the two sources of information, we first propose a **Unified Graph Transformation** approach (Ugt), which explicitly connects the input SQL and table in a unified structure. In this way, we obtain a joint graph representation of the two sources and convert the answer-to-sequence task into a graph-to-text problem. We then utilize a variant transformer architecture (Ribeiro et al., 2020) that employs the original transformer encoder as the Global Node Encoder (G-NE) and introduces a GNN-based layer into each transformer encoder layer as the Local Node Encoder (L-NE). G-NE allows explicit communication between two distant nodes, taking advantage of a large node context range, while L-NE has an advantage in modeling the graph topology. As shown in Figure 2 (b), this architecture performs global and local node aggregation in a cascaded manner, gathering the benefits of both strategies. In the rest of this section, we describe the proposed Unified Graph Transformation and the Local Node Encoder in detail.
### Unified Graph Transformation
Given a SQL \(s\) and its execution result (in the form of a table) \(t\) as input (shown in Figure 1), the Unified Graph Transformation takes two steps to transform the two input data sources into a unified graph (shown in Figure 2 (a)). First, it converts the SQL and the table into two undirected graphs: a SQL graph \(\mathcal{G}_{s}\) and a table graph \(\mathcal{G}_{t}\). In particular, for a SQL query, we follow the previous method (Xu et al., 2018) and convert it to a tree. For a table, we treat each column name and table cell as a node and divide the nodes into two categories: table header nodes and table cell nodes. We then connect each header node with the cell nodes in the same column, and we also build connections between the cell nodes in the same row. Second, we add connections between the nodes that indicate the same column in \(\mathcal{G}_{s}\) and \(\mathcal{G}_{t}\) to build the unified graph. We also add a self-loop connection for each node. The transformed unified graph is formulated as \(\mathcal{G}_{h}=(\mathcal{V}_{h},\mathcal{E}_{h})\), where \(\mathcal{V}_{h}\) represents the node set and \(\mathcal{E}_{h}=\{(n,v)\mid n,v\in\mathcal{V}_{h}\}\) the edge set. Figure 2 (a) shows an example of the transformed unified graph.
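These construction steps can be sketched on a toy input as follows; the SQL "tree", the table contents, and the node-id scheme are all hypothetical stand-ins for the real transformation:

```python
# Sketch of the Unified Graph Transformation (UGT) on a toy input.
sql_tree = {"sql:SELECT": ["sql:name", "sql:population"]}  # parent -> children
header = ["name", "population"]
rows = [["Shanghai", "24870895"], ["Beijing", "21893095"]]

edges = set()
# Step 1a: SQL graph from the query tree.
for parent, children in sql_tree.items():
    for child in children:
        edges.add((parent, child))
# Step 1b: table graph: each header node linked to the cells of its column,
# and cell nodes in the same row linked to each other.
for col, h in enumerate(header):
    for r in range(len(rows)):
        edges.add((f"hdr:{h}", f"cell:{r}:{col}"))
for r in range(len(rows)):
    edges.add((f"cell:{r}:0", f"cell:{r}:1"))
# Step 2: cross-source links between SQL column nodes and table header nodes,
# plus a self-loop for every node.
for h in header:
    edges.add((f"sql:{h}", f"hdr:{h}"))
nodes = {n for edge in edges for n in edge}
edges |= {(n, n) for n in nodes}

print(len(nodes), len(edges))  # 9 19
```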
We expect that the generation model should benefit from recent advances in pretrained language models (PLMs). Following previous work (Ribeiro et al., 2021), we represent each \(\mathcal{G}_{h}\) using subword tokens and convert it into a new token graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Specifically, each token of a node in \(\mathcal{V}_{h}\) becomes a node \(\tilde{v}\) in \(\mathcal{V}\). For each edge \((n,v)\in\mathcal{E}_{h}\), we connect each token of \(n\) with each token of \(v\) to obtain the new edge set \(\mathcal{E}\) (as shown in Figure 2 (c)). However, we notice that the new token graph \(\mathcal{G}\) breaks the structure of the original graph \(\mathcal{G}_{h}\) and may make the encoder pay too much attention to node features at the token level instead of the original node level. This may bring
Figure 2: Illustration of the proposed method. (a) is an example of a unified graph transformed by Unified Graph Transformation. (b) is an overview of our model. **L-NE** and **G-NE** denote Local Node Encoder and Global Node Encoder, respectively. (c) is an example of token graph transformation.
extra noise into graph encoding. To preserve the original structural information, we introduce the **Node Segment Embedding** (NSE), which assigns the same symbol to the nodes in the token graph \(\mathcal{G}\) which belong to the same node in the original unified graph \(\mathcal{G}_{h}\). Figure 2 (c) gives an example.
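The token-graph conversion and the Node Segment Embedding ids can be sketched together. This is a toy illustration assuming a whitespace "tokenizer" in place of the real subword tokenizer; all names are invented.

```python
# Every subword token of a unified-graph node becomes its own node, edges are
# expanded token-to-token, and each token keeps the id of the node it came
# from (the NSE segment id).
def to_token_graph(nodes, edges):
    tokens, segment_ids, owner = [], [], {}
    for seg, text in enumerate(nodes):
        for tok in text.split():  # stand-in for subword tokenization
            owner.setdefault(seg, []).append(len(tokens))
            tokens.append(tok)
            segment_ids.append(seg)  # NSE: same id for all tokens of one node
    token_edges = set()
    for a, b in edges:
        for i in owner[a]:
            for j in owner[b]:
                token_edges.add((i, j))
                token_edges.add((j, i))  # keep the graph undirected
    return tokens, segment_ids, token_edges
```

The shared segment id is what lets the encoder recover which tokens belonged to the same original node.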
### Local Node Encoder
Given \(\{h_{v}|v\in\mathcal{V}\}\) as the outputs of the Global Node Encoder at the \(L\)-th encoder layer, we next describe how the Local Node Encoder (L-NE) works. As shown in Figure 2 (b), L-NE consists of two main modules: a Node Embedding Layer and a Graph Attention Network (GAT) (Velickovic et al., 2018) Layer. The former enriches the features of the nodes, and the latter explicitly models the graph structure. Formally, given \(h_{v}\), we obtain the feature-enhanced node representation by:
\[h_{v}^{e}=LayerNorm(h_{v})+e_{v}^{s}, \tag{1}\]
where \(LayerNorm\) represents layer normalization (Ba et al., 2016) and \(e_{v}^{s}\) denotes the node segment embedding for node \(v\). After the Node Embedding Layer, we utilize a GAT layer to model the graph structure. Formally, it updates the representation of node \(v\) by aggregating its neighborhood in a multi-head self-attention layer (Vaswani et al., 2017) as follows:
\[s_{v,n}^{h}=\frac{h_{v}^{e}W_{Q}^{h}(h_{n}^{e}W_{K}^{h})^{\top}}{\sqrt{d/H}}, \tag{2}\] \[\alpha_{v,n}^{h}=\frac{e^{s_{v,n}^{h}}}{\sum_{\hat{n}\in\mathcal{N}(v)}e^{s_{v,\hat{n}}^{h}}},\] \[z^{h}=\sum_{n\in\mathcal{N}(v)}\alpha_{v,n}^{h}(h_{n}^{e}W_{V}^{h}),\] \[h^{r}=Concat(z^{1},...,z^{H}),\]
where \(1\leq h\leq H\), and \(W_{Q}^{h}\), \(W_{K}^{h}\), \(W_{V}^{h}\in\mathbb{R}^{d\times(d/H)}\). \(\mathcal{N}(v)\) denotes the immediate neighborhood of node \(v\) in graph \(\mathcal{G}\).
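The attention of Eq. (2) can be sketched in NumPy as follows. The 0/1 neighborhood mask encodes \(\mathcal{N}(v)\) (including self-loops, as added in the unified graph), and the random weight initialization stands in for learned parameters; shapes are illustrative.

```python
import numpy as np

def graph_attention(h, adj, H):
    """h: (n, d) node features; adj: (n, n) 0/1 neighborhood mask with self-loops."""
    n, d = h.shape
    dk = d // H
    rng = np.random.default_rng(0)
    heads = []
    for _ in range(H):
        Wq, Wk, Wv = (rng.standard_normal((d, dk)) / np.sqrt(d) for _ in range(3))
        s = (h @ Wq) @ (h @ Wk).T / np.sqrt(dk)   # scores s[v, n] of Eq. (2)
        s = np.where(adj > 0, s, -np.inf)          # restrict softmax to N(v)
        a = np.exp(s - s.max(axis=1, keepdims=True))
        a = a / a.sum(axis=1, keepdims=True)       # attention weights alpha[v, n]
        heads.append(a @ (h @ Wv))                 # aggregated z^h
    return np.concatenate(heads, axis=1)           # Concat(z^1, ..., z^H)
```

Masking with \(-\infty\) before the softmax is the standard way to zero out non-neighbors while keeping the rows normalized over \(\mathcal{N}(v)\).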
The transformer parameters are initialized with the pretrained T5 (Raffel et al., 2020), and the others are randomly initialized. Given each gold instance \((s,t,y)\), we fine-tune the model to optimize the following cross-entropy objective:
\[\mathcal{L}=-\sum_{i=1}^{|y|}\log p_{\theta}(y_{i}|y_{1:i-1};s,t). \tag{3}\]
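As a tiny numeric illustration of Eq. (3): the sequence loss is the negative sum of log-probabilities assigned to the gold tokens. The step probabilities below are invented; `step_probs[i]` stands for \(p_{\theta}(y_{i}|y_{1:i-1};s,t)\).

```python
import math

def seq_cross_entropy(step_probs):
    # Negative log-likelihood of the gold sequence under the model.
    return -sum(math.log(p) for p in step_probs)

loss = seq_cross_entropy([0.5, 0.25])  # two gold tokens with toy probabilities
```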
## 5 Experiment
### Experiment Settings
**Baselines** Due to the bias of current datasets toward the English language, D2T methods for other languages are rarely explored. Meanwhile, PLM-based models, such as T5, have achieved SOTA results (Ribeiro et al., 2020, 2021; Wang et al., 2022; Jolly et al., 2022) on many D2T tasks. Therefore, we experiment with T5-based models to understand their performance on Cats-D, Cats-S, and Cats:
* TemP automatically generates descriptions based on the predefined template. Specifically, we first manually write a template for SQL queries replacing the values, columns, table names, and conditions with slots. Meanwhile, we also create a list of descriptions for each component in SQL queries (Table 3 reports the descriptions of partial SQL components). Then we enumerate all cells in the table row by row to obtain the description for a table. Lastly, we join the two parts of descriptions as the final output.
* Pointer-Gen is an RNN-based Seq2Seq model with attention and copy mechanism (See et al., 2017). We concatenate the SQL and linearized table as input.
* T5 denotes fine-tuning the T5 model on the proposed Cats. The input is the same as that used for Pointer-Gen. Notably, to make a fair comparison with our proposed method, we add a fully connected feed-forward network (Fnn) on top of each transformer layer and match its parameter count to that of the L-NE layer. We denote this as T5 + Fnn.
* T5-Graph is also a fine-tuned T5 method. Different from T5, it uses the same graph
\begin{table}
\begin{tabular}{l l} \hline \hline
**SQL Components** & **Descriptions** \\ \hline Min & [Chinese description; garbled in extraction] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Descriptions of partial SQL components.
representation with our method (described in Section 4.1) as input. Again, we add FNN to make a fair comparison, which is denoted as T5-Graph + Fnn.
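The TemP baseline described above can be sketched as follows. The component descriptions and template wording are invented English placeholders (the paper's descriptions are in Chinese), and the record format is an assumption.

```python
# TemP sketch: a hand-written description per SQL component, plus a
# row-by-row enumeration of table cells, joined as the final output.
SQL_DESCRIPTIONS = {"min": "the minimum of", "count": "the number of"}

def temp_baseline(components, header, rows):
    # Describe the SQL query via per-component template fragments.
    sql_part = " ".join(SQL_DESCRIPTIONS.get(c, c) for c in components)
    # Enumerate all cells in the table row by row.
    cells = []
    for row in rows:
        cells.extend(f"{h} is {v}" for h, v in zip(header, row))
    # Join the two parts of descriptions as the final output.
    return sql_part + ": " + ", ".join(cells)
```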
**Evaluation Metrics** We evaluate our models by applying both automatic and human evaluations. For automatic evaluation, we employ the widely used metrics BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004) to evaluate the fluency of the generated text, and we utilize SacreBLEU (Post, 2018) to calculate BLEU after segmenting the sentences with jieba 2. Additionally, we utilize Coverage (Shao et al., 2019) to evaluate the faithfulness of the generated text. Coverage measures the average proportion of the input table that is covered by a generated text; the table headers are also considered. We use string matching rules to determine whether a cell exists in the generated text. We conduct experiments over 4 different seeds and report the average scores.
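A hedged sketch of the Coverage computation: the fraction of headers and cells found in the generated text by simple substring matching, per example. The paper's exact matching rules may differ.

```python
# Coverage: proportion of table entries (cells plus headers) that appear
# in the generated text via string matching.
def coverage(text, header, rows):
    entries = list(header) + [c for row in rows for c in row]
    hit = sum(1 for e in entries if e in text)
    return hit / len(entries)
```

In practice this per-example score would be averaged over the evaluation set.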
Footnote 2: [http://pypi.python.org/pypi/jieba](http://pypi.python.org/pypi/jieba)
We display examples of input representation for different models and provide the implementation details in Appendix C.1 and C.2.
### Main Result
Table 4 presents the experimental results on Cats, Cats-D, and Cats-S, from which we make three main observations.
First, we can see that all neural network models outperform TemP on BLEU by a large margin. This suggests that neural models are better at generating fluent expressions, which we attribute to the language modeling objective (Equation 3) that trains the neural models to predict the next token given the previous history. Nevertheless, we find that TemP achieves the best Coverage scores on all sets, even better than Gold. We believe this is because, when annotating the references, annotators summarize the contents of the table (e.g., merging some cells) to make the presentation more reasonable and fluent, whereas TemP copies all the contents of the table directly.
Second, adding extra trainable parameters (+ Fnn) does not always improve the performance of T5 and T5-Graph. For example, T5 + Fnn performs better than T5 on both Cats and Cats-S, but worse than T5 on Cats-D. Moreover, we notice that T5 performs better than T5-Graph even though their parameter counts are equal. We speculate this is because, compared to T5-Graph, T5 uses the original SQL and the flattened table as input, which preserves partial structural information of the input SQL and table through the segment symbols "," and "|" (please refer to Appendix C.1 for an example of the input data linearization). T5-Graph, however, still treats the input as a sequence and ignores the unified graph's structure, leading to its performance degradation.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c}{Cats} & \multicolumn{3}{c}{Cats-D} & \multicolumn{3}{c}{Cats-S} \\ \cline{2-10} & **BLEU** & **ROUGE-L** & **Coverage** & **BLEU** & **ROUGE-L** & **Coverage** & **BLEU** & **ROUGE-L** & **Coverage** \\ \hline \hline \multicolumn{10}{c}{_Development_} \\ \hline Gold & - & - & 75.56 & - & - & 69.59 & - & - & 77.30 \\ \hline TemP & 40.04 & 57.20 & 81.48 & 18.05 & 47.37 & 77.93 & 42.71 & 59.82 & 83.24 \\ \hline Pointer-Gen & 51.26\({}_{\pm 0.20}\) & 73.70\({}_{\pm 0.14}\) & 68.73\({}_{\pm 0.13}\) & 48.33\({}_{\pm 0.31}\) & 67.95\({}_{\pm 0.96}\) & 56.96\({}_{\pm 0.90}\) & 49.77\({}_{\pm 0.16}\) & 73.79\({}_{\pm 0.26}\) & 69.26\({}_{\pm 0.24}\) \\ T5 & 53.60\({}_{\pm 0.13}\) & 74.42\({}_{\pm 0.06}\) & 72.87\({}_{\pm 0.04}\) & 52.47\({}_{\pm 0.28}\) & 68.5\({}_{\pm 0.32}\) & 68.20\({}_{\pm 0.25}\) & 51.43\({}_{\pm 0.10}\) & 73.77\({}_{\pm 0.04}\) & 73.08\({}_{\pm 0.03}\) \\ T5 + Fnn & 54.14\({}_{\pm 0.21}\) & 74.80\({}_{\pm 0.16}\) & 72.85\({}_{\pm 0.18}\) & 52.10\({}_{\pm 0.17}\) & 68.28\({}_{\pm 0.17}\) & 68.02\({}_{\pm 0.31}\) & 51.67\({}_{\pm 0.22}\) & 73.75\({}_{\pm 0.17}\) & 73.08\({}_{\pm 0.17}\) \\ T5-Graph & 52.21\({}_{\pm 0.17}\) & 73.68\({}_{\pm 0.04}\) & 72.03\({}_{\pm 0.10}\) & 49.89\({}_{\pm 0.04}\) & 66.72\({}_{\pm 0.20}\) & 66.65\({}_{\pm 0.25}\) & 50.12\({}_{\pm 0.18}\) & 73.11\({}_{\pm 0.13}\) & 72.05\({}_{\pm 0.04}\) \\ T5-Graph + Fnn & 52.30\({}_{\pm 0.16}\) & 73.71\({}_{\pm 0.20}\) & 71.87\({}_{\pm 0.02}\) & 48.81\({}_{\pm 0.27}\) & 66.35\({}_{\pm 0.13}\) & 66.60\({}_{\pm 0.30}\) & 50.42\({}_{\pm 0.09}\) & 73.22\({}_{\pm 0.12}\) & 72.07\({}_{\pm 0.05}\) \\ \hline Ugt & 54.75\({}_{\pm 0.15}\) & 75.72\({}_{\pm 0.06}\) & 72.68\({}_{\pm 0.16}\) & 54.23\({}_{\pm 0.49}\) & 69.82\({}_{\pm 0.23}\) & 68.07\({}_{\pm 0.61}\) & 52.54\({}_{\pm 0.16}\) & 74.84\({}_{\pm 0.12}\) & 72.99\({}_{\pm 0.07}\) \\ Ugt + Nse & **56.34\({}_{\pm 0.13}\)** & **76.72\({}_{\pm 0.09}\)** & **73.41\({}_{\pm 0.05}\)** & **58.79\({}_{\pm 0.51}\)** & **73.16\({}_{\pm 0.31}\)** & **68.94\({}_{\pm 0.31}\)** & **53.54\({}_{\pm 0.15}\)** & **75.36\({}_{\pm 0.19}\)** & **73.67\({}_{\pm 0.10}\)** \\ \hline \hline \multicolumn{10}{c}{_Test_} \\ \hline Gold & - & - & 76.35 & - & - & 68.67 & - & - & 76.98 \\ \hline TemP & 41.39 & 57.82 & 82.40 & 17.76 & 46.21 & 77.83 & 42.69 & 60.16 & 82.96 \\ \hline Pointer-Gen & 50.77\({}_{\pm 0.56}\) & 73.25\({}_{\pm 0.14}\) & 68.47\({}_{\pm 0.33}\) & 47.34\({}_{\pm 0.81}\) & 66.64\({}_{\pm 0.80}\) & 56.93\({}_{\pm 1.21}\) & 50.37\({}_{\pm 0.27}\) & 74.21\({}_{\pm 0.20}\) & 69.98\({}_{\pm 0.24}\) \\ T5 & 53.49\({}_{\pm 0.13}\) & 74.22\({}_{\pm 0.08}\) & 72.36\({}_{\pm 0.12}\) & 51.32\({}_{\pm 0.22}\) & 66.81\({}_{\pm 0.28}\) & 67.93\({}_{\pm 0.18}\) & 52.91\({}_{\pm 0.07}\) & 74.51\({}_{\pm 0.08}\) & 73.33\({}_{\pm 0.08}\) \\ T5 + Fnn & 53.87\({}_{\pm 0.18}\) & 74.42\({}_{\pm 0.16}\) & 72.34\({}_{\pm 0.10}\) & 50.71\({}_{\pm 0.12}\) & 66.42\({}_{\pm 0.24}\) & 57.17\({}_{\pm 0.14}\) & 74.32\({}_{\pm 0.10}\) & 73.32\({}_{\pm 0.16}\) \\ T5-Graph & 51.82\({}_{\pm 0.13}\) & 73.28\({}_{\pm 0.05}\) & 71.33\({}_{\pm 0.33}\) & 47.91\({}_{\pm 0.28}\) & 67.47\({}_{\pm 0.20}\) & 65.51\({}_{\pm 0.31}\) & 51.40\({}_{\pm 0.22}\) & 73.78\({}_{\pm 0.13}\) & 72.15 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental results on the development and test sets of Cats, Cats-D, and Cats-S.
Lastly, by explicitly modeling the unified graph structure, Ugt dramatically outperforms the size-comparable models T5-Graph + Fnn and T5 + Fnn on all metrics. The results demonstrate Ugt's superiority in capturing essential structural knowledge for this task. Additionally, Node Segment Embedding (+ Nse) further improves the performance, verifying that Nse helps the encoder better preserve the original structural information.
### Analysis and Discussion
**Effects of Input SQL and Table** To examine the effects of different input data, we conduct ablation studies on the input side by removing the input SQL and table. The results on the three development sets are summarized in Table 5. We observe that, after removing the SQL and only utilizing the table as input, both T5 + Fnn and our method (Ugt + Nse) perform poorly on all metrics. The performance degrades even more if only the SQL is employed. These results demonstrate that both the input SQL and the table are essential for the answer-to-sequence task. Additionally, our method clearly outperforms T5 + Fnn in all ablation settings, revealing the effectiveness of our method compared to the vanilla T5 architecture even under extreme input conditions.
**Effects of Data Complexity** We further explore the performance at different levels of data complexity, using BLEU as the metric in this section. The results are shown in Table 6. We first explore the effect of the table size. Unsurprisingly, the BLEU scores of all models decrease as the number of table rows or columns grows: the more rows or columns the table contains, the more difficult it is for a model to process. Compared to the two baseline models, our method is better at handling large tables. Furthermore, we investigate the impact of SQL complexity on model performance. With respect to SQL complexity, our model achieves larger improvements over the baseline models, especially on extra hard SQLs, showing that our approach can better encode complex input data. Lastly, we study model performance with respect to different ground-truth description lengths. Pointer-Gen struggles on longer descriptions, where its performance drops by over 10 BLEU points on responses longer than 60. In this scenario, T5-based models dramatically outperform Pointer-Gen, while our method still beats T5 + Fnn.
### Human Evaluation
To reach a deeper understanding of the quality of the generated descriptions, we conduct a human evaluation following Parikh et al. (2020). We compare our method with TemP, Pointer-Gen, and T5 + Fnn. Specifically, we first randomly select 100 examples from the Cats test set and the corresponding outputs generated by each model. Then, five native Chinese annotators (three females and two males) with master's degrees or above who are engaged in NLP research are invited to evaluate the quality along four axes. Specifically, **Fluency** measures whether the description is fluent. **Faithfulness** estimates whether the description is logically consistent with the input SQL and whether all pieces of information are supported by the input table.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Column Number** & 1 & 2 & 3 & \(>=\)4 \\
**\# Examples** & 1,138 & 2,580 & 403 & 215 \\ \hline Pointer-Gen & 53.21 & 50.74 & 42.20 & 35.29 \\ T5 + Fnn & +2.28 & +1.16 & +7.08 & +4.29 \\ Ours & **+5.61** & **+4.69** & **+7.54** & **+5.28** \\ \hline
**Row Number** & 1 & 2 & 3 & \(>=\)4 \\
**\# Examples** & 1,899 & 769 & 467 & 1201 \\ \hline Pointer-Gen & 56.72 & 49.71 & 49.05 & 44.30 \\ T5 + Fnn & +3.57 & -0.58 & +1.68 & +6.24 \\ Ours & **+5.75** & **+1.54** & **+5.16** & **+7.62** \\ \hline
**SQL Hardness** & Easy & Medium & Hard & Extra Hard \\
**\# Examples** & 915 & 1,588 & 1,531 & 302 \\ \hline Pointer-Gen & 60.92 & 54.99 & 42.78 & 43.17 \\ T5 + Fnn & +0.92 & +0.60 & +6.79 & +3.65 \\ Ours & **+3.98** & **+3.75** & **+7.80** & **+9.22** \\ \hline
**Target Length** & \(<20\) & \(<40\) & \(<60\) & \(>=60\) \\
**\# Examples** & 1,275 & 1,635 & 724 & 702 \\ \hline Pointer-Gen & 52.67 & 51.97 & 52.02 & 41.64 \\ T5 + Fnn & +2.93 & -0.31 & -0.06 & +7.54 \\ Ours & **+6.08** & **+3.19** & **+3.33** & **+7.82** \\ \hline \hline \end{tabular}
\end{table}
Table 6: BLEU scores of different models in the Cats test set on different levels of data complexity. Relative results of T5 + Fnn and our method are reported compared against the Pointer-Gen.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **Cats** & **Cats-D** & **Cats-S** \\ \hline T5 + Fnn & 54.14\({}_{\pm 0.21}\) & 52.10\({}_{\pm 0.17}\) & 51.67\({}_{\pm 0.22}\) \\ w/o Sql & 40.90\({}_{\pm 0.24}\) & 39.75\({}_{\pm 0.08}\) & 40.00\({}_{\pm 0.30}\) \\ w/o Table & 17.83\({}_{\pm 0.13}\) & 24.25\({}_{\pm 0.33}\) & 14.51\({}_{\pm 0.11}\) \\ \hline Ours & 56.34\({}_{\pm 0.13}\) & 58.79\({}_{\pm 0.51}\) & 53.54\({}_{\pm 0.15}\) \\ w/o Sql & 45.16\({}_{\pm 0.26}\) & 47.92\({}_{\pm 0.50}\) & 43.89\({}_{\pm 0.38}\) \\ w/o Table & 19.59\({}_{\pm 0.16}\) & 26.91\({}_{\pm 0.11}\) & 16.20\({}_{\pm 0.62}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Effect of input Sql and Table. w/o Sql and w/o Table denote removing the SQL and table from the input, respectively. Ours denotes Ugt + Nse.
Fluency and Faithfulness are scored from 1 to 10; the higher, the better. **Coverage** is the percentage of cells in the input table that the candidate sentence covers. It differs from the Coverage in Table 4 (please refer to Appendix C.4). **Repetition** is the number of cells the candidate sentence repeats. We also include the reference as one candidate (denoted as Gold); its results can be regarded as an upper bound.
The results summarized in Table 7 show that Gold consistently achieves higher performance than the generation methods, attesting to the high quality of our human annotations. We focus on the Fluency and Faithfulness scores for TemP because these are the most sensitive evaluations: TemP obtains a high Faithfulness score but is poor on Fluency. Our method outperforms the baseline models on almost all axes with an agreement kappa score (van der Lee et al., 2020) of more than \(0.86\), demonstrating the effectiveness of our proposed method. Although our model achieves a high coverage rate (90.26%), its Faithfulness score is relatively low (only 7.48), with a considerable gap compared to Gold. This indicates that simply copying content from the input table cannot guarantee the faithfulness of the generated response; the model may need to understand the deep semantics of the SQL and table, which is the biggest challenge in this dataset.
## 6 Conclusion
We present Cats, a large-scale and high-quality Chinese answer-to-sequence dataset, along with a series of baselines. It helps alleviate the problem of current D2T datasets' bias towards the English language. We propose a Unified Graph Transformation method to bridge the structural gap between the SQL and table. In this way, we convert the task to a graph-to-text problem. Furthermore, we introduce the Node Segment Embedding to solve the problem that transforming the input graph to a new token graph breaks the original graph's structure. Experiments on Cats show that our proposed model outperforms existing baseline models. We conduct further analysis on Cats, which attests to both the high quality and challenges of the dataset.
### Limitations
This work presents Cats, a large-scale and high-quality Chinese answer-to-sequence dataset. It is a free and open dataset. One of the most important motivations for presenting this dataset is that most existing datasets are built for English, which leads advanced work on D2T generation to focus primarily on English and leaves other languages underexplored. However, Cats only alleviates, rather than solves, the dataset language bias, and it is limited to the study of Chinese methods. Regarding methodology, the proposed Ugt converts the answer-to-sequence task into a graph-to-text problem to bridge the gap between the two heterogeneous input data sources (SQL and table). However, Ugt works only for the answer-to-sequence task rather than the general graph-to-text task. Additionally, though the proposed Nse can help a graph-to-text model better preserve the original structural information, this contribution may be limited to the graph-to-text task.
### Ethics Statement
This work presents Cats, a free and open dataset for the research community to study the answer-to-sequence problem in the practical TableQA system. And it helps enrich the D2T languages and alleviate the datasets' bias in English. To balance the data quality and scale and bring it closer to the practical scenario, data in Cats are collected from two sources, which are manually annotated as Cats-D and Cats-S. In other words, Cats consists of Cats-D and Cats-S. The data in Cats-D is collected from DuSQL (Wang et al., 2020) dataset, a free and open dataset for the Chinese Text-to-SQL problem. Meanwhile, to enlarge our dataset, we adopt an automatic data construction pipeline to collect a large number of high-quality SQL-table pairs for annotation. To ensure the quality of our dataset, we manually annotate the SQL-table pairs. We hire 24 native annotators with undergraduate degrees to annotate the data. Specifically, 20 annotators are responsible for annotations, and another 4 workers are asked to review the annotated data.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Flu. \(\uparrow\)** & **Fai. \(\uparrow\)** & **Cov.(\%)\(\uparrow\)** & **Rep. \(\downarrow\)** \\ \hline Gold & 8.42 & 9.15 & 95.32 & 0.14 \\ TemP & 5.27 & 6.87 & 99.41 & 0.02 \\ \hline Pointer-Gen & 6.13 & 6.32 & 83.27 & 0.74 \\ T5 + Fnn & 6.82 & 7.16 & 89.27 & 0.39 \\ Ours & **7.14** & **7.48** & **90.26** & 0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Human evaluation over references (denoted as Gold) and model outputs. Flu., Fai., Cov., Rep. denote Fluency, Faithfulness, Coverage and Repetition. \(\uparrow\) indicates higher is better and \(\downarrow\) denotes lower is better.
We pay 2.1 yuan ($0.31 USD) for annotating each SQL-table pair.
To prevent our dataset from leaking personal privacy, we replace the sensitive information in the collected tables with predefined unique words. Furthermore, we ask the annotators to filter out examples that leak personal privacy or contain social bias and harmful content.
# VLASS tidal disruption events with optical flares II: discovery of two TDEs with intermediate width Balmer emission lines and connections to the ambiguous extreme coronal line emitters

Jean J. Somalwar, Vikram Ravi, Wenbin Lu

arXiv:2310.03795v1, 2023-10-05. [http://arxiv.org/abs/2310.03795v1](http://arxiv.org/abs/2310.03795v1)
###### Abstract
The multiwavelength properties of radio-emitting tidal disruption events (TDEs) are poorly understood. In a previous paper, we presented the first sample of radio-selected, optically-detected TDEs, which included two events (VT J1008 and VT J2012) associated with late-time (\(\sim\)2 years post-optical flare) intermediate width emission lines that are largely unprecedented for TDEs. In this paper, we investigate these two events in detail. The multiwavelength properties of these events are otherwise consistent with optically-selected TDEs. They are hosted by green valley, E+A/Balmer dominated galaxies with low star formation rates and black hole masses \(M_{\rm BH}\approx 10^{5-6}\,M_{\odot}\). The optical flare shapes are fully consistent with those of optically-selected TDEs, although they are slightly faint and cool at peak. The radio emission from both events is consistent with wide-angle, non-relativistic outflows with \(L_{R}({\rm GHz})\sim 10^{38}\) erg s\({}^{-1}\). Balmer and Helium emission lines are detected from both events with full-width-half-maxima \(\sim\)700 km s\({}^{-1}\) and asymmetric line profiles. VT J1008 additionally shows coronal line emission with a similar width. The lines from VT J2012 are redshifted by \(\sim\)700 km s\({}^{-1}\) relative to the host galaxy. We show that these events share many characteristics in common with the ambiguous class of extreme coronal line emitters. We argue that the lines are likely associated with a radiative shock or dense, photoionized clumps of outflowing gas in the circumnuclear medium.
## 1 Introduction
Tidal disruption events (TDEs) occur when a star ventures within the tidal radius of a supermassive black hole (SMBH). In the last couple of decades, our observational and theoretical understanding of TDEs has dramatically improved, largely thanks to the advent of wide-field, time-resolved surveys in the optical and X-ray. Follow-up of optically- and X-ray-selected events at other wavelengths has shed light on the evolution of TDEs but can only provide limited information due to selection biases and limited follow-up resources.
These limitations have particularly impacted our understanding of the radio emission from TDEs. The physical mechanisms generating TDE radio emission are poorly understood as there have been limited detections of radio emission from TDEs. The landscape of TDE radio emission is beginning to evolve, however, as full sky radio surveys come online.
In a previous paper, we presented the first sample of radio-selected TDEs. This sample included all TDE-like radio transients in VLASS with optical counterparts. In Paper I, we discussed the connections of host galaxy properties and optical flares with the presence of radio emission using a sample of radio-selected TDEs with optical flares. We noticed several possible correlations between multiwavelength properties and radio loudness, including possible trends with optical luminosity and host galaxy SMBH mass. One of the most intriguing discoveries from our sample was the detection of intermediate width Balmer emission from the only two events in our sample in quiescent galaxies. Such emission has not been reported for uniformly selected, bona fide TDEs before (see Yao et al. (2023) for an event discovered simultaneously with our work): the only previous reports of similar emission are from SDSS-selected flaring galaxies, but those events had poorly sampled multiwavelength data, causing ambiguities about the flare origins.
In Paper I, we presented our full sample. In this paper, we delve into the multiwavelength properties and the interpretation of the two events with intermediate width Balmer emission lines.
## 2 Sample Selection

Our sample selection is described in detail in Paper I; we discuss it briefly here. We identified the two objects described in this work as members of a broader sample of radio-selected TDEs. We compiled this TDE sample from the Very Large Array Sky Survey (VLASS; Lacy et al., 2020). VLASS is observing the sky with \(\delta>-40^{\circ}\) at 3 GHz for three epochs with a cadence of \(\sim\)2 years, a per-epoch sensitivity of \(\sim\)0.13 mJy and a spatial resolution of \(\sim\)2.5\({}^{\prime\prime}\). The first two epochs were completed in 2017/2018 and 2020/2021. The third epoch is ongoing at the time of writing.
We identified TDE candidates using the transient catalog of Dong et al., in prep., who identified all sources that were detected at \(>7\sigma\) in E2 but not significantly detected (\(<3\sigma\)) in E1; i.e., this catalog contains all transients that are rising between E1 and E2. Details about the transient detection algorithm are described in Somalwar et al. (2021) and Dong et al., in prep. We select TDE candidates from this catalog as sources coincident with PanSTARRS-detected galaxies. The galaxies must show no evidence for strong AGN activity in any public archival data. Among the criteria used to identify AGN, we consider: the position of the source on the WISE W1-W2 and W2-W3 color diagram; any evidence for past optical, X-ray, or radio variability/detections that could indicate AGN activity. The host galaxy must also have a photometric redshift \(z_{\rm phot}<0.25\), at which redshifts multiwavelength counterparts and host galaxies are more readily detected with reasonable exposure times. The resulting sample has \(\sim\)100 objects.
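The selection criteria above can be summarized as a simple filter. The record fields below are an assumed format, while the thresholds (\(>7\sigma\) in E2, \(<3\sigma\) in E1, \(z_{\rm phot}<0.25\), no AGN signatures) follow the text.

```python
# Illustrative filter for the rising-transient TDE-candidate criterion:
# detected at >7 sigma in epoch 2, <3 sigma in epoch 1, coincident with a
# non-AGN galaxy at z_phot < 0.25. Record format is invented.
def is_tde_candidate(src):
    rising = (src["e2_flux"] / src["e2_rms"] > 7
              and src["e1_flux"] / src["e1_rms"] < 3)
    return (rising and src["has_galaxy"]
            and not src["agn"] and src["z_phot"] < 0.25)
```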
In Paper I, we focus on the subset of this sample with associated optical flares in data from the Zwicky Transient Facility (ZTF; \(gri\) bands) and the Asteroid Terrestrial-impact Last Alert System (ATLAS; \(co\) bands). The resulting sample has six objects, three of which are hosted by weak Seyferts, one by a star-forming galaxy, and two by quiescent galaxies. The two objects in quiescent galaxies were also found to have unusual, intermediate width Balmer lines, and thus are the focus of this work. Details of the rest of the sample are presented in Papers I and III.
The two sources are named using our VLASS transient naming convention: VT J100853.44+424300.22 (VT J1008) and VT J201229.90-170556.32 (VT J2012). We refer to the host galaxies of these objects using the coordinates prefixed with HG (host galaxy; e.g., HG J1008). In plots, we often label the individual transients or host galaxies without the prefixes (e.g., J1008), except when the prefixes are necessary for clarity.
## 3 Summary of Transient Properties
In this section, we present observations of VT J1008 and J2012 and briefly summarize the multiwavelength properties of each source. We defer discussion of the optical spectral features, which are the main focus of this paper, to the following section.
### Vt J1008
We begin our discussion with VT J1008, the multiwavelength properties of which are summarized in Figure 1. In the following subsections, we describe each panel of Figure 1.
#### 3.1.1 Host galaxy properties
A PanSTARRS optical image of the host galaxy of VT J1008, HG J1008, is shown in panel a of Figure 1. It is at a redshift \(z=0.045\), or a luminosity distance \(d_{L}=205\) Mpc. The host galaxy spectrum is shown in panel b. Note that this spectrum is contaminated by transient features. Based on the H\(\delta_{A}\) absorption and an upper limit on the H\(\alpha\) EW (we cannot constrain the H\(\alpha\) EW exactly because of the transient emission), this galaxy is consistent with being an E+A galaxy; i.e., it underwent a starburst in the last \(\sim\)Gyr. There is no strong indication of an AGN, although the faint [Ne III] and [O III] emission lines could be produced by a weak AGN. These lines could also be produced by star formation, however.
We performed an SED fit for HG J1008 in Paper I. We briefly review the results here, but refer the reader to Paper I for a detailed description of our methodology. The host galaxy has a stellar mass \(\log M_{*}/M_{\odot}=9.04^{+0.22}_{-0.16}\), and the 3\(\sigma\) upper limit on the star formation rate is \(\log{\rm SFR}/(M_{\odot}\,{\rm yr}^{-1})<1.47\). This host is consistent with the green valley. It is also consistent with being an E+A galaxy.
These SED fit parameters also set constraints on the SMBH mass. From the stellar mass, we can infer an SMBH mass using the Yao et al. (2023) relation for
\begin{table}
\begin{tabular}{c|c c} \hline \hline & VT J1008 & VT J2012 \\ \hline R.A. & \(10^{\rm h}08^{\rm m}53.44^{\rm s}\) & \(20^{\rm h}12^{\rm m}29.90^{\rm s}\) \\ Dec. & \(+42^{\circ}43^{\prime}00.22^{\prime\prime}\) & \(-17^{\circ}05^{\prime}56.32^{\prime\prime}\) \\ \(\Delta d~{}[^{\prime\prime}]\) & 0.18 & 0.2 \\ \(z\) & 0.045 & 0.053 \\ \(d_{L}\) [Mpc] & 205 & 244 \\ \(\log M_{\rm BH}/M_{\odot}\) & \(4.81^{+0.40}_{-0.32}\) & \(6.17\pm 0.31\) \\ SFR [\(M_{\odot}\,{\rm yr}^{-1}\)] & \(<1.47\) & \(<1\) \\ \(L_{\nu}\)(VLASS) [\(10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\)] & \(9.6\pm 0.3\) & \(9.9\pm 0.9\) \\ \hline \end{tabular}
\end{table}
Table 1: Properties of our TDEs
optical TDE hosts: \(\log M_{\rm BH}(M_{*})/M_{\odot}=4.81^{+0.40}_{-0.32}\). Alternatively, Yao et al. (2023) measured a stellar velocity dispersion for this source of \(\sigma_{*}=44\pm 3\) km s\({}^{-1}\). Using the \(M_{\rm BH}-\sigma_{*}\) relation from Kormendy & Ho (2013), we find \(\log M_{\rm BH}(\sigma_{*})/M_{\odot}=5.59\pm 0.29\), which is consistent with the value measured from the stellar mass. This SMBH mass is small relative to the SMBH masses measured for TDEs by Yao et al. (2023): VT J1008 has a smaller SMBH mass than 27/32 (\((84\pm 8)\%\)) of their TDEs.
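As a consistency check on the quoted value, the Kormendy & Ho (2013) relation can be written as \(\log(M_{\rm BH}/M_{\odot})\approx 8.49+4.38\log(\sigma_{*}/200\,{\rm km\,s^{-1}})\); applying it to \(\sigma_{*}=44\) km s\({}^{-1}\) reproduces the paper's estimate to within the intrinsic scatter. The normalization quoted in the literature varies at the few-hundredths-of-a-dex level, so a small offset from 5.59 is expected.

```python
import math

def log_mbh_from_sigma(sigma_kms):
    # Kormendy & Ho (2013) M_BH-sigma relation (approximate normalization).
    return 8.49 + 4.38 * math.log10(sigma_kms / 200.0)

log_mbh = log_mbh_from_sigma(44.0)  # sigma_* measured for HG J1008
```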
#### 3.1.2 Optical and IR broadband transient emission
In panel c2 of Figure 1, we show the optical lightcurve for VT J1008. This lightcurve was created using the ATLAS\({}^{1}\) and ZTF\({}^{2}\) forced photometry services, retrieved using the recommended procedures. Both surveys detected an optical flare from this source starting at MJD\(\sim\)59100. The flare rise was missed, but the decay was well-sampled. Yao et al. (2023) fit this lightcurve as an evolving blackbody with the temperature fixed to that at peak luminosity. They found a peak blackbody luminosity of \(\log L_{\rm bb}/({\rm erg\,s^{-1}})=42.98\) and a peak temperature \(\log T_{\rm bb}/{\rm K}=4.15\). The radius at peak luminosity was \(\log R_{\rm bb}/{\rm cm}=14.76\). The time to rise from half-maximum to maximum luminosity was \(t_{\rm rise,1/2}=11.8^{+1.5}_{-1.3}\) days, and the time to decay from maximum to half-maximum was \(t_{\rm decay,1/2}=23.1^{+1.8}_{-1.2}\) days. This peak temperature is cool for a typical optically-selected TDE: it is cooler than 25/32 (\((78\pm 9)\%\)) of the optical TDEs in Yao et al. (2023). The peak luminosity is lower than that of every TDE in the Yao et al. (2023) sample. The rise and decay times are typical of optically-selected TDEs.
Footnote 1: [https://fallingstar-data.com/forcedphot/](https://fallingstar-data.com/forcedphot/)
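The quoted peak radius follows from the blackbody fit values via \(L_{\rm bb}=4\pi R_{\rm bb}^{2}\sigma T_{\rm bb}^{4}\); a quick consistency check:

```python
import math

SIGMA_SB = 5.670e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def log_rbb_cm(log_lbb, log_tbb):
    """Blackbody radius (log10 cm) from L = 4 pi R^2 sigma T^4."""
    L, T = 10.0 ** log_lbb, 10.0 ** log_tbb
    return 0.5 * math.log10(L / (4.0 * math.pi * SIGMA_SB * T ** 4))

# Peak blackbody luminosity and temperature quoted for VT J1008
print(round(log_rbb_cm(42.98, 4.15), 2))  # ~14.76, matching the quoted log R_bb
```

The same relation recovers the \(\log R_{\rm bb}/{\rm cm}=15.26\) quoted for VT J2012 in Section 3.2.2.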
During the writing of this work, VT J1008 rebrightened in the optical and has been identified as a repeating partial TDE. Although it is plausible that the repeating
Figure 1: Summary plot for VT J1008. Panel a shows an image of the host galaxy. Panel b shows an example optical spectrum. The observed spectrum is shown on top in black, and the best-fit stellar emission model is shown in red. The observed spectrum with the stellar continuum subtracted is shown on the bottom in black, with the transient emission lines clearly visible. Panel c shows multiwavelength light curves for VT J1008. Panel c1 shows the radio light curve in blue and the X-ray light curve in black. Upper limits are shown as triangles. Panel c2 shows the ATLAS _co_ optical lightcurve. Panel c3 shows the WISE MIR lightcurve, with no obvious flare detected. Panel d shows the radio observations of this source. The radio SED is consistent with a wide-angle, non-relativistic outflow.
nature of this source has affected the emission, we do not consider the rebrightening here and refer the reader to Somalwar et al. (2023a) for details.
In panel c3, we show the IR lightcurve for this source from the NEOWISE survey, processed using the methods described in Somalwar et al. (2023b). We do not see any significant IR variability.
#### 3.1.3 X-ray emission
We checked public X-ray survey data, including the XMM-Newton Slew Survey, for archival detections of VT J1008; no detections were reported. The pre-optical-flare X-ray upper limit was \(10^{42.9}\) erg s\({}^{-1}\). The tightest post-optical-flare limit is from our Swift/XRT ToO. We obtained X-ray observations of VT J1008 on MJD 59638 using a 3.5 ks exposure with the Swift/XRT telescope (PI Somalwar). The source was not detected, with a \(3\sigma\) luminosity upper limit of \(10^{41.8}\) erg s\({}^{-1}\). This upper limit is shown in panel c1 of Figure 1.
#### 3.1.4 Radio emission
The radio lightcurve for VT J1008 is shown in panel c1. VT J1008 was first observed by VLASS on MJD 58628; this nondetection was \(\sim\)460 days before the optical peak. It was observed again on MJD 59496 (\(\sim\)408 days post-optical peak) and was detected as a 1.13\(\pm\)0.15 mJy source, corresponding to a 3 GHz luminosity of \(L_{\nu}=(5.6\pm 0.7)\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\). We observed the source with the VLA on MJD 59612 (525 days post-peak) in the CLSX bands. The 3 GHz radio luminosity from this SED had risen since the VLASS observation to \((9.6\pm 0.3)\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\). If we assume that the radio-emitting outflow was launched at optical peak, this rise corresponds to a \(L_{\nu}\propto t^{2.1^{+0.6}_{-0.5}}\) power-law evolution, which is consistent with expectations for a constant velocity outflow.
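The quoted rise index follows from the two 3 GHz measurements. A minimal sketch, assuming the outflow was launched at the optical peak (\(\approx\) MJD 59088, i.e., 460 days after the first VLASS epoch, which places the two detections at roughly 408 and 524 days post-peak):

```python
import math

def powerlaw_index(t1, t2, L1, L2):
    """Index p such that L is proportional to t**p, given two epochs."""
    return math.log(L2 / L1) / math.log(t2 / t1)

# 3 GHz luminosities in units of 1e28 erg/s/Hz at ~408 and ~524 days post-peak
p = powerlaw_index(408.0, 524.0, 5.6, 9.6)
print(round(p, 1))  # ~2.2, consistent with the quoted 2.1 (+0.6/-0.5) given the flux errors
```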
The VLA radio SED is shown in panel d. We modelled the SED as a synchrotron outflow following the methodology described in Appendix E of Paper I, where we assume a spherically symmetric outflow with magnetic field \(B\), electron density \(N_{0}\), and radius \(R\). We assume the electrons have a power-law energy distribution with index \(\gamma\). We assume equipartition with \(\epsilon_{E}=11/17\) and \(\epsilon_{B}=6/17\), corresponding to the minimum energy solution. The total energy in the outflow is given by \(E\). We find a radius \(\log R/\rm{cm}=17.06^{+0.01}_{-0.01}\), a magnetic field \(\log B/\rm{G}=-0.73^{+0.05}_{-0.04}\), an electron density \(\log N_{0}/\rm{cm}^{-3}=3.5^{+0.2}_{-0.1}\), and an energy \(\log E/\rm{erg}=49.4^{+0.1}_{-0.1}\). Assuming the outflow was launched at the optical peak with velocity \(v\), the best-fit radius gives \(\log\beta=\log v/c=-1.37^{+0.1}_{-0.1}\), or \(v\approx 1.3\times 10^{4}\) km s\({}^{-1}\).
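The quoted energies can be roughly recovered from \(B\) and \(R\) alone under the stated equipartition fractions, since \(E\approx U_{B}/\epsilon_{B}\) with \(U_{B}=(B^{2}/8\pi)\cdot(4/3)\pi R^{3}\). A sketch (this ignores the detailed electron spectrum in the full fit):

```python
import math

def log_energy_erg(log_B, log_R, eps_B=6.0 / 17.0):
    """Total outflow energy (log10 erg), assuming the magnetic field holds a
    fraction eps_B of the energy in a sphere of radius R:
    E = (B^2 / 8 pi) * (4/3) pi R^3 / eps_B."""
    B, R = 10.0 ** log_B, 10.0 ** log_R  # G, cm
    U_B = (B ** 2 / (8.0 * math.pi)) * (4.0 / 3.0) * math.pi * R ** 3
    return math.log10(U_B / eps_B)

print(round(log_energy_erg(-0.73, 17.06), 1))  # ~49.4 for VT J1008
```

The same expression with the VT J2012 best-fit values (\(\log B=-0.83\), \(\log R=17.225\)) reproduces the \(\log E/{\rm erg}=49.69\) quoted in Section 3.2.4.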
The radio SED is thus consistent with a non-relativistic, wide-angle outflow launched near optical peak.
### Vt J2012
#### 3.2.1 Host galaxy properties
An optical image from the DECam Legacy Survey of HG J2012 is shown in panel (a) of Figure 2. It is at a redshift \(z=0.053\), or a luminosity distance \(d_{L}=244\) Mpc. The host galaxy spectrum (contaminated by transient features) is shown in panel (b). From the H\(\delta_{A}\) absorption and an upper limit on the H\(\alpha\) EW, this galaxy is a Balmer-strong galaxy; such galaxies have slightly older stellar populations than E+A galaxies. There is no indication of an AGN, although faint [Ne III] emission may be present.
From an SED fit performed in Paper I, the host galaxy has a stellar mass \(\log M_{*}/M_{\odot}=9.90^{+0.32}_{-0.09}\). The \(3\sigma\) upper limit on the star-formation rate is \(\rm{SFR}<1\,M_{\odot}\,{\rm yr}^{-1}\). This host is consistent with the green valley.
From the stellar mass, we can infer a SMBH mass using the Yao et al. (2023) relation for optical TDE hosts: \(\log M_{\rm{BH}}(M_{*})/M_{\odot}=6.55^{+0.24}_{-0.32}\). In Paper I, we measured a stellar velocity dispersion for this source of \(\sigma_{*}=59\pm 2\) km s\({}^{-1}\). Using the \(M_{\rm{BH}}-\sigma_{*}\) relation from Kormendy & Ho (2013), we find \(\log M_{\rm{BH}}(\sigma_{*})/M_{\odot}=6.17\pm 0.31\), which is consistent with the value measured from the stellar mass. This SMBH mass is near the median of the SMBH masses from Yao et al. (2023).
#### 3.2.2 Optical and IR broadband transient emission
The optical lightcurve for VT J2012 is shown in panel c2 of Figure 2. This source was only detected by the ATLAS survey, and the lightcurve was created using recommended procedures. The flare was first detected on MJD\(\sim\)59100. Paper I fit this lightcurve to the same model as used for VT J1008, and found a peak blackbody luminosity of \(\log L_{\rm{bb}}/(\rm{erg\,s}^{-1})=43.07\) and peak temperature \(\log T_{\rm{bb}}/\rm{K}=3.93\). The radius at peak luminosity was \(\log R_{\rm{bb}}/\rm{cm}=15.26\). The time to rise from half-max-luminosity to max-luminosity was \(t_{\rm{rise},1/2}=10.2^{+1.5}_{-1.1}\) days, and the time to decay from max to half-max was \(t_{\rm{decay},1/2}=15^{+14}_{-10}\) days. Like VT J1008, this source shows a remarkably cool, faint flare relative to the Yao et al. (2023) sample.
In panel c3, we show the IR lightcurve for this source from the NEOWISE survey, processed using the methods described in Somalwar et al. (2023b). Like for VT J1008, we do not see any significant IR variability.
#### 3.2.3 X-ray emission
We checked public X-ray survey data, including the XMM-Newton Slew Survey, for archival detections of VT J2012; no detections were reported. The pre-optical-flare X-ray upper limit was \(10^{43.1}\) erg s\({}^{-1}\). The tightest post-optical-flare limit is from our Swift/XRT ToO. We obtained X-ray observations of VT J2012 on MJD 59536 using a 1.6 ks exposure with the Swift/XRT telescope (PI Somalwar). The source was not detected, with a \(3\sigma\) luminosity upper limit of \(10^{42.1}\) erg s\({}^{-1}\). This upper limit is shown in panel c1 of Figure 2.
#### 3.2.4 Radio emission
The radio lightcurve for VT J2012 is shown in panel c1. VT J2012 was first observed by VLASS on MJD 58166 (\(\sim\)624 days pre-peak) and the luminosity upper limit was \(L_{\nu}=4.7\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\). It was observed again on MJD 59147 (357 days post-optical peak) and was detected as a \((9.9\pm 0.9)\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\) source (\(1.48\pm 0.13\) mJy). We observed the source with the VLA on MJD 59258 (467 days post-peak) in the CLSX bands, and the 3 GHz radio luminosity had risen to \((14.9\pm 0.2)\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\). This corresponds to an \(L_{\nu}\propto t^{1.5^{+0.4}_{-0.3}}\) power-law, which is consistent with a constant velocity outflow.
The VLA radio SED is shown in panel d. We modelled the SED as a synchrotron outflow following the same methodology used for VT J1008. We find a radius \(\log R/{\rm cm}=17.225^{+0.006}_{-0.006}\), a magnetic field \(\log B/{\rm G}=-0.83^{+0.02}_{-0.02}\), an electron density \(\log N_{0}/{\rm cm}^{-3}=3.27^{+0.08}_{-0.08}\), and an energy \(\log E/{\rm erg}=49.69^{+0.05}_{-0.05}\). Assuming the outflow was launched at the optical peak with velocity \(v\), the best-fit radius gives \(\log\beta=\log v/c=-1.178^{+0.006}_{-0.006}\), or \(v\approx 2\times 10^{4}\) km s\({}^{-1}\).
Like that of VT J1008, this radio SED is consistent with a non-relativistic, wide-angle outflow launched near optical peak.
### Summary
We conclude with a brief summary of the results from this section:
* VT J1008 and VT J2012 are hosted by quiescent galaxies with no evidence for strong AGN activity. The galaxies are both E+A or Balmer-strong galaxies in the green valley. Both galaxies have SMBH masses \(M_{\rm BH}\approx 10^{5-6}\,M_{\odot}\).
* VT J1008 and VT J2012 have optical counterparts with peak blackbody luminosities \(L_{\rm bb}\approx 10^{43}\) erg s\({}^{-1}\) and temperature at peak \(T_{\rm bb}\approx 10^{4}\,K\), which are cooler and fainter than those of typical optically-selected TDEs. The rise and decay times of the optical flares are typical of optically-selected TDEs.
Figure 2: Summary plot for VT J2012, in the same format as Figure 1.
* Neither VT J1008 nor VT J2012 has an IR or X-ray counterpart.
* VT J1008 and VT J2012 are both associated with radio transients that turned on \(\sim\)\(1-2\) years post-optical peak. They both have 3 GHz luminosities \(\sim\)\(10^{29}\) erg s\({}^{-1}\) Hz\({}^{-1}\), which is a typical luminosity for optically-selected, radio-detected TDEs. The radio SEDs of both sources are consistent with low velocity (\(\sim\)\(10^{-1}c\)), wide-angle outflows with energies \(\sim\)\(10^{49.5}\) erg.
## 4 The transient optical spectral features
While keeping the broader picture of the multiwavelength properties of our two radio TDEs in mind, we now delve into the intermediate width transient lines, which are the focus of this work. Zoom-ins on these features are shown in Figure 3. In this section, we present our methodology for constraining the emission line properties and present the results. We then briefly compare the observed lines to other transient observations, but defer a detailed discussion of the origins of the lines to Section 5.
### Methodology
We first constrain the properties of the transient spectral lines. We use two sets of observations, which we process separately.
First, we considered low resolution but flux calibrated observations with the Low Resolution Imaging Spectrometer (LRIS) on the Keck I telescope. We observed VT J1008 on MJD 59676 for 20 min. using the 1\(\arcsec\) slit with the standard star Feige 34 and on MJD 59616 for 10 min. using the standard star G191-B2B. We centered all observations on the galactic nuclei and observed at the parallactic angle. We used the 400/3400 grism, the 400/8500 grating with central wavelength 7830 Å, and the 560 dichroic. The resulting wavelength range was \(\sim\)3100\(-\)10000 Å and the resolution \(R\)\(\sim\)700. We observed VT J2012 on MJD 59464 for 10 min using the 1\(\farcs\)0 slit with the standard stars BD+28 and G191-B2B for the blue/red sides, respectively, and on MJD 59678 using the standard star Feige 34. We reduced all spectra using the lpipe code with standard settings.
To constrain the properties of the transient spectral lines, we first must remove the host galaxy stellar emission from all the spectra. We fit each spectrum with the ppxf full spectrum fitting tool using the MILES templates (Vazdekis et al., 2010) following the method detailed in Appendix B of Somalwar et al. (2021). The resulting best-fit stellar component is shown, for each source, in red in panels b of Figures 1 and 2. We then create a nebular spectrum by subtracting the stellar component from the galaxy spectrum; the result is shown at the bottom of panels b of Figures 1 and 2.
With these low resolution emission line spectra in hand, we now can fit the line profiles. We consider the following lines, for reasons that will become clear later in this work: H\(\alpha\), H\(\beta\), [O III]\(\lambda\)5007, [O III]\(\lambda\)4959, [O II]\(\lambda\lambda 3727,3729\), and He II\(\lambda\)4686. We first fit each line to a Gaussian with free centroid, width, and flux. We require the centroid be within 2000 km s\({}^{-1}\) of the expected wavelength given the host redshift. We set the lower bound of the width to be such that the line FWHM is greater than 6A, corresponding to a rough lower limit on the LRIS resolution. The width upper bound is 2000 km s\({}^{-1}\), which does not affect the fits. We adopt broad, uninformative priors for the flux: \(f\in[0,10^{-13}]\) erg cm\({}^{-2}\) s\({}^{-1}\). For the [O II]\(\lambda\lambda 3727,3729\) complex, we fit a single Gaussian rather than two because we do not expect the doublet to be resolvable given the large LRIS FWHM.
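A minimal sketch of this bounded single-Gaussian fit, using scipy.optimize.curve_fit on a synthetic line (the wavelength grid, noise level, and line parameters here are illustrative, not our data):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light [km/s]

def gaussian(wave, centroid, sigma, flux):
    """Gaussian line profile with total flux `flux` and width `sigma` [A]."""
    return flux / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * ((wave - centroid) / sigma) ** 2)

# Synthetic H-alpha line near the rest wavelength with FWHM ~ 700 km/s
rest = 6562.8
rng = np.random.default_rng(0)
wave = np.arange(6500.0, 6630.0, 1.5)
true_sigma = 700.0 / 2.355 / C_KMS * rest  # FWHM in km/s -> Gaussian sigma in A
flux = gaussian(wave, rest + 3.0, true_sigma, 25.0) + rng.normal(0, 0.05, wave.size)

# Bounds: centroid within +/-2000 km/s of rest; FWHM between ~6 A and 2000 km/s
dv = 2000.0 / C_KMS * rest
lo = [rest - dv, 6.0 / 2.355, 0.0]
hi = [rest + dv, dv / 2.355, 1e3]
popt, pcov = curve_fit(gaussian, wave, flux, p0=[rest, 3.0, 10.0], bounds=(lo, hi))
```

With these bounds the fitter recovers the injected centroid, width, and flux; the same structure extends to the two-component fits described below by summing two such Gaussians.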
For some of the Balmer lines, this single Gaussian fit produces a statistically inconsistent \(\chi^{2}\). In those cases, we run an additional fit with two Gaussian components, each with independent widths, amplitudes, and centroids. For J2012, we fix the centroid of one component at the host redshift. This produces a consistent \(\chi^{2}\) in each case.
For both events, the [O II] doublet is not significantly detected, and the [O III] line is detected with a narrow width \(\sim\)100 km s\({}^{-1}\). Given the narrow widths of the [O III] emission, we do not believe it is coming from the same location as the Balmer and Helium emission. Instead, it is consistent with being stable host galaxy emission. To constrain the presence of intermediate width, transient emission at the location of these oxygen lines, we repeat the Gaussian fit but fix the width to FWHM\(\approx 700\) km s\({}^{-1}\). In the case of [O III] \(\lambda\)5007, we do not subtract out the narrow line component before this fit, so the broad fit absorbs the flux from this narrow line. We choose to do this because it produces the most conservative upper limit on the broad line flux for this emission line, even though the resulting fit has a high \(\chi^{2}\). Subtracting out the narrow component would not affect our conclusions; it would simply tighten the bound on the presence of transient [O III] \(\lambda\)5007 emission.
The resulting best-fit parameters of the lines from the low resolution spectra are listed in Table 2, and the fits are shown in Figure 3.
In addition to these low resolution spectra, we use high resolution spectra that are not flux calibrated. The high resolution spectra are taken at later times than the low resolution spectra, so they provide constraints on the
line profiles over a longer timescale. Moreover, while we cannot set luminosity constraints using the high resolution spectra, we can study the line profiles in detail; in particular, we can distinguish between multiple, blended lines or a single line with a broad profile. We obtained spectra of these objects with the Echellette Spectrograph and Imager (ESI) on the Keck II telescope. We used the Echelle mode for all observations. We observed VT J1008 on MJD 59908 for 22.5 min using the 0\(\farcs\)3 slit and the standard BD+28, and on MJD 60029 for 45 min using the 0\(\farcs\)5 slit and the standard BD+33. We observed VT J2012 on MJD 59876 for 20 min using the 0\(\farcs\)5 slit and the standard BD+28. We reduced the observations using the makee code with standard settings, and removed telluric lines using recommended procedures. For both objects, the H\(\alpha\) lines fall in a region with strong telluric absorption and bright sky lines. Fortunately, the spectra are of sufficiently high resolution that we can still study the smooth line profiles, but all spikes and other unusual features in the data are due to this contamination.
Zoom-ins on the resulting H\(\alpha\) line profiles are shown in Figure 4. While other lines are detected (VT J1008: [O III]\(\lambda\)5007, He II\(\lambda\)4686), no other line is detected at a sufficiently high signal-to-noise to allow detailed constraints on its profile.
Because the line profiles observed cannot be well-modelled as Gaussians, we constrain the line profiles using a different methodology from that used for the LRIS data. We aim to constrain the velocity offset of the line peak, the upper FWHM, and the lower FWHM. First, we fit a second degree polynomial, over the range \(6560-6570\) Å, around the pixel with the maximum flux. The line centroid is then the wavelength at the peak of the polynomial. We measure the uncertainties on this centroid by repeating this process 1000 times, where each time we randomize the spectrum based on the observed errors. This method of finding the line centroid is approximate: the centroid is not well defined for the unusual profiles observed here. We then measure the upper and lower FWHM as the distance to the pixels on either side of the peak at which the flux levels drop below half of the peak flux. The results are reported in Table 2.
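A sketch of this procedure on a synthetic asymmetric profile (the fit window and profile shape are illustrative): the parabola vertex gives the centroid, and the half-maximum crossings on each side give the two half-widths. The Monte Carlo error step simply repeats this after perturbing the flux by its errors.

```python
import numpy as np

def peak_and_fwhm(wave, flux, fit_lo=6562.0, fit_hi=6568.0):
    """Locate the line peak by fitting a parabola near the flux maximum, then
    walk outward to the half-maximum crossing on each side of the peak."""
    sel = (wave >= fit_lo) & (wave <= fit_hi)
    a, b, c = np.polyfit(wave[sel], flux[sel], 2)
    centroid = -b / (2.0 * a)                       # vertex of the parabola
    half = np.polyval([a, b, c], centroid) / 2.0    # half of the peak flux
    ipk = int(np.argmin(np.abs(wave - centroid)))
    iblue, ired = ipk, ipk
    while iblue > 0 and flux[iblue] > half:
        iblue -= 1
    while ired < flux.size - 1 and flux[ired] > half:
        ired += 1
    return centroid, wave[ipk] - wave[iblue], wave[ired] - wave[ipk]

# Synthetic asymmetric profile peaking near 6565 A (broader blue wing)
wave = np.arange(6540.0, 6590.0, 0.25)
sigma = np.where(wave < 6565.0, 5.0, 2.5)
flux = np.exp(-0.5 * ((wave - 6565.0) / sigma) ** 2)
cen, blue_hw, red_hw = peak_and_fwhm(wave, flux)
```

On this example the blue half-width comes out larger than the red one, recovering the injected asymmetry; with real, noisy data the centroid is only approximate, as noted above.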
### Results
In this section, we briefly summarize the results from our emission line analysis. We also perform some basic analysis to constrain the parameters of the emitting region, which uses simple approximations for the emission line parameters. While we do not expect the results of this analysis to be exact, we use them to gain a rough understanding of the physical conditions in the emitting region.
We first consider the recombination lines: the Balmer and Helium emission. Both H\(\alpha\) and H\(\beta\) are detected from VT J1008 and VT J2012 in all observations. The H\(\alpha\) lines from both objects have luminosities of \(\sim\)10\({}^{40}\) erg s\({}^{-1}\) and they are brightening with time. The H\(\beta\) lines are fainter, at \(\sim\)10\({}^{39}\) erg s\({}^{-1}\) and are (insignificantly) fading for VT J2012 and brightening for VT J1008. We will discuss the line profiles in more detail later in this section, but note here that the Balmer lines from both objects have widths \(\sim\)1000 km s\({}^{-1}\).
Figure 3: Zoom-ins on select optical lines from the low resolution LRIS observations of VT J1008 and VT J2012. The top row shows observations of VT J1008 and the bottom row shows observations of VT J2012. The observations are shown in black, and the best-fit models are shown as colored lines. The colored bands denote 1\(\sigma\) uncertainties. The blue fits correspond to the first observation epochs, and the orange fits correspond to the second epochs. The features blueward of H\(\alpha\) in the J2012 panel are caused by telluric features
The He II\(\lambda\)4685 and He I\(\lambda\)5875 lines are strongly detected from VT J1008, and both have luminosities \(\sim\)10\({}^{39}\) erg s\({}^{-1}\) and FWHM\(\sim\)700 km s\({}^{-1}\). The lines are brightening with time. The He II line is faintly detected from VT J2012 at a similar width and with a redshift consistent with that of the Balmer emission, although such a component is not detected in the second observation epoch. He I lines are not detected from VT J2012.
Let us assume that these luminosities are entirely produced by recombination in a spherical region. We will discuss a wider range of models in Section 5. With this assumption, we can approximately estimate the mass and volume of the recombining material, following the methodology described in Chapter 13 of Osterbrock & Ferland (2006). The ionized mass is given by
\[M_{\rm ion}=\frac{1.4L_{\rm H\alpha}m_{p}}{1.15n_{p}\alpha_{\rm H\alpha}h\nu_{ \rm H\alpha}}, \tag{1}\]
where we have assumed a pure Hydrogen and Helium gas where the Helium density is a tenth of the Hydrogen density and the Helium is equally divided between its two ionization states. Assuming \(L_{\rm H\alpha}=10^{40}\) erg s\({}^{-1}\) and \(\alpha_{\rm H\alpha}\sim 10^{-14}\) cm\({}^{3}\) s\({}^{-1}\), appropriate for case B recombination, we find an ionized gas mass \(M_{\rm ion}\sim 3000\,M_{\odot}(10^{4}\,{\rm cm}^{-3}/n_{e})\sim 0.3\,M_{ \odot}(10^{9}\,{\rm cm}^{-3}/n_{e})\). Note that this analysis is not correct at very high densities \(\sim\)10\({}^{9}\) cm\({}^{-3}\), but we include the results to give a rough sense of the expected ionized gas mass. These mass constraints correspond to radii \(R\sim 10^{18}\,{\rm cm}(10^{4}\,{\rm cm}^{-3}/n_{e})^{2/3}\sim 10^{15}\,{\rm cm }(10^{9}\,{\rm cm}^{-3}/n_{e})^{2/3}\), assuming a filling factor \(\sim\)1. Based on the observed variation in the line luminosities and profiles on timescales \(\lesssim 60\) days, we expect that the emitting region has a size \(\lesssim 10^{17}\) cm, implying \(n_{e}\gtrsim 10^{5}\) cm\({}^{-3}\). As we will discuss in detail later, if this gas is stellar debris from a TDE with a \(\sim\)1 \(M_{\odot}\) star, we require a high density \(n_{e}\gtrsim 10^{9}\) cm\({}^{-3}\) in order that the ionized gas mass be smaller than the stellar mass. Even with that requirement, a large fraction of the stellar debris must be contained in this dense emitting region. This corresponds to an emitting radius \(\lesssim 10^{15}\) cm. If the gas is not stellar debris, it could be from the galaxy circumnuclear medium, in which case lower densities are possible.
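Equation (1) can be evaluated directly; a sketch, treating the density and recombination coefficient as free parameters and taking \(\alpha_{\rm H\alpha}\approx 1.17\times 10^{-13}\) cm\({}^{3}\) s\({}^{-1}\) as a default (the standard case B effective H\(\alpha\) coefficient at \(10^{4}\) K, assumed here; the text quotes only an order of magnitude):

```python
M_P = 1.673e-24     # proton mass [g]
M_SUN = 1.989e33    # solar mass [g]
H_NU_HA = 3.03e-12  # H-alpha photon energy [erg]

def ionized_mass_msun(L_ha, n_p, alpha_ha=1.17e-13):
    """Equation (1): ionized gas mass (solar masses) for H-alpha luminosity
    L_ha [erg/s], proton density n_p [cm^-3], and effective recombination
    coefficient alpha_ha [cm^3/s]; the 1.4 and 1.15 factors account for He."""
    return 1.4 * L_ha * M_P / (1.15 * n_p * alpha_ha * H_NU_HA) / M_SUN

print(round(ionized_mass_msun(1e40, 1e4)))  # ~2900, i.e. a few 1e3 Msun; scales as 1/n_p
```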
We can further constrain the physical parameters of the Balmer emitting region using the Balmer decrement. These observed luminosities imply that the Balmer decrements from both transients are remarkably high and are increasing: that of VT J1008 is \(\sim\)9 during both epochs and that of VT J2012 is 14.5 in the first epoch and 25 in the second epoch. Typical Balmer decrements from unextincted, low density, photoionized gas are \(\sim\)3. If we assume the Balmer emission is produced by recombination in a low density gas (e.g., \(n_{e}\approx 10^{2-4}\) cm\({}^{-3}\)), the high Balmer decrement implies strong extinction. Assuming photoionized gas with \(T=10^{4}\) K and a low density \(n_{e}\approx 10^{2}\) cm\({}^{-3}\), the color excess is related to the Balmer decrement as
\[E(B-V)=1.97\log\left[\frac{{\rm H\alpha}/{\rm H\beta}}{2.86}\right] \tag{2}\]
For VT J1008 (VT J2012), this implies \(E(B-V)\approx\) 1(1.5) mag.
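Equation (2) can be applied directly to the decrements quoted above:

```python
import math

def ebv_from_balmer(ha_over_hb, intrinsic=2.86):
    """Equation (2): color excess [mag] from the observed Balmer decrement."""
    return 1.97 * math.log10(ha_over_hb / intrinsic)

# Decrements quoted above: ~9 (VT J1008); 14.5 and 25 (VT J2012, two epochs)
print(round(ebv_from_balmer(9.0), 2))   # ~0.98 mag
print(round(ebv_from_balmer(14.5), 2))  # ~1.39 mag
print(round(ebv_from_balmer(25.0), 2))  # ~1.85 mag
```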
In addition to dust, high densities and radiative transfer effects can increase the Balmer decrement, although
Figure 4: H\(\alpha\) spectral profiles from the medium resolution ESI observations. The observations of VT J1008 are in the _left_ panel, and those of VT J2012 are in the _right_ panel. The blue line shows the first epoch of observations and the orange line shows the second epoch. The flux for each epoch is normalized to the local continuum, which is not expected to be the same in both observations. In the _right_ panel, regions particularly impacted by strong sky lines are shown in red.
considering the latter is beyond the scope of this work. In AGN broad line regions, which have densities \(\gtrsim\)10\({}^{9}\) cm\({}^{-3}\), the decrement is often observed to reach values \(\sim\)6. We believe it likely that high densities contribute to the high Balmer decrements observed here, based on the properties of the other observed emission lines.
First, high densities are favored by the detection of the [Fe X] coronal line from VT J1008. Coronal lines are emission lines with extremely high ionization potentials \(>100\) eV, and they are most often observed from AGN. However, coronal lines from AGN always have [Fe X]\(\lambda 6375\) to [O III]\(\lambda 5007\) ratios \(\ll 1\). We observe a ratio \(>2\), which is unprecedented for AGN. This high ratio places VT J1008 in the class of extreme coronal line emitters, which we will discuss in the next section. Here, we use the detection of [Fe X] to constrain the physical parameters of the emitting region. Because the width of the [Fe X] line is similar to the Balmer widths, within a factor of a few, we expect that they are coming from \(\sim\)the same distance from the central SMBH, so it is plausible that the conditions of the coronal line emitting gas are similar to those of the Balmer emitting gas. We can constrain the density of the coronal line emitting gas using the lack of an [Fe VII] detection.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & MJD & Line Name & \(\Delta v\) & \(\sigma_{\rm FWHM,upper}\) & \(\sigma_{\rm FWHM,lower}\) & \(f_{\rm line}\) & \(L_{\rm line}\) \\ & & & km s\({}^{-1}\) & km s\({}^{-1}\) & km s\({}^{-1}\) & \(10^{-16}\) erg cm\({}^{-2}\) s\({}^{-1}\) & \(10^{39}\) erg s\({}^{-1}\) \\ \hline & & [OII]\(\lambda\lambda 3726,3728\) & \(99.9\pm 42.3\) & \(86.4\pm 32.9\) & \(-\) & \(0.18\pm 0.07\) & \(0.09\pm 0.03\) \\ & & HeII\(\lambda 4685\) & \(-18.6\pm 21.7\) & \(379.3\pm 18.3\) & \(-\) & \(3.6\pm 0.2\) & \(1.7\pm 0.1\) \\ & & H\(\beta\) & \(205\pm 36\) & \(583\pm 119\) & \(1041\pm 184\) & \(3.1\pm 0.9\) & \(1.5\pm 0.4\) \\ & & [OIII]\(\lambda 5006\) & \(30.2\pm 19.5\) & \(126.6\pm 17.7\) & \(-\) & \(0.84\pm 0.09\) & \(0.40\pm 0.05\) \\ & & HeI\(\lambda 5875\) & \(-72.5\pm 45.4\) & \(352.4\pm 45.3\) & \(-\) & \(1.8\pm 0.2\) & \(0.8\pm 0.1\) \\ & & [FeX]\(\lambda 6374\) & \(16.2\pm 25.1\) & \(342.9\pm 27.0\) & \(-\) & \(2.6\pm 0.2\) & \(1.2\pm 0.1\) \\ & & H\(\alpha\) & \(140\pm 6\) & \(732\pm 18\) & \(1135\pm 22\) & \(27.7\pm 0.5\) & \(13.2\pm 0.2\) \\ \cline{2-7} VT J1008 & & [OII]\(\lambda\lambda 3726,3728\) & \(242.9\pm 34.6\) & \(99.1\pm 25.1\) & \(-\) & \(0.31\pm 0.08\) & \(0.15\pm 0.04\) \\ & & HeII\(\lambda 4685\) & \(52.0\pm 34.5\) & \(406.0\pm 32.1\) & \(-\) & \(3.0\pm 0.2\) & \(1.4\pm 0.1\) \\ & & H\(\beta\) & \(115\pm 3\) & \(779\pm 4\) & \(779\pm 4\) & \(2.5\pm 0.0\) & \(1.19\pm 0.01\) \\ & & [OIII]\(\lambda 5006\) & \(128.3\pm 17.6\) & \(114.8\pm 18.2\) & \(-\) & \(0.8\pm 0.1\) & \(0.4\pm 0.1\) \\ & & HeI\(\lambda 5875\) & \(130.5\pm 59.0\) & \(277.3\pm 45.6\) & \(-\) & \(1.1\pm 0.2\) & \(0.5\pm 0.1\) \\ & & [FeX]\(\lambda 6374\) & \(-29.6\pm 29.4\) & \(239.8\pm 28.3\) & \(-\) & \(1.6\pm 0.2\) & \(0.8\pm 0.1\) \\ & & H\(\alpha\) & \(146\pm 8\) & \(732\pm 26\) & \(1030\pm 34\) & \(22.5\pm 0.7\) & \(10.7\pm 0.4\) \\ \cline{2-7} & 59909 & H\(\alpha\) & \(1466.23\pm 0.00\) & \(7.09\pm 0.00\) & \(4.47\pm 0.00\) & \(-\) & \(-\) \\ \cline{2-7} & 60029 & H\(\alpha\) & \(25.69\pm 0.00\) & \(372.28\pm 0.00\) & \(535.05\pm 0.00\) & \(-\) & \(-\) \\ \hline & & [OII]\(\lambda\lambda 3726,3728\) & \(-786.9\pm 516.9\) & \(821.8\pm 486.9\) & \(-\) & \(0.5\pm 0.3\) & \(0.4\pm 0.2\) \\ & & HeII\(\lambda 4685\) & \(531.3\pm 92.6\) & \(464.8\pm 106.6\) & \(-\) & \(1.2\pm 0.2\) & \(0.8\pm 0.1\) \\ & & H\(\beta\) & \(694\pm 39\) & \(605\pm 120\) & \(607\pm 120\) & \(0.8\pm 0.2\) & \(0.6\pm 0.1\) \\ & & [OIII]\(\lambda 5006\) & \(43.9\pm 29.3\) & \(106.3\pm 20.9\) & \(-\) & \(0.41\pm 0.08\) & \(0.28\pm 0.05\) \\ & & HeI\(\lambda 5875\) & \(694.9\pm 69.1\) & \(121.7\pm 63.3\) & \(-\) & \(0.5\pm 0.2\) & \(0.4\pm 0.1\) \\ & & [FeX]\(\lambda 6374\) & \(-120.0\pm 370.1\) & \(472.1\pm 219.7\) & \(-\) & \(0.3\pm 0.2\) & \(0.2\pm 0.1\) \\ VT J2012 & & H\(\alpha\) & \(718\pm 12\) & \(729\pm 33\) & \(1083\pm 79\) & \(11.6\pm 0.4\) & \(7.8\pm 0.3\) \\ \cline{2-7} & & [OII]\(\lambda\lambda 3726,3728\) & \(-871.1\pm 62.5\) & \(133.1\pm 51.0\) & \(-\) & \(0.4\pm 0.2\) & \(0.3\pm 0.1\) \\ & & HeII\(\lambda 4685\) & \(196.1\pm 72.2\) & \(241.3\pm 40.3\) & \(-\) & \(0.9\pm 0.2\) & \(0.6\pm 0.1\) \\ & & H\(\beta\) & \(804\pm 169\) & \(370\pm 1067\) & \(370\pm 252\) & \(0.6\pm 0.1\) & \(0.4\pm 0.1\) \\ & & [OIII]\(\lambda 5006\) & \(35.4\pm 63.7\) & \(92.2\pm 50.8\) & \(-\) & \(0.4\pm 0.1\) & \(0.3\pm 0.1\) \\ & & HeI\(\lambda 5875\) & \(707.2\pm 84.8\) & \(557.7\pm 83.2\) & \(-\) & \(2.2\pm 0.3\) & \(1.5\pm 0.2\) \\ & & [FeX]\(\lambda 6374\) & \(-337.0\pm 105.2\) & \(907.3\pm 72.1\) & \(-\) & \(3.2\pm 0.3\) & \(2.1\pm 0.2\) \\ & & H\(\alpha\) & \(687\pm 9\) & \(958\pm 28\) & \(1075\pm 32\) & \(15.0\) & \(\cdots\) \\ \hline \end{tabular}
\end{table}
Table 2: Best-fit parameters of the transient emission lines.
Generally, the [Fe X] coronal line is accompanied by the detection of [Fe VII] transitions, which have a lower ionization potential, and so would be expected to be stronger. One way to suppress [Fe VII] is to invoke high density gas: the [Fe VII] critical density is \(\sim\)10\({}^{7}\) cm\({}^{-3}\), whereas the [Fe X] critical density is \(\sim\)10\({}^{10}\) cm\({}^{-3}\). Alternatively, the [Fe VII] could be suppressed if the ionizing SED is peaked above 250 eV, although we disfavor this possibility given the detection of low ionization potential lines like H\(\alpha\) and He II with similar FWHM. Thus, the detection of [Fe X] without [Fe VII] suggests that there is high density gas in the intermediate-line emitting region. Since we are now confident that there is \(\gtrsim 10^{7}\) cm\({}^{-3}\) gas in the vicinity of the Balmer line emitting region, given the similar line widths of the Balmer and coronal lines, it is plausible that the Balmer emitting region is similarly dense. Note that coronal lines are not detected from VT J2012; however, throughout this paper we will adopt the simplifying assumption that both events are produced by similar mechanisms, so we assume that the previous argument holds for both VT J1008 and VT J2012.
A high density would also explain the lack of intermediate width Oxygen lines. We detect neither intermediate width [O II] nor [O III] lines from either VT J1008 or VT J2012. Given that the ionizing source is such that we observe both Balmer and [Fe X] lines, we would expect Oxygen lines to be detectable, given the intermediate ionization potentials of these transitions. The [O III] line has a critical density \(\sim\)10\({}^{7}\) cm\({}^{-3}\); thus, high densities would explain the lack of a detection.
Based on the previous arguments, we consider it very likely that the intermediate line emitting region has a density \(\gtrsim 10^{7}\) cm\({}^{-3}\) and a radius \(\lesssim 10^{16}\) cm.
We can further constrain the conditions of the line emitting region using the line profiles. We primarily consider the Balmer line profiles, because these lines are among the brightest. The Balmer line profiles from both objects are asymmetric. The Balmer emission from VT J1008 shows a slightly redshifted peak and a long blue tail. VT J2012 shows similarly asymmetric H\(\alpha\) emission. The observed H\(\beta\) emission is narrower and more symmetric, although H\(\beta\) is quite faint from VT J2012, so it is possible that the signal-to-noise is affecting our result. In contrast to the Balmer lines from VT J1008, those from VT J2012 are redshifted by \(\sim\)700 km s\({}^{-1}\), with no significant evolution in line centroid between epochs.
From the later-time, high resolution spectra, we see that the line profile for VT J1008 shows a broad, flat top, with a long blue tail. In the \(\sim\)120 days between ESI observations, the line profile became slightly more peaked towards the red side. Note that the apparent brightening may not be real \(-\) the flux is continuum-normalized, but the continuum level is expected to be different in the two observations because the slit orientations were different. In contrast to that of VT J1008, the line profile for VT J2012 is peaked and symmetric.
Based on these line profiles, the emitting region must be aspherical and, in the case of VT J2012, flowing away from the observer.
In summary, VT J1008 and VT J2012 show transient, intermediate width (\(\sim\)700 \(-\) 1000 km s\({}^{-1}\)) Balmer, He II, He I, and [Fe X] emission. Based on the observed luminosities and line profiles, we consider it likely that the emission is arising from a dense (\(\gtrsim\)10\({}^{7}\) cm\({}^{-3}\)), compact (\(\lesssim\)10\({}^{16}\) cm), aspherical, outflowing emitting region.
### Comparison to published TDE observations
In the rest of this section, we compare the observed emission lines to observations of published transients. We first compare these objects to TDEs. In the next section, we discuss coronal-line emitting transients, some of which are TDEs and some of which are of ambiguous origin. Then, we compare to non-TDE transients. In this section, because we aim to compare to previous TDEs, we will focus on those lines that have been studied in published TDEs; namely, we focus on the Hydrogen and Helium lines.
We begin by discussing the Hydrogen and Helium lines. Optically-detected TDEs are well-established to produce these transient spectral features. These TDEs can be divided into four classes based on their early-time (\(\lesssim 6\) months post-optical flare) spectra: (1) Hydrogen-rich TDEs, which show broad \(\sim\)10\({}^{4}\) km s\({}^{-1}\) Balmer features; (2) Hydrogen and Helium TDEs, which show \(\sim\)10\({}^{4}\) km s\({}^{-1}\) Balmer features and a complex of emission lines near He II\(\lambda\)4686, typically including N III\(\lambda\)4640 and N III\(\lambda\)4100; (3) Helium TDEs, which show only a broad (\(\sim\)10\({}^{4}\) km s\({}^{-1}\)) He II\(\lambda\)4686 line; and (4) featureless TDEs, which show no transient spectral features and have brighter optical flares and, typically, higher redshift host galaxies relative to those of the former three TDE classes.
Our events would appear to most closely resemble the Hydrogen and Helium TDEs based on the detection of both Balmer and He II lines. However, there are many differences between our observed lines and those observed from Hydrogen and Helium TDEs. First, we do not observe N III lines. The N III lines are expected to be produced by the Bowen fluorescence mechanism, which only operates under specific physical conditions. It is feasible that, at late times, this mechanism is no longer able to operate.
Our observed lines also differ from typical TDEs because of their high luminosities and narrow widths. These are highlighted in Figure 5. The left panel of this figure shows the evolution of H\(\alpha\) width and luminosity for a sample of optically-selected TDEs. The luminosities of our lines are both \(\gtrsim 10^{40}\) erg s\({}^{-1}\), and are brightening with time \(\sim\)2 years post-TDE. Very few other optically-detected TDEs have detectable H\(\alpha\) at such late times: most of the available 3\(\sigma\) upper limits are at or below the luminosity detected from our events. The TDEs ASASSN 14li and ASASSN 14ae both have H\(\alpha\) detections \(\gtrsim\)500 days post-TDE, but the observed luminosities are at least a factor of a few fainter than those from our TDEs, and they both show declining H\(\alpha\) luminosities whereas our events are brightening.
The right panel of Figure 5 shows the radius, in units of the gravitational radius \(r_{g}\), of the H\(\alpha\) emitting region for our TDEs, ASASSN-14li, and the typical early-time optically-selected TDE. We calculate this radius from the FWHM of the lines \(v\) as \(r/r_{g}=2(v/c)^{-2}\). For the typical early-time optically-selected TDE, we assume \(v\sim 10^{4}\) km s\({}^{-1}\). The H\(\alpha\) emitting regions of our TDEs lie at much larger radii than those of ASASSN 14li and of early-time observations of optically-selected TDEs. Our TDEs have \(\log r/r_{g}\sim 5.5\), and the radius is decreasing. The typical optically-selected TDE has \(\log r/r_{g}\sim 3\). ASASSN 14li has \(\log r/r_{g}\sim 5\) at \(\sim\)500 days post-TDE, although the radius is increasing and it is possible it evolved to match our events after the last observations.
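As a sanity check on these numbers, the mapping from line FWHM to emitting radius can be evaluated directly. This is a minimal sketch: the function name and fiducial widths are ours, and the relation is written so that narrower lines correspond to larger radii.

```python
import math

C_KMS = 2.998e5  # speed of light [km/s]

def log_r_over_rg(fwhm_kms):
    """log10 of the emitting radius in gravitational radii,
    r/r_g = 2 (v/c)^-2 with v taken as the line FWHM."""
    return math.log10(2.0 * (C_KMS / fwhm_kms) ** 2)

# Typical early-time optically-selected TDE (v ~ 1e4 km/s): log r/r_g ~ 3
print(round(log_r_over_rg(1e4), 1))
# VT J1008 / VT J2012 (v ~ 700-1000 km/s): log r/r_g ~ 5.5
print(round(log_r_over_rg(700.0), 1))
```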
In summary, the lines from our TDEs most closely match those from the Hydrogen+Helium TDE class. However, they are significantly brighter and narrower than any previously observed TDE.
### Comparison to coronal-line emitting transients
There have been a few bona-fide TDEs and a growing sample of ambiguous transients with strong coronal line emission, like that observed from VT J1008. These objects are typically referred to as the extreme coronal line emitters (ECLEs). The first known ECLE, SDSS J0952+2143, was discovered by Komossa et al. (2008, 2009) in a search for galaxies in SDSS with evolving spectral features. This source was associated with X-ray, optical, IR, and UV flares. There is no radio detection reported, although extensive radio follow-up was not performed. The source showed many transient emission lines, including bright coronal, He II, and Balmer features. The coronal lines and He II lines are broad, with FWHM\(\sim\)800 km s\({}^{-1}\). The Balmer lines were decomposed into multiple components: a narrow component consistent with the host galaxy dispersion and redshift, a redshifted broad component, and two unresolved horns on either side of the rest-frame component. The redshifted, broad H\(\alpha\) component had a velocity offset of \(\sim\)560 km s\({}^{-1}\), a FWHM\(\sim\)1930 km s\({}^{-1}\), and a luminosity \(\sim\)10\({}^{41}\) erg s\({}^{-1}\). The redshifted, broad H\(\beta\) component had similar parameters, but with a luminosity implying a Balmer decrement of \(\sim\)9. Both Balmer lines were first detected \(\sim\)1 year after the associated optical flare and remained bright \(\sim\)3 years post-optical flare, although they faded in that time.
Figure 5: Comparison of the H\(\alpha\) emission line from VT J1008 and VT J2012 to optically-selected TDEs (Brown et al., 2017; Hammerstein et al., 2022, E. Hammerstein, private communication) and ECLEs (Wang et al., 2012). In the _left_ panel, we show H\(\alpha\) luminosity lightcurves for VT J1008 (red crosses), VT J2012 (magenta X’s), ECLEs (blue stars), and optically-selected TDEs (black circles). The H\(\alpha\) luminosities of VT J1008 and VT J2012 are much brighter at late times than those of optically-selected TDEs. They are more comparable to the ECLEs. In the _right_ panel, we show the distance of the emitting region from the central SMBH, \(r\), implied from the H\(\alpha\) width. The radio-selected TDEs are at larger radii than almost every TDE and ECLE.
The cause of this transient emission is unknown. The event could be a TDE, but it is also consistent with a supernova with extreme coronal line emission. AGN-like emission lines are detected from the host galaxy, so the emission could also be associated with a flaring AGN. If the source is caused by a TDE, Komossa et al. (2009) argue that the broad lines are likely produced by photoionized stellar debris that has become unbound and forms eccentric streams surrounding the SMBH. They argue that the unresolved, narrow horns are produced by shocks in a neutral medium, but Heng (2010) shows that such a mechanism would require an unreasonable Hydrogen density.
The emission lines from SDSS J0952+2143 are remarkably similar to those observed from VT J1008 and VT J2012, although we do not detect narrow emission at the location of the Balmer lines, and the Oxygen lines detected from our sources are much fainter than those from SDSS J0952+2143. We also do not detect multiwavelength flares analogous to those from SDSS J0952+2143. However, it is plausible that some of the differences between our sources and SDSS J0952+2143 could be reconciled by invoking a TDE in a galaxy with a pre-existing accretion disk and/or a different line of sight.
After the discovery of SDSS J0952+2143, Wang et al. (2012) performed a search for transient coronal-line emitting galaxies in SDSS. They identified a sample of seven non-active galaxies with strong coronal line emission. The host galaxies all had narrow line emission, with six qualifying as BPT H II galaxies and one bordering the LINER and H II regions. Four of the objects showed \(\geq 3\sigma\) variations in their optical continua in the \(\sim\)months-years before the spectroscopic observations, measured by comparing their SDSS spectral and fiber magnitudes. In five of the seven sources, intermediate width emission lines are detected with FWHM \(\sim\)\(880-2600\) km s\({}^{-1}\). The lines were fading in all objects. Broad He II\(\lambda 4686\) is detected in three objects.
We overlay the luminosities and widths of the H\(\alpha\) emission from these ECLEs on the panels in Figure 5. Note that the time since optical flare is very uncertain for these events. We adopted the time between the SDSS spectroscopic and photometric observations for those objects with detected optical flares, and 400 days for those objects without detected optical flares. The broad emission from these events much more closely resembles that from VT J1008 and VT J2012. The primary differences between these ECLEs and our events come from their host galaxies: the Wang et al. (2012) ECLEs and SDSS J0952+2143 show strong nebular emission, whereas our events are in quiescent galaxies. Recent work, however, has suggested that, when considering the full ECLE population, they do tend to have TDE-like host galaxies; i.e., host galaxies that more closely resemble those of VT J1008 and VT J2012.
There is growing evidence that ECLEs are bona-fide TDEs, primarily based on arguments about the required ionizing flux, the optical light curve shapes for those few events with well-sampled light curves, and the similarities of TDE and ECLE host galaxies (Hinkle et al., 2023). However, most of the known ECLEs are in galaxies with nebular emission lines from the host galaxies, rendering this conclusion uncertain: it is impossible to exclude an AGN origin for these events. We still lack a large sample of bona-fide TDEs with coronal line detections. It is, then, intriguing how closely the multiwavelength, transient properties of ECLEs resemble VT J1008 and VT J2012, both of which have quiescent hosts, well-sampled optical light curves, and other multiwavelength data that allow us to argue that they are in fact bona-fide TDEs (see Section 5.2).
### Comparison to ambiguous and non-TDE transients
In addition to TDEs and ECLEs, other transients can produce lines similar to those observed here. In particular, our objects resemble some supernovae: type IIn supernovae have been detected with similar line profiles. For example, SN2012ab is an optically-flaring Type IIn supernova hosted near the nucleus of a spiral galaxy that was discovered by Bilinski et al. (2018). SN2012ab is associated with an intermediate width H\(\alpha\) component of width \(\sim\)4500 km s\({}^{-1}\) that is redshifted by \(\sim\)800 km s\({}^{-1}\). The intermediate width component was first detected \(\sim\)7 days after the optical flare, alongside a broad H\(\alpha\) component with FWHM \(\sim\)20000 km s\({}^{-1}\). The intermediate width component is still detected \(\sim\)1200 days post-event. The late-time spectrum is not of sufficiently high signal-to-noise to constrain the presence of a late-time broad component. Bilinski et al. (2018) argue that SN2012ab is a type IIn supernova based on the observed spectral features and the optical light curve. The unusual H\(\alpha\) emission is attributed to interaction with aspherical circumstellar material. They note that they cannot rule out a TDE origin; however, if the event is a TDE, it would be highly unusual because of the asymmetric material needed to produce the observed emission lines.
Based on the emission lines produced by VT J1008 and VT J2012, we see similar evidence for asphericities as observed for SN 2012ab. In contrast to SN 2012ab, VT J1008 and VT J2012 are hosted by quiescent galaxies. Moreover, no broad H\(\alpha\) component is detected from our events, although our spectra were taken \(>500\) days post-optical flare, and it is possible that a broad component was present at early times.
### Summary
VT J1008 and VT J2012 are associated with Balmer, Helium, and coronal line emission. The line widths are \(\sim\)1000 km s\({}^{-1}\), which is much narrower than typical lines detected from optically-selected TDEs. Based on the line parameters, we suggest that the emission comes from a dense (\(\gtrsim\)\(10^{7}\) cm\({}^{-3}\)) and compact (\(\lesssim\)\(10^{16}\) cm) region. The mass of the ionized, recombination line-emitting gas is likely \(\gtrsim 0.03-0.3\,M_{\odot}\); i.e., the mass is a large fraction of a solar mass. These emission lines do not resemble those detected from the optically-selected TDE sample. Instead, the observed emission lines most closely resemble those observed from the ambiguous class of ECLEs, which have been proposed, though not confirmed, to be TDEs.
## 5 Discussion
### Summary
* VT J1008 and VT J2012 are radio-selected transients in the nuclei of quiescent, green valley host galaxies. The host galaxies have SMBHs with masses \(M_{\rm BH}\sim 10^{5-6}\,M_{\odot}\).
* VT J1008 and VT J2012 have optical counterparts, with lightcurves typical of optically-selected TDEs except that they are slightly fainter and cooler at peak. They do not have detectable X-ray or IR counterparts.
* VT J1008 and VT J2012 have radio counterparts with GHz luminosities \(\sim\)\(10^{38}\) erg s\({}^{-1}\) that have SEDs consistent with wide-angle, low velocity \(\beta\lesssim 0.1\) outflows at radii \(\sim\)0.1 pc.
* VT J1008 and VT J2012 both have transient, intermediate-width Balmer and He II emission detected \(\sim\)2 years post-TDE. Their H\(\alpha\) luminosities are \(\sim\)\(10^{40}\) erg s\({}^{-1}\) and are likely increasing. All the lines have FWHM \(\sim\)\(700-1000\) km s\({}^{-1}\), and may be broadening slightly with time. VT J1008 also has strong He I and [Fe X] emission with similar line widths.
* The observed transient lines detected from these radio-selected events are reminiscent of Hydrogen+Helium TDEs. However, the lines are much more luminous at late-times than other optically-detected TDEs, and they are narrower than is typically observed from optically-selected TDEs.
* Instead, the observed spectral features more closely resemble the emission lines associated with some extreme coronal line emitters. A subset of these objects are known to have intermediate width lines, in some cases with centroids that are redshifted relative to the host (e.g., J0952+2143).
### Are VT J1008 and VT J2012 tidal disruption events?
Before delving into the origin of the transient emission lines, we briefly consider and rule out the possibility that VT J1008 and VT J2012 are something other than tidal disruption events. The three most likely origins for these events are: stellar explosions, active galactic nuclei flares, or TDEs. In the following, we consider each of these possibilities. We assume both events are caused by the same type of event, which is justified given the strong similarities in their multiwavelength properties and host galaxies.
**Stellar explosions.**
We first consider the possibility that these events are stellar explosions. Supernovae can produce radio-loud events with optical flares and transient optical spectral lines. However, the only type of supernova that has been observed to produce radio outflows with velocities \(\sim\)\(0.1c\), as observed for our events, is the SN Ic-BL, so we consider only this type of event to be possible. There are a number of factors that make VT J1008 and VT J2012 unusual for SNe Ic-BL. First, these events are in the nuclei of their host galaxies: their offsets in units of the host half-light radii are consistent with zero. SNe Ic-BL tend to lie in the outskirts of their host galaxies, with a median offset relative to host half-light radius of \(0.7\pm 0.2\) (Japelj et al., 2018). Second, our events are hosted by non-star forming galaxies, whereas SNe Ic-BL hosts tend to be star-forming (only 0.3% of core-collapse supernova hosts are quiescent).
The observed optical light curves are very unusual for SNe Ic-BL: these SNe often, though not always, show rapid post-peak cooling, which is not present for our events (Taddia et al., 2019). While the late-time spectra of SNe Ic-BL are not well constrained, intermediate width features such as we see are unprecedented (S. Anand, private communication).
While we cannot definitively rule out a SN Ic-BL origin, there are many factors that would make our events extremely unusual. Hence, we do not believe our events are associated with stellar explosions.
**Active galactic nuclei flares.**
We next consider active galactic nuclei flares. AGN can produce variability and flaring across the electromagnetic spectrum and over a wide range of timescales. The main evidence against an AGN origin for VT J1008 and VT J2012 is the lack of any evidence that there was active accretion prior to the optical flare. The optical spectra do not show any evidence for strong AGN emission. It is possible that the weak [O III] detections are caused by an AGN, but they would imply a very weak AGN. Moreover, that line could also be caused by weak star formation. The IR colors of the host galaxy also show no evidence for AGN activity; likewise, the lack of an X-ray detection and the lack of optical variability support the hypothesis that these are quiescent galaxies.
From the above evidence, we disfavor an AGN origin. Of course, none of these arguments completely rule out the possibility of a very weak, flaring AGN. In this case, some trigger has caused a large amount of mass to be dumped on the central SMBH. Such events are extremely poorly understood: it is not clear whether such events could even happen without a discrete object like a star venturing near the SMBH. However, given the broad consistency of our events with TDEs and the large complications associated with interpreting low luminosity AGN flares, we do not consider this possibility further.
**Tidal disruption event.**
After ruling out AGN flares and stellar explosions, we are left with TDEs. As we have discussed in this work and in Paper I, the host galaxy, optical lightcurves, and radio emission of VT J1008 and VT J2012 are broadly consistent with optically-selected TDEs. Henceforth, we consider these events to be definitive TDEs.
### The origin of the transient spectral features
In the rest of this paper, we discuss the origin of the transient spectral features. We make the strong but necessary assumptions that (1) the ionization source producing the Balmer emission is the same one that produces the higher ionization lines and (2) the same emission mechanism is active in both VT J1008 and VT J2012.
#### 5.3.1 Are the lines associated with a shock?
In both VT J1008 and VT J2012, we know that fast outflows exist given the observed radio emission. It is not a far stretch to imagine that the transient emission lines are also associated with a shock driven by an outflow. Shocks can produce emission lines in multiple ways, depending on the shock velocity and medium density. If the shocked gas can cool efficiently, it will produce strong free-free emission that can photoionize the surrounding medium. Otherwise, the shock is "nonradiative" and primarily emits through collisional ionization.
We can determine if a shock is radiative by comparing the age to the cooling time. To compute the cooling time, we need to know the density \(n\) and the shock velocity \(v_{s}\). From the shock velocity, we first must compute the shock temperature \(T_{s}\):
\[T_{s}=\frac{2(\gamma-1)}{(\gamma+1)^{2}}\frac{m_{p}}{k_{B}}v_{s}^{2}=2.2\times 10^{7}\,\mathrm{K}\,\bigg{(}\frac{v_{s}}{10^{3}\,\mathrm{km}\,\mathrm{s}^{-1}} \bigg{)}^{2}, \tag{3}\]
where \(\gamma\) is the adiabatic index and we adopt \(\gamma=5/3\). We consider shock velocities in the range \(\sim\)\(500-10^{4}\) km s\({}^{-1}\), which approximately match the observed line widths. Slow shocks \(v_{s}\lesssim 100\) km s\({}^{-1}\) do not produce \(\sim\)1000 km s\({}^{-1}\) emission lines.
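Equation 3 can be checked numerically; a short sketch in cgs units (the function name is ours):

```python
def shock_temperature_K(v_s_kms, gamma=5.0 / 3.0):
    """Post-shock temperature from Eq. 3: T_s = 2(g-1)/(g+1)^2 (m_p/k_B) v_s^2."""
    m_p = 1.6726e-24  # proton mass [g]
    k_B = 1.3807e-16  # Boltzmann constant [erg/K]
    v_cgs = v_s_kms * 1.0e5  # km/s -> cm/s
    return 2.0 * (gamma - 1.0) / (gamma + 1.0) ** 2 * (m_p / k_B) * v_cgs ** 2

# v_s = 1000 km/s gives T_s ~ 2.2e7 K, matching the prefactor in Eq. 3.
print(f"{shock_temperature_K(1.0e3):.1e} K")
```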
Then, we must compute the cooling rate \(\Lambda(T)\), which, following Draine (2011), we approximate as
\[\frac{\Lambda(T)}{\mathrm{erg}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}}=\begin{cases} 2.3\times 10^{-24}\left(\frac{T_{s}}{10^{6}\,\mathrm{K}}\right)^{0.5},&T_{s}>10^{ 7.3}\,\mathrm{K}\\ 1.1\times 10^{-22}\left(\frac{T_{s}}{10^{6}\,\mathrm{K}}\right)^{-0.7},&10^{5} \,\mathrm{K}<T_{s}\leq 10^{7.3}\,\mathrm{K}.\end{cases} \tag{4}\]
At high densities, such as those we will consider, the cooling rate can be suppressed because collisional de-excitation reduces cooling through heavy elements. However, at the high temperatures we consider (\(\gtrsim 10^{7}\) K), cooling through heavy elements is subdominant and this suppression is minimal (Wang et al., 2014), so we do not consider it further.
With these definitions, the cooling time is given by
\[t_{\mathrm{cool}}=\frac{3k_{B}T_{s}}{n\Lambda(T)}\] \[=1.3\,\mathrm{years}\,\bigg{(}\frac{T_{s}}{10^{7}\,\mathrm{K}} \bigg{)}\bigg{(}\frac{n}{10^{6}\,\mathrm{cm}^{-3}}\bigg{)}^{-1}\bigg{(}\frac{ \Lambda(T)}{10^{-22}\,\mathrm{erg}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}}\bigg{)} ^{-1}. \tag{5}\]
If the shock has \(t_{\mathrm{age}}\approx 1\,\mathrm{year}\), then the shock is non-radiative if the density \(n\lesssim 10^{7}\) cm\({}^{-3}\) and \(v_{s}\sim 10^{3}\) km s\({}^{-1}\), or if the shock is near-relativistic and has \(10^{7}\lesssim n/\mathrm{cm}^{-3}\lesssim 10^{8}\).
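The radiative/non-radiative boundary quoted here follows from combining Equations 3-5. The sketch below is our reading of the Draine (2011) fit, with both branches of \(\Lambda(T)\) normalized at \(10^{6}\) K; the function names are ours.

```python
K_B = 1.3807e-16  # Boltzmann constant [erg/K]
YEAR_S = 3.156e7  # one year [s]

def cooling_rate(T):
    """Approximate Lambda(T) [erg cm^3 s^-1] from Eq. 4 (after Draine 2011)."""
    if T > 10 ** 7.3:
        return 2.3e-24 * (T / 1e6) ** 0.5
    if T > 1e5:
        return 1.1e-22 * (T / 1e6) ** -0.7
    raise ValueError("fit valid only for T > 1e5 K")

def t_cool_years(T, n):
    """Cooling time from Eq. 5: t_cool = 3 k_B T / (n Lambda(T)), in years."""
    return 3.0 * K_B * T / (n * cooling_rate(T)) / YEAR_S

# For v_s ~ 1000 km/s (T_s ~ 2.2e7 K) and t_age ~ 1 yr:
print(round(t_cool_years(2.2e7, 1e7), 1))  # > 1 yr at n = 1e7 cm^-3: non-radiative
print(round(t_cool_years(2.2e7, 1e8), 2))  # < 1 yr at n = 1e8 cm^-3: radiative
```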
With these constraints on the parameter space, we now explore possible emission mechanisms for both radiative and non-radiative shocks. First, we consider a non-radiative shock. In this case, we know that the density is relatively low (\(n\lesssim 10^{7}\) cm\({}^{-3}\)) if the shock is non-relativistic. One possible emission mechanism comes from an analogy to type IIn supernovae, which produce similar velocity shocks but in much lower density material (\(n\sim 0.1-10\) cm\({}^{-3}\)). In these events, Balmer emission resembling that which we observe is produced by Balmer dominated shocks, which occur when the pre-shock material is partly neutral. In this case, we can approximate the mass of H\(\alpha\) emitting atoms as:
\[M_{\rm H\alpha}=\frac{L_{\rm H\alpha}t_{\rm H\alpha}m_{\rm H}}{E_{ \rm H\alpha}\epsilon_{\rm H\alpha}}\] \[=438M_{\odot}\frac{L_{\rm H\alpha}}{10^{40}\,{\rm erg\,s^{-1}}} \frac{t_{\rm H\alpha}}{1\,{\rm year}}. \tag{6}\]
Here, \(L_{\rm H\alpha}\) is the average H\(\alpha\) luminosity and \(t_{\rm H\alpha}\) is the duration of the H\(\alpha\) emission. We adopted fiducial values \(L_{\rm H\alpha}\sim 10^{40}\) erg s\({}^{-1}\) and \(t_{\rm H\alpha}\sim 1\) year, which are likely correct to within an order of magnitude. \(m_{\rm H}\) is the mass of a neutral Hydrogen atom, \(E_{\rm H\alpha}\sim 2\) eV is the energy of an H\(\alpha\) photon, and \(\epsilon_{\rm H\alpha}\sim 0.2\) is the fraction of excited Hydrogen atoms that undergo the H\(\alpha\) transition. If the density of the emitting region is \(\lesssim\)\(10^{8}\) cm\({}^{-3}\), then the size of the emitting region must be \(\gtrsim 0.03\) pc. Assuming a source age of \(\sim\)3 years, the outflow that produced the Balmer dominated shocks must have a velocity \(v\gtrsim 10^{4}\) km s\({}^{-1}\). This value is inconsistent with the width of the observed lines, which is expected to correspond to roughly the velocity of the shock for Balmer dominated shocks. Because of this inconsistency, we do not consider a Balmer dominated shock a feasible cause of the observed emission. However, most modelling of Balmer dominated shocks, on which we base our discussion, assumes low densities applicable to supernova remnants. It is possible that similar shocks in high density environments could have different properties, in which case something similar to a Balmer dominated shock could produce the observed emission. Further exploration of this is beyond the scope of this work.
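Equation 6 and the size and velocity estimates that follow can be reproduced as below. This is a sketch: the variable names are ours, and we use the exact H\(\alpha\) photon energy of 1.89 eV, which the text rounds to \(\sim\)2 eV.

```python
import math

M_H = 1.6735e-24   # hydrogen atom mass [g]
MSUN = 1.989e33    # solar mass [g]
PC = 3.086e18      # parsec [cm]
YEAR_S = 3.156e7   # one year [s]

# Eq. 6: mass of neutral hydrogen needed to supply the observed H-alpha photons.
L_ha = 1e40              # erg/s, fiducial H-alpha luminosity
t_ha = 1.0 * YEAR_S      # s, fiducial duration
E_ha = 1.89 * 1.602e-12  # erg, H-alpha photon energy (~2 eV)
eps = 0.2                # fraction of excitations yielding H-alpha
M = L_ha * t_ha * M_H / (E_ha * eps)
print(round(M / MSUN))  # ~440 Msun, consistent with the 438 Msun prefactor

# Minimum emitting-region radius for n <= 1e8 cm^-3, and the implied speed
# if that radius must be reached in a ~3 year source age.
n_max = 1e8
V = M / (M_H * n_max)                       # cm^3
r = (3.0 * V / (4.0 * math.pi)) ** (1 / 3)  # cm
print(round(r / PC, 3))                     # ~0.03 pc
print(round(r / (3.0 * YEAR_S) / 1e5))      # ~1e4 km/s, >> the ~1000 km/s line widths
```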
We now move to the possibility of a radiative shock. In this case, the shock has caused surrounding gas to heat up to \(\sim\)\(10^{7}\) K as described earlier, which is cooling quickly via free-free emission. This emission ionizes the surrounding gas, which is likely at a similar density to the shocked, cooling gas (\(n\gtrsim 10^{8}\) cm\({}^{-3}\)). From Equation 4, the cooling rate for gas at a temperature \(\gtrsim\)\(2\times 10^{7}\) K is \(\Lambda(T)\gtrsim 10^{-23}\) erg cm\({}^{3}\) s\({}^{-1}\). Assuming \(n_{e}\gtrsim 10^{7}\) cm\({}^{-3}\) in a region of radius \(r\sim 0.01\) pc, we have a total luminosity of \(\gtrsim 10^{41}\) erg s\({}^{-1}\). Most of these photons will be Hydrogen- and Helium-ionizing given the high temperature of the free-free emitting region.
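The quoted luminosity follows from an order-of-magnitude estimate \(L\sim n_{e}^{2}\Lambda(T)V\) for a sphere of the stated size; a sketch using the fiducial values from this paragraph:

```python
import math

n_e = 1e7             # cm^-3, electron density
Lam = 1e-23           # erg cm^3 s^-1, Lambda(T) at T >~ 2e7 K (Eq. 4)
r = 0.01 * 3.086e18   # cm, emitting radius of 0.01 pc
V = 4.0 / 3.0 * math.pi * r ** 3
L_ff = n_e ** 2 * Lam * V  # free-free luminosity [erg/s]
print(f"{L_ff:.1e}")  # ~1e41 erg/s, enough to power the observed lines
```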
Typical models of the emission lines produced by radiative shocks predict Oxygen lines (e.g., Allen et al., 2008). In our case, however, we would not expect to see Oxygen lines given the high density of the material and the low critical densities of the typical lines. Instead, we would only expect to see recombination lines, such as the observed Balmer and Helium emission, and lines with very high critical densities, such as [Fe X]. The detailed modelling required to constrain this possibility quantitatively is beyond the scope of this paper, but from this qualitative discussion we believe it feasible that the observed emission could be produced by a radiative shock.
Figure 6: An example cartoon geometry that could produce either redshifted emission without a blueshifted counterpart, or non-shifted emission. We invoke a dusty torus that is misaligned from the TDE-produced accretion disk. The blue clouds represent accretion disk winds, the black clouds represent dense gas in the torus, and the grey clouds represent less dense gas at larger distances. When the disk winds slam into the dense torus, radiative shocks are produced. The resulting free-free emission photoionizes gas in the vicinity, including outflow gas launched from the disk, producing the observed emission lines. In the _left_ panel, the blueshifted component is obscured from the observer by the dusty torus. In the _right_ panel, no redshifted or blueshifted components are produced.
Now that we have established that the observed lines could be produced by a radiative shock, we must consider the centroid offsets from the host redshifts. Detailed modelling of the line profiles is beyond the scope of this work. The intermediate width lines from VT J2012 are redshifted relative to the host galaxy, whereas the lines from VT J1008 are near the host redshift. There are two possible explanations for the line offsets: (1) the SMBH that produces VT J2012 is moving or (2) the emitting gas is outflowing. We are pursuing follow-up to constrain (1) and will consider it in a future paper; here, we consider (2). While it is possible that, by chance, we simply do not detect a blueshifted counterpart, with a shock model it is feasible to produce a geometry where only redshifted velocities are possible.
In Figure 6, we show a cartoon of one model that can produce the observed lines (not to scale). First, we suppose that the TDE has produced an accretion disk that is producing disk winds (see Proga, 2007, for possible disk wind launch models). These winds are outflowing, and collide with the circumnuclear medium (CNM) of the galaxy to produce the radiative shocks. While the structure of the CNM is poorly constrained, it is feasible that it is in a torus (or some extended, axisymmetric) structure, as is known to exist in AGN host galaxies. There is no a priori reason that this axisymmetric dust structure need be aligned with the TDE accretion disk: if both the disk and torus orientations are set by the SMBH spin direction, gravitational precession would cause the disk orientation to change relative to that at the time the dusty torus formed. If the disk orientation is related to the orbit of the disrupted star, it will be independent of the torus orientation. In Figure 6, we show two possible orientations. In the left panel, the torus is inclined relative to the disk, and in the right panel, the disk and torus orientations are perpendicular to each other. In the left panel, the disk wind clouds will tend to collide with the edge of the dusty structure. The clouds that are outflowing away from the observer (redshifted) will be visible, but those flowing towards the observer (blueshifted) are seen through a large column of dust. In the right panel, all of the clouds are visible, but no blueshift or redshift is expected. While this is a cartoon model, and it is unclear whether this dust structure is expected, it could reproduce the observed geometry.
In summary, a radiative shock model invoking an axisymmetric dusty structure misaligned with the TDE accretion disk could reproduce the observed lines, including their widths and offset from host redshift.
#### 5.3.2 Are the lines photoionized by a central source?
If the lines are not associated with the shock, they are likely photoionized by a central source associated with the accretion induced by the TDE, as is observed at early times. The observed lines would thus be the evolved version of the early time lines observed from optically-selected TDEs. This leads to two questions: (1) can the models that produce the early time TDE emission also explain the late-time emission, and (2) why do we detect these lines in these radio-detected, optically-selected TDEs, when they do not seem to be present in most optically-selected TDEs?
The evolution of early-time TDE transient lines is not well explored, but available observations suggest that they tend to fade within \(\sim\)1 year. Before this work, none had been detected above 10\({}^{40}\) erg s\({}^{-1}\) at times \(\gtrsim\)1 year post-optical peak. The only strong detection was from the extensively studied radio-emitting TDE ASASSN 14li, which had an H\(\alpha\) detection \(\sim\)1.5 years post-optical peak at a luminosity \(7\times 10^{38}\) erg s\({}^{-1}\).
The origin of these early-time lines is still debated. Roth and Kasen (2018) presented a model where the lines originate from an extended, optically thick envelope surrounding the SMBH. The envelope reprocesses soft X-ray photons emitted during the accretion of the stellar debris. A fraction of the reprocessed emission produces an optical continuum, corresponding to the observed optical flares. Hydrogen and helium in the envelope become ionized and produce the observed emission lines, but because the envelope is optically thick, the Balmer lines are suppressed relative to the Helium emission. The resulting line profiles were analyzed by Roth and Kasen (2018), who show that an optically-thick envelope will produce \(\sim\)10\({}^{4}\) km s\({}^{-1}\) emission lines due to electron scattering, without requiring high velocity dispersion gas. If the optically-thick envelope is outflowing, the line profile will not be Gaussian but instead will have a blueshifted peak with an extended red wing. With time, the emission lines will narrow as the density decreases and electron scattering reduces. The time evolution of the line luminosities relative to the optical continuum level has not been explored in depth.
Narrower lines may be expected at late times if the Roth and Kasen (2018) model is correct. The line profiles that we observe, however, do not match those predicted by Roth and Kasen (2018). In the case of stationary gas, a symmetric line profile is expected, whereas outflowing gas would produce a blueshifted peak with a redshifted tail. In the observations of both VT J1008 and VT J2012, we see a redshifted peak with a blueshifted tail. It is possible that altering the geometry of the envelope could produce the observed emission, but the required modelling is beyond the scope of this work.
Another challenging aspect of a model where the lines are produced by photoionization from a central source is the association with radio emission. There is no reason, a priori, that we would expect the intermediate width lines to preferentially occur in radio-emitting systems if they are produced by photoionization. One possible explanation is that both these lines and radio-emitting shocks are produced by TDEs with slow accretion rate decays, in which case the photoionizing continuum can remain sufficiently strong at late times to produce the observed emission lines. This explanation is subject to significant theoretical uncertainties; in particular, there is no expectation that events with slower accretion rate decays will tend to cause radio emission. We also have no direct observational evidence that long-lived TDEs tend to produce radio emission. In Paper I, we saw no significant correlation between the decay of the optical light curve and the presence of radio emission. We do not detect any remarkable X-ray emission from these sources.
Alternatively, we can invoke a gas-rich environment to explain both the radio emission and the spectral features. Such an environment has been invoked for coronal line emitters in the past. Then, the radio detections are caused by shocks in the gas. While this is plausible, it is unclear why this would produce the unusual line profiles that are observed, nor why the emission would be entirely redshifted in the case of VT J2012. Unlike in the case of shock ionization, there is no clear geometry that could produce the observed redshift, unless the SMBH is recoiling or in a binary, which we will constrain in a future paper. This tension becomes stronger if we include events like J0952+2143, which also shows redshifted intermediate width emission. It seems improbable, though not impossible, that both of these are TDEs produced by recoiling or binary SMBHs. Instead, we require dense, rapidly outflowing, asymmetrically-distributed photoionized gas. The gas could be the outflowing stellar debris, but, as we discussed in Section 4, the mass of the ionized gas is a large fraction of a solar mass. Unless these events were caused by the disruption of high mass (\(\gtrsim\)a few solar mass) stars, we require that most of the unbound debris remains in a compact region. This is not expected based on current TDE theory.
In summary, while photoionization from a central source is a feasible model, we prefer a shock ionized model. It is possible that detailed models of the evolution of the transient emission lines from TDEs and the dust geometry and kinematics in the circumnuclear medium could reproduce the observed emission, but we do not currently have strong evidence to favor this model.
## 6 Conclusions
We have presented the multiwavelength properties of two radio-selected TDEs. The TDEs were selected from our sample of six radio-selected, optically-detected TDEs from the VLA Sky Survey. They were the only two TDEs in quiescent galaxies, and they showed unusual, intermediate-width Balmer and helium emission. These events were otherwise fully consistent with optically-selected TDEs. We discussed the origin of the intermediate width emission lines in detail, and argued that they likely originate from a radiative shock. Alternatively, the lines could originate from outflowing, asymmetric, dense gas in the circumnuclear medium that is photoionized by the TDE, but we marginally disfavor this model.
One of the most intriguing findings in this work is that the transient spectral features observed from these two radio-selected TDEs share many characteristics with those from the ambiguous class of coronal line emitting transients, the ECLEs. This connection provides yet more evidence that ECLEs are caused by TDEs. Moreover, just as early time transient spectral features from TDEs allow the events to be subdivided into classes with different properties, these late-time spectral features may allow for a new TDE classification system: featureless late-time spectra, the extreme coronal line emitters with intermediate width recombination lines, and the extreme coronal line emitters without intermediate width recombination lines. These different classes likely correspond to physically different events; e.g., events with different amounts of circumnuclear material and/or those that do and do not allow for the launch of outflows. In future work, we hope to obtain late-time spectra for a large sample of TDEs, with the aim of further developing this classification system and pinning down the physical causes of the late-time emission.
# Universal Weak Coreset

Ragesh Jaiswal, Amit Kumar

arXiv:2305.16890, 2023-05-26 (http://arxiv.org/abs/2305.16890v1)
###### Abstract
Coresets for \(k\)-means and \(k\)-median problems yield a small summary of the data, which preserve the clustering cost with respect to any set of \(k\) centers. Recently coresets have also been constructed for constrained \(k\)-means and \(k\)-median problems. However, the notion of coresets has the drawback that (i) they can only be applied in settings where the input points are allowed to have weights, and (ii) in general metric spaces, the size of the coresets can depend logarithmically on the number of points. The notion of _weak coresets_, which have less stringent requirements than coresets, has been studied in the context of classical \(k\)-means and \(k\)-median problems. A weak coreset is a pair \((J,S)\) of subsets of points, where \(S\) acts as a summary of the point set and \(J\) as a set of potential centers. This pair satisfies the properties that (i) \(S\) is a good summary of the data as long as the \(k\) centers are chosen from \(J\) only, and (ii) there is a good choice of \(k\) centers in \(J\) with cost close to the optimal cost. We develop this framework, which we call _universal weak coresets_, for constrained clustering settings. In conjunction with recent coreset constructions for constrained settings, our designs give greater data compression, are conceptually simpler, and apply to a wide range of constrained \(k\)-median and \(k\)-means problems.
## 1 Introduction
Center-based clustering problems such as \(k\)-median and the \(k\)-means are important data processing tasks. Given a set of center locations \(F\subset\mathcal{X}\) and a set \(X\subset\mathcal{X}\) of \(n\) points in a metric space \((\mathcal{X},D)\), and a parameter \(k\), the goal here is to partition the set of points into \(k\)_clusters_, say \(X_{1},\ldots,X_{k}\), and assign the points in each cluster to a corresponding _cluster center_, say \(c_{1},\ldots,c_{k}\in F\) respectively, such that the objective \(\sum_{i=1}^{k}\sum_{x\in X_{i}}D(x,c_{i})^{z}\) is minimized. Here \(z\) is a parameter which is 1 for \(k\)-median and 2 for \(k\)-means. In the past decade, there has been significant effort in designing coresets for such settings. Given a \(k\)-median or \(k\)-means clustering instance as above, a coreset with parameter \(\varepsilon\) is a weighted subset \(S\) of points in the metric space with the following property: for every set \(C\) of \(k\) points in the metric space, the assignment cost of \(X\) to \(C\) is within \((1\pm\varepsilon)\) of that of \(S\). More formally, let \(w(x)\) denote the weight of a point \(x\in S\), and for a point \(x\in X\), let \(D(x,C)\) be the distance between \(x\) and the closest point in \(C\). Then the following condition is satisfied for every subset \(C\) of \(k\) points (where \(z=1\) or \(z=2\) depending on the clustering problem being considered):
\[(1-\varepsilon)\sum_{x\in S}w(x)\cdot D(x,C)^{z}\leq\sum_{x\in X}D(x,C)^{z}\leq (1+\varepsilon)\sum_{x\in S}w(x)\cdot D(x,C)^{z}. \tag{1}\]
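To make condition (1) concrete, here is a minimal numerical illustration (ours, not part of the paper): a 1-D toy instance where a uniformly sampled, uniformly weighted subset plays the role of \(S\), and the two sides of (1) are compared over all choices of \(k\) centers drawn from \(X\). Note that uniform sampling does not in general yield a coreset; the sketch only shows how the condition would be checked.

```python
import itertools
import random

def cost(points, weights, centers, z):
    # Assignment cost when every point goes to its closest center.
    return sum(w * min(abs(x - c) for c in centers) ** z
               for x, w in zip(points, weights))

random.seed(0)
X = [random.uniform(0, 10) for _ in range(40)]

# A crude candidate summary: 10 uniform samples, each of weight |X|/|S|
# (real coreset constructions use importance sampling instead).
S = random.sample(X, 10)
v = [len(X) / len(S)] * len(S)

k, z = 2, 2
ratios = [cost(X, [1.0] * len(X), C, z) / cost(S, v, C, z)
          for C in itertools.combinations(X, k)]
# Condition (1) asks that every ratio lie in [1/(1+eps), 1/(1-eps)].
print(min(ratios), max(ratios))
```

The spread of the printed ratios quantifies how far this naive summary is from being an \(\varepsilon\)-coreset for the chosen instance.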
The notion of coresets is useful for several reasons: (i) There are efficient algorithms for constructing small-sized coresets. Hence, some of the fastest known algorithms for \(k\)-means and \(k\)-median problems proceed in a two-step fashion: first, find a succinct coreset, and then run a less efficient algorithm on the coreset; (ii) in streaming settings, where one cannot afford to store the entire dataset, a coreset provides a summary of the data without compromising on the quality of clustering. Further,
it is well known that coresets from two distinct data sets can be composed to yield a new coreset for the union of these two datasets. Hence, coresets are amenable to settings where data arrives over time; (iii) in scenarios where the set of \(k\) centers may change over time, a coreset represents an efficient way of computing the clustering cost.
For most applications, the requirements of a coreset may seem too strong. Indeed, a less stringent notion of _weak coreset_ was defined by [13]. A weak coreset, with a parameter \(\varepsilon\), for a point set \(X\) as above is a pair \((J,S)\) of subsets of points in the metric space, with \(S\) being a weighted subset of points, such that (i) the condition (1) is satisfied for all subsets \(C\), where \(|C|=k\) and \(C\subseteq J\); and (ii) there is a subset \(C\) of \(k\) centers in \(J\) such that the assignment cost of \(X\) to \(C\) is within \((1+\varepsilon)\) of the optimal clustering cost of \(X\). The motivation for defining a weak coreset is that one could obtain weak coresets with better guarantees than a coreset. Indeed, this shall be the case in the problems considered in this work.
To understand why weak coresets may have better guarantees than coresets, we briefly discuss coreset construction techniques. Typical constructions use random sampling-based ideas. One starts with an initial set of \(O(k)\) centers obtained by a fast approximation algorithm. For each of these centers \(c\in C\), we partition the data into "rings" of geometrically increasing size around \(c\). From each of these rings, one samples \(poly(\frac{k}{\varepsilon})\) points and appropriately assigns them weights - these weighted sampled points "represent" the points in the ring as far as \(c\) is concerned, i.e., their assignment cost to \(c\) is very close to that of the original set of points in the ring with high probability. These sampled points form the desired coreset. However, for the coreset property to hold, these sampled points must have near-optimal assignment cost for _every_ set of \(k\) centers. Since there are about \(n^{k}\) possibilities for the choice of \(k\) centers, we need to sample \(\big{(}poly(\frac{k}{\varepsilon})\cdot\log n\big{)}\) points from each ring to ensure the coreset property. In geometric settings, concepts such as an \(\varepsilon\)-net and \(\varepsilon\)-centroid set have been used to reduce the coreset size. However, in general metric spaces, there are lower bounds (see [1]) suggesting that the size of the coreset will have a dependency on \(\log n\).
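The ring decomposition described above can be sketched as follows (an illustrative 1-D toy of ours, not the construction of [3, 6]; the helper names and the base radius are assumptions): points are bucketed into geometrically growing distance rings around a center, and each ring is summarised by a small uniform sample whose weights preserve the ring's total weight.

```python
import math
import random

def ring_index(dist, base=0.1):
    # Ring 0 holds points within distance `base` of the center;
    # ring j >= 1 holds points with base*2^(j-1) <= dist < base*2^j.
    if dist < base:
        return 0
    return 1 + int(math.log2(dist / base))

def ring_partition(points, center, base=0.1):
    rings = {}
    for x in points:
        rings.setdefault(ring_index(abs(x - center), base), []).append(x)
    return rings

def sample_ring(ring, m, rng):
    # m uniform samples; weight |ring|/m preserves the ring's total weight.
    m = min(m, len(ring))
    return [(x, len(ring) / m) for x in rng.sample(ring, m)]

rng = random.Random(1)
X = [rng.uniform(0, 100) for _ in range(500)]
rings = ring_partition(X, center=50.0)
summary = [p for ring in rings.values() for p in sample_ring(ring, 8, rng)]
```

In the real construction the per-ring samples are larger (\(poly(\frac{k}{\varepsilon})\cdot\log n\)) and drawn with carefully chosen probabilities, but the bucketing-then-sampling structure is the same.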
Weak coresets allow us to remove the dependency on \(\log n\) even in general metric spaces. Since the near-optimal clustering guarantees need to hold with respect to \(k\) centers chosen from \(J\) only, the set of such possibilities reduces to \(|J|^{k}\). Thus, a small-sized \(J\) would typically imply a small-sized sample \(S\) as well. Further, weak coresets allow us to maintain a near-optimal clustering in streaming setting. Indeed, the sets \(J\) and \(S\) can be constructed in a streaming setting. Since the set of \(k\) centers needs to be selected from \(J\) only, and each can be tested with respect to \(S\), we can also maintain a set of near-optimal \(k\) centers in a streaming setting.
So far, our discussion has focused on the classical \(k\)-median and \(k\)-means settings. However, there has been significant recent activity in the more general class of _constrained_ clustering problems. A constrained clustering problem specifies additional conditions on a feasible partitioning of the input points into \(k\) clusters. For example, the \(r\)-gathering problem requires that each cluster in a feasible partitioning must contain at least \(r\) data points. Similarly, the well-known _capacitated_ clustering problem specifies an upper bound on the size of each cluster. Constrained clustering formulations can also capture various types of _fairness_ constraints: each data point has a _label_ assigned to it, and we may require upper or lower bounds on the number (or fraction) of points with a certain label in each cluster. Some of these constrained problems are discussed in Section 4.
Coresets for constrained clustering settings were recently constructed by [3, 6]. Note that the standard notion of coreset is meant to preserve the cost of an assignment where points get assigned to the closest center. This prevents using standard coresets in constrained clustering settings where a point may not necessarily get assigned to its closest center. Recent work [3, 6] design "assignment-preserving" coresets that allows their use in constrained settings. In this work, we generalize the notion of weak coresets to _universal weak coresets_ for constrained clustering settings. The underlying idea is the same as that of a weak coreset, i.e., we need a weighted subset \(S\) of points along with a set \(J\) of potential center locations. But now, this pair has the same guarantees as a weak coreset for _any_ constrained clustering problem. This universal guarantee has a feature that we need not know in advance the actual constrained clustering problem being solved.
The notion of a universal weak coreset also has the following subtle application. In some specific settings, there is a distinction between known algorithms for weighted and unweighted settings. More specifically, there exist constrained clustering problems, where even if we are given a small-sized set \(S\) of points, efficient algorithms for a near-optimal set of \(k\) centers with respect to \(S\) are known only if the point set \(S\) is unweighted. For example, a recent development [8] in the
median problem in the Ulam metric has broken the \(2\)-approximation barrier. However, their \((2-\delta)\)-approximation algorithm works only on unweighted input permutations. In such settings, we may not be able to efficiently find a good set of centers even if \(S\) is a coreset. However, when given a weak coreset \((J,S)\), we know that we need to look for centers that are subsets of \(J\) only, and we can use the cost preservation property of the weighted set \(S\) to find good centers from \(J\). This allows us to efficiently handle such constrained clustering problems as well.
**Breaking the coreset \(\log n\) barrier.** Since it is known [1] that the \(\log n\) factor in the size of a coreset is unavoidable in general metric spaces, we must relax the notion of a coreset to break the \(\log n\) barrier. Our notion of a universal weak coreset provides a framework for an appropriate relaxation that allows us to break the \(\log n\) barrier. More specifically, we relax the condition on the set \(J\) to: there exists a subset \(C\) of \(k\) centers in \(J\) such that the assignment cost of \(X\) to \(C\) is within \((\alpha+\varepsilon)\) of the optimal clustering cost of \(X\), where \(\alpha\) is allowed to be \(>1\). Moreover, the _universal_ property on \(J\) says that this \((\alpha+\varepsilon)\)-approximation holds with respect to _any_ target clustering (not only the optimal Voronoi partitioning). The property on the set \(S\) remains unchanged. We call this an \(\alpha\)-universal weak coreset. Note that an \(\alpha\)-universal weak coreset helps to find an \(\alpha\)-approximate solution. The relaxation from a \((1+\varepsilon)\) to an \((\alpha+\varepsilon)\) guarantee is not a significant compromise if \(\alpha\) is the best approximation guarantee known for a constrained clustering problem, which is indeed true for several constrained problems we discuss in this paper. On the other hand, this relaxation allows the universal weak coreset size, \((|J|+|S|)\), to be \(poly(\frac{k}{\varepsilon})\), i.e., independent of \(n\). Our main results include constructions of such universal weak coresets:
Informal result: _We give a construction of a \(3\)-universal weak coreset for the \(k\)-median and a \(9\)-universal weak coreset for the \(k\)-means problem in general metric spaces (the \(3,9\) factors improve to \(2,4\) for the special case when \(X\subseteq F\)). We also give a \(1\)-universal weak coreset construction for \(k\)-median/means in the Euclidean setting. All these have size \(poly(\frac{k}{\varepsilon})\)._
As applications, we discuss how to obtain an \(\alpha\) approximate solution from an \(\alpha\)-universal weak coreset, for arbitrary versions of constrained clustering problems, such as balanced clustering, fair clustering, \(l\)-diversity clustering, and potentially many more.
**Related work.** Two decades ago, coresets were introduced [17] primarily as a tool to design streaming algorithms for the \(k\)-median/means problems. Subsequently, they became an independent computational object of study, and some remarkable techniques and results [16; 9; 13; 20; 12; 14] have been obtained. More recent developments [21; 18; 4; 19; 2; 7; 10] have improved the size bounds of coresets in various metrics. Recent developments have also been on coresets for constrained settings [3; 6], which are most relevant to our work.
**Organization.** In the next section, we define the notion of a universal weak coreset. In Section 3, we will see constructions of such coresets. Finally, in Section 4, we will see applications of universal weak coresets in finding approximate solutions to several constrained clustering problems.
## 2 Universal Weak Coreset
We define the notion of universal weak coreset formally in this section. We shall use \([k]\) to denote the set \(\{1,...,k\}\). In the discussion, 'with high probability' should be interpreted as with a probability of at least 0.99. Let \(\mathcal{X}\) denote a metric space with metric \(D\) defined on it. We now formally define a constrained clustering problem. While describing an instance \(\mathcal{I}\), we would like to separate out the actual constraints on feasible clusterings and the underlying clustering instance. A clustering instance \(\mathcal{I}^{\prime}\) is given by a tuple \((X,F,w,k)\), where \(X\) is the set of all input points with a corresponding weight function \(w:X\rightarrow\mathbb{R}^{+}\), a set \(F\) of potential center locations and a value \(k\), which denotes the number of clusters.
A constrained clustering instance consists of a tuple \((X,F,w,k)\) as above and a \(k\)-tuple \(\Gamma=(t_{1},\ldots,t_{k})\) of non-negative real values such that \(\sum_{i\in[k]}t_{i}=\sum_{x\in X}w(x)\). Intuitively, the value \(t_{i}\) denotes the total weight of the points assigned to the \(i^{th}\) cluster. However, a point in \(X\) can be partially assigned to several clusters; but the sum of these partial weight assignments should equal \(w(x)\). In other words, an assignment is given by a mapping \(\sigma:X\times[k]\rightarrow\mathbb{R}^{+}\), such that \(\sum_{i\in[k]}\sigma(x,i)\,=\,w(x)\) for each \(x\in X\). An assignment \(\sigma\) is said to be _consistent_ with
\(\Gamma=(t_{1},\ldots,t_{k})\), denoted \(\sigma\sim\Gamma\), if \(\sum_{x\in X}\sigma(x,i)=t_{i}\) for all \(i\in[k]\). Thus, the \(k\)-tuple \(\Gamma\) denotes how the weights of the points in \(X\) get partitioned into the \(k\) clusters. Given an instance \(\mathcal{I}=((X,F,w,k),\Gamma)\) of constrained clustering, and a set \(C\subseteq F\) of \(k\) centers, the clustering cost, denoted \(\mathsf{cost}_{z}(X,w,C,\Gamma)\), where \(z=1\) or \(2\), is defined as follows:
\[\mathsf{cost}_{z}(X,w,C,\Gamma)\equiv\min_{\sigma\sim\Gamma}\Bigg{\{}\sum_{i= 1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D(x,c_{i})^{z}\Bigg{\}}.\]
Now the optimal cost of clustering over the choice of centers \(C\) is denoted as follows:
\[\mathsf{opt}_{z}(X,w,\Gamma)\equiv\min_{C:C\subseteq F,|C|=k}\{\mathsf{cost}_ {z}(X,w,C,\Gamma)\}.\]
We are now ready to define the notion of weak coresets. In the following, the parameter \(z\) shall be either \(1\) or \(2\). We shall also fix a parameter \(\varepsilon>0\) for the rest of the discussion. This should be treated as an arbitrarily small but positive constant.
**Definition 1** (\(\alpha\)-Universal Weak Coreset).: _Given a clustering instance \(\mathcal{I}=(X,F,w,k)\), an \(\alpha\)-universal weak coreset is a tuple \((J,S,v)\), where \(J\subset F\) is a subset of potential center locations, and \(S\subset X\) is a weighted subset of points with weight function \(v:S\to\mathbb{R}^{+}\), such that, for any assignment \(\sigma:X\times[k]\to\mathbb{R}^{+}\), the following conditions hold with high probability:_
1. \(J\) _contains a subset_ \((c_{1},...,c_{k})\) _with_ \[\sum_{i=1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D(x,c_{i})^{z}\leq(\alpha+ \varepsilon)\cdot\sum_{i=1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D(x,c_{i}^{*})^{z}.\] _where_ \((c_{1}^{*},...,c_{k}^{*})\) _is the optimal center set that respects_ \(\sigma\)_, i.e.,_ \((c_{1}^{*},...,c_{k}^{*})=\arg\min_{(s_{1},...,s_{k})}\left\{\sum_{i=1}^{k} \sum_{x\in X}\sigma(x,i)\cdot D(x,s_{i})^{z}\right\}\)__
2. _For every subset_ \(C\subseteq J\)_,_ \(|C|=k\) _and every_ \(\Gamma\)_:_ \[\mathsf{cost}_{z}(X,w,C,\Gamma)\in(1\pm\varepsilon)\cdot\mathsf{cost}_{z}(S,v,C,\Gamma).\]
_The size of a weak coreset \((J,S,v)\) is defined as \((|J|+|S|)\)._
An \(\alpha\)-universal weak coreset allows us to summarise the dataset so that this summary is sufficient to obtain an \((\alpha+\varepsilon)\)-approximate solution to any constrained version of the clustering problem in time that depends only on the size \((|J|+|S|)\) of the coreset. This could lead to fast approximation algorithms if the universal coreset construction is efficient and its size is independent of the data size, \(n=|X|+|F|\). In the next section, we will see that this is indeed possible. Let us first see a canonical approximation algorithm that finds an \((\alpha+\varepsilon)\)-approximate solution from an \(\alpha\)-universal weak coreset.
**Theorem 1**.: _Consider a clustering instance \((X,F,w,k)\) and let \((J,S,v)\) be an \(\alpha\)-universal weak coreset for it. Given a constrained clustering instance \(\big{(}(X,F,w,k),\Gamma\big{)}\), there is an algorithm \(\mathcal{A}\) that, with high probability, outputs a set of \(k\) centers \(C\subseteq F\) such that:_
\[\mathsf{cost}_{z}(X,w,C,\Gamma)\leq(\alpha+\varepsilon)\cdot\mathsf{opt}_{z}(X,w,\Gamma).\]
_Moreover, the running time of \(\mathcal{A}\) is \(\tilde{O}(|J|^{k}\cdot|S|)\)._
Proof.: The algorithm tries out all ordered subsets \(C:=(c_{1},\ldots,c_{k})\) of size \(k\) of \(J\). For each such subset, one can find an assignment \(\sigma:S\times[k]\to\mathbb{R}^{+}\) that is consistent with \(\Gamma\) and minimizes \(\mathsf{cost}_{z}(S,v,C,\Gamma)\). This can be done by setting up a suitable min-cost flow network. Thus, we can efficiently compute \(\mathsf{cost}_{z}(S,v,C,\Gamma)\). Finally, we output the subset \(C\) for which \(\mathsf{cost}_{z}(S,v,C,\Gamma)\) is minimized. The desired result follows easily from the properties of a universal weak coreset.
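The proof's enumerate-and-solve scheme can be sketched in code (an illustrative toy of ours, not the authors' implementation): for each \(k\)-subset \(C\) of \(J\), computing \(\mathsf{cost}_{z}(S,v,C,\Gamma)\) is a transportation problem with supplies \(v\) and demands \(\Gamma\), solved here by textbook successive shortest paths; points live on the real line purely for brevity.

```python
import itertools

def transport_cost(costs, supply, demand, eps=1e-9):
    """Min-cost transportation (row sums = supply, column sums = demand)
    via successive shortest paths with Bellman-Ford on the residual graph."""
    m, n = len(supply), len(demand)
    flow = [[0.0] * n for _ in range(m)]
    supply, demand = list(supply), list(demand)
    while max(demand) > eps:
        INF = float("inf")
        dist = [INF] * (m + n)           # nodes: rows 0..m-1, columns m..m+n-1
        parent = [None] * (m + n)
        for i in range(m):
            if supply[i] > eps:
                dist[i] = 0.0            # rows with leftover supply are sources
        for _ in range(m + n):           # Bellman-Ford: residual costs may be negative
            changed = False
            for i in range(m):
                for j in range(n):
                    if dist[i] + costs[i][j] < dist[m + j] - eps:
                        dist[m + j], parent[m + j] = dist[i] + costs[i][j], i
                        changed = True
                    if flow[i][j] > eps and dist[m + j] - costs[i][j] < dist[i] - eps:
                        dist[i], parent[i] = dist[m + j] - costs[i][j], m + j
                        changed = True
            if not changed:
                break
        j = min((t for t in range(n) if demand[t] > eps), key=lambda t: dist[m + t])
        path, node = [], m + j           # walk the shortest path back to a supply row
        while parent[node] is not None:
            path.append((parent[node], node))
            node = parent[node]
        push = min([demand[j], supply[node]] +
                   [flow[v][u - m] for u, v in path if u >= m])
        for u, v in path:
            if u < m:
                flow[u][v - m] += push   # forward edge row -> column
            else:
                flow[v][u - m] -= push   # cancel flow on a residual edge
        supply[node] -= push
        demand[j] -= push
    return sum(flow[i][j] * costs[i][j] for i in range(m) for j in range(n))

def meta_algorithm(S, v, J, k, Gamma, z=1):
    # Try every k-subset C of J; keep the cheapest Gamma-consistent assignment.
    best_val, best_C = float("inf"), None
    for C in itertools.combinations(J, k):
        costs = [[abs(x - c) ** z for c in C] for x in S]
        val = transport_cost(costs, v, Gamma)
        if val < best_val:
            best_val, best_C = val, C
    return best_C, best_val

C_best, val = meta_algorithm([0.0, 1.0, 10.0], [1.0, 1.0, 1.0],
                             [0.0, 1.0, 10.0], k=2, Gamma=[2.0, 1.0])
```

On this toy instance the search returns the centers \(\{0,10\}\), placing total weight \(2\) on the first cluster and \(1\) on the second, matching \(\Gamma\).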
Note that if the coreset construction is efficient (_i.e., polynomial in \(n,k,1/\varepsilon\)_) and the coreset size is \(f(k,\varepsilon)\), for some function \(f\), then the above theorem gives an FPT (_Fixed Parameter Tractable_) approximation algorithm with parameter \(k\). This means that as long as \(k\) is a fixed constant, the algorithm runs in polynomial time. We now give efficient constructions of universal weak coresets.
## 3 Universal Weak Coreset Construction
We now give an algorithm for constructing coresets. Recall that there are two sets in the definition of a universal weak coreset: \(J\) and \(S\). The set \(S\) represents the input points that need to be clustered, whereas the set \(J\) acts as the representative of the potential center locations. We will construct these two sets independently using two known lines of results.
**Constructing \(J\):** Let us first see the construction of the set \(J\), which follows from developments on \(D^{z}\)-sampling based algorithms for the _list-\(k\)-median/means_ problems [15, 5]; the idea of list \(k\)-median or \(k\)-means gives a unified way of handling a large class of constrained clustering problems.
The following problem is addressed by [15, 5]. Given a clustering instance \((X,F,w,k)\), and a parameter \(\varepsilon>0\), output a list \(\mathcal{L}=\{C_{1},\ldots,C_{\ell}\}\), where each \(C_{i}\subseteq F\) is a set of \(k\) centers such that the following property is satisfied: for any partition \(P_{1},\ldots,P_{k}\) of the point set, there exists a set of \(k\) centers \(C=(c_{1}^{\prime},\ldots,c_{k}^{\prime})\in\mathcal{L}\) such that
\[\sum_{i\in[k]}\sum_{x\in P_{i}}D(x,c_{i}^{\prime})^{z}\leq(\alpha+\varepsilon )\sum_{i\in[k]}\sum_{x\in P_{i}}D(x,c_{i}^{*})^{z},\]
where \(c_{i}^{*}\) is the optimal center for \(P_{i}\). The goal is to minimize the size \(\ell\) of \(\mathcal{L}\) (the above property needs to hold with high probability). To solve this problem, [15, 5] find a suitable set \(M\subseteq F\) (using a \(D^{z}\)-sampling technique) and then iterate over all subsets of size \(k\) of \(M\) to generate the list \(\mathcal{L}\). We state the relevant result from [15] that we shall use to construct the set \(J\).1
Footnote 1: Note that the result in this particular form is not explicitly stated in [15] since this was not the primary goal of that work. In particular, the result stated here is a weighted version of the results in [15]. However, it follows from their analysis.
**Theorem 2** ([15]).: _There is a randomised algorithm that outputs a set \(M\subset F\) of size \(\left(poly(\frac{k}{\varepsilon})\right)\) with the following property: For any assignment \(\sigma:X\times[k]\to\mathbb{R}^{+}\), with high probability, there is a set of \(k\) centers \(C:=\{c_{1},\ldots,c_{k}\}\subseteq M\) such that:_
\[\sum_{i=1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D(x,c_{i})^{z}\leq(3^{z}+ \varepsilon)\cdot\sum_{i=1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D(x,c_{i}^{*})^ {z},\]
_where \((c_{1}^{*},...,c_{k}^{*})\) is the optimal set of centers that respect \(\sigma\), i.e., \((c_{1}^{*},...,c_{k}^{*})=\arg\min_{(s_{1},\ldots,s_{k})}\sum_{i=1}^{k}\sum_{ x\in X}\sigma(x,i)\cdot D(x,s_{i})^{z}\). The running time of this algorithm is \(O(n|M|)\)._
It is not difficult to see that the set \(M\) in the above theorem is precisely the set \(J\) that we need for a \(3^{z}\)-universal weak coreset. For the special case of \(X\subseteq F\) (i.e., a center can be located at any of the input points), [15] gave an improved guarantee of \((2^{z}+\varepsilon)\) instead of \((3^{z}+\varepsilon)\). So, the same improvement transfers to the universal weak coreset. For the Euclidean metric, [5] used sampling ideas similar to [15] to give a result similar to Theorem 2. However, the approximation guarantee here is \((1+\varepsilon)\) and the size of \(M\) is \((\frac{k}{\varepsilon})^{O(\frac{1}{\varepsilon})}\). This gives a \(1\)-universal weak coreset property for the set \(J\) in the Euclidean setting.
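The \(D^{z}\)-sampling primitive underlying these constructions can be sketched as follows (a 1-D toy of ours; the actual algorithms of [15, 5] sample \(poly(\frac{k}{\varepsilon})\) candidates over several rounds and then enumerate \(k\)-subsets): each new candidate is drawn with probability proportional to its \(z\)-th power distance to the candidates picked so far.

```python
import random

def dz_sample(points, num_samples, z=2, seed=0):
    """Pick the first candidate uniformly, then repeatedly pick a point with
    probability proportional to D(x, M)^z, where M is the set picked so far."""
    rng = random.Random(seed)
    M = [rng.choice(points)]
    while len(M) < num_samples:
        weights = [min(abs(x - c) for c in M) ** z for x in points]
        total = sum(weights)
        if total == 0:                  # every point coincides with a candidate
            M.append(rng.choice(points))
            continue
        r = rng.uniform(0, total)
        acc = 0.0
        for x, w in zip(points, weights):
            acc += w
            if acc >= r:
                M.append(x)
                break
    return M

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9, 10.0]
M = dz_sample(pts, 3)
```

With \(z=2\) this is the familiar \(k\)-means++ seeding rule; the list-clustering results use the same primitive but keep a larger candidate pool instead of committing to \(k\) centers.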
**Constructing the set \(S\):** Now we show how to construct the desired set \(S\). Here, we build on the recent work of [6] in designing "assignment-preserving coresets" for \(k\)-median and \(k\)-means. Their construction works by partitioning the points into \(\tilde{O}(k^{2}\varepsilon^{-z})\) "rings" and then finding suitable (weighted) representatives from each of these rings. The latter procedure requires clever random sampling techniques. The selected representatives \(S_{R}\) from a particular ring \(R\) satisfy the following condition: for any set of \(k\) centers \(C\), the assignment cost to \(C\) of all the points in the ring \(R\) is close to that of \(S_{R}\). But one would like this property to hold for all \(n^{k}\) possible ways of choosing \(C\). Thus, one needs to apply a union bound over all such possibilities, which results in a multiplicative factor of \(k\log n\) in the representative size from each ring.2 As mentioned earlier, one hopes to avoid this barrier by constructing weak coresets. Here, the number of possible choices for the set \(C\) reduces to \(|J|^{k}\) instead of \(n^{k}\). So, the \(k\log n\) factor needed in the size of the sampled set \(S_{R}\) from each ring \(R\) gets replaced by \(k\log|J|\). The trade-off is that instead of the classical coreset allowing a \((1+\varepsilon)\)-approximate solution, the \(\alpha\)-universal weak coreset only allows an \((\alpha+\varepsilon)\)-approximate solution. This is not a significant compromise if \(\alpha\) is the best approximation guarantee known for a
constrained clustering problem, which happens to be true for several cases. We now formally state the result from [6] that we shall use to construct the set \(S\) for our universal weak coreset.
**Theorem 3** ([6]).: _Consider a clustering instance \((X,F,w,k)\) and a parameter \(\delta\in(0,1)\). There is a randomised algorithm to construct a weighted set \(T\subset X\) of size \(O\left(poly(\frac{k}{\varepsilon})\cdot\log\frac{1}{\delta}\right)\) with weight function \(v:T\rightarrow\mathbb{R}^{+}\) that satisfies the following property: given a set \(C\) of \(k\) centers,_
\[\forall\Gamma,\mathsf{cost}_{z}(X,w,C,\Gamma)\in(1\pm\varepsilon)\cdot\mathsf{ cost}_{z}(T,v,C,\Gamma),\]
_holds with probability at least \((1-\delta)\). Moreover, the running time of the algorithm is \(O(n|T|)\)._
The construction of the desired set \(S\) using the above result follows from a direct application of union bound over the choice of \(k\) center sets in the set \(J\).
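Concretely, one way the union bound yields the size bound in the next theorem (our sketch of the standard argument): invoke Theorem 3 with failure probability \(\delta=0.01/|J|^{k}\) per center set, so that all \(\binom{|J|}{k}\leq|J|^{k}\) choices of \(C\subseteq J\) succeed simultaneously, giving

\[
|S| \;=\; O\Big(poly\big(\tfrac{k}{\varepsilon}\big)\cdot\log\tfrac{|J|^{k}}{0.01}\Big) \;=\; O\Big(poly\big(\tfrac{k}{\varepsilon}\big)\cdot k\log|J|\Big) \;=\; O\Big(poly\big(\tfrac{k}{\varepsilon}\big)\cdot\log|J|\Big),
\]

since the extra factor of \(k\) is absorbed into the \(poly(\frac{k}{\varepsilon})\) term.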
**Theorem 4**.: _Consider a clustering instance \((X,F,w,k)\). There is a randomized algorithm for constructing a weighted set \(S\subset X\) of size \(\left(poly(\frac{k}{\varepsilon})\cdot\log|J|\right)\) with weight function \(v:S\rightarrow\mathbb{R}^{+}\) such that the following event happens with high probability: for every set \(C\subseteq J\) of \(k\) centers and every \(\Gamma\):_
\[\mathsf{cost}_{z}(X,w,C,\Gamma)\in(1\pm\varepsilon)\cdot\mathsf{cost}_{z}(S,v, C,\Gamma).\]
_Moreover, the running time of the algorithm is \(O(n|S|)\)._
The following results now follow from Theorem 4 and the discussion after Theorem 2:
**Theorem 5** (Main theorem: Metric \(k\)-median).: _There is a \(3\)-universal weak coreset \((J,S,v)\) of size \(\left(poly(\frac{k}{\varepsilon})\right)\) for \(k\)-median (i.e., \(z=1\)) objective in general metric spaces. The time to construct such a coreset is \(O(n\cdot(|J|+|S|))\)._
For the special case \(X\subseteq F\), the guarantee in the above theorem improves from \(3\) to \(2\).
**Theorem 6** (Main theorem: Metric \(k\)-means).: _There is a \(9\)-universal weak coreset \((J,S,v)\) of size \(\left(poly(\frac{k}{\varepsilon})\right)\) for \(k\)-means (i.e., \(z=2\)) objective in general metric spaces. The time to construct such a coreset is \(O(n\cdot(|J|+|S|))\)._
For the special case \(X\subseteq F\), the guarantee in the above theorem improves from \(9\) to \(4\).
**Theorem 7** (Main theorem: Euclidean \(k\)-median/\(k\)-means).: _There is a \(1\)-universal weak coreset of size \(\left(poly(\frac{k}{\varepsilon})\right)\) for \(k\)-median and \(k\)-means objectives in the Euclidean metric. The time to construct such a coreset is \(O\left(n(k/\varepsilon)^{O(\frac{1}{\varepsilon})}\right)\)._
In the following section, we see applications of the results above.
## 4 Applications
In this section, we apply the universal weak coreset constructions to solve constrained versions of the \(k\)-median and \(k\)-means problems. As mentioned earlier, we can view a universal weak coreset as a compression of the original dataset. There are two ways of applying universal weak coresets: (i) execute a known algorithm for the specific constrained problem on the compressed instance \((J,S,v,k)\) instead of \((F,X,w,k)\), and (ii) use the meta-algorithm defined in Theorem 1 with appropriate modifications. We now discuss some specific examples.
### Clustering with size-based constraints
We consider constrained clustering problems where, besides optimizing the objective function, there are constraints on the size of the clusters. For example, the \(r\)-gathering problem requires a lower bound of \(r\) on the size of every cluster. Similarly, the capacitated clustering problem has an upper bound on cluster size. These constraints try to capture a "balance" property that limits the variance in the cluster sizes. We can model such size-constrained problems using the balanced \(k\)-median or \(k\)-means problem. Here, in addition to \((F,X,w,k)\), an instance also specifies tuples \((l_{1},...,l_{k})\) and \((u_{1},...,u_{k})\), where \(l_{i}\) and \(u_{i}\) are the lower and upper bounds on the total weight of the \(i^{th}\) cluster, respectively. For example, the \(r\)-gathering problem is obtained by setting \(l_{i}=r,u_{i}=\infty\) for all \(i\in[k]\). Let us see how the \(3\)-universal weak coreset for the \(k\)-median objective from Theorem 5 helps obtain a \((3+\varepsilon)\)-approximation algorithm for any instance of the balanced \(k\)-median problem (the extension to balanced \(k\)-means is analogous).
**Theorem 8**.: _Let \((J,S,v)\) be a \(3\)-universal weak coreset for an input instance \((F,X,w,k)\). Let \(\mathcal{I}=(F,X,w,k,(l_{1},...,l_{k}),(u_{1},...,u_{k}))\) be an instance of the balanced \(k\)-median problem. Then there is a randomized algorithm \(\mathcal{A}\) that, with high probability, outputs a \(k\)-center set \(C\) that is a \((3+\varepsilon)\)-approximate solution for \(\mathcal{I}\). The running time of \(\mathcal{A}\) is \(\tilde{O}(|J|^{k}\cdot|S|)\).3_
Footnote 3: The overall running time of the approximation algorithm, including the time to construct the universal weak coreset is \(n\cdot poly(\frac{k}{\varepsilon})+\left(\frac{k}{\varepsilon}\right)^{O(k)}\).
Proof.: Consider an optimal solution to \(\mathcal{I}\), and let \(\sigma(x,i)\) denote the weight of point \(x\) assigned to cluster \(i\). Define \(\Gamma:=(\sum_{x}\sigma(x,1),...,\sum_{x}\sigma(x,k))\). From the \(3\)-universal weak coreset property, there is a subset \(C\) of \(J\), \(|C|=k\), such that \(\mathsf{cost}_{1}(X,w,C,\Gamma)\leq(3+\varepsilon)\mathsf{opt}(X,w,\Gamma)\). Moreover, the set \(S\) has the property that \(\mathsf{cost}_{1}(X,w,C,\Gamma)\) and \(\mathsf{cost}_{1}(S,v,C,\Gamma)\) are within a \((1+\varepsilon)\) factor of each other. This implies that if we try all possible choices of \(k\) centers \(C\) from \(J\) and, for each such \(C\), find \(\mathsf{opt}_{1}(S,v,C,\Gamma)\), then we can compute \(\mathsf{opt}_{1}(X,w,\Gamma)\) within a \((3+\varepsilon)\) approximation factor.
The remaining issue is how to compute \(\mathsf{opt}_{1}(S,v,C,\Gamma)\) for a given choice of \(C\). We do not know \(\Gamma\) here, but we can find the tuple \(\Gamma^{\prime}\) for which \(\mathsf{opt}_{1}(S,v,C,\Gamma^{\prime})\) is minimized. Indeed, we can set up a minimum-cost flow network in which the points are assigned fractionally to the centers in \(C\), and for each center in \(C\), we impose lower and upper bounds (i.e., \(l_{i}\) and \(u_{i}\)) on the amount of weight assigned to it. Solving this min-cost flow problem yields the optimal choice of \(\Gamma^{\prime}\). Minimizing over all \(C\subseteq J,|C|=k\), we can find \(\mathsf{opt}_{1}(X,w,\Gamma)\) within a \((3+\varepsilon)\) factor.
Combining the above ideas yields a \((3+\varepsilon)\)-approximation algorithm.
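To make the enumeration concrete, the following Python sketch (with a hypothetical helper name and a 1-D toy metric) mirrors the proof above: it tries every \(k\)-center set from \(J\) and, for each, finds the cheapest size-respecting assignment of the coreset points. Brute-force enumeration of integer assignments stands in for the min-cost-flow step, so it is only feasible for toy instances, and coreset weights are taken to be unit for simplicity.

```python
from itertools import combinations, product

def balanced_kmedian(J, S, k, lower, upper):
    """Try every k-center set C from the candidate set J; for each C,
    find the cheapest assignment of the (unit-weight) coreset points S
    respecting lower[i] <= |cluster i| <= upper[i].
    Brute-force enumeration stands in for the min-cost-flow step."""
    dist = lambda a, b: abs(a - b)  # 1-D metric, for illustration only
    best_cost, best_C = float("inf"), None
    for C in combinations(J, k):
        for assign in product(range(k), repeat=len(S)):
            sizes = [assign.count(i) for i in range(k)]
            if any(not lower[i] <= sizes[i] <= upper[i] for i in range(k)):
                continue  # violates the balance constraint
            cost = sum(dist(S[j], C[assign[j]]) for j in range(len(S)))
            if cost < best_cost:
                best_cost, best_C = cost, C
    return best_cost, best_C
```

For example, on the instance \(S=J=\{0,1,10,11\}\) with \(k=2\) and both cluster sizes forced to be exactly 2, the optimum pairs \(\{0,1\}\) and \(\{10,11\}\) at total cost 2.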
The \((9+\varepsilon)\)-approximation for arbitrary balanced versions of the \(k\)-means problem in general metrics follows on similar lines using the \(9\)-universal weak coreset from Theorem 6. Similarly, a \((1+\varepsilon)\)-approximation for arbitrary balanced versions of the \(k\)-median and \(k\)-means problems in Euclidean metrics can be obtained using \(1\)-universal weak coreset from Theorem 7.
### Fair clustering and other labeled versions
We now consider constrained clustering problems where points have labels, i.e., suppose we are given a label set \(L:=\{1,...,m\}\), and each point \(x\) has a label \(\ell(x)\in L\) associated with it. Labels can capture disparate scenarios where every client may be part of multiple (overlapping) groups (_e.g., groups based on gender, ethnicity, age, etc._). Every unique combination of groups gets assigned a different label, so \(m\) denotes the number of distinct combinations of groups to which a point can belong. For a label \(j\in L\), let \(X_{j}\) denote the set of points that are assigned label \(j\). Consider a clustering instance \((X,F,w,k,\ell)\), where we have also incorporated the label mapping. The corresponding fair clustering instance \(\mathcal{I}\) is specified by an additional list of \(k\) pairs, namely, \((\alpha_{1},\beta_{1}),...,(\alpha_{k},\beta_{k})\). An optimal solution needs to find a set of \(k\) centers and an assignment \(\sigma:X\times[k]\rightarrow\mathbb{R}^{+}\) such that:
1. For every \(j\in[m]\) and \(i\in[k]\), \(\frac{\sum_{x\in X_{j}}\sigma(x,i)}{\sum_{x\in X}\sigma(x,i)}\in[\alpha_{i}, \beta_{i}]\), i.e., for every group, the fraction of weights assigned to the \(i^{th}\) cluster is in the range \([\alpha_{i},\beta_{i}]\). This captures various fairness notions for points that may belong to a particular group.
2. The assignment cost, i.e., \(\sum_{i=1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D^{z}(x,c_{i})\), is minimized.
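The fraction constraint in item 1 can be checked directly. A small Python sketch (hypothetical helper name; \(\sigma\) is represented as a per-point list of weights sent to each cluster, so fractional assignments are allowed) is:

```python
def is_fair(sigma, labels, k, alpha, beta):
    """Check constraint 1: for every cluster i and every label j, the
    fraction of cluster i's total weight contributed by group j must
    lie in [alpha[i], beta[i]].  sigma[x][i] is the weight of point x
    assigned to cluster i."""
    n = len(sigma)
    for i in range(k):
        total = sum(sigma[x][i] for x in range(n))
        if total == 0:
            continue  # empty cluster: no fraction to check
        for j in set(labels):
            group_weight = sum(sigma[x][i] for x in range(n) if labels[x] == j)
            if not alpha[i] <= group_weight / total <= beta[i]:
                return False
    return True
```

For instance, four unit-weight points with labels \([0,0,1,1]\), assigned so that each cluster receives one point of each label, give every group a 0.5 fraction in both clusters, which satisfies bounds \([0.4,0.6]\) but violates \([0.6,0.7]\).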
Our definition of universal weak coreset is for the case \(m=1\), i.e., points have only one label, which may be interpreted as the unlabeled case. However, we need to extend the notion of universal weak coresets to multi-label settings.
Towards this, we recall that the set \(J\) constructed in the previous section (Theorems 5 and 6) satisfies the following property: for any assignment \(\sigma:X\times[k]\rightarrow\mathbb{R}^{+}\), there is a \((3^{z}+\varepsilon)\)-approximate center set in \(J\) with respect to \(\sigma\). More specifically, let \(\sigma^{*}\) denote the optimal assignment and let \(C^{*}\equiv(c_{1}^{*},...,c_{k}^{*})\) denote the optimal \(k\) centers that respect \(\sigma^{*}\). The property of the set \(J\) says that there exist \(k\) centers \(C\equiv(c_{1},...,c_{k})\) such that \(\sum_{i}\sum_{x}\sigma^{*}(x,i)\cdot D(x,c_{i})^{z}\leq(3^{z}+\varepsilon) \cdot\sum_{i}\sum_{x}\sigma^{*}(x,i)\cdot D(x,c_{i}^{*})^{z}\). This means that as long as our set \(S\) has the property that for any assignment respecting the group constraint, the corresponding assignment cost to any \(C\subseteq J,|C|=k\) is about the same as that of the point set \(X\), we will have a \(3^{z}\)-universal weak coreset for the fair clustering problem
as well. Here, we note that we can execute the coreset construction from Theorem 4 separately on each group and take a union of the corresponding coresets obtained. This larger set acts as a coreset for the labeled dataset. We now formalize these ideas. First, we extend the notion of a universal weak coreset to the multi-labeled setting. In the unlabeled version, an assignment of weights to the centers in a set \(C\) was characterized by a tuple \(\Gamma\) of size \(k\). Since we have \(m\) labels now, such an assignment needs to be specified for each label. In other words, we now consider tuples \(\Gamma\) of length \(mk\), i.e., \(\Gamma:=(t_{1,1},...,t_{1,m},t_{2,1},...,t_{2,m},...,t_{k,1},...,t_{k,m})\), where \(t_{i,j}\) denotes the total weight of points with label \(j\) assigned to the \(i^{th}\) cluster. We can define an assignment \(\sigma\) analogously as a map \(X\times[k]\rightarrow\mathbb{R}^{+}\). We say that \(\sigma\) is consistent with \(\Gamma\), i.e., \(\sigma\sim\Gamma\), if for every label \(j\) and cluster \(i\), \(\sum_{x\in X_{j}}\sigma(x,i)=t_{i,j}\). Similarly, for a set of centers \(C\), define \(\mathsf{cost}_{z}(X,w,C,\Gamma)\) as
\[\mathsf{cost}_{z}(X,w,C,\Gamma)\equiv\min_{\sigma\sim\Gamma}\Bigg{\{}\sum_{i= 1}^{k}\sum_{x\in X}\sigma(x,i)\cdot D(x,c_{i})^{z}\Bigg{\}}.\]
Again, \(\mathsf{opt}_{z}(X,w,\Gamma)\) can be defined as the optimum cost over all choices of centers \(C\). Now the definition of a universal coreset \((J,S,v)\) in this setting is analogous to that in Definition 1 - we need to satisfy conditions (A) and (B).
**Theorem 9**.: _There is a \((3^{z}+\varepsilon)\)-universal weak coreset of size \(\big{(}m\cdot poly(\frac{k}{\varepsilon})\big{)}\) for constrained clustering in the multi-labeled setting._
Proof.: The set \(J\) is constructed as in Section 3. In order to construct the set \(S\), we apply Theorem 4 to each of the sets \(X_{1},\ldots,X_{m}\) independently to obtain sets \(S_{1},\ldots,S_{m}\). Finally, \(S:=S_{1}\cup\ldots\cup S_{m}\). The desired result follows from the properties of universal coresets.
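The union-of-per-group-coresets step can be sketched as follows (hypothetical helper name; uniform sampling with reweighting stands in for the importance-sampling construction behind Theorem 4, which the sketch does not reproduce):

```python
import random

def labeled_coreset(points, labels, m, size_per_group, seed=0):
    """Build S = S_1 ∪ ... ∪ S_m: sample each label class X_j
    independently and reweight so the sample represents all of X_j.
    Uniform sampling is a simplification of the source's Theorem 4."""
    rng = random.Random(seed)
    S, v = [], []
    for j in range(m):
        X_j = [p for p, lab in zip(points, labels) if lab == j]
        t = min(size_per_group, len(X_j))
        if t == 0:
            continue
        S.extend(rng.sample(X_j, t))
        v.extend([len(X_j) / t] * t)  # each sample stands for |X_j|/t points
    return S, v
```

Note that the total weight \(\sum v\) equals \(|X|\), so the coreset preserves the overall mass of the dataset, group by group.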
Let us now see why a \((3^{z}+\varepsilon)\)-universal weak coreset can be used for obtaining a \((3^{z}+\varepsilon)\)-approximate solution for multi-labeled constrained clustering problem in FPT time (fixed-parameter tractable time). We state the result for the \(k\)-median objective in general metric spaces. Similar results will hold for \(k\)-means in general metric spaces (i.e., \((9+\varepsilon)\)-approximation) and \(k\)-median or \(k\)-means objectives in Euclidean spaces (i.e., \((1+\varepsilon)\)-approximation).
**Theorem 10**.: _Let \((J,S,v)\) be a \(3\)-universal weak coreset for a multi-labeled clustering instance \((F,X,w,k,\ell)\). Consider an instance \(\mathcal{I}\) of the constrained clustering problem specified by the set of pairs \(\{(\alpha_{1},\beta_{1}),...,(\alpha_{k},\beta_{k})\}\). Then there is a randomized algorithm \(\mathcal{A}\), which on input \(\mathcal{I}\) and \((J,S,v)\) outputs a \((3+\varepsilon)\)-approximate solution with high probability. The running time of \(\mathcal{A}\) is \(|J|^{k}\cdot(mk)^{O(mk)}\cdot n^{O(1)}\)._
Proof.: The proof follows the same lines as for the unlabeled case. We try all \(|J|^{k}\) possible \(k\)-center sets \((c_{1},...,c_{k})\) from \(J\) and solve the "assignment" problem: find the best fair assignment for the given \((c_{1},...,c_{k})\). Our \(3\)-universal weak coreset guarantees the existence of a \((3+\varepsilon)\)-approximate solution within \(J\). So, if we can solve the assignment problem optimally, we can find that \((3+\varepsilon)\)-approximate solution in \(J\). Such an assignment algorithm was given by [3] (see Theorem 8.2). The running time of this assignment-finding algorithm is \((mk)^{O(mk)}\cdot n^{O(1)}\).
**\(l\)-diversity clustering.** Another well-known constrained clustering problem in the labelled setting is the \(l\)-diversity problem. Here the goal is to cluster the point set \(X\) into clusters \((X_{1},...,X_{k})\) such that each cluster has at least a \(1/l\) fraction of the points from each of the labels. Again, the goal is to minimize the \(k\)-median or \(k\)-means assignment cost.
As above, we can use the universal weak coreset construction from Theorem 10 to obtain a \((3^{z}+\varepsilon)\)-approximation algorithm for this problem. Here, we can use the algorithm of [11] to solve the corresponding assignment problem.
### Discussion and Open Problems
Classical coresets come with the promise that they help obtain an approximate solution to the \(k\)-means or \(k\)-median objective in a metric space. This promise holds for most known metric spaces. However, there are certain metrics where a specific approximation guarantee cannot be obtained using a classical coreset. The reason is that the approximation algorithm that gives that specific approximation guarantee does not work on weighted inputs, whereas a classical coreset is a weighted set. For example, a recent development [8] in the \(k\)-median problem in the Ulam metric has broken the \(2\)-approximation barrier. However, their \((2-\delta)\)-approximation algorithm works only on unweighted input permutations. So, the classical coreset framework does not help in this setting. On the other hand, the universal weak coreset framework may still be applicable: even though we cannot run the approximation algorithm on the set \(S\) to find a good center set, we can use \(S\) to locate a good center set from \(J\) using the cost-preservation property of \(S\). So, an interesting open question is whether there is a \((2-\delta)\)-universal weak coreset for the Ulam \(k\)-median problem. In general, in cases where the guarantee of the set \(S\) is limited to cost preservation, i.e., \(S\) represents the data only in a limited sense, a universal weak coreset is the more appropriate object to use. It would be interesting to see whether there are problems other than the Ulam \(k\)-median problem with this property.
Note that there are one-pass streaming algorithms for constructing the set \(S\), because coresets have the composability property [9]; and there is a constant-pass streaming algorithm for constructing the set \(J\) (the algorithm for constructing \(M\) in Theorem 2 can be implemented in streaming settings). Thus, both \(J\) and \(S\) can be constructed in a constant-pass streaming setting. We leave it as an open problem to design a single-pass streaming algorithm for a universal weak coreset.
Although we give \(3\)-universal weak coreset constructions whose size is independent of \(n\) for \(k\)-median (and a similar result for \(k\)-means), it remains an open problem to construct an \(\alpha\)-universal weak coreset, \(\alpha<3\), with such a guarantee, even for general metric spaces. This would help obtain a better-than-\(3\) approximation algorithm for several constrained \(k\)-median problems for which the best-known approximation bound is \(3\) (and similarly a better-than-\(9\) approximation for \(k\)-means).
|
2305.18665 | E-PANNs: Sound Recognition Using Efficient Pre-trained Audio Neural
Networks | Sounds carry an abundance of information about activities and events in our
everyday environment, such as traffic noise, road works, music, or people
talking. Recent machine learning methods, such as convolutional neural networks
(CNNs), have been shown to be able to automatically recognize sound activities,
a task known as audio tagging. One such method, pre-trained audio neural
networks (PANNs), provides a neural network which has been pre-trained on over
500 sound classes from the publicly available AudioSet dataset, and can be used
as a baseline or starting point for other tasks. However, the existing PANNs
model has a high computational complexity and large storage requirement. This
could limit the potential for deploying PANNs on resource-constrained devices,
such as on-the-edge sound sensors, and could lead to high energy consumption if
many such devices were deployed. In this paper, we reduce the computational
complexity and memory requirement of the PANNs model by taking a pruning
approach to eliminate redundant parameters from the PANNs model. The resulting
Efficient PANNs (E-PANNs) model, which requires 36\% less computations and 70\%
less memory, also slightly improves the sound recognition (audio tagging)
performance. The code for the E-PANNs model has been released under an open
source license. | Arshdeep Singh, Haohe Liu, Mark D. Plumbley | 2023-05-30T00:08:55Z | http://arxiv.org/abs/2305.18665v1 | # E-PANNs: Sound Recognition Using Efficient Pre-trained Audio Neural Networks
###### Abstract
Sounds carry an abundance of information about activities and events in our everyday environment, such as traffic noise, road works, music, or people talking. Recent machine learning methods, such as convolutional neural networks (CNNs), have been shown to be able to automatically recognize sound activities, a task known as audio tagging. One such method, pre-trained audio neural networks (PANNs), provides a neural network which has been pre-trained on over 500 sound classes from the publicly available AudioSet dataset, and can be used as a baseline or starting point for other tasks. However, the existing PANNs model has a high computational complexity and large storage requirement. This could limit the potential for deploying PANNs on resource-constrained devices, such as on-the-edge sound sensors, and could lead to high energy consumption if many such devices were deployed. In this paper, we reduce the computational complexity and memory requirement of the PANNs model by taking a pruning approach to eliminate redundant parameters from the PANNs model. The resulting Efficient PANNs (E-PANNs) model, which requires 36% less computations and 70% less memory, also slightly improves the sound recognition (audio tagging) performance. The code for the E-PANNs model has been released under an open source license.
## 1 Introduction
Everyday sound environments include a wide range of sound activities and events, such as traffic noise, road works, key jangling, music, coughing or people talking. These environmental sound activities contain an abundance of information and can potentially be used in various applications including public security surveillance, monitoring activities in a home for assisted living, healthcare, and improving the office, workplace and urban environment.
Recent advances in machine learning infrastructure and the availability of large-scale datasets such as AudioSet [1] have attracted artificial intelligence and machine learning (AI/ML) researchers to develop methods for automatic sound activity recognition, commonly known as audio tagging. A typical audio tagging system is shown in Figure 1. It takes audio recordings of the surroundings using microphones and then recognises the various sound activities that occur there. Audio tagging systems using convolutional neural networks (CNNs) have shown promising performance compared to traditional hand-crafted methods [2]. However, CNNs are resource-hungry due to the high computational cost arising from multiply-accumulate operations (MACs) and from the memory requirement of CNNs. For example, one of the best performing audio tagging networks from the pre-trained audio neural networks (PANNs) [2] framework has approximately
81M parameters and requires more than 50G MACs for inference corresponding to a 10s audio clip. Due to this, it may be challenging to deploy such large-scale CNNs on resource-constrained devices having a limited power budget and limited memory, such as smart phones or internet of things (IoT) devices. Moreover, when large-scale CNNs are used as a feature extractor or as a classifier for other downstream tasks such as acoustic scene classification [3], the high computational cost of the CNNs makes them slow during inference and particularly during the training process, where the large-scale CNNs may consume more energy and emit more CO\({}_{2}\). For instance, an NVIDIA RTX-2080 Ti GPU used to train machine learning models for 48 hours generates the equivalent amount of CO\({}_{2}\) to that emitted by an average car driven for 13 miles4. Therefore, despite performing well, large-scale CNNs are neither efficient nor environmentally friendly.
Footnote 4: Machine learning CO\({}_{2}\) estimator: [https://mlco2.github.io/impact/#compute](https://mlco2.github.io/impact/#compute)
**Our contributions:** To reduce the computational complexity and memory storage of large-scale CNNs such as PANNs, we eliminate redundant parameters from the PANNs model by taking a pruning approach. The resulting Efficient PANNs (E-PANNs) model requires significantly fewer MACs and less memory storage, with slightly improved performance compared to the original PANNs model. We make a real-time sound recognition demonstration using E-PANNs publicly available5.
Footnote 5: [https://github.com/Arshdeep-Singh-Boparai/E-PANNs.git](https://github.com/Arshdeep-Singh-Boparai/E-PANNs.git)
The rest of the paper is organised as follows. Section 2 introduces some background including a brief overview of convolutional neural networks, and background on existing audio tagging systems and the PANNs architecture. Section 3 presents the proposed methodology used to obtain E-PANNs. Next, the experimental setup and dataset used for experiments is explained in Section 4. Section 5 presents experimental analysis. Finally, Section 6 concludes the paper.
## 2 Background
### Convolutional Neural Networks (CNNs)
Convolutional neural networks (CNNs) are a type of artificial neural network inspired by biological nervous systems such as the human brain. CNNs are designed to learn from examples or from a dataset for an underlying task such as classification of sound activity in the surrounding area. They learn through an optimization process to update their parameters including weights and filters across various types of intermediate layers, such as convolutional, pooling, and dense layers. An architecture of a CNN is shown in Figure 2. A convolutional layer has multiple feature maps, where each feature map is produced by the convolutional operation on input and a filter.
In a CNN, filters are small matrices of size (\(k\times k\)) with \(c\) channels that are convolved across the input data. The convolution operation, as given in Equation 1, is a multiply-accumulate operation (MAC) that involves sliding the filter, \(\mathbf{F}\), over the input, performing element-wise multiplications
Figure 1: An audio tagging system recognising various sound activities in a given audio recording [4].
between the filter and the corresponding input patch \(x\) of size (\(c\times k\times k\)), and summing up the results to produce a single value, \(y\). This process is repeated for every possible position of the filter across the input, resulting in a _feature map_. Similarly, other feature maps are produced using other filters in a given convolutional layer. Subsequently a bias \(b\) and a non-linear activation function \(f(.)\) is applied to the elements of feature maps. Other than convolutional, pooling and dense layers, some CNNs might have residual blocks, which contain skip or shortcut connections that bypass one or more of the convolutional layers. For an introduction to various architectures of CNNs, see the article [5].
\[y=f(\sum_{i=1}^{c}\sum_{k_{1}=1}^{k}\sum_{k_{2}=1}^{k}(x_{i,k_{1},k_{2}}\times \mathbf{F}_{i,k_{1},k_{2}})+b). \tag{1}\]
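As a sanity check, Eq. (1) can be computed directly with plain Python loops (hypothetical helper name; the patch and filter are \(c\times k\times k\) nested lists, and \(f\) is the activation):

```python
def conv_single_output(x_patch, F, b, f):
    """One output entry of Eq. (1): a multiply-accumulate (MAC) over
    a (c x k x k) input patch and filter, plus bias, then activation."""
    acc = 0.0
    for x_ch, F_ch in zip(x_patch, F):              # channels i = 1..c
        for x_row, F_row in zip(x_ch, F_ch):        # rows k1 = 1..k
            for x_val, F_val in zip(x_row, F_row):  # cols k2 = 1..k
                acc += x_val * F_val
    return f(acc + b)
```

For a single-channel \(2\times 2\) patch of ones convolved with a filter of ones, bias \(1\) and ReLU activation, the output is \(1\cdot 4+1=5\). Sliding this computation over every valid position of the input produces one feature map.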
### Existing Audio Tagging Frameworks
With the release of the large-scale AudioSet dataset [1], which has 2M audio examples and 527 classes, several researchers have conducted studies to improve the performance of neural networks, in particular CNNs, on AudioSet classification. Methods for AudioSet classification include CNNs [2], CNNs with residual blocks [6] and self-attention based Transformers [7]. A summary of the existing methods along with their performance and number of parameters is given in Table 1. While Transformer-based methods perform better than CNNs, Transformers have high computational complexity, which makes their deployment on low-powered devices difficult compared to CNNs. We shall therefore focus on CNNs in this paper.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Neural Network type & Parameters (x 10\({}^{6}\)) & mAPs \\ \hline PANNs-CNN6 [2] & Plain & 4.8 & 0.343 \\ PANNs-CNN10 [2] & Plain & 5.2 & 0.380 \\ ERANNs-2-5 [6] & Residual & 38.2 & 0.446 \\ ERANNS-1-6 [6] & Residual & 54.5 & 0.450 \\ PANNs (ResNet38) [2] & Residual & 73.78 & 0.434 \\ PANNs-CNN14 [2] & Plain & 80.75 & 0.431 \\ PANNs (Wavegram-Logmel-CNN) [2] & Plain & 81.06 & 0.439 \\ AST [8] & Transformer & 88.10 & 0.459 \\ AST(ensemble) [8] & Transformer & 526.6 & 0.485 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean average precision (mAPs) obtained for AudioSet evaluation dataset using various audio tagging systems with their number of parameters.
Figure 2: Convolutional neural network (CNN) architecture comprising convolutional, pooling and dense layers.
### PANNs-CNN14 Architecture for AudioSet Classification
The primary motivation for the development of PANNs was to provide pre-training systems for audio pattern recognition on extensive datasets, in this case the AudioSet dataset [1], which can be used as a baseline network for feature extraction or classification in other audio-related tasks. The authors proposed several architectures for PANNs, including CNN-14, which demonstrated promising performance on various audio pattern recognition tasks. Experiments demonstrated that PANNs can generalize well to other tasks with limited training data and outperform models trained from scratch on those tasks.
The architecture of the PANNs-CNN14 model, including the number of parameters across each convolutional layers, the total number of parameters and the model size is shown in Figure 6 (a) in the Appendix. It has six convolutional blocks, each with two convolutional layers, followed by a batch normalization layer and a ReLU activation function. The number of filters in each convolutional block gradually increases from 64 to 2048 from layer to layer. Finally, there is a dense layer with a sigmoid activation function that outputs the predicted probabilities for each class.
The PANNs-CNN14 model takes a log-mel spectrogram of size (1000 \(\times\) 64) computed from the 10-second-length audio input. The model is trained with data augmentation techniques such as Mixup [9] and SpecAugment [10] for 600k iterations. For more details on CNN14 and other PANNs models, see [2].
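The growth of the filter counts from 64 to 2048 explains why the later layers dominate the model size. A rough per-layer parameter count illustrates this (hypothetical helper; the \(3\times 3\) filter size is an assumption not stated in the text above, though common in such architectures):

```python
def conv_params(n_filters, in_channels, k):
    """Parameter count of one conv layer: each of the n filters has
    in_channels * k * k weights plus one bias term."""
    return n_filters * (in_channels * k * k + 1)

# A late layer going from 1024 to 2048 channels alone would hold
# ~18.9M parameters, while an early 64-filter layer on a
# single-channel log-mel input would hold only 640.
late = conv_params(2048, 1024, 3)   # 18,876,416
early = conv_params(64, 1, 3)       # 640
```

This back-of-the-envelope calculation is consistent with the observation in Section 4 that the last six convolutional layers account for almost all of the network's parameters.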
## 3 Proposed Methodology to Reduce Complexity
To reduce the computational complexity and memory storage of CNNs such as PANNs-CNN14, we use a filter pruning approach [11, 12, 13] that involves removing or "pruning" unimportant filters from the network, i.e., those filters that contribute least to the performance of the CNN. Filter pruning is inspired by the idea that some filters in a CNN are redundant or have a negligible contribution to the overall accuracy of the network [14, 15]. These filters can be safely removed from the network without significantly affecting its performance. For example, previous studies [16, 17] found that 73% of the filters in a SoundNet network [18] do not provide discriminative information across different acoustic scene classes, and eliminating such filters gives similar performance to that of using all the original filters in SoundNet. For a survey on pruning techniques, see [11].
The typical steps in a filter pruning process are:
1. For a baseline, train the original network to a desired level of accuracy, or utilise an already existing pre-trained network;
2. Rank the filters based on a certain "importance" criterion, such as their relevance in contributing to the performance of the network;
3. Remove the least important filters and their corresponding feature maps from the network to obtain a pruned network;
4. Fine-tune the pruned network to recover some of the lost accuracy.
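The ranking-and-pruning core of steps 2-3 can be sketched in Python (hypothetical helper names; the \(l_1\)-norm is used here as a simple, widely-used importance proxy, whereas this paper's actual criterion, described in Section 3, is based on filter outputs):

```python
def l1_importance(filt):
    """Proxy importance score: l1-norm of a (c x k x k) filter,
    stored as nested lists."""
    return sum(abs(w) for ch in filt for row in ch for w in row)

def prune_filters(filters, p):
    """Steps 2-3: rank filters by importance and drop the p least
    important ones; fine-tuning (step 4) would follow."""
    ranked = sorted(range(len(filters)), key=lambda i: l1_importance(filters[i]))
    keep = sorted(set(range(len(filters))) - set(ranked[:p]))
    return [filters[i] for i in keep]
```

When a filter is pruned, its feature map disappears, which in turn shrinks the input-channel dimension of the next layer's filters, so the savings compound across layers.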
There are several benefits of filter pruning in CNNs. Firstly, it can significantly reduce the size of the network, reducing the memory footprint and computation time required for inference by removing filters and the corresponding feature maps generated by the filters. Secondly, the robustness of the network in maintaining performance and generalization capabilities can improve by removing filters that may be sensitive to small perturbations in the input data.
In this paper, we apply the filter pruning approach to the PANNs-CNN14 model to produce an efficient PANNs model, E-PANNs. To obtain E-PANNs, we follow a three-step pipeline as shown in Figure 3. A detailed description of the steps is given below.
**(Step 1) Take a baseline PANNs network**: We take the publicly available pre-trained CNN14 from PANNs and denote it as PANNs-CNN14. PANNs-CNN14 has approximately 81M parameters and requires 21G MACs6 for inference on a 10s audio clip, achieving an mAPs of 0.431.
Footnote 6: MACs computation Pytorch package.: [https://pypi.org/project/thop/](https://pypi.org/project/thop/)
**(Step 2) Compute filter importance across convolutional layers of the baseline network:** We compute the relevance of the filters, to decide whether to retain or prune each filter, for each layer of PANNs-CNN14 independently using steps (a)-(c) as shown in Figure 3.
Assume that there are \(n\) filters in a given convolutional layer. With these \(n\) filters, first, we measure how well a given filter produces the output in the convolutional layer. Our hypothesis is that a filter producing significant output can capture specific patterns or features in the input significantly, which would be useful for the subsequent convolutional layers of CNN to better understand the input data at different levels of abstraction. Therefore, we consider a filter producing significant output more important than others. Then, we measure the importance of each filter and rank the filters according to their importance in a given convolutional layer. Subsequently, the process (a)-(c) is repeated for other convolutional layers as well. More information about the importance calculation of the filters can be found in [19].
**(Step 3) Obtaining E-PANNs:** Once we obtain the importance of each filter in the various convolutional layers of PANNs-CNN14, we eliminate the \(p\) least important filters and their corresponding feature maps from the convolutional layers. This results in a pruned network. The pruned network is then re-trained or fine-tuned with the same data as used for training the baseline PANNs-CNN14 network, to regain some of the performance lost owing to the elimination of the filters and their corresponding feature maps. Finally, an efficient version of PANNs-CNN14, E-PANNs, is obtained.
## 4 Experimental Setup
In this section, we describe the experimental setup to prune the PANNs-CNN14 and then fine-tune the pruned network.
In the PANNs-CNN14 model, the last six convolutional layers (C7 to C12) contain approximately 99% of the total network parameters (see the architecture in Figure 6 (a) in the Appendix); therefore, we prune \(p\in\{25\%,50\%,75\%\}\) of the filters from the last six convolutional layers. The fine-tuning process of the pruned PANNs-CNN14 is performed in the same way as that used to train the original PANNs-CNN14 [2]. The optimization procedure uses the same loss function, data augmentation, batch size, etc., but with fewer iterations, approximately 450k for the pruned PANNs-CNN14. To evaluate the performance of the proposed E-PANNs, we test it on the audio tagging problem using the AudioSet evaluation dataset.
Figure 3: A flow diagram to obtain E-PANNs.
## 5 Performance analysis
Figure 4 shows a convergence plot obtained during the fine-tuning process of E-PANNs, when different numbers of filters are pruned from the baseline network. We find that E-PANNs recovers the same performance as that of the baseline network, even after pruning 25% and 50% filters, with only 100k and 200k iterations respectively. We also find that it uses at least 3 times fewer iterations compared to the baseline network to achieve mAPs equal to 0.431. Therefore, E-PANNs can be used as an alternative to the original PANNs with an advantage of less computation and memory requirement.
We find that pruning 25% of the filters across the C7 to C12 layers of the baseline network removes 41% of the parameters and 24% of the MACs, with an improved mAPs of 0.442 compared to that of the baseline network. Also, after pruning 50% of the filters, the mAPs obtained is better than that of the baseline network, with 70% fewer parameters and 36% fewer MACs. The architecture of the E-PANNs network obtained after pruning 50% of the filters is shown in Figure 6 (b) in the Appendix. On the other hand, when 75% of the filters are removed, the mAPs is 0.420 with 78% fewer parameters and 46% fewer MACs. This suggests that there is a trade-off between the number of parameters pruned and the mAPs. As shown in Figure 5 (see arrow trend), we find that when the number of parameters in E-PANNs drops below 25% of the total number of parameters in the baseline network, the mAPs obtained using E-PANNs becomes less than that of the baseline network. Therefore, the maximum fraction of parameters that can be pruned from the baseline network, PANNs-CNN14, without any degradation in performance is approximately 75%.
Figure 4: Convergence during the fine-tuning process of E-PANNs when different numbers of filters are pruned from the baseline network. In brackets, the maximum mAPs obtained is also given.
Figure 5 also shows the number of parameters and mAPs obtained using the other existing CNN-based methods as given in Table 1. Our proposed method gives better mAPs with 70% fewer parameters and 60% fewer parameters, respectively, compared to the best performing PANNs-(Wavegram-Logmel-CNN) and the PANNs-ResNet38 network. In general, compared to other methods such as PANNs-CNN10 or ERANNs-1-6 [6], we find that there is a trade-off between the number of parameters and the mAPs.
Figure 5: mAPs and the number of parameters obtained using the proposed method and the other existing CNN-based methods.
## 6 Conclusion
Convolutional neural networks (CNNs) are effective for audio recognition, but can be costly. We make these CNNs more efficient by reducing their computational cost and memory storage. We present a framework to obtain E-PANNs from a large-scale pre-trained audio neural network (PANNs) for audio tagging. We find that removing a few of the unimportant filters from the original PANNs reduces the computational complexity by 36% and the number of parameters by 70%, with improved performance. Therefore, E-PANNs can be used as an alternative to the original PANNs for downstream feature extraction or classification tasks, due to its lower computational complexity and memory requirement compared to the original PANNs.
## 7 Acknowledgements
This work was partly supported by Engineering and Physical Sciences Research Council (EPSRC) Grant EP/T019751/1 "AI for Sound (AI4S)". For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
|
2307.16031 | Splitting the local Hilbert space: MPS-based approach to large local
dimensions | A large, or even infinite, local Hilbert space dimension poses a significant
computational challenge for simulating quantum systems. In this work, we
present a matrix product state (MPS)-based method for simulating
one-dimensional quantum systems with a large local Hilbert space dimension, an
example being bosonic systems with a large on-site population. To this end, we
\textit{split} the local Hilbert space corresponding to one site into two
sites, each with a smaller Hilbert space dimension. An advantage of this method
is that it can be easily integrated into MPS-based techniques such as
time-dependent variational principle (TDVP) without changing their standard
algorithmic structure. Here, we implement our method using the TDVP to simulate
the dynamics of the spin-boson model, a prototypical model of a spin
interacting with a large bath of bosonic modes. We benchmark our method against
and find excellent agreement with previous studies. | Naushad Ahmad Kamar, Mohammad Maghrebi | 2023-07-29T17:41:27Z | http://arxiv.org/abs/2307.16031v1 | # Splitting the local Hilbert space: MPS-based approach to large local dimensions
###### Abstract
A large, or even infinite, local Hilbert space dimension poses a significant computational challenge for simulating quantum systems. In this work, we present a matrix product state (MPS)-based method for simulating one-dimensional quantum systems with a large local Hilbert space dimension, an example being bosonic systems with a large on-site population. To this end, we _split_ the local Hilbert space corresponding to one site into two sites, each with a smaller Hilbert space dimension. An advantage of this method is that it can be easily integrated into MPS-based techniques such as the time-dependent variational principle (TDVP) without changing their standard algorithmic structure. Here, we implement our method using the TDVP to simulate the dynamics of the spin-boson model, a prototypical model of a spin interacting with a large bath of bosonic modes. We benchmark our method against previous studies and find excellent agreement.
## I Introduction
Characterizing the interaction between bosonic modes and electronic or spin degrees of freedom is essential for understanding the properties of materials [1; 2], including superconductivity [3]. A well-known example is the effect of electron-phonon coupling on the mass of electrons, which leads to the emergence of quasi-particles known as polarons [4]. On the experimental front, circuit QED [5; 6; 7; 8] and trapped ions [9; 10], among others, provide highly controlled platforms for simulating a broad range of models of interest which also involve bosonic degrees of freedom with tunable coupling. A fundamental goal is to design perfect qubits in these platforms; however, in practice, such qubits are unavoidably coupled to the surrounding environment, which is often considered to be bosonic.
The infinite local Hilbert space dimension of the bath, due to its bosonic nature, presents a significant numerical challenge; an exact diagonalization, even for small systems, would be difficult unless the bosonic population is low, in contrast with spin-\(1/2\) or fermionic chains. To cure this problem, Zhang et al. [11] used the largest relevant eigenvalues and corresponding eigenvectors of the local density matrix to identify an effective local Hilbert space dimension that is smaller than the original one. In general, the local density matrix has \(d_{b}\) eigenvalues, with \(d_{b}\) the original local Hilbert space dimension. However, in the ground state, these eigenvalues decrease rapidly; this allows for an approximation of the local density matrix through an optimal local Hilbert space with dimension \(d_{o}\ll d_{b}\). This method is called local-basis optimization, and the corresponding space is the optimal bosonic basis. Various techniques [12; 13; 14] that combine local basis optimization and MPS [15] based methods such as time-evolving block decimation (TEBD) [16], variational matrix product states (VMPS) [15], and TDVP [17; 18] have been utilized to investigate the ground state and dynamics of quantum systems that involve bosonic degrees of freedom. However, the local basis optimization changes the standard form of VMPS [13], TEBD [12], and TDVP [14], and modifies their algorithmic structure. For example, in the VMPS, TEBD, and TDVP methods, one optimizes the MPS and the matrix corresponding to the orthogonality center of the MPS. However, introducing an optimal bosonic basis, one should also optimize the local Hilbert space [12; 13; 14], which drastically changes the structure of these MPS-based methods.
In this paper, we propose a simple method to treat a large local Hilbert space dimension without truncating the local density matrix, which preserves the algorithmic structure of the VMPS and TDVP techniques. We exploit the sparsity of the Hamiltonian's local matrix product operator (MPO) [15] and split the original local Hilbert space into two smaller ones using a matrix decomposition method, specifically the singular value decomposition. Upon splitting, the system doubles in linear size, but the local Hilbert space dimension reduces to \(\sqrt{d_{b}}\). We apply our proposed method to the spin-boson model [19], which describes the dynamics of a spin-\(1/2\) strongly coupled to an infinite number of bosonic degrees of freedom--this prototypical model emerges in a variety of quantum systems [5; 6; 7; 8; 9; 10]. Specifically, we simulate the dynamics by incorporating our method into the TDVP.
The structure of the paper is as follows: In Section II, we introduce the spin-boson model and present a mapping to a short-range semi-infinite chain suitable for numerical simulation. In Section III, we briefly explain the standard MPS approach, and then introduce our main method. We provide numerical results benchmarking our method in Section IV, and finally conclude and discuss future directions in Section V. We provide further details of the MPO decomposition in Appendix A.
## II Model
We consider a two-level system \(S\), coupled with an infinite number of non-interacting bosons, famously known as the spin-boson model [19]. We describe the system-bath coupling via the Hamiltonian
\[H=H_{S}+H_{B}+H_{SB}, \tag{1}\]
where the Hamiltonians \(H_{S}\), \(H_{B}\), and \(H_{SB}\) describe the system, bath, and the linear coupling between the system and the bath, respectively,
\[\begin{split} H_{S}&=-\frac{\Delta}{2}\sigma^{x},\\ H_{B}&=\sum_{k}\omega_{k}a_{k}^{\dagger}a_{k},\\ H_{SB}&=\frac{\sigma^{z}}{2}\sum_{k}\lambda_{k}(a_ {k}^{\dagger}+a_{k}).\end{split} \tag{2}\]
The effective coupling between the spin and the bath depends on \(\omega_{k}\) and \(\lambda_{k}\), and is fully characterized by the spectral function \(J(\omega)\) defined as
\[J(\omega)=\pi\sum_{k}\lambda_{k}^{2}\delta(\omega-\omega_{k}). \tag{3}\]
Depending on its form, the spectral function could describe a wide range of different qualitative behavior. A representative class of quantum baths are described by the spectral function
\[J(\omega)=2\pi\alpha\omega^{s}\omega_{c}^{1-s}\Theta(\omega_{c}-\omega), \tag{4}\]
corresponding to an Ohmic bath with \(s=1\) and sub-(super-)Ohmic baths where \(s<1\) (\(s>1\)). The parameters \(\alpha\) and \(\omega_{c}\) characterize the coupling strength and the frequency cutoff of the bath, respectively. For an Ohmic bath, \(s=1\), this model exhibits a quantum phase transition from a delocalized to localized state at \(\alpha\simeq 1+\mathcal{O}(\Delta/\omega_{c})\)[13; 14; 19]. Similar quantum phase transitions occur for the sub-Ohmic bath [20; 21].
The spin-boson model in Eq. (1) couples the spin to all the bosonic modes, mimicking a kind of long-range interaction, as depicted in the upper panel of Fig. 1; this makes a simulation based on matrix product states rather expensive. However, by using an appropriate basis transformation of the bosonic local operators \(a(a^{\dagger})\), this model can be mapped to a nearest-neighbor Hamiltonian as [20; 22]
\[\begin{split} H=&-\frac{\Delta}{2}\sigma^{x}+c_{0} \sigma^{z}(b_{0}+b_{0}^{\dagger})+\sum_{n=0}^{L}\omega_{n}b_{n}^{\dagger}b_{n} \\ &+\sum_{n=0}^{L-1}t_{n}(b_{n}^{\dagger}b_{n+1}+h.c),\end{split} \tag{5}\]
where \(b_{n}\)s define the bosonic operators in the new basis, and \(\omega_{n}\), \(t_{n}\), and \(c_{0}\) denote the local energy, site-dependent tunneling amplitude, and the coupling between the spin and the first site in the bath in the new basis; see also the lower panel of Fig. 1. The above coefficients can be computed exactly and are given by [22]
\[\begin{split}\omega_{n}&=\frac{\omega_{c}}{2}\Big{(}1+\frac{s^{2}}{(s+2n)(2+s+2n)}\Big{)},\\ t_{n}&=\frac{\omega_{c}(1+n)(1+s+n)}{(s+2+2n)(3+s+2n)}\sqrt{\frac{3+s+2n}{1+s+2n}},\\ c_{0}&=\sqrt{\frac{\alpha}{2(1+s)}}\,\omega_{c}.\end{split} \tag{6}\]
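As a quick numerical sanity check (our illustration, not code from the paper), the closed-form coefficients above can be evaluated directly; for the Ohmic case \(s=1\) the chain becomes asymptotically uniform, with \(\omega_{n}\to\omega_{c}/2\) and \(t_{n}\to\omega_{c}/4\) at large \(n\), which the sketch below verifies. The function names are illustrative choices.

```python
import math

# Chain-mapping coefficients of Eq. (6) for the hard-cutoff bath of Eq. (4).

def omega_n(n, s, omega_c):
    """On-site energy of bath site n in the chain geometry."""
    return 0.5 * omega_c * (1.0 + s**2 / ((s + 2 * n) * (2 + s + 2 * n)))

def t_n(n, s, omega_c):
    """Tunneling amplitude between chain sites n and n+1."""
    return (omega_c * (1 + n) * (1 + s + n)
            / ((s + 2 + 2 * n) * (s + 3 + 2 * n))
            * math.sqrt((3 + s + 2 * n) / (1 + s + 2 * n)))

def c_0(alpha, s, omega_c):
    """Coupling between the spin and the first chain site."""
    return math.sqrt(alpha / (2.0 * (1 + s))) * omega_c

# Ohmic bath, s = 1: the chain is asymptotically uniform.
print(omega_n(0, 1.0, 1.0))   # 2/3
print(t_n(10**6, 1.0, 1.0))   # ~0.25
print(c_0(1.0, 1.0, 1.0))     # 0.5
```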
While being local, this model comprises bosonic modes whose population can be large, thus posing a challenge for numerical simulation. In the next section, we introduce an MPO decomposition to split a large local Hilbert space into smaller ones. Combined with MPS-based methods, this allows us to simulate systems with a large on-site bosonic population.
Figure 1: (color online) Two lattice representations of the spin-boson model. The top panel shows the spin-boson model introduced in Eq. (2). The large (red) circle represents the spin, and the small (black) circles indicate the bosonic modes. The spin-boson coupling \(\lambda_{n}\) is denoted by the (blue) curves. In the lower panel, the spin-boson model is shown in the transformed basis given by Eq. (5). The small (orange) circles represent the bosonic modes in the transformed basis that defines a tight binding model with site-dependent energy \(\omega_{n}\) and tunneling amplitude \(t_{n}\). The spin interacts directly with the first bosonic site with a strength \(c_{0}\).
## III Method
In this section, we briefly introduce the MPS and MPO [15] in order to simulate the spin-boson model. The state of the spin-boson model in the MPS language is given by
\[|\psi\rangle=\sum_{\sigma_{0},\sigma_{1},..,\sigma_{L}}A^{\sigma_{0}}[0]A^{ \sigma_{1}}[1]...A^{\sigma_{L}}[L]|\sigma_{0},\sigma_{1},..,\sigma_{L}\rangle, \tag{7}\]
where \(\sigma_{0}\) runs from \(1\) to \(d\) and \(\sigma_{1,2,..,L}\) run from \(1\) to \(d_{b}\), where \(d\) and \(d_{b}\) are the local Hilbert space dimension of the spin and bosons, respectively. The size of the \(A\) matrices bounds the maximum entanglement that can exist in the system. In a similar fashion, an operator can also be defined using a product of operators known as MPO. In general, the MPO of a given Hamiltonian can be constructed as
\[\begin{split} H=\sum_{\sigma_{u/l0},..,\sigma_{u/lL}}W_{\sigma_{ l0}}^{\sigma_{u0}}[0]W_{\sigma_{l1}}^{\sigma_{u1}}[1]\cdots W_{\sigma_{lL}}^{ \sigma_{uL}}[L]\\ \times\left|\sigma_{u0},\sigma_{u1}...,\sigma_{uL}\right\rangle \langle\sigma_{l0},\sigma_{l1}...,\sigma_{lL}|\,,\end{split} \tag{8}\]
where \(\sigma_{u/ln}\) denotes the ket/bra indices on site \(n\). For the spin-boson model, the \(W\) matrices in the MPO are explicitly given by
\[\begin{split} W[0]&=\begin{pmatrix}I_{s}&\sigma^{z}&0&0&-\frac{\Delta}{2}\sigma^{x}\end{pmatrix},\\ W[1]&=\begin{pmatrix}I_{b}&0&b^{\dagger}&b&\omega_{0}n_{b}\\ 0&0&0&0&c_{0}(b^{\dagger}+b)\\ 0&0&0&0&t_{0}b\\ 0&0&0&0&t_{0}b^{\dagger}\\ 0&0&0&0&I_{b}\end{pmatrix},\\ W[1<n<L]&=\begin{pmatrix}I_{b}&0&b^{\dagger}&b&\omega_{n-1}n_{b}\\ 0&0&0&0&0\\ 0&0&0&0&t_{n-1}b\\ 0&0&0&0&t_{n-1}b^{\dagger}\\ 0&0&0&0&I_{b}\end{pmatrix},\\ W[L]&=\begin{pmatrix}\omega_{L-1}n_{b}\\ 0\\ t_{L-1}b\\ t_{L-1}b^{\dagger}\\ I_{b}\end{pmatrix}.\end{split} \tag{9}\]
Here, \(I_{s}\) and \(I_{b}\) refer to the identity operators for the spin and the bath, respectively; \(I_{s}\) is a \(2\times 2\) matrix for the spin-\(1/2\) while \(I_{b}\) is a \(d_{b}\times d_{b}\) matrix. We have also defined the local number operator on a given site in the bath as \(n_{b}=b^{\dagger}b\). The MPO of the spin-boson model, as described in Eq. (9), is a \(5\times 5\) matrix of operators defined in the local Hilbert space. In the MPS-based methods such as VMPS and TDVP, the computational complexity scales with the \(3^{\rm rd}\) power of the local Hilbert space dimension \(d_{b}\). Therefore, for a large \(d_{b}\), these methods become computationally expensive. To circumvent this problem, we break up, or split, the local Hilbert space \(\mathcal{H}\) into two Hilbert spaces, \(\mathcal{H}=\mathcal{H}^{\prime}\otimes\mathcal{H}^{\prime\prime}\), each with a smaller dimension. A basis state \(|\sigma\rangle\) in \(\mathcal{H}\) can be then expressed as a product state
\[|\sigma\rangle=|\sigma^{\prime}\rangle\otimes|\sigma^{\prime\prime}\rangle, \tag{10}\]
where \(|\sigma^{\prime}\rangle\) and \(|\sigma^{\prime\prime}\rangle\) are defined in \(\mathcal{H}^{\prime}\) and \(\mathcal{H}^{\prime\prime}\), respectively, and the corresponding indices \(\sigma^{\prime},\sigma^{\prime\prime}\) run from \(1\) to \(\sqrt{d_{b}}\). There are of course many ways to split the original basis; here, we choose a particular factorization scheme where
\[\sigma=\sqrt{d_{b}}(\sigma^{\prime}-1)+\sigma^{\prime\prime}. \tag{11}\]
Such a splitting scheme can be easily implemented using, for example, NumPy's reshape function. Next, the state \(|\psi\rangle\) in Eq. (7) can be recast in the new basis as
\[|\psi\rangle=\sum_{\sigma_{0},\sigma^{\prime}_{1},\sigma^{\prime\prime}_{1},\cdots,\sigma^{\prime}_{L},\sigma^{\prime\prime}_{L}}A^{\sigma_{0}}[0]\tilde{A}^{\sigma^{\prime}_{1}}[1]\tilde{A}^{\sigma^{\prime\prime}_{1}}[2]\cdots\tilde{A}^{\sigma^{\prime}_{L}}[2L-1]\tilde{A}^{\sigma^{\prime\prime}_{L}}[2L]|\sigma_{0},\sigma^{\prime}_{1},\sigma^{\prime\prime}_{1},\cdots,\sigma^{\prime}_{L},\sigma^{\prime\prime}_{L}\rangle, \tag{12}\]
where we have introduced the new matrices \(\tilde{A}\) now spanning sites \(1\) to \(2L\). Each site being split into two, the linear size of the chain is doubled.
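Concretely, the index map in Eq. (11) is nothing but a row-major reshape. The following sketch (our illustration, with 0-based NumPy indices and a toy dimension \(d_{b}=16\)) makes the splitting of states and local operators explicit.

```python
import numpy as np

# Index splitting: a local index sigma in {1, ..., d_b} is mapped to
# (sigma', sigma'') through sigma = sqrt(d_b)*(sigma' - 1) + sigma''.
# With 0-based NumPy indices this is exactly a row-major reshape.

d_b = 16
d = int(np.sqrt(d_b))            # dimension of each split site

psi = np.random.rand(d_b)        # amplitudes on one original site
psi_split = psi.reshape(d, d)    # axes: (sigma' - 1, sigma'' - 1)

sigma = 11                       # a 1-based original index
sp, spp = divmod(sigma - 1, d)   # 0-based split indices
assert psi_split[sp, spp] == psi[sigma - 1]

# A local operator becomes a 4-index array in the split basis
O = np.diag(np.arange(d_b, dtype=float))   # e.g. the number operator n_b
O_split = O.reshape(d, d, d, d)            # (s'_u, s''_u, s'_l, s''_l)
assert O_split[sp, spp, sp, spp] == sigma - 1
```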
The local MPO matrices \(W\) can also be expressed in the new basis by using singular value decomposition (SVD) as
\[W=U\Lambda V, \tag{13}\]
where the matrix \(U\) and \(V\) are defined in the new basis spanned by \(|\sigma^{\prime}\rangle\) and \(|\sigma^{\prime\prime}\rangle\), respectively. We leave the technical details to Appendix A; for a schematic explanation, see Fig. 2. For simplicity, we can absorb the diagonal matrix \(\Lambda\) containing the singular values of the SVD into the definition of the \(U\) and \(V\) matrices as
\[\tilde{U}=U\sqrt{\Lambda},\quad\tilde{V}=\sqrt{\Lambda}V, \tag{14}\]
Figure 3: (color online) Singular values of an MPO of the spin-boson model at site \(n=2\) on a semi-log scale for \(d_{b}=100\), \(\alpha=1.0\), \(\Delta=0.1\), and \(\omega_{c}=1\); we find similar behavior on all sites. The singular values decrease exponentially with \(k\), and are effectively zero beyond \(k=29\). The MPO bond dimension in the split basis is thus an order of magnitude smaller than its maximum value of \(X_{W}d_{b}=500\).
upon which Eq. (13) simply becomes
\[W=\tilde{U}\tilde{V}. \tag{15}\]
The column (row) dimension of \(\tilde{U}\) (\(\tilde{V}\)) is \(d_{b}X_{W}\), where \(X_{W}\) is the MPO bond dimension before splitting; e.g., \(X_{W}=5\) for the spin-boson model. In practice, however, the effective MPO bond dimension in the split basis could be taken to be much smaller as the singular values \(\Lambda_{k}\) of the matrix \(\Lambda\) decay rather quickly with the index \(k\). As a representative example, we consider the spin-boson model with \(d_{b}=100\), \(\omega_{c}=1\), \(\Delta=0.1\) and \(\alpha=1.0\), and show \(\Lambda_{k}\) in descending order in Fig. 3. We observe that \(\Lambda_{k}\) rapidly decreases and is practically vanishing beyond \(k=29\), so the effective bond dimension is 29 instead of \(X_{W}d_{b}=500\); the row (column) dimension of \(\tilde{U}\) (\(\tilde{V}\)) is still \(X_{W}=5\) since the MPO structure has not changed on the original bonds before splitting. MPS and MPO play a crucial role in MPS-based techniques such as VMPS, TDVP, TEBD, and MPO-MPS time evolution [23; 24]. In our approach, we have split the original local Hilbert space into local Hilbert spaces with smaller dimensions while leaving the algorithmic structure of the MPS intact in the new basis. The only difference is that the MPS is now optimized with the smaller local Hilbert space dimension of \(\sqrt{d_{b}}\) in the split basis. While the computational complexity scales as \(\mathcal{O}(d_{b}^{3})\) in the original basis, it scales as \(\mathcal{O}(2d_{b}^{3/2})\) in the new basis, where the factor of 2 is due to the system size being doubled. We thus expect that the MPS-based methods in the split basis feature a speedup by a factor of the order of \(\mathcal{O}(d_{b}^{3/2}/2)\) compared to the original basis; this is a massive speedup for large local Hilbert space dimensions. In the next section, we use the spin-boson model as a testbed for our method.
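The quoted scaling can be made concrete with one line of arithmetic for \(d_{b}=100\), the value used in our simulations:

```python
# Naive cost comparison from the scaling argument above: MPS updates cost
# O(D^3) in the local dimension D, and one site of dimension d_b is
# replaced by two sites of dimension sqrt(d_b).
d_b = 100
cost_original = d_b ** 3           # O(d_b^3)       = 1_000_000
cost_split = 2 * d_b ** 1.5        # O(2 d_b^{3/2}) = 2_000
print(cost_original / cost_split)  # 500.0, i.e. d_b^{3/2} / 2
```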
## IV Results
In this section, we apply our method to the spin-boson model and specifically study the dynamics of the spin. We start from an initial state at \(t=0\) where the spin is in the \(|\uparrow\rangle\) state (in the \(\sigma^{z}\) basis), and the bosonic modes are in their vacuum state,
\[|\psi(0)\rangle=|\uparrow\rangle\otimes|0\rangle\otimes\cdots\otimes|0\rangle, \tag{16}\]
where \(b|0\rangle=0\) and \(\sigma^{z}|\uparrow\rangle=|\uparrow\rangle\). We are mainly interested in the time evolution of magnetization defined by \(\langle\sigma^{z}(t)\rangle=\langle\psi(t)|\sigma^{z}|\psi(t)\rangle\) where
\[|\psi(t)\rangle=e^{-iHt}|\psi(0)\rangle. \tag{17}\]
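Before describing the TDVP results, the quench above can be illustrated on a chain small enough for exact dense diagonalization. The sketch below (our toy example, not the paper's TDVP code) keeps only two bath sites, each truncated to \(d_{b}=4\), far from the converged parameters \(d_{b}=100\), \(L=100\), so only very short times are indicative.

```python
import numpy as np

# Toy exact evolution of the spin-boson quench: spin + two chain sites.

Delta, alpha, s, wc, d_b = 0.1, 0.1, 1.0, 1.0, 4

b = np.diag(np.sqrt(np.arange(1, d_b)), 1)   # bosonic annihilation operator
nb = b.T @ b
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2, Ib = np.eye(2), np.eye(d_b)

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

# chain coefficients (cf. Eq. (6)) evaluated at n = 0, 1
w0 = 0.5 * wc * (1 + s**2 / (s * (2 + s)))
w1 = 0.5 * wc * (1 + s**2 / ((s + 2) * (4 + s)))
t0 = wc * (1 + s) / ((s + 2) * (s + 3)) * np.sqrt((3 + s) / (1 + s))
c0 = np.sqrt(alpha / (2 * (1 + s))) * wc

H = (-0.5 * Delta * kron3(sx, Ib, Ib)
     + c0 * kron3(sz, b + b.T, Ib)
     + w0 * kron3(I2, nb, Ib) + w1 * kron3(I2, Ib, nb)
     + t0 * (kron3(I2, b.T, b) + kron3(I2, b, b.T)))

psi0 = np.zeros(2 * d_b**2)       # |up> (x) |0> (x) |0>
psi0[0] = 1.0

evals, evecs = np.linalg.eigh(H)  # H is real symmetric

def sz_t(t):
    """Magnetization <sigma^z(t)> via exact spectral decomposition."""
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    return float(np.real(psi_t.conj() @ kron3(sz, Ib, Ib) @ psi_t))

print(sz_t(0.0))  # = 1 by construction
print(sz_t(1.0))  # stays close to 1 at this weak coupling and small Delta
```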
In order to compute \(|\psi(t)\rangle\), we employ the TDVP algorithm in the new basis. We fix the interaction parameters at \(\Delta=0.1\), \(d_{b}=100\), \(\omega_{c}=1\), \(L=100\), and take the MPS bond dimension \(\chi=5\). We study the dynamics for both Ohmic (\(s=1\)) and sub-Ohmic (with \(s=0.5\)) baths. In Fig. 4, we depict \(\langle\sigma^{z}(t)\rangle\) as a function of time for different values of the interaction parameters in the range \(\alpha=0.1-1.5\) and for \(s=1\). For \(\alpha=0.1-0.4\), we find that the dynamics is underdamped; see the upper panel of Fig. 4. The frequency of oscillations decreases while the damping rate increases with \(\alpha\), in harmony with the previous studies [25; 26; 27; 19]. Specifically, the oscillation frequency is renormalized by the spin-bath coupling \(\alpha\) as \(\Delta_{r}=\Delta(\Delta/\omega_{c})^{\frac{\alpha}{1-\alpha}}\)[25; 26; 27; 19]; we have verified that our results are in quantitative agreement with this equation. For \(\alpha=0.5,0.7\), we observe that \(\langle\sigma^{z}(t)\rangle\) decays exponentially to zero, a behavior which persists in the range \(0.5\leq\alpha<1\)[14; 19]. At or above the critical point \(\alpha_{c}=1.0\), the magnetization \(\langle\sigma^{z}(t)\rangle\) barely decays and is localized in the \(|\uparrow\rangle\) state; see the lower panel of Fig. 4. This signals a quantum phase transition from a delocalized to a localized state, again consistent with the previous results [14; 19; 20].
As another example, we consider the dynamics of the spin coupled to a sub-Ohmic bath. In Fig. 5, we show \(\langle\sigma^{z}(t)\rangle\) as a function of time in the presence of a sub-Ohmic bath with \(s=0.5\) and for \(\alpha=0.005\)-\(0.20\). Again, we can identify the underdamped regime (upper panel)
Figure 4: (color online) Magnetization \(\langle\sigma^{z}(t)\rangle\) as a function of time in the presence of an Ohmic bath, \(s=1\), and at different values of \(\alpha\); we have taken \(\Delta=0.1\) and \(\omega_{c}=1.0\). The upper panel depicts \(\langle\sigma^{z}(t)\rangle\) for \(\alpha=0.1-0.4\). The spin shows coherent damping as a function of time, with the frequency of oscillations decreasing with \(\alpha\). The lower panel depicts \(\langle\sigma^{z}(t)\rangle\) for \(\alpha=0.5,0.7,1.0,1.2\), and \(1.5\). The spin shows incoherent damping as a function of time for \(0.5\leq\alpha<1\). At and beyond the critical point \(\alpha_{c}=1.0\), the dynamics is frozen close to \(|\uparrow\rangle\), signalling a quantum phase transition from a delocalized to a localized phase.
as well as the overdamped and localized regimes (lower panel). We find that \(\langle\sigma^{z}\rangle\) saturates to a nonzero value beyond \(\alpha=0.125\), signaling a quantum phase transition to a localized phase, consistent with Ref. [14]. Finally, in Fig. 6, we compare our numerical results with those presented in Ref. [14] for \(s=0.5\) and different values of \(\alpha\). We find that our numerical results exactly match the data presented in Ref. [14], thus providing a nontrivial check of the accuracy and efficiency of our method. An advantage of our method is its simple structure which can be easily integrated into the standard MPS-based methods.
## V Conclusion and Perspective
We have proposed a simple computational approach to simulate systems involving a large local Hilbert space dimension. Our method is based on splitting a large local Hilbert space into two sites with a smaller dimension. We have shown that our approach correctly reproduces the results obtained from the TDVP combined with the local basis optimization for the spin-boson model [14]. Our method has the advantage that it does not change the algorithmic structure of MPS-based methods, in contrast with MPS approaches that utilize the local basis optimization [12; 13; 14].
Our numerical method becomes even more vital in simulating bosonic systems described by a mixed state either at finite temperature or in open quantum systems, e.g., in systems described by the Lindblad master equation. In these scenarios, one generally vectorizes the density matrix in order to bring it to a form that can be represented in the MPS form; however, the local Hilbert space dimension becomes the square of the original local Hilbert space dimension, which could pose a challenge for numerical simulations (see also [28]). Our approach provides a formidable alternative to simulate systems described by a large local Hilbert space dimension.
In this work, we have proposed a method to treat large local Hilbert spaces by splitting them into smaller ones. It is worthwhile extending this idea to a large _bond dimension_ where a local MPS is decomposed into two or more matrices with a smaller bond dimension, leading to ladder-like lattices. Such an approach could result in more efficient MPS-based calculations where the original bond dimension is large.
###### Acknowledgements.
We thank Alex Chin for useful discussions. This work is supported by the Air Force Office of Scientific Research (AFOSR) under the award number FA9550-20-1-0073. We also acknowledge support from the National Science Foundation under the NSF CAREER Award (DMR2142866), as well as the NSF grants DMR1912799 and PHY2112893.
Figure 5: (color online) Magnetization \(\langle\sigma^{z}(t)\rangle\) as a function of time in the presence of a sub-Ohmic bath with \(s=0.5\) and at different values of \(\alpha\); we have taken \(\Delta=0.1\) and \(\omega_{c}=1\). The upper panel depicts the \(\langle\sigma^{z}(t)\rangle\) for \(\alpha=0.001-0.05\), which displays coherent damping as a function of time; the oscillation frequency decreases with \(\alpha\) similar to the Ohmic case. The lower panel depicts \(\langle\sigma^{z}(t)\rangle\) for \(\alpha=0.075-0.20\). The spin exhibits overdamped dynamics for \(\alpha>0.1\) before it enters the localized phase around \(\alpha=0.125\).
Figure 6: (color online) Magnetization \(\langle\sigma^{z}(t)\rangle\) as a function of time in the presence of the sub-Ohmic bath at different values of \(\alpha\), and with \(\Delta=0.1\), \(\omega_{c}=1.0\) and \(s=0.5\). Our results (the solid lines) are contrasted against the data taken from Ref. [14] (dotted lines), which are obtained using TDVP combined with the optimal bosonic basis. The excellent agreement with this data is a nontrivial check of our method.
## Appendix A MPO Splitting in \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) basis
In this Appendix, we provide further details for the decomposition of a local MPO in terms of the two MPOs with smaller local Hilbert space dimensions. We first split \(|\sigma\rangle\) into \(|\sigma^{\prime}\rangle\) and \(|\sigma^{\prime\prime}\rangle\) as
\[|\sigma\rangle=|\sigma^{\prime}\rangle\otimes|\sigma^{\prime\prime}\rangle, \tag{10}\]
where \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) run from \(1\) to \(\sqrt{d_{b}}\). We can express the local MPO matrix \(W\) as a four-dimensional array of size \(d_{b}\times d_{b}\times X_{W}\times X_{W}\), where \(X_{W}\) is the MPO bond dimension. We express the corresponding array elements as
\[W[\sigma_{u},\sigma_{l},w_{i-1},w_{i}]=W^{\sigma_{u}}_{\sigma_{l}}(w_{i-1},w_{ i}), \tag{11}\]
where \(w_{i}\) runs from \(1\) to \(X_{W}\), and in an abuse of notation we used the same symbol \(W\) to denote the array. Splitting the local basis as in Eq. (10), the above array can be recast as a six-dimensional array in the new basis:
\[W[\sigma^{\prime}_{u},\sigma^{\prime\prime}_{u},\sigma^{\prime}_{l},\sigma^{ \prime\prime}_{l},w_{i-1},w_{i}]=W^{\sigma^{\prime}_{u},\sigma^{\prime\prime }_{l}}_{\sigma^{\prime}_{l}\sigma^{\prime\prime}_{l}}(w_{i-1},w_{i}). \tag{12}\]
We can reshape \(W\) again to bring it into the form
\[W[\sigma^{\prime}_{u},\sigma^{\prime\prime}_{u},\sigma^{\prime}_{l},\sigma^{ \prime\prime}_{l},w_{i-1},w_{i}]\to W[w_{i-1},\sigma^{\prime}_{u},\sigma^{ \prime}_{l},\sigma^{\prime\prime}_{u},\sigma^{\prime\prime}_{l},w_{i}]\,. \tag{13}\]
Finally we can express \(W\) in matrix form as
\[W[w_{i-1},\sigma^{\prime}_{u},\sigma^{\prime}_{l},\sigma^{\prime\prime}_{u}, \sigma^{\prime\prime}_{l},w_{i}]\to W[w_{i-1}\sigma^{\prime}_{u}\sigma^{ \prime}_{l},\sigma^{\prime\prime}_{u}\sigma^{\prime\prime}_{l}w_{i}]\,. \tag{14}\]
The MPO \(W\) can be then factorized using the SVD as
\[W[w_{i-1}\sigma^{\prime}_{u}\sigma^{\prime}_{l},\sigma^{\prime\prime}_{u} \sigma^{\prime\prime}_{l}w_{i}]=\sum_{k=1}^{X_{W}d_{b}}U[w_{i-1}\sigma^{\prime} _{u}\sigma^{\prime}_{l},k]\Lambda_{k}V[k,\sigma^{\prime\prime}_{u}\sigma^{ \prime\prime}_{l}w_{i}]. \tag{15}\]
In the above equation \(k\) runs from \(1\) to \(X_{W}d_{b}\); in practice, however, \(\Lambda_{k}\) decays rapidly with \(k\) and most of the singular values are zero, so we can set the upper limit to some \(k_{\text{eff}}<X_{W}d_{b}\).
Finally, the above equation can be written as
\[W[w_{i-1}\sigma^{\prime}_{u}\sigma^{\prime}_{l},\sigma^{\prime \prime}_{u}\sigma^{\prime\prime}_{l}w_{i}] =\sum_{k=1}^{k_{\text{eff}}}U[w_{i-1}\sigma^{\prime}_{u}\sigma^{ \prime}_{l},k]\Lambda_{k}V[k,\sigma^{\prime\prime}_{u}\sigma^{\prime\prime}_{l} w_{i}],\] \[=\sum_{k=1}^{k_{\text{eff}}}\widetilde{U}[w_{i-1}\sigma^{\prime }_{u}\sigma^{\prime}_{l},k]\widetilde{V}[k,\sigma^{\prime\prime}_{u}\sigma^{ \prime\prime}_{l}w_{i}], \tag{16}\]
where we have defined
\[\begin{split}&\widetilde{U}[w_{i-1}\sigma^{\prime}_{u}\sigma^{ \prime}_{l},k]=U[w_{i-1}\sigma^{\prime}_{u}\sigma^{\prime}_{l},k]\sqrt{ \Lambda_{k}}\,,\\ &\widetilde{V}[k,\sigma^{\prime\prime}_{u}\sigma^{\prime\prime}_{l} w_{i}]=\sqrt{\Lambda_{k}}V[k,\sigma^{\prime\prime}_{u}\sigma^{\prime\prime}_{l}w_{i}] \,.\end{split} \tag{17}\]
The matrices \(\widetilde{U}[w_{i-1}\sigma^{\prime}_{u}\sigma^{\prime}_{l},k]\) and \(\widetilde{V}[k,\sigma^{\prime\prime}_{u}\sigma^{\prime\prime}_{l}w_{i}]\) in the above equation can again be reshaped as
\[\begin{split}&\widetilde{U}[w_{i-1}\sigma^{\prime}_{u}\sigma^{\prime}_{l},k]\rightarrow\widetilde{U}[w_{i-1},\sigma^{\prime}_{u},\sigma^{\prime}_{l},k]\rightarrow\widetilde{U}[\sigma^{\prime}_{u},\sigma^{\prime}_{l},w_{i-1},k],\\ &\widetilde{V}[k,\sigma^{\prime\prime}_{u}\sigma^{\prime\prime}_{l}w_{i}]\rightarrow\widetilde{V}[k,\sigma^{\prime\prime}_{u},\sigma^{\prime\prime}_{l},w_{i}]\rightarrow\widetilde{V}[\sigma^{\prime\prime}_{u},\sigma^{\prime\prime}_{l},k,w_{i}],\end{split} \tag{18}\]
where \(\widetilde{U}\) and \(\widetilde{V}\) represent the MPO in \(|\sigma^{\prime}\rangle\) and \(|\sigma^{\prime\prime}\rangle\) basis, respectively. In the new basis the MPO of the split sites can be then expressed as
\[W=\widetilde{U}\widetilde{V}\,. \tag{19}\]
A schematic figure summarizing the above steps is illustrated in Fig. 2.
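The reshaping and SVD steps of this Appendix can be sketched in a few lines of NumPy. The snippet below (our illustration) applies them to the bulk matrix \(W[1<n<L]\) of Eq. (9), with \(d_{b}\) and the couplings chosen as toy stand-ins, and checks the reconstruction \(W=\widetilde{U}\widetilde{V}\).

```python
import numpy as np

# MPO splitting applied to the bulk spin-boson W matrix; d_b, wn, tn are
# illustrative stand-ins, not production parameters.

d_b, Xw = 16, 5
d = int(np.sqrt(d_b))
wn, tn = 0.7, 0.3                      # stand-ins for omega_{n-1}, t_{n-1}

b = np.diag(np.sqrt(np.arange(1, d_b)), 1)
nb = b.T @ b
Ib, Z = np.eye(d_b), np.zeros((d_b, d_b))

# four-index array W[sigma_u, sigma_l, w_{i-1}, w_i]
W = np.zeros((d_b, d_b, Xw, Xw))
for j, op in enumerate([Ib, Z, b.T, b, wn * nb]):
    W[:, :, 0, j] = op                 # first MPO row
W[:, :, 2, 4] = tn * b                 # complete b^dagger from the left
W[:, :, 3, 4] = tn * b.T               # complete b from the left
W[:, :, 4, 4] = Ib

# split the physical indices and regroup into a matrix
W6 = W.reshape(d, d, d, d, Xw, Xw)     # (s'_u, s''_u, s'_l, s''_l, w_{i-1}, w_i)
Wm = W6.transpose(4, 0, 2, 1, 3, 5).reshape(Xw * d * d, d * d * Xw)

# SVD; keep only the numerically nonzero singular values
U, lam, V = np.linalg.svd(Wm, full_matrices=False)
keff = int(np.sum(lam > 1e-12 * lam[0]))
print(keff, "out of a maximal", Xw * d_b)

Ut = U[:, :keff] * np.sqrt(lam[:keff])             # U sqrt(Lambda)
Vt = np.sqrt(lam[:keff])[:, None] * V[:keff, :]    # sqrt(Lambda) V
assert np.allclose(Ut @ Vt, Wm)                    # W = U~ V~ recovered
```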
|
2308.09794 | Multiplicity of positive solutions for mixed local-nonlocal singular
critical problems | We prove the existence of at least two positive weak solutions for mixed
local-nonlocal singular and critical semilinear elliptic problems in the spirit
of [Haitao, 2003], extending the recent results in [Garain, 2023] concerning
singular problems and, at the same time, the results in [Biagi, Dipierro,
Valdinoci, Vecchi, 2022] regarding critical problems. | Stefano Biagi, Eugenio Vecchi | 2023-08-18T19:49:37Z | http://arxiv.org/abs/2308.09794v2 | # Multiplicity of positive solutions for mixed local-nonlocal singular critical problems
###### Abstract.
We prove the existence of at least two positive weak solutions for mixed local-nonlocal singular and critical semilinear elliptic problems in the spirit of [38], extending the recent results in [32] concerning singular problems and, at the same time, the results in [8] regarding critical problems.
Key words and phrases: Mixed local-nonlocal operators, singular PDEs, critical problems.

2020 Mathematics Subject Classification: 35J75, 35M12, 35B33.

E. Vecchi is a member of the _Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni_ (GNAMPA) of the _Istituto Nazionale di Alta Matematica_ (INdAM) and is partially funded by the INdAM-GNAMPA project "Variational and non-variational problems with lack of compactness".
the moving plane method has been settled in [20] and variational methods may be used, see e.g. [25, 39, 38]. Since we will deal with mixed local-nonlocal problems, we also want to mention a few purely nonlocal results dealing with singular problems, see [5, 21, 22], which concern existence and qualitative properties of solutions.
As already mentioned, the main character involved in this paper is the simplest linear mixed local-nonlocal operator,
\[\mathcal{L}:=-\Delta+(-\Delta)^{s},\quad s\in(0,1),\]
see Section 2. The operator \(\mathcal{L}\) written above is just a special instance of a wide more general class of operators whose study began in the '60s, see [13] and [19] for generalizations, in connection with the validity of a maximum principle. On the other hand, the operator \(\mathcal{L}\) can be seen as the infinitesimal generator of a stochastic process having both a Brownian motion and a Levy flight, and hence there is a vast literature which establishes several regularity properties _adopting probabilistic techniques_, see e.g. [23] and the references therein.
More recently, the study of regularity properties related to this operator (and its quasilinear generalizations) has seen an increasing interest, mainly adopting more analytical and PDEs approaches, see, e.g., [1, 6, 10, 18, 26, 29, 34, 35, 45]. It is worth mentioning that the operator \(\mathcal{L}\) seems to be of interest in biological applications, see, e.g., [30] and the references therein.
The second actor, appearing in the right hand side of (P), is given by the critical perturbation of a mild singular term, namely
\[\frac{\lambda}{u^{\gamma}}+u^{2^{*}-1},\]
where \(\lambda>0,\,\gamma\in(0,1)\) (hence the term _mild_) and \(2^{*}:=\frac{2n}{n-2}\) (for \(n\geq 3\)) is the critical Sobolev exponent. The problem we are interested in is by now classical and can be formulated as follows:
_Find values of the parameter \(\lambda\) for which (P) admits one or more positive weak solutions._
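To fix ideas, in the lowest admissible dimension the exponents appearing in (P)\({}_{\lambda}\) can be computed explicitly:

\[n=3:\qquad 2^{*}=\frac{2n}{n-2}=6,\qquad 2^{*}-1=5,\]

so that in this case the right hand side reads \(\lambda u^{-\gamma}+u^{5}\), coupling a term which blows up as \(u\to 0^{+}\) with a nonlinearity exactly at the threshold of the Sobolev embedding.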
Before entering into the details of our result, let us first briefly review the existing literature dealing with mixed local-nonlocal singular problems.
* In [2], the authors prove existence and regularity results of positive solutions of the following _linear and purely singular_ problem (1.1) \[\begin{cases}-\Delta u+(-\Delta)^{s}u=\frac{f(x)}{u^{\gamma}}&\text{in } \Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases}\] where \(\gamma>0\) is a fixed parameter.
* In [36], the authors prove existence of positive solutions of the _quasilinear counterpart_ of the singular problem (1.1), that is, \[\begin{cases}-\Delta_{p}u+(-\Delta_{p})^{s}u=\frac{f(x)}{u^{\gamma}}&\text{in } \Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases}\] in connection with some mixed local-nonlocal Sobolev inequalities. Regularity under different summability assumptions on \(f\) and symmetry results are proved as well.
* In [32], Garain proved the existence of at least two positive solutions for a _subcritical_ perturbation of problem (1.1), namely \[\begin{cases}-\Delta_{p}u+(-\Delta_{p})^{s}u=\frac{\lambda}{u^{\gamma}}+u^{p}& \text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases}\] where \(\lambda>0,\,\gamma\in(0,1)\) and \(p\in(1,2^{*}-1)\). The case of nonlinearities of the form \(g(x,u)=\lambda h(u)u^{-\gamma}\) is also considered.
* In [40], the authors prove several summability results and the existence of a unique positive solution for a singular problem with sublinear perturbation.
* In [11], Biroud extended some of the results in [36] to the case of variable exponent \(\gamma(x)\).
* In [33], the authors proved some existence results for a mixed anisotropic operator with variable singular exponent.
Keeping the previously mentioned results in mind, we are now ready to state the main result of this paper. In what follows, we refer to Definition 2.3 for the precise definition of _weak solution_ of (P)\({}_{\lambda}\).
**Theorem 1.1**.: _Let \(\Omega\subset\mathbb{R}^{n}\)\((\)with \(n\geq 3)\) be a bounded open set with smooth enough boundary, and let \(\gamma\in(0,1)\). Then, there exists \(\Lambda>0\) such that_
* _problem_ (P)\({}_{\lambda}\) _admits at least two weak solutions for every_ \(0<\lambda<\Lambda\)_;_
* _problem_ (P)\({}_{\Lambda}\) _admits at least one weak solution;_
* _problem_ (P)\({}_{\lambda}\) _does not admit weak solutions for every_ \(\lambda>\Lambda\)_._
The above theorem falls in the framework of classical results of Haitao [38] and Hirano, Saccon and Shoji [39], where the authors considered critical perturbations of mild singular terms using sub and supersolution methods ([38]) or a Nehari manifold approach ([39]). We also note that Theorem 1.1 extends the results in [32].
Let us now spend a few words on the proof of Theorem 1.1. First of all, we decided to follow the approach of Haitao [38]. The argument runs as follows.
* The existence of a first positive solution is obtained by means of a sub/supersolution variational scheme, which extends to the critical case a result in [32]. The role of the subsolution is naturally played by the solution of the purely singular problem (1.1), while a supersolution can be found exploiting the monotonicity of (P)\({}_{\lambda}\) with respect to the parameter, see Lemma 4.6.
* In Lemma 4.7, the first solution found is proved to be a local minimizer in the \(\mathcal{X}^{1,2}(\Omega)\)-topology for the functional \[\frac{1}{2}\rho(u)^{2}-\frac{\lambda}{1-\gamma}\int_{\Omega}|u|^{1-\gamma}\, dx-\frac{1}{2^{*}}\int_{\Omega}|u|^{2^{*}}\,dx,\] see Section 2 for the relevant definitions. In the purely local case [38], this is proved using a quantitative strong maximum principle established in [17, Theorem 3]. Since we do not have such a result at our disposal, we exploit a weak Harnack-type inequality for weak supersolutions of mixed operators with a zero order term, see Section 3. In doing this, we heavily rely on the recent regularity results obtained by Garain and Kinnunen [34].
* The second solution is found in Proposition 4.8 and Proposition 4.9 by means of the Ekeland variational principle, and in this last step we have to perturb the first solution with a suitable family of functions built upon the Aubin-Talenti functions. This choice is pretty natural in the purely local case and it is replaced by the nonlocal Talenti functions in the purely nonlocal case [37]. Thanks to the recent results in [8], in the mixed local-nonlocal case the situation is closer to the purely local case. Indeed, in [8, Theorem 1.1 and Theorem 1.2] it has been proved that the best constant in the natural mixed Sobolev quotient is never achieved and that it coincides with the purely local one, therefore suggesting that the classical Aubin-Talenti functions are the good candidates to play with in critical problems.
The paper is organized as follows: in Section 2 we collect all the relevant notation, definitions and preliminary results; in Section 3 we prove a weak Harnack-type inequality for weak supersolutions of \(\mathcal{L}+a\), which we believe to be of independent interest; finally, in Section 4 we prove Theorem 1.1 following the scheme previously described.
## 2. Preliminaries
In this section we collect some preliminary definitions and results which will be used in the rest of the paper. First of all, we review some basic properties of the fractional Laplace operator \((-\Delta)^{s}\); we then properly introduce the adequate functional setting for the study of problem \((\mathbb{P})_{\lambda}\), and we give the precise definition of _weak sub/supersolution_ of this problem.
**i) The fractional Laplacian.** Let \(s\in(0,1)\) be fixed, and let \(u:\mathbb{R}^{n}\to\mathbb{R}\) be a measurable function. The _fractional Laplacian_ (of order \(s\)) of \(u\) at a point \(x\in\mathbb{R}^{n}\) is defined (up to a _normalization constant_\(C_{n,s}>0\) that we neglect) as
\[(-\Delta)^{s}u(x) =2\,\mathrm{P.V.}\int_{\mathbb{R}^{n}}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy\] \[=2\,\lim_{\varepsilon\to 0^{+}}\int_{\{|x-y|\geq\varepsilon\}}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,\]
provided that the above limit _exists and is finite_.
As is reasonable to expect, for \((-\Delta)^{s}u(x)\) to be well-defined one needs to impose some _growth conditions_ on \(u\), both when \(|y|\to+\infty\) and when \(y\to x\); for instance, if \(\mathcal{O}\subseteq\mathbb{R}^{n}\) is an arbitrary open set and if \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap C^{2}(\mathcal{O})\), one has
\[\exists\ (-\Delta)^{s}u(x)=-\int_{\mathbb{R}^{n}}\frac{u(x+z)+u(x-z)-2u(x)}{|z|^{n+2s}}\,dz\quad\text{for all $x\in\mathcal{O}$,}\]
and the map \(\mathcal{O}\ni x\mapsto(-\Delta)^{s}u(x)\) is continuous on \(\mathcal{O}\). Here, \(\mathcal{L}^{s}(\mathbb{R}^{n})\) is the so-called _\(s\)-tail space_ (see also Section 3), and is defined as follows:
\[\mathcal{L}^{s}(\mathbb{R}^{n})=\Big{\{}u:\mathbb{R}^{n}\to\mathbb{R}:\,\|u\|_ {1,s}=\int_{\mathbb{R}^{n}}\frac{|u(x)|}{1+|x|^{n+2s}}\,dx<+\infty\Big{\}}.\]
However, since we are interested in _weak solutions_ of problem \((\mathbb{P})_{\lambda}\), we _do not review_ the _classical/pointwise theory_ for the operator \((-\Delta)^{s}\); rather, we recall some basic facts concerning the _Weak Theory_ of this operator.
To begin with we recall that, if \(\mathcal{O}\subseteq\mathbb{R}^{n}\) is an arbitrary open set, the fractional Laplacian \((-\Delta)^{s}\) is related (essentially via the Euler-Lagrange equation) to the _fractional Sobolev space_\(H^{s}(\mathcal{O})\), which is defined as follows:
\[H^{s}(\mathcal{O}):=\Big{\{}u\in L^{2}(\mathcal{O}):\,[u]_{s,\mathcal{O}}^{2}= \iint_{\mathcal{O}\times\mathcal{O}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{n+2s}}\,dx\, dy<\infty\Big{\}}.\]
While we refer to [42] for a thorough introduction on fractional Sobolev spaces, here we list the few basic properties of \(H^{s}(\mathcal{O})\) we will exploit in this paper.
* \(H^{s}(\mathcal{O})\) is a real Hilbert space, with the scalar product \[\langle u,v\rangle_{s,\,\mathcal{O}}:=\iint_{\mathcal{O}\times\mathcal{O}} \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy\qquad(u,v\in H^{s}(\mathcal{ O})).\]
* \(C_{0}^{\infty}(\mathcal{O})\) is a _linear subspace of_\(H^{s}(\mathcal{O})\); in addition, in the particular case when \(\mathcal{O}=\mathbb{R}^{n}\), we have that \(C_{0}^{\infty}(\mathbb{R}^{n})\) is _dense_ in \(H^{s}(\mathbb{R}^{n})\).
* If \(\mathcal{O}=\mathbb{R}^{n}\) or if \(\mathcal{O}\) has _bounded boundary_\(\partial\mathcal{O}\in C^{0,1}\), we have the _continuous embedding_\(H^{1}(\mathcal{O})\hookrightarrow H^{s}(\mathcal{O})\), that is, there exists \(\mathbf{c}=\mathbf{c}(n,s)>0\) s.t. (2.1) \[\iint_{\mathcal{O}\times\mathcal{O}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{n+2s}}\,dx \,dy\leq\mathbf{c}\,\|u\|_{H^{1}(\mathcal{O})}^{2}\quad\text{for every $u\in H^{1}(\mathcal{O})$.}\] In particular, if \(\mathcal{O}\subseteq\mathbb{R}^{n}\) is a _bounded open set_ (with no regularity assumptions on \(\partial\mathcal{O}\)) and if \(u\in H^{1}_{0}(\mathcal{O})\), setting \(\hat{u}=u\cdot\mathbf{1}_{\mathcal{O}}\in H^{1}(\mathbb{R}^{n})\) we have (2.2) \[\iint_{\mathbb{R}^{2n}}\frac{|\hat{u}(x)-\hat{u}(y)|^{2}}{|x-y|^{n+2s}}\,dx\, dy\leq\beta\,\int_{\mathcal{O}}|\nabla u|^{2}\,dx,\] where \(\beta>0\) is a suitable constant depending on \(n,s\) and on \(|\mathcal{O}|\). Here and throughout, \(|\cdot|\) denotes the \(n\)-dimensional Lebesgue measure.
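For the reader's convenience, we sketch the standard argument behind (2.2), splitting the Gagliardo seminorm near and far from the diagonal (here \(C_{P}>0\) denotes the Poincaré constant of \(\mathcal{O}\)). Far from the diagonal, since \(|\hat{u}(x)-\hat{u}(y)|^{2}\leq 2(\hat{u}(x)^{2}+\hat{u}(y)^{2})\),

\[\iint_{\{|x-y|\geq 1\}}\frac{|\hat{u}(x)-\hat{u}(y)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\leq 4\,\|\hat{u}\|_{L^{2}(\mathbb{R}^{n})}^{2}\int_{\{|z|\geq 1\}}\frac{dz}{|z|^{n+2s}}\leq c_{1}\,C_{P}^{2}\int_{\mathcal{O}}|\nabla u|^{2}\,dx;\]

near the diagonal, writing \(\hat{u}(x+z)-\hat{u}(x)\) as the integral of \(\nabla\hat{u}\) along the segment \([x,x+z]\) (first for smooth \(u\), then by density),

\[\iint_{\{|x-y|<1\}}\frac{|\hat{u}(x)-\hat{u}(y)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\leq\int_{\{|z|<1\}}\frac{dz}{|z|^{n+2s-2}}\,\int_{\mathbb{R}^{n}}|\nabla\hat{u}|^{2}\,dx=c_{2}\int_{\mathcal{O}}|\nabla u|^{2}\,dx,\]

and both \(c_{1},c_{2}\) are finite precisely because \(0<s<1\).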
The relation between the space \(H^{s}(\mathcal{O})\) and the fractional Laplacian \((-\Delta)^{s}\) is rooted in the following _fractional integration-by-parts formula_: let \(\mathcal{O},\mathcal{V}\subseteq\mathbb{R}^{n}\) be open sets (bounded or not) such that \(\mathcal{O}\Subset\mathcal{V}\), and let \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap C^{2}(\mathcal{V})\subseteq H^{s}( \mathcal{O})\) (see the above (2.1)); given any \(\varphi\in C_{0}^{\infty}(\mathcal{O})\), we have
\[\int_{\mathcal{O}}(-\Delta)^{s}u\,\varphi\,dx =\iint_{\mathbb{R}^{2n}}\frac{(u(x)-u(y))(\varphi(x)-\varphi(y)) }{|x-y|^{n+2s}}\,dx\,dy\] \[=\iint_{\mathcal{O}\times\mathcal{O}}\frac{(u(x)-u(y))(\varphi(x )-\varphi(y))}{|x-y|^{n+2s}}\,dx\,dy\] \[\qquad-2\,\iint_{(\mathbb{R}^{n}\setminus\mathcal{O})\times \mathcal{O}}\frac{(u(x)-u(y))\varphi(y)}{|x-y|^{n+2s}}\,dx\,dy\]
(notice that, since \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap C^{2}(\mathcal{V})\), we have \((-\Delta)^{s}u\in C(\mathcal{V})\)).
As a consequence of the above formula, it is then natural to define the _fractional Laplacian_\((-\Delta)^{s}u\) of a function \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap H^{s}(\mathcal{O})\) as the _linear functional_ acting on the space \(C_{0}^{\infty}(\mathcal{O})\) as follows
\[(-\Delta)^{s}u(\varphi)=\iint_{\mathbb{R}^{2n}}\frac{(u(x)-u(y))(\varphi(x)- \varphi(y))}{|x-y|^{n+2s}}\,dx\,dy.\]
Since we are assuming that \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap H^{s}(\mathcal{O})\), and since the kernel \(|z|^{-n-2s}\) is integrable at infinity, it is easy to recognize that the functional \((-\Delta)^{s}u\) is actually
a _distribution on \(\mathcal{O}\)_; more precisely, for every compact set \(K\subseteq\mathcal{O}\) there exists a constant \(C>0\) such that
\[\iint_{\mathbb{R}^{2n}}\frac{|u(x)-u(y)||\varphi(x)- \varphi(y)|}{|x-y|^{n+2s}}\,dx\,dy\leq C\big{(}1+d(K,\mathbb{R}^{n}\setminus \mathcal{O})^{-2s}\big{)}\|\varphi\|_{H^{s}(\mathcal{O})}\] \[\qquad\qquad\qquad\qquad\text{for every $\varphi\in C_{0}^{ \infty}(\mathcal{O})$ with $\operatorname{supp}(\varphi)\subseteq K$} \tag{2.3}\]
(here, the constant \(C\) depends on \(n,s\) and on \(u\)), and this is enough to guarantee that \((-\Delta)^{s}u\in\mathcal{D}^{\prime}(\mathcal{O})\) (as \(\|\varphi\|_{H^{s}(\mathcal{O})}\leq c\,\|\varphi\|_{C^{1}(K)}\) for some absolute constant \(c>0\)). In the particular case when \(u\in H^{s}(\mathbb{R}^{n})\subseteq\mathcal{L}^{s}(\mathbb{R}^{n})\), the above (2.3) shows that the distribution \((-\Delta)^{s}u\) can be continuously extended to \(H^{s}(\mathbb{R}^{n})\) (recall that \(C_{0}^{\infty}(\mathbb{R}^{n})\) is _dense_ in \(H^{s}(\mathbb{R}^{n})\)), and thus
\[(-\Delta)^{s}u\in(H^{s}(\mathbb{R}^{n}))^{\prime}.\]
In this case, we also have
\[\frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\Big{|}_{t=0}[u+tv]_{s,\mathbb{R}^{n}}^{2}=(- \Delta)^{s}u(v)\quad\text{for all $v\in H^{s}(\mathbb{R}^{n})$.}\]
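This identity is obtained by simply expanding the square in the Gagliardo seminorm and using the weak form of \((-\Delta)^{s}\) introduced above:

\[[u+tv]_{s,\mathbb{R}^{n}}^{2}=[u]_{s,\mathbb{R}^{n}}^{2}+2t\,\langle u,v\rangle_{s,\mathbb{R}^{n}}+t^{2}[v]_{s,\mathbb{R}^{n}}^{2},\qquad\langle u,v\rangle_{s,\mathbb{R}^{n}}=(-\Delta)^{s}u(v),\]

and then differentiating the resulting polynomial in \(t\) at \(t=0\).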
**ii) The functional setting for problem \((\mathrm{P})_{\lambda}\).** Now that we have briefly recalled some basic facts from the Weak Theory for \((-\Delta)^{s}\), we are in a position to introduce the functional setting for the study of problem \((\mathrm{P})_{\lambda}\).
Let then \(\Omega\subseteq\mathbb{R}^{n}\) be a bounded open set with Lipschitz boundary \(\partial\Omega\). We define the space \(\mathcal{X}^{1,2}(\Omega)\) as the completion of \(C_{0}^{\infty}(\Omega)\) with respect to the _global norm_
\[\rho(u):=\left(\||\nabla u|\|_{L^{2}(\mathbb{R}^{n})}^{2}+[u]_{s,\mathbb{R}^{ n}}^{2}\right)^{1/2},\qquad u\in C_{0}^{\infty}(\Omega).\]
Due to its relevance in the sequel, we also introduce a distinguished notation for the _cone of the non-negative functions_ in \(\mathcal{X}^{1,2}(\Omega)\): we set
\[\mathcal{X}^{1,2}_{+}(\Omega):=\{u\in\mathcal{X}^{1,2}(\Omega):\,u\geq 0\text{ a.e.\ in }\Omega\}.\]
**Remark 2.1**.: Even if the function \(u\in C_{0}^{\infty}(\Omega)\)_identically vanishes outside \(\Omega\)_, it is often still convenient to consider in the definition of \(\rho\) the \(L^{2}\)-norm of \(|\nabla u|\)_on the whole of \(\mathbb{R}^{n}\)_, rather than restricted to \(\Omega\) (though of course the result would be the same): this is to stress that the elements in \(\mathcal{X}^{1,2}(\Omega)\) are functions defined _on the entire space \(\mathbb{R}^{n}\)_ and not only on \(\Omega\) (and this is consistent with the nonlocal nature of the operator \(\mathcal{L}\)). The benefit of having this global functional setting is that these functions can be _globally approximated on \(\mathbb{R}^{n}\)_ (with respect to the norm \(\rho\)) by smooth functions with compact support in \(\Omega\).
Since this norm \(\rho\) is induced by the scalar product
\[\mathcal{B}_{\rho}(u,v):=\int_{\mathbb{R}^{n}}\nabla u\cdot\nabla v\,dx+ \langle u,v\rangle_{s,\mathbb{R}^{n}}\]
(where \(\cdot\) denotes the usual scalar product in \(\mathbb{R}^{n}\)), the space \(\mathcal{X}^{1,2}(\Omega)\) is a real _Hilbert space_; most importantly, since \(\Omega\) is bounded and \(\partial\Omega\) is Lipschitz, by combining the above (2.1) with the classical Poincare inequality we infer that
\[\vartheta^{-1}\|u\|_{H^{1}(\mathbb{R}^{n})}\leq\rho(u)\leq\vartheta\|u\|_{H^{1 }(\mathbb{R}^{n})}\qquad\text{for every $u\in C_{0}^{\infty}(\Omega)$,}\]
where \(\vartheta>1\) is a suitable constant depending on \(n,s\) and on \(|\Omega|\). Thus, \(\rho(\cdot)\) and the full \(H^{1}\)-norm in \(\mathbb{R}^{n}\) are _actually equivalent_ on the space \(C^{\infty}_{0}(\Omega)\), so that
\[\begin{split}\mathcal{X}^{1,2}(\Omega)&=\overline{C^{ \infty}_{0}(\Omega)}^{\,\|\cdot\|_{H^{1}(\mathbb{R}^{n})}}\\ &=\{u\in H^{1}(\mathbb{R}^{n}):\,u|_{\Omega}\in H^{1}_{0}(\Omega) \text{ and }u\equiv 0\text{ a.e.\,in }\mathbb{R}^{n}\setminus\Omega\}.\end{split} \tag{2.4}\]
We explicitly observe that, on account of (2.4), the functions in \(\mathcal{X}^{1,2}(\Omega)\) naturally satisfy the nonlocal Dirichlet condition prescribed in problem \((\mathbb{P})_{\lambda}\), that is,
\[u\equiv 0\text{ a.e.\,in }\mathbb{R}^{n}\setminus\Omega\text{ for every }u\in\mathcal{X}^{1,2}(\Omega).\]
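The two-sided bound between \(\rho\) and the full \(H^{1}\)-norm asserted above can be sketched in two lines (here \(C_{P}>0\) denotes the Poincaré constant of \(\Omega\)): on the one hand, by (2.1) applied with \(\mathcal{O}=\mathbb{R}^{n}\),

\[\rho(u)^{2}=\||\nabla u|\|_{L^{2}(\mathbb{R}^{n})}^{2}+[u]_{s,\mathbb{R}^{n}}^{2}\leq(1+\mathbf{c})\,\|u\|_{H^{1}(\mathbb{R}^{n})}^{2};\]

on the other hand, since \(u\in C_{0}^{\infty}(\Omega)\),

\[\|u\|_{H^{1}(\mathbb{R}^{n})}^{2}=\|u\|_{L^{2}(\Omega)}^{2}+\||\nabla u|\|_{L^{2}(\Omega)}^{2}\leq(1+C_{P}^{2})\,\||\nabla u|\|_{L^{2}(\mathbb{R}^{n})}^{2}\leq(1+C_{P}^{2})\,\rho(u)^{2},\]

so that one may take \(\vartheta=\max\{(1+\mathbf{c})^{1/2},(1+C_{P}^{2})^{1/2}\}\).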
**Remark 2.2** (Properties of the space \(\mathcal{X}^{1,2}(\Omega)\)).: For future reference, we list in this remark some properties of the function space \(\mathcal{X}^{1,2}(\Omega)\) which will be repeatedly exploited in the rest of the paper.
1. Since both \(H^{1}(\mathbb{R}^{n})\) and \(H^{1}_{0}(\Omega)\) are _closed_ under the maximum/minimum operation, it is readily seen that \[u_{\pm}\in\mathcal{X}^{1,2}(\Omega)\quad\text{for every }u\in\mathcal{X}^{1,2}(\Omega),\] where \(u_{+}=\max\{u,0\}\) and \(u_{-}=\max\{-u,0\}\).
2. On account of (2.2), for every \(u\in\mathcal{X}^{1,2}(\Omega)\) we have (2.5) \[[u]^{2}_{s,\mathbb{R}^{n}}=\iint_{\mathbb{R}^{2n}}\frac{|u(x)-u(y)|^{2}}{|x-y |^{n+2s}}\,dx\,dy\leq\beta\int_{\Omega}|\nabla u|^{2}\,dx.\] As a consequence, the norm \(\rho\) is _globally equivalent_ on \(\mathcal{X}^{1,2}(\Omega)\) to the \(H^{1}_{0}\)-norm: in fact, by (2.5) there exists a constant \(\Theta>0\) such that (2.6) \[\||\nabla u|\|_{L^{2}(\Omega)}\leq\rho(u)\leq\Theta\||\nabla u|\|_{L^{2}( \Omega)}\quad\text{for every }u\in\mathcal{X}^{1,2}(\Omega).\]
3. By the (local) Sobolev inequality, for every \(u\in\mathcal{X}^{1,2}(\Omega)\) we have \[S_{n}\|u\|^{2}_{L^{2^{*}}(\Omega)}=\|u\|^{2}_{L^{2^{*}}(\mathbb{R}^{n})}\leq \int_{\mathbb{R}^{n}}|\nabla u|^{2}\,dx\leq\rho(u)^{2}.\] This, together with Holder's inequality (recall that \(\Omega\) is _bounded_), proves the _continuous embedding_\(\mathcal{X}^{1,2}(\Omega)\hookrightarrow L^{p}(\Omega)\) for every \(1\leq p\leq 2^{*}\).
4. By combining (2.6) with the _compact embedding_\(H^{1}_{0}(\Omega)\hookrightarrow L^{p}(\Omega)\) (holding true for every \(1\leq p<2^{*}\)), we derive that also the embedding \[\mathcal{X}^{1,2}(\Omega)\hookrightarrow L^{p}(\Omega)\text{ is compact for every }1\leq p<2^{*}.\] As a consequence, if \(\{u_{k}\}_{k}\) is a bounded sequence in \(\mathcal{X}^{1,2}(\Omega)\), it is possible to find a (unique) function \(u\in\mathcal{X}^{1,2}(\Omega)\) such that (up to a sub-sequence) a) \(u_{k}\to u\) weakly in \(\mathcal{X}^{1,2}(\Omega)\); b) \(u_{k}\to u\)_strongly_ in \(L^{p}(\Omega)\) for every \(1\leq p<2^{*}\); c) \(u_{k}\to u\) pointwise a.e.\(\,\)in \(\Omega\). Clearly, since both \(u_{k}\) (for all \(k\geq 1\)) and \(u\)_identically vanish_ outside \(\Omega\), see (2.4), we can replace \(\Omega\) with \(\mathbb{R}^{n}\) in the above assertions b)-c).
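For completeness, the Hölder step used in point 3 above amounts to the elementary interpolation (valid since \(\Omega\) is bounded): for every \(1\leq p\leq 2^{*}\),

\[\|u\|_{L^{p}(\Omega)}\leq\Big(\int_{\Omega}|u|^{2^{*}}\,dx\Big)^{\frac{1}{2^{*}}}\,|\Omega|^{\frac{1}{p}-\frac{1}{2^{*}}}=|\Omega|^{\frac{1}{p}-\frac{1}{2^{*}}}\,\|u\|_{L^{2^{*}}(\Omega)}\leq S_{n}^{-1/2}\,|\Omega|^{\frac{1}{p}-\frac{1}{2^{*}}}\,\rho(u).\]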
We will exploit these properties without any further comment.
**Notation**.: In what follows, we adopt the following notation
1) Given any open set \(\mathcal{O}\subseteq\mathbb{R}^{n}\) (not necessarily bounded), we set
* \(\mathcal{B}_{\rho,\mathcal{O}}(u,v)=\int_{\mathcal{O}}\nabla u\cdot\nabla v\,dx+\langle u,v\rangle_{s,\mathbb{R}^{n}}\) (for \(u,v\in\mathcal{X}^{1,2}(\Omega)\));
* \(\mathcal{Q}_{\rho,\mathcal{O}}(u)=\mathcal{B}_{\rho,\mathcal{O}}(u,u)\) (for \(u\in\mathcal{X}^{1,2}(\Omega)\)).
Since \(\mathcal{X}^{1,2}(\Omega)\subseteq H^{1}(\mathbb{R}^{n})\) (see (2.4)), the above forms \(\mathcal{B}_{\rho,\mathcal{O}}\) and \(\mathcal{Q}_{\rho,\mathcal{O}}\) are well-defined; moreover, again by taking into account (2.4) we have
* \(\mathcal{B}_{\rho,\Omega}(u,v)=\mathcal{B}_{\rho,\mathbb{R}^{n}}(u,v)\equiv \mathcal{B}_{\rho}(u,v)\) for all \(u,v\in\mathcal{X}^{1,2}(\Omega)\);
* \(\mathcal{Q}_{\rho,\Omega}(u)=\mathcal{Q}_{\rho,\mathbb{R}^{n}}(u)\equiv\rho(u)^{2}\) for all \(u\in\mathcal{X}^{1,2}(\Omega)\).
2) Given any _bounded_ open set \(\mathcal{O}\subseteq\mathbb{R}^{n}\), we set
\[\|u\|_{H^{1}_{0}(\mathcal{O})}:=\||\nabla u|\|_{L^{2}(\mathcal{O})}=\Big{(} \int_{\mathcal{O}}|\nabla u|^{2}\,dx\Big{)}^{1/2}.\]
**iii) Weak sub/supersolutions of problem \((\mathbb{P})_{\lambda}\).** Thanks to all the preliminaries recalled so far, we are finally ready to give the precise definition of _weak sub/supersolutions_ of problem \((\mathbb{P})_{\lambda}\). Actually, for a reason which will be clear in Section 3, we consider the more general problem
\[\begin{cases}\mathcal{L}u=\frac{\lambda}{u^{\gamma}}+f(x,u)&\text{in }\Omega\\ u>0&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega\end{cases} \tag{2.7}\]
where \(f:\Omega\times(0,+\infty)\to\mathbb{R}\) is an arbitrary Caratheodory function satisfying the following growth condition: _there exists a constant \(K_{f}>0\) such that_
\[|f(x,t)|\leq K_{f}(1+t^{2^{*}-1})\quad\text{for a.e.\,}x\in\Omega\text{ and every }t>0. \tag{2.8}\]
Clearly, problem \((\mathbb{P})_{\lambda}\) is of the form (2.7), with the choice \(f(x,t)=t^{2^{*}-1}\).
**Definition 2.3**.: Let \(f:\Omega\times(0,+\infty)\to\mathbb{R}\) be a Caratheodory function satisfying the above condition (2.8). We say that a function \(u\in\mathcal{X}^{1,2}(\Omega)\) is a _weak subsolution_ (resp. _supersolution_) of problem (2.7) if it satisfies the following properties:
* \(u>0\) a.e. in \(\Omega\) and \(u^{-\gamma}\in L^{1}_{\mathrm{loc}}(\Omega)\);
* for every test function \(\varphi\in C^{\infty}_{0}(\Omega)\), \(\varphi\geq 0\) in \(\Omega\), we have (2.9) \[\mathcal{B}_{\rho}(u,\varphi)\leq\,[\text{resp. }\geq]\ \lambda\int_{\Omega}u^{-\gamma}\varphi\,dx+\int_{\Omega}f(x,u)\varphi\,dx.\]
We say that \(u\) is a _weak solution_ of problem (2.7) if it is both a weak subsolution and a weak supersolution of the same problem.
**Remark 2.4**.: We now list here below some comments concerning the above Definition 2.3. In what follows, we tacitly understand that \(f:\Omega\times(0,+\infty)\to\mathbb{R}\) is a Caratheodory function satisfying the growth assumption (2.8).
1) If \(u\in\mathcal{X}^{1,2}(\Omega)\) is a weak sub/supersolution of problem (2.7), all the integrals appearing in (2.9) _exist and are finite_. Indeed, by combining (2.5) with the classical Holder inequality, for every \(\varphi\in C^{\infty}_{0}(\Omega)\), \(\varphi\geq 0\) in \(\Omega\), we obtain
\[\begin{split}|\mathcal{B}_{\rho}(u,\varphi)|&\leq \|u\|_{H^{1}_{0}(\Omega)}\|\varphi\|_{H^{1}_{0}(\Omega)}+[u]_{s,\mathbb{R}^{n} }\cdot[\varphi]_{s,\mathbb{R}^{n}}\\ &\leq(1+\beta)\|u\|_{H^{1}_{0}(\Omega)}\|\varphi\|_{H^{1}_{0} (\Omega)}<+\infty.\end{split} \tag{2.10}\]
Moreover, using (2.8) and the Sobolev inequality, we also get
\[\begin{split}\int_{\Omega}|f(x,u)|\cdot|\varphi|&\leq K _{f}\Big{(}\|\varphi\|_{L^{1}(\Omega)}+\int_{\Omega}|u|^{2^{*}-1}|\varphi|\,dx \Big{)}\\ &\text{(using H\"{o}lder's inequality)}\\ &\leq K_{f}\big{(}\|\varphi\|_{L^{1}(\Omega)}+\|u\|_{L^{2^{*}}( \Omega)}^{2^{*}-1}\cdot\|\varphi\|_{L^{2^{*}}(\Omega)}\big{)}\\ &\leq K_{f}\big{(}\|\varphi\|_{L^{1}(\Omega)}+S_{n}^{-2^{*}/2}\|u \|_{H^{1}_{0}(\Omega)}^{2^{*}-1}\cdot\|\varphi\|_{H^{1}_{0}(\Omega)}\big{)}\\ &\text{(again by H\"{o}lder's and Poincar\'e's inequalities)}\\ &\leq C\|\varphi\|_{H^{1}_{0}(\Omega)}\big{(}1+\|u\|_{H^{1}_{0}( \Omega)}^{2^{*}-1}\big{)}<+\infty,\end{split} \tag{2.11}\]
where \(S_{n}>0\) is the best Sobolev constant in \(\mathbb{R}^{n}\), and \(C>0\) depends on \(n\), \(f\), \(|\Omega|\). Finally, since _we are assuming that \(u^{-\gamma}\in L^{1}_{\text{loc}}(\Omega)\)_, we obviously have
\[0\leq\int_{\Omega}u^{-\gamma}\varphi\,dx\leq\|\varphi\|_{L^{\infty}(\Omega)} \int_{\text{supp}(\varphi)}u^{-\gamma}\,dx<+\infty.\]
We explicitly stress that this last estimate (which is related with the _singular term_\(u^{-\gamma}\)) is the unique estimate involving the \(L^{\infty}\)-norm of the test function \(\varphi\); on the contrary, estimates (2.10)-(2.11) only involve the \(H^{1}_{0}\)_-norm_ of \(\varphi\).
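In the chain (2.11), the Hölder step relies on the conjugate exponent of \(2^{*}\); explicitly, since \((2^{*})^{\prime}=\frac{2^{*}}{2^{*}-1}\),

\[\int_{\Omega}|u|^{2^{*}-1}|\varphi|\,dx\leq\Big(\int_{\Omega}|u|^{2^{*}}\,dx\Big)^{\frac{2^{*}-1}{2^{*}}}\Big(\int_{\Omega}|\varphi|^{2^{*}}\,dx\Big)^{\frac{1}{2^{*}}}=\|u\|_{L^{2^{*}}(\Omega)}^{2^{*}-1}\,\|\varphi\|_{L^{2^{*}}(\Omega)}.\]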
2) If \(u\in\mathcal{X}^{1,2}(\Omega)\) is a weak _solution_ of problem (2.7), and if \(\varphi\in C^{\infty}_{0}(\Omega)\) is a non-negative test function (that is, \(\varphi\geq 0\) in \(\Omega\)), by (2.10)-(2.11) we have
\[\begin{split} 0\leq\int_{\Omega}u^{-\gamma}\varphi\,dx&= \frac{1}{\lambda}\Big{(}\mathcal{B}_{\rho}(u,\varphi)+\int_{\Omega}f(x,u) \varphi\,dx\Big{)}\\ &\leq C\|\varphi\|_{H^{1}_{0}(\Omega)}\big{(}1+\|u\|_{H^{1}_{0}( \Omega)}^{2^{*}-1}\big{)};\end{split}\]
from this, by using a standard _density argument_, and by taking into account Remark 2.2 -1), we can easily prove the following facts:
* \(u^{-\gamma}\varphi\in L^{1}(\Omega)\)_for every_\(\varphi\in\mathcal{X}^{1,2}(\Omega)\);
* identity (2.9) actually holds _for every_\(\varphi\in\mathcal{X}^{1,2}(\Omega)\), see [36].
3) Let \(u\in\mathcal{X}^{1,2}(\Omega)\) be a weak _solution_ of problem (2.7). Since, in particular, we know that \(u>0\) a.e. in \(\Omega\) (and \(u\equiv 0\) a.e. in \(\mathbb{R}^{n}\setminus\Omega\)), it is quite easy to recognize that \(u\) is a _weak supersolution of the equation_
\[\mathcal{L}u=0\quad\text{in }\Omega,\]
in the sense of [34, Definition 2.5] (see also [34, Remark 2.6] and note that, by (2.1) and the definition of \(\mathcal{X}^{1,2}(\Omega)\), we have \(u\in H^{1}(\mathbb{R}^{n})\subseteq H^{s}(\mathbb{R}^{n})\)). As a consequence, we are entitled to apply [34, Lemma 8.1], ensuring that
for every \(\mathcal{O}\Subset\Omega\) there exists \(c(\mathcal{O})>0\) such that \(u\geq c(\mathcal{O})>0\) a.e. in \(\mathcal{O}\).
## 3. An auxiliary result: a weak Harnack-type inequality for weak supersolutions of \(\mathcal{L}u+a(x)u=0\)
Before embarking on the proof of Theorem 1.1, we establish in this section a weak Harnack-type inequality for weak supersolutions of the equation

\[\mathcal{L}u+a(x)u=0\quad\text{in }\Omega, \tag{3.1}\]
where \(a\in L^{\infty}(\Omega)\), \(a\geq 0\) a.e. in \(\Omega\). As already discussed in the introduction, such a result turns out to be a proper substitute of [17, Theorem 3], and it will be used as a _key tool_ in the proof of Theorem 1.1-b).
To begin with, since equation (3.1) _is not_ a particular case of the one appearing in problem (2.7), we give the following definition.
**Definition 3.1**.: Let \(a\in L^{\infty}(\Omega)\). We say that a function \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap H^{1}_{\mathrm{loc}}(\Omega)\) is a _weak subsolution_ (resp. _supersolution_) of equation (3.1) if
\[\mathcal{B}_{\rho,\mathcal{O}}(u,\varphi)+\int_{\mathcal{O}}a(x)u\varphi\,dx \leq\,[\text{resp. }\geq]\ 0 \tag{3.2}\]
for every \(\mathcal{O}\Subset\Omega\) and every \(\varphi\in\mathcal{X}^{1,2}_{+}(\mathcal{O})\).
**Remark 3.2**.: By combining (2.1)-(2.2) with (2.3), it is easy to recognize that the above Definition 3.1 _is well-posed_. In fact, let \(\mathcal{V}\) be a bounded open set _with smooth boundary_ and such that \(\mathcal{O}\Subset\mathcal{V}\Subset\Omega\). Since \(u\in H^{1}(\mathcal{V})\) (recall that \(u\in H^{1}_{\mathrm{loc}}(\Omega)\) and that \(\mathcal{V}\Subset\Omega\)), from (2.1) we deduce that \(u\in H^{s}(\mathcal{V})\); as a consequence, we are then entitled to apply (2.3) with the choice \(K=\overline{\mathcal{O}}\Subset\mathcal{V}\): this gives
\[\iint_{\mathbb{R}^{2n}}\frac{|u(x)-u(y)||\varphi(x)-\varphi(y)|}{|x-y|^{n+2s} }\,dx\,dy\leq C\big{(}1+d(\overline{\mathcal{O}},\mathbb{R}^{n}\setminus \mathcal{V})^{-2s}\big{)}\|\varphi\|_{H^{s}(\mathcal{V})}\]
and this estimate holds for every \(\varphi\in C^{\infty}_{0}(\mathcal{O})\) (here, \(C>0\) is a constant depending on \(n,s\) and on \(u\)). Now, by combining this last estimate with (2.2), we get
\[\iint_{\mathbb{R}^{2n}}\frac{|u(x)-u(y)||\varphi(x)-\varphi(y)|}{|x-y|^{n+2s} }\,dx\,dy\leq C^{\prime}\big{(}1+d(\overline{\mathcal{O}},\mathbb{R}^{n} \setminus\mathcal{V})^{-2s}\big{)}\|\varphi\|_{H^{1}_{0}(\mathcal{O})},\]
for a suitable constant \(C^{\prime}>0\); from this, by taking into account that \(C^{\infty}_{0}(\mathcal{O})\) is dense in \(\mathcal{X}^{1,2}(\mathcal{O})\subseteq H^{1}(\mathbb{R}^{n})\), we easily derive that
\[|\mathcal{B}_{\rho,\mathcal{O}}(u,\varphi)|<+\infty\quad\text{for every } \varphi\in\mathcal{X}^{1,2}(\mathcal{O}).\]
Finally, since \(a\in L^{\infty}(\Omega)\), we also have
\[\int_{\mathcal{O}}|a(x)u\varphi|\,dx\leq\|a\|_{L^{\infty}(\Omega)}\|u\|_{L^{2} (\mathcal{O})}\|\varphi\|_{L^{2}(\mathcal{O})}<+\infty\]
and this proves that Definition 3.1 is well-posed.
With Definition 3.1 at hand, we are ready to state the announced weak Harnack-type inequality for weak supersolutions of equation (3.1). Throughout what follows, if \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\) and if \(B_{r}(x_{0})\subseteq\mathbb{R}^{n}\) is a given ball, we define
\[\mathrm{Tail}(u;x_{0},r):=r^{2}\int_{\mathbb{R}^{n}\setminus B_{r}(x_{0})} \frac{|u(y)|}{|y-x_{0}|^{n+2s}}\,dy,\]
which is finite since \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\). This quantity is usually referred to as the _tail of \(u\)_ (with respect to \(B_{r}(x_{0})\)).
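As a simple illustration of the scaling encoded in this definition, for the constant function \(u\equiv 1\) the tail can be computed explicitly in polar coordinates (here \(\omega_{n-1}\) denotes the surface measure of the unit sphere \(\mathbb{S}^{n-1}\)):

\[\mathrm{Tail}(1;x_{0},R)=R^{2}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\frac{dy}{|y-x_{0}|^{n+2s}}=R^{2}\,\omega_{n-1}\int_{R}^{+\infty}t^{-1-2s}\,dt=\frac{\omega_{n-1}}{2s}\,R^{2-2s},\]

so that, in particular, \(\mathrm{Tail}(1;x_{0},R)\to 0\) as \(R\to 0^{+}\), since \(s\in(0,1)\).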
**Proposition 3.3**.: _Let \(a\,\in\,L^{\infty}(\Omega),\,a\geq 0\) a.e. in \(\Omega\). Moreover, let \(u\) be a weak supersolution of equation (3.1) such that \(u\geq 0\) a.e. in \(B_{R}(x_{0})\Subset\Omega\)._
_Then, there exist \(\eta=\eta(n,s,a)>0\) and \(c=c(n,s,a)>0\) such that_
\[\left(\fint_{B_{r}(x_{0})}u^{\eta}\,dx\right)^{1/\eta}\leq c\operatorname*{ ess\,inf}_{B_{r}(x_{0})}u+c\left(\frac{r}{R}\right)^{2}\mathrm{Tail}(u_{-};x_{0},R), \tag{3.3}\]
_whenever \(B_{r}(x_{0})\subset B_{R}(x_{0})\) and \(r\in(0,1]\)._
In order to prove Proposition 3.3 we first establish the following _Caccioppoli-type inequality_ for \(\mathcal{L}+a(x)\), which is modeled on [34, Lemma 3.1].
**Lemma 3.4**.: _Let \(a\in L^{\infty}(\Omega)\), and let \(u\) be a weak subsolution \([\text{resp.\,supersolution}]\) of equation (3.1). We arbitrarily fix \(k\in\mathbb{R}\), and we set_
\[w=(u-k)_{+}\quad[\text{resp.\,}w=(u-k)_{-}].\]
_Then, there exists a positive constant \(C>0\) such that_
\[\begin{split}\int_{B_{r}(x_{0})}\psi^{2}|\nabla w|^{2}\,dx& +\iint_{B_{r}(x_{0})\times B_{r}(x_{0})}\frac{|w(x)\psi^{2}(x)-w(y) \psi^{2}(y)|^{2}}{|x-y|^{n+2s}}\,dxdy\\ &+\ [\text{resp.\,}-]\int_{B_{r}(x_{0})}a(x)uw\psi^{2}\,dx\\ \leq C&\bigg{(}\int_{B_{r}(x_{0})}w^{2}|\nabla\psi|^ {2}\,dx\\ &+\iint_{B_{r}(x_{0})\times B_{r}(x_{0})}\max\{w(x),w(y)\}^{2}\frac{| \psi(x)-\psi(y)|^{2}}{|x-y|^{n+2s}}\,dxdy\\ &+\operatorname*{ess\,sup}_{x\in\text{supp}(\psi)}\int_{\mathbb{ R}^{n}\setminus B_{r}(x_{0})}\frac{w(y)}{|x-y|^{n+2s}}\,dy\,\int_{B_{r}(x_{0})}w \psi^{2}\,dx\bigg{)},\end{split} \tag{3.4}\]
_whenever \(B_{r}(x_{0})\Subset\Omega\) and \(\psi\in C_{0}^{\infty}(B_{r}(x_{0}))\) is nonnegative._
Proof.: Assume that \(u\) is a weak _subsolution_ of equation (3.1), and let \(w,\psi\) be as in the statement. Since \(u\in H^{1}_{\text{loc}}(\Omega)\) and \(\psi\in C_{0}^{\infty}(B_{r}(x_{0}))\), we clearly have
\[\varphi:=w\psi^{2}\in\mathcal{X}_{+}^{1,2}(B_{r}(x_{0})).\]
As a consequence of this fact, we are then entitled to use this function \(\varphi\) as a test function in (3.2) (recall that \(\mathcal{O}=B_{r}(x_{0})\Subset\Omega\)), thus obtaining
\[\begin{split} 0&\geq B_{\rho,B_{r}(x_{0})}(u,\varphi)+ \int_{B_{r}(x_{0})}a(x)u\varphi\,dx\\ &\quad=\int_{B_{r}(x_{0})}\nabla u\cdot\nabla(w\psi^{2})\,dx\\ &\qquad+\iint_{\mathbb{R}^{2n}}\frac{(u(x)-u(y))((w\psi^{2})(x)- (w\psi^{2})(y))}{|x-y|^{n+2s}}\,dx\,dy\\ &\qquad\qquad+\int_{B_{r}(x_{0})}a(x)uw\psi^{2}\,dx\\ &\equiv I+J+\int_{B_{r}(x_{0})}a(x)uw\psi^{2}\,dx.\end{split} \tag{3.5}\]
Now, the following estimates for the integrals \(I\) and \(J\), holding true _for every function \(u\in\mathcal{L}^{s}(\mathbb{R}^{n})\cap H^{1}_{\text{loc}}(\Omega)\)_, are proved in [34, Lemma 3.1]:
i) \[I\geq c_{1}\int_{B_{r}(x_{0})}\psi^{2}|\nabla w|^{2}\,dx-C_{1} \int_{B_{r}(x_{0})}w^{2}|\nabla\psi|^{2}\,dx\] ii) \[J\geq c_{2}\iint_{B_{r}(x_{0})\times B_{r}(x_{0})}\frac{|w(x) \psi^{2}(x)-w(y)\psi^{2}(y)|^{2}}{|x-y|^{n+2s}}\,dxdy\] \[\qquad\qquad-C_{2}\iint_{B_{r}(x_{0})\times B_{r}(x_{0})}\max\{w( x),w(y)\}^{2}\frac{|\psi(x)-\psi(y)|^{2}}{|x-y|^{n+2s}}\,dxdy\]
\[-C_{2}\operatorname*{ess\,sup}_{x\in\operatorname{supp}(\psi)}\int_{\mathbb{R}^{n} \setminus B_{r}(x_{0})}\frac{w(y)}{|x-y|^{n+2s}}\,dy\,\int_{B_{r}(x_{0})}w\psi^ {2}\,dx,\]
for suitable _absolute_ constants \(c_{i}\), \(C_{i}>0\) (for \(i=1,2\)). By combining (3.5) with the above i)-ii), we then obtain the claimed (3.4).
Finally, if \(u\) is a weak _supersolution_ of equation (3.1) it suffices to apply the obtained estimate to \(v=-u\), which is a weak _subsolution_ of the same equation.
We now establish an _expansion of positivity_-type result, which slightly extends [34, Lemma 7.1] and whose proof heavily relies on that one. In order to avoid tiring repetitions, we will only highlight the few modifications needed.
**Lemma 3.5**.: _Let \(a\in L^{\infty}(\Omega),\,a\geq 0\) a.e. in \(\Omega\), and let \(u\) be a weak supersolution of equation (3.1) satisfying \(u\geq 0\) in \(B_{R}(x_{0})\Subset\Omega\). Let then \(k\geq 0\), and suppose there exists a constant \(\tau\in(0,1]\) such that_
\[|B_{r}(x_{0})\cap\{u\geq k\}|\geq\tau\,|B_{r}(x_{0})|,\]
_for some \(r\in(0,1]\) with \(0<r<R/16\)._
_Then, there exists a positive constant \(\delta=\delta(n,s,\tau,a)>0\) such that_
\[\operatorname*{ess\,inf}_{B_{4r}(x_{0})}u\geq\delta k-\left(\frac{r}{R}\right) ^{2}\operatorname{Tail}(u_{-};x_{0},R). \tag{3.6}\]
Proof.: As in the proof of [34, Lemma 7.1], we proceed by steps.
Step 1: In this step we prove that there exists \(c=c(n,s,a)>0\) such that
\[\begin{split}\left|B_{6r}(x_{0})&\cap\left\{u\leq 2 \delta k-\frac{1}{2}\left(\frac{r}{R}\right)^{2}\operatorname{Tail}(u_{-};x_{0},R)-\varepsilon\right\}\right|\\ &\leq\frac{c}{\tau\,\ln((2\delta)^{-1})}\,|B_{6r}(x_{0})|,\end{split} \tag{3.7}\]
for every \(\delta\in(0,\frac{1}{4})\) and for every \(\varepsilon>0\). We explicitly note that this is the analogue of [34, Equation (7.3)]. To prove (3.7), we fix \(\psi\in C_{0}^{\infty}(B_{7r}(x_{0}))\) such that
* \(0\leq\psi\leq 1\) in \(B_{7r}(x_{0})\);
* \(\psi=1\) in \(B_{6r}(x_{0})\);
* \(|\nabla\psi|\leq\frac{8}{r}\) in \(B_{7r}(x_{0})\).
We then take \(w:=u+t_{\varepsilon}\), where
\[t_{\varepsilon}:=\frac{1}{2}\left(\frac{r}{R}\right)^{2}\operatorname{Tail}(u _{-};x_{0},R)+\varepsilon,\]
and we notice that, since _we are assuming that \(a\geq 0\) a.e. in \(\Omega\)_, this function \(w\) is a weak _supersolution_ of equation (3.1) as well. Therefore, by using the function
\[\phi:=w^{-1}\psi^{2}\in\mathcal{X}_{+}^{1,2}(B_{7r}(x_{0}))\]
as a test function in (3.2) (with \(u=w\)), we get
\[0\leq\mathcal{B}_{\rho,B_{7r}(x_{0})}(w,\phi)+\int_{B_{7r}(x_{0})}a (x)w\phi\,dx\] \[\qquad=\int_{B_{7r}(x_{0})}\nabla w\cdot\nabla(w^{-1}\psi^{2})\,dx\] \[\qquad\qquad+\iint_{B_{8r}(x_{0})\times B_{8r}(x_{0})}\frac{(w(x) -w(y))(\phi(x)-\phi(y))}{|x-y|^{n+2s}}\,dxdy\] \[\qquad\qquad+2\int_{B_{8r}(x_{0})}\int_{\mathbb{R}^{n}\setminus B _{8r}(x_{0})}\frac{(w(x)-w(y))\phi(x)}{|x-y|^{n+2s}}\,dx\,dy\] \[\qquad\qquad+\int_{B_{7r}(x_{0})}a(x)w(x)\frac{\psi^{2}(x)}{w(x) }\,dx\] \[\equiv I_{1}+I_{2}+I_{3}+I_{4}.\]
Taking from [34] the estimates for the first three integrals, we are left with
\[I_{4}\leq\|a\|_{\infty}\int_{B_{8r}(x_{0})}\psi^{2}(x)\,dx\leq c(n,\|a\|_{ \infty})r^{n-2},\]
where we have also used the fact that \(r\leq 1\). Combining this estimate with those of \(I_{i}\) (\(i=1,2,3\)), we then get [34, equation (7.11)] which in turn gives (3.7).
Step 2: In this step we prove the following fact: _for every \(\varepsilon>0\) there exists a positive constant \(\delta=\delta(n,s,\tau)\in(0,1/4)\) such that_
\[\operatorname*{ess\,inf}_{B_{4r}(x_{0})}u\geq\delta k-\left(\frac{r}{R}\right) ^{2}\operatorname*{Tail}(u_{-};x_{0},R)-2\varepsilon, \tag{3.8}\]
which is precisely [34, equation (7.16)]. To begin with, following [34] we can assume without loss of generality that
\[\delta k\geq\left(\frac{r}{R}\right)^{2}\operatorname*{Tail}(u_{-};x_{0},R)+2\varepsilon,\]
otherwise (3.8) is trivially satisfied (since \(u\geq 0\) in \(B_{R}(x_{0})\)). Let then \(\varrho\in[r,6r]\) and let \(\psi\in C_{0}^{\infty}(B_{\varrho}(x_{0}))\) be a cut-off function such that
\[0\leq\psi\leq 1\text{ in }B_{\varrho}(x_{0}).\]
We now arbitrarily fix a number \(l\in(\delta k,2\delta k)\), and we exploit the Caccioppoli-type inequality in Lemma 3.4 with \(w=(u-l)_{-}\leq l+u_{-}\): this gives
\[\begin{split}&\int_{B_{\varrho}(x_{0})}\psi^{2}|\nabla w|^{2}\, dx+\iint_{B_{\varrho}(x_{0})\times B_{\varrho}(x_{0})}\frac{|w(x)\psi(x)-w(y) \psi(y)|^{2}}{|x-y|^{n+2s}}\,dxdy\\ &\qquad\leq c\int_{B_{\varrho}(x_{0})}w^{2}|\nabla\psi|^{2}\,dx\\ &\qquad\qquad+c\iint_{B_{\varrho}(x_{0})\times B_{\varrho}(x_{0}) }\max\{w(x),w(y)\}\frac{|\psi(x)-\psi(y)|^{2}}{|x-y|^{n+2s}}\,dxdy\\ &\qquad\qquad+c\,l\,|B_{\varrho}(x_{0})\cap\{u<l\}|\cdot\,\operatorname* {ess\,sup}_{x\in\operatorname*{supp}(\psi)}\int_{\mathbb{R}^{n}\setminus B_{ \varrho}(x_{0})}\frac{l+u_{-}(y)}{|x-y|^{n+2s}}\,dy\\ &\qquad\qquad+c\,\int_{B_{\varrho}(x_{0})}a(x)uw\psi^{2}\,dx\\ &\qquad\equiv J_{1}+J_{2}+J_{3}+c\,\int_{B_{\varrho}(x_{0})}a(x) uw\psi^{2}\,dx.\end{split} \tag{3.9}\]
This is the analogue of [34, Equation (7.18)] in the present context. With (3.9) at hand, we apply [27, Lemma 4.1] to complete the proof of (3.8).
To this aim, we need to introduce some notation from [34]: for \(j=0,1,\ldots\), we denote
\[l=k_{j}=\delta k+2^{-j-1}\delta k,\quad\varrho=\varrho_{j}=4r+2^{1-j}r,\quad \hat{\varrho}_{j}=\frac{\varrho_{j}+\varrho_{j+1}}{2}.\]
With these choices, and since \(l\in(\delta k,2\delta k)\), for all \(j\geq 0\) we have
a) \(\varrho_{j},\hat{\varrho}_{j}\in[4r,6r]\) and \(\varrho_{j+1}<\hat{\varrho}_{j}<\varrho_{j}\);
b) \(k_{j}-k_{j+1}=2^{-j-2}\delta k\geq 2^{-j-3}k_{j}\).
We now set \(B_{j}:=B_{\varrho_{j}}(x_{0})\) and \(\hat{B}_{j}:=B_{\hat{\varrho}_{j}}(x_{0})\) and we observe that
\[w_{j}:=(u-k_{j})_{-}\geq 2^{-j-3}k_{j}\chi_{\{u<k_{j+1}\}}.\]
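For completeness, both b) and the above lower bound follow directly from the definition of \(k_{j}\): indeed,

\[k_{j}-k_{j+1}=\delta k\,(2^{-j-1}-2^{-j-2})=2^{-j-2}\delta k,\qquad 2^{-j-3}k_{j}=2^{-j-3}\delta k\,(1+2^{-j-1})\leq 2^{-j-2}\delta k,\]

and, on the set \(\{u<k_{j+1}\}\), we have \(w_{j}=k_{j}-u>k_{j}-k_{j+1}=2^{-j-2}\delta k\geq 2^{-j-3}k_{j}\).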
Finally, we take a sequence of cut-off functions \(\{\psi_{j}\}_{j}\subseteq C_{0}^{\infty}(\hat{B}_{j})\) such that
* \(0\leq\psi_{j}\leq 1\) in \(\hat{B}_{j}\);
* \(\psi_{j}=1\) in \(B_{j+1}\);
* \(|\nabla\psi_{j}|\leq 2^{j+3}/r\).
We then choose \(\psi=\psi_{j}\) and \(w=w_{j}\) in (3.9) and we inherit from [34] the estimates of \(J_{1}\), \(J_{2}\) and \(J_{3}\), which we report here below for completeness:
\[\begin{split}\text{i) }J_{1},J_{2}\leq c2^{2j}k_{j}^{2}r^{-2}|B_{j} \cap\{u<k_{j}\}|\\ \text{ii) }J_{3}\leq c2^{j(n+2s)}k_{j}^{2}r^{-2}|B_{j}\cap\{u<k_{j}\}| \end{split} \tag{3.10}\]
On the other hand, recalling that \(r\leq 1\), we also have
\[\begin{split}&\int_{B_{\varrho}(x_{0})}a(x)uw\psi^{2}\,dx\leq\|a \|_{\infty}\int_{B_{j}}uw_{j}\psi_{j}^{2}\,dx\\ &\qquad\leq\|a\|_{\infty}\int_{B_{j}\cap\{u<k_{j}\}}k_{j}^{2} \psi_{j}^{2}\,dx\leq\|a\|_{\infty}\cdot k_{j}^{2}|B_{j}\cap\{u<k_{j}\}|\\ &\qquad\leq\|a\|_{\infty}\cdot r^{-2}k_{j}^{2}|B_{j}\cap\{u<k_{j }\}|.\end{split}\]
By combining this last estimate with (3.10), we can then follow the arguments of the proof of [34, Lemma 7.1], and we obtain the claimed (3.8).
Finally, since (3.8) holds for every \(\varepsilon>0\), by letting \(\varepsilon\to 0^{+}\) we obtain (3.6).
With Lemma 3.5 at hand, we can prove Proposition 3.3.
Proof (of Proposition 3.3).: The proof of (3.3) can be obtained by arguing as in the proof of [28, Lemma 4.1], by exploiting Lemma 3.5 in place of [28, Lemma 3.2].
From Proposition 3.3, and using a classical covering argument, we obtain the following corollary.
**Corollary 3.6**.: _Let \(a\in L^{\infty}(\Omega),\,a\geq 0\) a.e. in \(\Omega\). Moreover, let \(u\) be a weak supersolution of equation (3.1) such that \(u\geq 0\) a.e. in \(\mathbb{R}^{n}\setminus\Omega\) and_
\[u>0\text{ a.e.\,on every open ball }B\Subset\Omega.\]
_Then, for every open set \(\mathcal{O}\Subset\Omega\) there exists \(C=C(\mathcal{O},u)>0\) such that_
\[u\geq C(\mathcal{O},u)>0\text{ a.e.\,in }\mathcal{O}.\]
## 4. Proof of Theorem 1.1
Thanks to all the results established so far, we are finally in a position to provide the full proof of Theorem 1.1. In doing this, we mainly follow the approach in [38]; moreover, in order to keep the exposition as clear as possible, we split such a proof into several independent results.
**1) Existence of a first solution.** To begin with, we define
\[\Lambda:=\sup\{\lambda>0:(\text{P})_{\lambda}\text{ admits a weak solution}\}. \tag{4.1}\]
We then turn to prove in this first part of the section the following facts:
a) \(\Lambda\) is well-defined and \(\Lambda<+\infty\);
b) problem \((\text{P})_{\lambda}\) admits a weak solution for every \(0<\lambda\leq\Lambda\).
We begin by proving assertion a).
**Lemma 4.1**.: _Let \(\Lambda\) be as in (4.1). Then \(\Lambda\in(0,+\infty)\)._
Proof.: We consider the functional
\[I_{\lambda}(u):=\frac{1}{2}\rho(u)^{2}-\frac{\lambda}{1-\gamma}\int_{\Omega}| u|^{1-\gamma}\,dx-\frac{1}{2^{*}}\int_{\Omega}|u|^{2^{*}}\,dx,\quad u\in \mathcal{X}^{1,2}(\Omega).\]
First of all, by combining Hölder's and Sobolev's inequalities, we have
\[\begin{split}&\text{a) }\int_{\Omega}|u|^{2^{*}}\,dx\leq C\||\nabla u|\|_{L^{2}(\Omega)}^{2^{*}}\leq C \rho(u)^{2^{*}};\\ &\text{b) }\int_{\Omega}|u|^{1-\gamma}\,dx\leq C\|u\|_{L^{2^{*}}( \Omega)}^{1-\gamma}\leq C\||\nabla u|\|_{L^{2}(\Omega)}^{1-\gamma} \leq C\rho(u)^{1-\gamma};\end{split} \tag{4.2}\]
as a consequence, denoting by
\[B_{r}:=\left\{u\in\mathcal{X}^{1,2}(\Omega):\rho(u)\leq r\right\}\quad(r>0),\]
the above estimate (4.2) implies the existence of \(r_{0}>0\) and \(\delta_{0}>0\) such that
\[\left\{\begin{array}{rl}\frac{1}{2}\rho(u)^{2}-\frac{1}{2^{*}}\|u\|_{L^{2^{* }}(\Omega)}^{2^{*}}\geq 2\delta_{0}&\text{for all }u\in\partial B_{r_{0}},\\ \frac{1}{2}\rho(u)^{2}-\frac{1}{2^{*}}\|u\|_{L^{2^{*}}(\Omega)}^{2^{*}}\geq 0& \text{for all }u\in B_{r_{0}},\end{array}\right. \tag{4.3}\]
hence, again by (4.2) we conclude that there exists \(\lambda_{\star}>0\) such that
\[I_{\lambda}\big{|}_{\partial B_{r_{0}}}\geq 2\delta_{0}-\frac{\lambda\,C}{1- \gamma}r_{0}^{1-\gamma}\geq\delta_{0},\quad\text{for every }\lambda\in(0,\lambda_{\star}]. \tag{4.4}\]
We now define \(c_{\star}:=\inf_{B_{r_{0}}}I_{\lambda_{\star}}\), and we notice that \(c_{\star}<0\). Indeed, for every \(v\not\equiv 0\), it holds that
\[I_{\lambda_{\star}}(tv)=\frac{t^{2}}{2}\rho(v)^{2}-\frac{\lambda_{\star}}{1-\gamma}t^{1- \gamma}\int_{\Omega}|v|^{1-\gamma}\,dx-\frac{t^{2^{\ast}}}{2^{\ast}}\int_{ \Omega}|v|^{2^{\ast}}\,dx,\]
which becomes negative for \(t>0\) small enough because \(0<1-\gamma<1\), so the term \(t^{1-\gamma}\) dominates as \(t\to 0^{+}\). The argument is now fairly standard. We first consider a minimizing sequence \(\{u_{j}\}_{j}\subset B_{r_{0}}\) for \(c_{\star}\); then there exists \(u_{\star}\) such that, up to a subsequence,
* \(u_{j}\to u_{\star}\) as \(j\to+\infty\) weakly in \(\mathcal{X}^{1,2}(\Omega)\);
* \(u_{j}\to u_{\star}\) as \(j\to+\infty\) strongly in \(L^{r}(\Omega)\) for every \(r\in[2,2^{\ast})\);
* \(u_{j}\to u_{\star}\) as \(j\to+\infty\) pointwise a.e. in \(\Omega\).
Moreover, since \(I_{\lambda}(|u|)\leq I_{\lambda}(u)\) for every \(\lambda>0\), we may also assume that \(u_{j}\geq 0\). Combining (4.4) with \(c_{\star}<0\), we see that there exists a constant \(\varepsilon_{0}>0\), independent of \(j\), such that
\[\rho(u_{j})\leq r_{0}-\varepsilon_{0}. \tag{4.5}\]
Now, by combining the algebraic inequality \((a+b)^{p}\leq a^{p}+b^{p}\) (holding true for all \(a,b\geq 0\) and \(0<p<1\)) with the Hölder inequality, as \(j\to+\infty\) we have
\[\int_{\Omega}u_{j}^{1-\gamma} \leq\int_{\Omega}u_{\star}^{1-\gamma}+\int_{\Omega}|u_{j}-u_{ \star}|^{1-\gamma}\,dx\] \[\leq\int_{\Omega}u_{\star}^{1-\gamma}+C\|u_{j}-u_{\star}\|_{L^{2 }(\Omega)}^{1-\gamma}=\int_{\Omega}u_{\star}^{1-\gamma}+o(1),\]
and similarly
\[\int_{\Omega}u_{\star}^{1-\gamma}\leq\int_{\Omega}u_{j}^{1-\gamma}+\int_{ \Omega}|u_{j}-u_{\star}|^{1-\gamma}=\int_{\Omega}u_{j}^{1-\gamma}+o(1),\]
which in turn implies
\[\int_{\Omega}u_{j}^{1-\gamma}\,dx=\int_{\Omega}u_{\star}^{1-\gamma}\,dx+o(1), \quad\text{as }j\to+\infty. \tag{4.6}\]
By the Brezis-Lieb lemma (see [14]), we also have
\[\|u_{j}\|_{L^{2^{\ast}}(\Omega)}^{2^{\ast}}=\|u_{\star}\|_{L^{2^{\ast}}( \Omega)}^{2^{\ast}}+\|u_{j}-u_{\star}\|_{L^{2^{\ast}}(\Omega)}^{2^{\ast}}+o(1),\quad\text{as }j\to+\infty,\]
and
\[\rho(u_{j})^{2}=\rho(u_{\star})^{2}+\rho(u_{j}-u_{\star})^{2}+o(1),\quad\text{ as }j\to+\infty. \tag{4.7}\]
By combining (4.7) and (4.5), it follows that \(u_{\star}\in B_{r_{0}}\) and that \(u_{j}-u_{\star}\in B_{r_{0}}\) for \(j\) large enough, and this allows us to use the second line of (4.3) on \(u_{j}-u_{\star}\). Using now (4.6)-(4.7), as \(j\to+\infty\), we find
\[c_{\star} =I_{\lambda_{\star}}(u_{j})+o(1)\] \[=I_{\lambda_{\star}}(u_{\star})+\frac{1}{2}\rho(u_{j}-u_{\star})^ {2}-\frac{1}{2^{\ast}}\|u_{j}-u_{\star}\|_{L^{2^{\ast}}(\Omega)}^{2^{\ast}}+o(1)\] \[\geq I_{\lambda_{\star}}(u_{\star})+o(1)\geq c_{\star}+o(1),\]
which proves that \(u_{\star}\) (with \(u_{\star}\geq 0\) and \(u_{\star}\not\equiv 0\)) is a local minimizer of \(I_{\lambda_{\star}}\) in the \(\mathcal{X}^{1,2}(\Omega)\)-topology. From this, using the strong maximum principle in [9, Theorem 3.1] and arguing as in the proof of [38, Lemma 2.1], we show that \(u_{\star}\) is actually a weak solution of \((\mathrm{P})_{\lambda_{\star}}\), and hence we get that
\[\Lambda\geq\lambda_{\star}>0.\]
Let us now prove that \(\Lambda<+\infty\). Following [38], we consider the _first Dirichlet eigenvalue \(\mu_{1}\) of the operator \(\mathcal{L}\)_ in \(\Omega\), namely
\[\mu_{1}=\min\big{\{}\rho(u)^{2}:\,u\in\mathcal{X}^{1,2}(\Omega)\text{ and }\|u\|_{L^{2}(\Omega)}^{2}=1\big{\}},\]
and we let \(e_{1}\in\mathcal{X}^{1,2}(\Omega)\) be the principal eigenfunction associated with \(\mu_{1}\), i.e.,
a) \(\|e_{1}\|_{L^{2}(\Omega)}=1\) and \(e_{1}>0\) a.e. in \(\Omega\);
b) \(\rho(e_{1})^{2}=\mu_{1}\).
We refer, e.g., to [7] for the proof of the existence of \(e_{1}\).
Now, assuming that there exists a weak solution \(u\in\mathcal{X}^{1,2}(\Omega)\) of problem \((\mathrm{P})_{\lambda}\) (for some \(\lambda>0\)), and using this \(e_{1}\) as a test function in identity (2.9), we get
\[\mu_{1}\int_{\Omega}ue_{1}\,dx=\mathcal{B}_{\rho}(u,e_{1})=\int_{\Omega}( \lambda u^{-\gamma}+u^{2^{\star}-1})e_{1}\,dx.\]
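The first equality above follows from the weak eigenvalue equation for \(e_{1}\), tested with \(u\), together with the symmetry of the bilinear form \(\mathcal{B}_{\rho}\) (which we take for granted here, as is standard for this class of forms):

\[\mathcal{B}_{\rho}(u,e_{1})=\mathcal{B}_{\rho}(e_{1},u)=\mu_{1}\int_{\Omega}e_{1}u\,dx.\]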
Choosing now a constant \(\overline{\Lambda}\) such that
\[\overline{\Lambda}t^{-\gamma}+t^{2^{\star}-1}>2\mu_{1}t,\quad\text{for every }t>0,\]
we find that \(\lambda<\overline{\Lambda}\): indeed, if \(\lambda\geq\overline{\Lambda}\), the right-hand side of the above identity would be strictly greater than \(2\mu_{1}\int_{\Omega}ue_{1}\,dx\), which is impossible since \(\int_{\Omega}ue_{1}\,dx>0\). As a consequence, \(\Lambda\leq\overline{\Lambda}<+\infty\), as desired.
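For completeness, we sketch why such a constant \(\overline{\Lambda}\) exists (an elementary verification not carried out above): multiplying by \(t^{\gamma}>0\), the required inequality is equivalent to

\[\overline{\Lambda}>h(t):=2\mu_{1}t^{1+\gamma}-t^{2^{*}-1+\gamma}\quad\text{for every }t>0,\]

and, since \(h\) is continuous, \(h(0)=0\) and \(h(t)\to-\infty\) as \(t\to+\infty\) (because \(2^{*}-1>1\)), the supremum \(\sup_{t>0}h(t)\) is finite; any \(\overline{\Lambda}\) strictly larger than this supremum then works.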
Having established Lemma 4.1, we now turn to prove assertion b), namely the existence of at least one weak solution of problem \((\mathrm{P})_{\lambda}\) for every \(0<\lambda\leq\Lambda\). To this end, following [38], we first establish some preliminary results.
To begin with, we prove the following simple yet important technical lemma.
**Lemma 4.2**.: _Let \(w,u\in\mathcal{X}^{1,2}(\Omega)\) be a weak subsolution [resp. weak supersolution] and a weak solution of problem \((\mathrm{P})_{\lambda}\), respectively. We assume that_
* \(w\leq u\)__[resp.\(\,w\geq u\)] _a.e. in_ \(\Omega\)_;_
* _for every open set_ \(\mathcal{O}\Subset\Omega\) _there exists_ \(C=C(\mathcal{O},w)>0\) _such that_ \[w\geq C\text{ a.e.\ in }\mathcal{O}.\]
_Then, either \(w\equiv u\) or \(w<u\)_[resp.\(\,w>u\)] _a.e. in_ \(\Omega\).
Proof.: We limit ourselves to consider only the case when \(w\) is a _weak subsolution_ of problem \((\mathrm{P})_{\lambda}\), since the case when \(w\) is a weak supersolution is analogous.
To begin with, we arbitrarily fix a bounded open set \(\mathcal{O}\Subset\Omega\) and we observe that, since \(w\) is a weak subsolution of problem \((\mathrm{P})_{\lambda}\) and since \(u\) is a weak solution of the same problem, we have the following computations:
\[\mathcal{L}(u-w) \geq\lambda(u^{-\gamma}-w^{-\gamma})+(u^{2^{\star}-1}-w^{2^{ \star}-1})\] \[\text{(since }w\leq u\text{, see assumption a))}\] \[\geq\lambda(u^{-\gamma}-w^{-\gamma})\] \[\text{(by the Mean Value Theorem, for some }\theta\in(0,1))\] \[=-\gamma\lambda(\theta u+(1-\theta)w)^{-\gamma-1}(u-w)\] \[\geq-\gamma\lambda w^{-\gamma-1}(u-w)\] \[\text{(by assumption b))}\] \[\geq-\gamma\lambda C^{-\gamma-1}(u-w),\]
in the weak sense on \(\mathcal{O}\) (here, \(C>0\) is a constant depending on \(\mathcal{O}\) and on \(w\)).
As a consequence of this fact, and since \(w\leq u\) a.e. in \(\Omega\) (hence, in \(\mathbb{R}^{n}\)), we are then entitled to apply the Strong Maximum Principle in [9, Theorem 3.1] to the function \(v=u-w\in\mathcal{X}_{+}^{1,2}(\Omega)\) (with \(f\equiv\gamma\lambda C^{-\gamma-1}t\)), obtaining that
\[\text{either }v\equiv 0\text{ or }v>0\text{ a.e.\,in }\mathcal{O}.\]
Due to the arbitrariness of \(\mathcal{O}\Subset\Omega\), this completes the proof.
**Remark 4.3**.: We explicitly observe that, if \(w\in\mathcal{X}^{1,2}(\Omega)\) is a weak _supersolution_ of problem (P)\({}_{\lambda}\), it follows from Remark 2.4-3) that assumption b) in Lemma 4.2 is _always satisfied_. Hence, if \(u\in\mathcal{X}^{1,2}(\Omega)\) is a weak _solution_ of (P)\({}_{\lambda}\), we get
\[(u\leq w\text{ a.e.\,in }\Omega)\implies(\text{either }u\equiv w\text{ or }u<w\text{ a.e.\,in }\Omega).\]
We now turn to establish a crucial Perron-type lemma, which extends [32, Lemma 2.1] to the case of _critical nonlinearities_ and [38, Lemma 2.2] to the case of mixed local-nonlocal operators.
**Lemma 4.4**.: _Let \(\underline{u},\overline{u}\in\mathcal{X}^{1,2}(\Omega)\) be a weak subsolution and a weak supersolution, respectively, of problem (P)\({}_{\lambda}\). We assume that_
* \(\underline{u}(x)\leq\overline{u}(x)\) _for a.e._ \(x\in\Omega\)_;_
* _for every open set_ \(\mathcal{O}\Subset\Omega\) _there exists_ \(C=C(\mathcal{O},\underline{u})>0\) _such that_ \[\underline{u}\geq C\text{ a.e.\,in }\mathcal{O}.\]
_Then, there exists a weak solution \(u\in\mathcal{X}^{1,2}(\Omega)\) of (P)\({}_{\lambda}\) such that_
\[\underline{u}(x)\leq u(x)\leq\overline{u}(x)\text{ for a.e. }x\in\Omega.\]
Proof.: We adapt to our setting the proof of [38, Lemma 2.2]. We consider the set
\[M:=\left\{u\in\mathcal{X}^{1,2}(\Omega):\underline{u}\leq u\leq\overline{u} \text{ a.e. in }\Omega\right\},\]
which is closed and convex.
_Step 1:_ we claim that there exists a minimizer \(u\) of \(I_{\lambda}\) on \(M\).
It is enough to show that \(I_{\lambda}\) is weakly lower semicontinuous on \(M\). To this aim, let \(\{u_{j}\}_{j}\subseteq M\) be weakly convergent to \(u\) in \(\mathcal{X}^{1,2}(\Omega)\). Without loss of generality, possibly passing to a subsequence, we may assume that \(u_{j}\to u\) pointwise a.e. in \(\Omega\), so that \(u\in M\). Thanks to the continuous embedding \(\mathcal{X}^{1,2}(\Omega)\hookrightarrow L^{2^{*}}(\Omega)\), we have that
\[\int_{\Omega}\overline{u}^{2^{*}}\,dx<+\infty\quad\text{and}\quad\int_{\Omega}\overline{u}^{ 1-\gamma}\,dx<+\infty,\]
where for the latter we have also used the Hölder inequality. Hence, by dominated convergence, we have that, as \(j\to+\infty\),
\[\|u_{j}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\to\|u\|_{L^{2^{*}}(\Omega)}^{2^{*}}\quad \text{and}\quad\int_{\Omega}|u_{j}|^{1-\gamma}\,dx\to\int_{\Omega}|u|^{1-\gamma }\,dx.\]
Therefore,
\[\liminf_{j\to+\infty}I_{\lambda}(u_{j})\geq I_{\lambda}(u),\]
as desired.
_Step 2:_\(u\) is a weak solution of (P)\({}_{\lambda}\).
We take \(\varphi\in\mathcal{X}^{1,2}(\Omega)\) and \(\varepsilon>0\). We define the function
\[v_{\varepsilon}:=u+\varepsilon\varphi-\varphi^{\varepsilon}+\varphi_{ \varepsilon}\in M,\]
where
\[\varphi^{\varepsilon}:=(u+\varepsilon\varphi-\overline{u})_{+}\qquad\text{and}\qquad\varphi_{ \varepsilon}:=(u+\varepsilon\varphi-\underline{u})_{-}.\]
Since \(u+t(v_{\varepsilon}-u)\in M\) for \(t\in(0,1)\), we have that
\[\begin{split} 0&\leq\lim_{t\to 0^{+}}\frac{I_{ \lambda}(u+t(v_{\varepsilon}-u))-I_{\lambda}(u)}{t}\\ &=\mathcal{B}_{\rho}(u,v_{\varepsilon}-u)-\lambda\int_{\Omega} \frac{v_{\varepsilon}-u}{u^{\gamma}}\,dx-\int_{\Omega}u^{2^{*}-1}(v_{ \varepsilon}-u)\,dx.\end{split} \tag{4.8}\]
We omit the details concerning the second integral, referring to [38]. Set now
\[E^{\varepsilon}:=\mathcal{B}_{\rho}(u,\varphi^{\varepsilon})-\lambda\int_{ \Omega}\frac{\varphi^{\varepsilon}}{u^{\gamma}}\,dx-\int_{\Omega}u^{2^{*}-1} \varphi^{\varepsilon}\,dx,\]
and
\[E_{\varepsilon}:=\mathcal{B}_{\rho}(u,\varphi_{\varepsilon})-\lambda\int_{ \Omega}\frac{\varphi_{\varepsilon}}{u^{\gamma}}\,dx-\int_{\Omega}u^{2^{*}-1} \varphi_{\varepsilon}\,dx.\]
With this notation at hand, we can write (4.8) as
\[\mathcal{B}_{\rho}(u,\varphi)-\lambda\int_{\Omega}\frac{\varphi}{u^{\gamma}} \,dx-\int_{\Omega}u^{2^{*}-1}\varphi\,dx\geq\frac{E^{\varepsilon}-E_{ \varepsilon}}{\varepsilon}.\]
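Indeed, by linearity of \(\mathcal{B}_{\rho}(u,\cdot)\) and of the integrals, (4.8) with \(v_{\varepsilon}-u=\varepsilon\varphi-\varphi^{\varepsilon}+\varphi_{\varepsilon}\) reads

\[0\leq\varepsilon\left(\mathcal{B}_{\rho}(u,\varphi)-\lambda\int_{\Omega}\frac{\varphi}{u^{\gamma}}\,dx-\int_{\Omega}u^{2^{*}-1}\varphi\,dx\right)-E^{\varepsilon}+E_{\varepsilon},\]

and dividing by \(\varepsilon>0\) yields the displayed inequality.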
We want to show that
\[\frac{E^{\varepsilon}}{\varepsilon}\geq o(1)\quad\text{and}\quad\frac{E_{ \varepsilon}}{\varepsilon}\leq o(1),\quad\text{as }\varepsilon\to 0^{+}.\]
We only show the first one, the second being very similar. Firstly, due to Lemma 4.2, either \(\overline{u}\equiv u\), and in this case
\[E^{\varepsilon}\geq 0,\quad\text{since }\overline{u}\text{ is a supersolution},\]
or \(|\{x\in\Omega:\overline{u}(x)=u(x)\}|=0\). In this second case, we define the sets
\[\Omega^{\varepsilon}:=\{x\in\Omega:u(x)+\varepsilon\varphi(x)\geq\overline{u} (x)>u(x)\}\,,\]
and
\[\mathcal{C}\Omega^{\varepsilon}:=\{x\in\Omega:u(x)+\varepsilon\varphi(x)< \overline{u}(x)\},\]
and we notice that, since \(u\) and \(\overline{u}\) are measurable functions,
\[|\Omega^{\varepsilon}|,|\Omega^{\varepsilon}\times\mathcal{C}\Omega^{ \varepsilon}|\to 0\quad\text{as }\varepsilon\to 0^{+}.\]
We also stress that
\[\mathbb{R}^{n}=(\mathbb{R}^{n}\setminus\Omega)\cup\mathcal{C}\Omega^{ \varepsilon}\cup\Omega^{\varepsilon}.\]
Therefore, following [38] for the local part and [37] for the nonlocal one, we have that
\[\begin{split}\frac{E^{\varepsilon}}{\varepsilon}&=\frac{1 }{\varepsilon}\left(\mathcal{B}_{\rho}(u-\overline{u},\varphi^{\varepsilon})+ \mathcal{B}_{\rho}(\overline{u},\varphi^{\varepsilon})-\int_{\Omega}(\lambda u ^{-\gamma}+u^{2^{*}-1})\varphi^{\varepsilon}\,dx\right)\\ &\geq\frac{1}{\varepsilon}\int_{\Omega}\nabla(u-\overline{u}) \cdot\nabla\varphi^{\varepsilon}\,dx\\ &\qquad+\frac{1}{\varepsilon}\iint_{\mathbb{R}^{2n}}\frac{((u-\overline{u})(x)-(u- \overline{u})(y))(\varphi^{\varepsilon}(x)-\varphi^{\varepsilon}(y))}{|x-y|^ {n+2s}}\,dxdy\\ &\qquad+\frac{1}{\varepsilon}\int_{\Omega^{\varepsilon}}(\lambda \overline{u}^{-\gamma}+\overline{u}^{2^{*}-1}-\lambda u^{-\gamma}-u^{2^{*}-1 })\varphi^{\varepsilon}\,dx\\ &\geq\frac{1}{\varepsilon}\int_{\Omega}\nabla(u-\overline{u}) \cdot\nabla\varphi^{\varepsilon}\,dx\\ &\qquad+\frac{1}{\varepsilon}\iint_{\mathbb{R}^{2n}}\frac{((u-\overline{u})(x)-(u- \overline{u})(y))(\varphi^{\varepsilon}(x)-\varphi^{\varepsilon}(y))}{|x-y| ^{n+2s}}\,dxdy\\ &\qquad-\lambda\int_{\Omega^{\varepsilon}}| \overline{u}^{-\gamma}-u^{-\gamma}||\varphi|\,dx,\end{split} \tag{4.9}\]
where in the last inequality we have also used that \(0\leq\varphi^{\varepsilon}\leq\varepsilon\varphi\) a.e. on \(\Omega^{\varepsilon}\).
Now, as in [38], we note that
\[\begin{split}\frac{1}{\varepsilon}&\int_{\Omega} \nabla(u-\overline{u})\cdot\nabla\varphi^{\varepsilon}\,dx=\frac{1}{ \varepsilon}\int_{\Omega^{\varepsilon}}|\nabla(u-\overline{u})|^{2}\,dx+\int_ {\Omega^{\varepsilon}}\nabla(u-\overline{u})\cdot\nabla\varphi\,dx\\ &\geq\int_{\Omega^{\varepsilon}}\nabla(u-\overline{u})\cdot \nabla\varphi\,dx=o(1),\qquad\text{as $\varepsilon\to 0^{+}$}.\end{split} \tag{4.10}\]
On the other hand, following [37], we have that
\[\begin{split}&\frac{1}{\varepsilon}\iint_{\mathbb{R}^{2n}}\frac{((u- \overline{u})(x)-(u-\overline{u})(y))(\varphi^{\varepsilon}(x)-\varphi^{ \varepsilon}(y))}{|x-y|^{n+2s}}\,dxdy\\ &=\frac{1}{\varepsilon}\iint_{\Omega^{\varepsilon}\times\Omega^ {\varepsilon}}\frac{|(u-\overline{u})(x)-(u-\overline{u})(y)|^{2}}{|x-y|^{n+2s }}\,dxdy\quad(\geq 0)\\ &+\iint_{\Omega^{\varepsilon}\times\Omega^{\varepsilon}}\frac{((u -\overline{u})(x)-(u-\overline{u})(y))(\varphi(x)-\varphi(y))}{|x-y|^{n+2s}} \,dxdy\ (=o(1)\ \text{as $\varepsilon\to 0^{+}$})\\ &+\frac{2}{\varepsilon}\iint_{\Omega^{\varepsilon}\times\mathcal{ C}\Omega^{\varepsilon}}\frac{((u-\overline{u})(x)-(u-\overline{u})(y))(u(x)- \overline{u}(x)+\varepsilon\varphi(x))}{|x-y|^{n+2s}}\,dxdy\ (=:I)\\ &+\frac{2}{\varepsilon}\iint_{\Omega^{\varepsilon}\times(\mathbb{ R}^{n}\setminus\Omega)}\frac{((u-\overline{u})(x))(u(x)-\overline{u}(x)+\varepsilon \varphi(x))}{|x-y|^{n+2s}}\,dxdy\ (=:II)\\ &\geq o(1)+I+II.\end{split} \tag{4.11}\]
Here we also used that the double integral over \((\mathbb{R}^{n}\setminus\Omega)\times\mathcal{C}\Omega^{\varepsilon}\) identically vanishes, since \(\varphi^{\varepsilon}\equiv 0\) both on \(\mathbb{R}^{n}\setminus\Omega\) and on \(\mathcal{C}\Omega^{\varepsilon}\). Now,
\[\begin{split}(I)&=\frac{2}{\varepsilon}\int_{\Omega^ {\varepsilon}}\int_{\mathcal{C}\Omega^{\varepsilon}}\frac{|(u-\overline{u})(x )|^{2}}{|x-y|^{n+2s}}\,dxdy\quad(\geq 0)\\ &+2\int_{\Omega^{\varepsilon}}\int_{\mathcal{C}\Omega^{ \varepsilon}}\frac{((u-\overline{u})(x))\varphi(x)}{|x-y|^{n+2s}}\,dxdy\quad( =o(1)\ \text{as $\varepsilon\to 0^{+}$})\\ &+\frac{2}{\varepsilon}\int_{\Omega^{\varepsilon}}\int_{\mathcal{ C}\Omega^{\varepsilon}}\frac{-((u-\overline{u})(y))(u(x)-\overline{u}(x)+ \varepsilon\varphi(x))}{|x-y|^{n+2s}}\,dxdy\quad(\geq 0),\end{split} \tag{4.12}\]
while
\[(II) =\frac{2}{\varepsilon}\int_{\Omega^{\varepsilon}}\int_{\mathbb{R}^{ n}\setminus\Omega}\frac{|(u-\overline{u})(x)|^{2}}{|x-y|^{n+2s}}\,dxdy\quad( \geq 0)\] \[+2\int_{\Omega^{\varepsilon}}\int_{\mathbb{R}^{n}\setminus\Omega} \frac{((u-\overline{u})(x))\varphi(x)}{|x-y|^{n+2s}}\,dxdy\quad(=o(1)\text{ as }\varepsilon\to 0^{+}). \tag{4.13}\]
Combining (4.9), (4.10), (4.11), (4.12) and (4.13), we finally get
\[\frac{E^{\varepsilon}}{\varepsilon}\geq o(1),\quad\text{as }\varepsilon\to 0^{+}. \tag{4.14}\]
Similarly
\[\frac{E_{\varepsilon}}{\varepsilon}\leq o(1),\quad\text{as }\varepsilon\to 0^{+}. \tag{4.15}\]
Combining (4.14) and (4.15), we get that
\[\mathcal{B}_{\rho}(u,\varphi)-\int_{\Omega}(\lambda u^{-\gamma}+u^{2^{*}-1}) \varphi\,dx\geq o(1)\quad\text{as }\varepsilon\to 0^{+}.\]
Since the left-hand side does not depend on \(\varepsilon\), letting \(\varepsilon\to 0^{+}\) shows it is non-negative; repeating the argument with \(-\varphi\) in place of \(\varphi\) gives the reverse inequality, and this closes the proof.
The last auxiliary result we need concerns the existence of solutions of the _unperturbed_ version of problem (P)\({}_{\lambda}\), that is, problem (2.7) with \(f(x,t)\equiv 0\).
**Proposition 4.5**.: _Let \(\lambda>0\) and \(\gamma\in(0,1)\). Then, the problem_
\[\begin{cases}\mathcal{L}u=\lambda u^{-\gamma}&\text{in }\Omega,\\ u>0&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases} \tag{4.16}\]
_admits a unique weak solution \(w_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\cap L^{\infty}(\Omega)\), further satisfying the following property:_ for every open set \(\mathcal{O}\Subset\Omega\) there exists \(C=C(\mathcal{O},w_{\lambda})>0\) such that
\[w_{\lambda}\geq C\text{ a.e.\,in }\mathcal{O}. \tag{4.17}\]
We explicitly stress that, since problem (4.16) is of the form (2.7) (with \(f\equiv 0\)), the definition of weak solution of (4.16) is given in Definition 2.3.
Proof.: Existence and uniqueness of a _bounded_ weak solution \(w_{\lambda}\) of problem (4.16) follow from [36, Theorems 2.13 and 2.17]. As for (4.17), we observe that \(w_{\lambda}\) is a _weak supersolution_ (in the sense of Definition 3.1) of the equation \(\mathcal{L}u=0\) in \(\Omega\); taking into account that \(w_{\lambda}>0\) a.e. in \(\Omega\), we can then apply Corollary 3.6 (with \(a\equiv 0\)) to \(w_{\lambda}\), obtaining (4.17).
We are now ready to prove the existence of a weak solution of (P)\({}_{\lambda}\). It is an adaptation to our setting of [38, Lemma 2.3].
**Lemma 4.6**.: _Problem (P)\({}_{\lambda}\) admits_ (_at least_) _one weak solution \(u_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\) for every \(\lambda\in(0,\Lambda]\)._
Proof.: The idea of the proof is rather standard: we want to construct both a weak subsolution and a weak supersolution and then apply Lemma 4.4.
Let us start with the weak subsolution. By Proposition 4.5, we know that for every \(\lambda\in(0,\Lambda)\) (and actually for all \(\lambda>0\)) there exists a unique solution \(w_{\lambda}\) of
the _purely singular_ problem (4.16), which is the Euler-Lagrange equation naturally associated with the functional
\[J_{\lambda}(u):=\frac{1}{2}\,\rho(u)^{2}-\frac{\lambda}{1-\gamma}\int_{\Omega} \left|u\right|^{1-\gamma}dx,\quad u\in\mathcal{X}^{1,2}(\Omega).\]
Since \(\gamma\in(0,1)\), the infimum of the functional \(J_{\lambda}\) is achieved by \(w_{\lambda}\) and the minimum is negative, namely
\[J_{\lambda}(w_{\lambda})=\inf_{u\in\mathcal{X}^{1,2}(\Omega)}J_{\lambda}(u)<0.\]
The function \(w_{\lambda}\) is then a weak subsolution of \((\mathrm{P})_{\lambda}\), since \(\lambda w_{\lambda}^{-\gamma}\leq\lambda w_{\lambda}^{-\gamma}+w_{\lambda}^{2^{*}-1}\).
Let us now look for a weak supersolution. By the very definition of \(\Lambda\), there necessarily exists \(\lambda^{\prime}\in(\lambda,\Lambda)\) such that \((\mathrm{P})_{\lambda^{\prime}}\) admits a weak solution \(u_{\lambda^{\prime}}\), which can clearly be taken as a weak supersolution of \((\mathrm{P})_{\lambda}\).
We now claim that
\[w_{\lambda}(x)\leq u_{\lambda^{\prime}}(x),\quad\text{for a.e. }x\in\Omega. \tag{4.18}\]
To this aim, let us consider a smooth non-decreasing function \(\theta:\mathbb{R}\to\mathbb{R}\) such that
\[\theta(t)=1\,\text{ for }t\geq 1\quad\text{ and }\quad\theta(t)=0\,\text{ for }t\leq 0,\]
and which interpolates smoothly between these two values for \(t\in(0,1)\). We further define the function
\[\theta_{\varepsilon}(t):=\theta\left(\frac{t}{\varepsilon}\right),\quad \varepsilon>0,t\in\mathbb{R}.\]
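One admissible choice of such a \(\theta\) (any function with the stated properties works; this explicit example is not specified in the source) is the standard smooth transition function

\[\theta(t):=\frac{\eta(t)}{\eta(t)+\eta(1-t)},\qquad\text{where }\eta(t):=\begin{cases}e^{-1/t}&\text{if }t>0,\\ 0&\text{if }t\leq 0,\end{cases}\]

which is of class \(C^{\infty}(\mathbb{R})\), non-decreasing, and satisfies \(\theta\equiv 0\) on \((-\infty,0]\) and \(\theta\equiv 1\) on \([1,+\infty)\).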
Due to its definition, we are entitled to use the function \(\theta_{\varepsilon}(w_{\lambda}-u_{\lambda^{\prime}})\) as a test function in both \((\mathrm{P})_{\lambda^{\prime}}\) (solved by \(u_{\lambda^{\prime}}\)) and (4.16) (solved by \(w_{\lambda}\)). Thus, we have
\[\int_{\Omega}\nabla w_{\lambda}\cdot\nabla(w_{\lambda}-u_{ \lambda^{\prime}})\theta^{\prime}_{\varepsilon}(w_{\lambda}-u_{\lambda^{ \prime}})\,dx\] \[+\iint_{\mathbb{R}^{2n}}\frac{(w_{\lambda}(x)-w_{\lambda}(y))( \theta_{\varepsilon}(w_{\lambda}(x)-u_{\lambda^{\prime}}(x))-\theta_{ \varepsilon}(w_{\lambda}(y)-u_{\lambda^{\prime}}(y)))}{|x-y|^{n+2s}}\,dxdy\] \[\qquad-\lambda\int_{\Omega}\frac{\theta_{\varepsilon}(w_{\lambda }-u_{\lambda^{\prime}})}{w_{\lambda}^{\gamma}}\,dx=0, \tag{4.19}\]
and
\[\int_{\Omega}\nabla u_{\lambda^{\prime}}\cdot\nabla(w_{\lambda}- u_{\lambda^{\prime}})\theta^{\prime}_{\varepsilon}(w_{\lambda}-u_{\lambda^{ \prime}})\,dx\] \[+\iint_{\mathbb{R}^{2n}}\frac{(u_{\lambda^{\prime}}(x)-u_{\lambda ^{\prime}}(y))(\theta_{\varepsilon}(w_{\lambda}(x)-u_{\lambda^{\prime}}(x))- \theta_{\varepsilon}(w_{\lambda}(y)-u_{\lambda^{\prime}}(y)))}{|x-y|^{n+2s}} \,dxdy\] \[-\lambda^{\prime}\int_{\Omega}\frac{\theta_{\varepsilon}(w_{ \lambda}-u_{\lambda^{\prime}})}{u_{\lambda^{\prime}}^{\gamma}}\,dx-\int_{ \Omega}u_{\lambda^{\prime}}^{2*-1}\theta_{\varepsilon}(w_{\lambda}-u_{\lambda ^{\prime}})\,dx=0. \tag{4.20}\]
Subtracting (4.19) from (4.20) we get
\[\begin{split} 0\geq&-\int_{\Omega}|\nabla(u_{\lambda^{ \prime}}-w_{\lambda})|^{2}\theta^{\prime}_{\varepsilon}(w_{\lambda}-u_{\lambda^ {\prime}})\,dx\\ &\quad-\iint_{\mathbb{R}^{2n}}\frac{1}{|x-y|^{n+2s}}\Big{\{}\big{(} w_{\lambda}(x)-w_{\lambda}(y)-u_{\lambda^{\prime}}(x)+u_{\lambda^{\prime}}(y) \big{)}\times\\ &\qquad\qquad\qquad\times\big{(}\theta_{\varepsilon}(w_{\lambda} (x)-u_{\lambda^{\prime}}(x))-\theta_{\varepsilon}(w_{\lambda}(y)-u_{\lambda^ {\prime}}(y))\big{)}\Big{\}}\,dxdy\\ =&\int_{\Omega}\left(\frac{\lambda^{\prime}}{u_{ \lambda^{\prime}}^{\gamma}}-\frac{\lambda}{w_{\lambda}^{\gamma}}+u_{\lambda^ {\prime}}^{2^{*}-1}\right)\theta_{\varepsilon}(w_{\lambda}-u_{\lambda^{ \prime}})\,dx\\ \geq&\lambda\int_{\Omega}\left(\frac{1}{u_{\lambda^ {\prime}}^{\gamma}}-\frac{1}{w_{\lambda}^{\gamma}}\right)\theta_{\varepsilon }(w_{\lambda}-u_{\lambda^{\prime}})\,dx.\end{split} \tag{4.21}\]
Let us justify the first inequality in (4.21). The local part is clearly non-positive, since \(\theta^{\prime}_{\varepsilon}\geq 0\). Regarding the nonlocal part, if \((x,y)\) are such that
\[w_{\lambda}(x)-u_{\lambda^{\prime}}(x)\geq w_{\lambda}(y)-u_{\lambda^{\prime} }(y),\]
then
\[\theta_{\varepsilon}(w_{\lambda}(x)-u_{\lambda^{\prime}}(x))-\theta_{ \varepsilon}(w_{\lambda}(y)-u_{\lambda^{\prime}}(y))\geq 0,\]
while, if \((x,y)\) are such that
\[w_{\lambda}(x)-u_{\lambda^{\prime}}(x)\leq w_{\lambda}(y)-u_{\lambda^{\prime} }(y),\]
then
\[\theta_{\varepsilon}(w_{\lambda}(x)-u_{\lambda^{\prime}}(x))-\theta_{ \varepsilon}(w_{\lambda}(y)-u_{\lambda^{\prime}}(y))\leq 0.\]
Coming back to (4.21), letting \(\varepsilon\to 0^{+}\) we find that
\[\int_{\{w_{\lambda}>u_{\lambda^{\prime}}\}}\left(\frac{1}{u_{\lambda^{\prime} }^{\gamma}}-\frac{1}{w_{\lambda}^{\gamma}}\right)\,dx\leq 0,\]
and this implies that, the integrand being strictly positive on \(\{w_{\lambda}>u_{\lambda^{\prime}}\}\),
\[|\{x\in\Omega:w_{\lambda}(x)>u_{\lambda^{\prime}}(x)\}|=0,\]
as claimed in (4.18).
With (4.18) at hand, we are ready to complete the proof of the lemma: in fact, setting \(\overline{u}=u_{\lambda^{\prime}}\) and \(\underline{u}=w_{\lambda}\), by (4.18) and Proposition 4.5 we know that
i) \(\underline{u}\) is a weak subsolution and \(\overline{u}\) is a weak supersolution of problem (P)\({}_{\lambda}\);
ii) \(\underline{u}\) and \(\overline{u}\) satisfy assumptions a)-b) in Lemma 4.4.
We can then apply Lemma 4.4, thereby proving that problem (P)\({}_{\lambda}\) admits a weak solution \(u_{\lambda}\) for every \(\lambda\in(0,\Lambda)\). Moreover,
\[I_{\lambda}(u_{\lambda})\leq I_{\lambda}(w_{\lambda})\leq J_{\lambda}(w_{ \lambda})<0.\]
For \(\lambda=\Lambda\) it is now sufficient to repeat the last part of the proof of [38, Lemma 2.3] with the obvious modifications.
**2) Existence of a second solution.** Having proved the existence of a weak solution of problem (P)\({}_{\lambda}\) for all \(0<\lambda\leq\Lambda\) (see Lemma 4.6), in this second part of the section we show the existence of _another weak solution_ of (P)\({}_{\lambda}\) when \(\lambda\in(0,\Lambda)\). Recalling the definition of \(\Lambda\) in (4.1), this will prove Theorem 1.1.
To begin with, we establish the following lemma.
**Lemma 4.7**.: _Let \(\underline{u},\overline{u},u_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\) be, respectively, the weak subsolution, the weak supersolution and the weak solution of problem (P)\({}_{\lambda}\) obtained in Lemma 4.6, and assume that \(0<\lambda<\Lambda\). Then, \(u_{\lambda}\) is a local minimizer for_
\[I_{\lambda}(u)=\frac{1}{2}\rho(u)^{2}-\frac{\lambda}{1-\gamma}\int_{\Omega}|u|^{1-\gamma}\,dx-\frac{1}{2^{*}}\int_{\Omega}|u|^{2^{*}}\,dx\quad(u\in\mathcal{X}^{1,2}(\Omega)).\]
Proof.: By contradiction, suppose that \(u_{\lambda}\)_is not_ a local minimizer for \(I_{\lambda}\). Then, we can construct a sequence \(\{u_{j}\}_{j}\subseteq\mathcal{X}^{1,2}(\Omega)\) satisfying the following properties:
i) \(u_{j}\to u_{\lambda}\) in \(\mathcal{X}^{1,2}(\Omega)\) as \(j\to+\infty\);
ii) \(I_{\lambda}(u_{j})<I_{\lambda}(u_{\lambda})\) for every \(j\in\mathbb{N}\).
We explicitly observe that, by possibly replacing \(u_{j}\) with \(z_{j}=|u_{j}|\), we may assume that \(u_{j}\geq 0\) a.e. in \(\Omega\) for every \(j\geq 1\). In fact, since \(u_{j}\to u_{\lambda}\) in \(\mathcal{X}^{1,2}(\Omega)\) and since \(u_{\lambda}>0\) almost everywhere in \(\Omega\), it is easy to recognize that
\[|u_{j}|\to|u_{\lambda}|=u_{\lambda}\text{ in }\mathcal{X}^{1,2}(\Omega)\text{ as }j\to+\infty,\]
and this shows that property i) is still satisfied by \(\{z_{j}\}_{j}\). Moreover, we have
\[I_{\lambda}(|u_{j}|)\leq I_{\lambda}(u_{j})<I_{\lambda}(u_{\lambda})\quad \text{for every }j\geq 1,\]
and this shows that also property ii) is still satisfied by the sequence \(\{z_{j}\}_{j}\). Hence, from now on we tacitly understand that \(\{u_{j}\}_{j}\) is a sequence of _non-negative functions_ satisfying properties i)-ii) above. Accordingly, we set
\[v_{j}:=\max\{\underline{u},\min\{\overline{u},u_{j}\}\}\in\mathcal{X}^{1,2}(\Omega)\]
and we define
* \(\overline{w}_{j}=(u_{j}-\overline{u})_{+}\in\mathcal{X}^{1,2}_{+}(\Omega)\) and \(\overline{S}_{j}=\operatorname{supp}(\overline{w}_{j})=\{u_{j}\geq\overline{ u}\}\);
* \(\underline{w}_{j}=(u_{j}-\underline{u})_{-}\in\mathcal{X}^{1,2}_{+}(\Omega)\) and \(\underline{S}_{j}=\operatorname{supp}(\underline{w}_{j})=\{u_{j}\leq \underline{u}\}\).
We explicitly observe that, by definition, the following identities hold:
\[\begin{split}\text{a)}&\;v_{j}\in M=\{u\in \mathcal{X}^{1,2}(\Omega):\,\underline{u}\leq u\leq\overline{u}\};\\ \text{b)}&\;v_{j}\equiv\overline{u}\text{ on } \overline{S}_{j}\text{, }v_{j}\equiv\underline{u}\text{ on }\underline{S}_{j}\text{ and }v_{j}\equiv u_{j}\text{ on }\{\underline{u}<u_{j}<\overline{u}\};\\ \text{c)}&\;u_{j}=\overline{u}+\overline{w}_{j} \text{ on }\overline{S}_{j}\text{ and }u_{j}=\underline{u}-\underline{w}_{j}\text{ on }\underline{S}_{j}.\end{split} \tag{4.22}\]
Following [38], we now claim that
\[\lim_{j\to+\infty}|\overline{S}_{j}|=\lim_{j\to+\infty}|\underline{S}_{j}|=0. \tag{4.23}\]
Indeed, let \(\varepsilon>0\) be arbitrarily fixed and let \(\delta>0\) be such that \(|\Omega\setminus\Omega_{\delta}|<\frac{\varepsilon}{2}\), where we have set \(\Omega_{\delta}=\{x\in\Omega:\,d(x,\partial\Omega)>\delta\}\Subset\Omega\). Since, by construction,
\[\underline{u}=w_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\]
is the unique solution of problem (4.16), we know from Proposition 4.5 that
\[u_{\lambda}\geq\underline{u}\geq C>0\text{ a.e.\,in }\Omega_{\delta}, \tag{4.24}\]
where \(C=C(\delta,\underline{u})>0\) is a suitable constant (recall that \(\underline{u}\leq u_{\lambda}\leq\overline{u}\)).
On the other hand, since \(u_{\lambda}\) is a weak solution of problem (P)\({}_{\lambda}\), and since \(\overline{u}=u_{\lambda^{\prime}}\) for some \(\lambda<\lambda^{\prime}<\Lambda\) (see the proof of Lemma 4.6), by (4.24) we have
\[\begin{split}\mathcal{L}(\overline{u}-u_{\lambda})& =\lambda^{\prime}\overline{u}^{-\gamma}-\lambda u_{\lambda}^{-\gamma }+(\overline{u}^{2^{*}-1}-u_{\lambda}^{2^{*}-1})\\ &(\text{since, by construction, }\underline{u}\leq u_{\lambda}\leq \overline{u}\text{ and }\lambda<\lambda^{\prime})\\ &\geq\lambda(\overline{u}^{-\gamma}-u_{\lambda}^{-\gamma})\end{split}\]
(by the Mean Value Theorem, for some \(\theta\in(0,1)\))
\[=-\gamma\lambda(\theta\overline{u}+(1-\theta)u_{\lambda})^{-\gamma-1 }(\overline{u}-u_{\lambda})\] \[\geq-\gamma\lambda\underline{u}^{-\gamma-1}(\overline{u}-u_{ \lambda})\] (here we use (4.24)) \[\geq-\gamma\lambda C^{-\gamma-1}(\overline{u}-u_{\lambda}),\]
in the weak sense on \(\Omega_{\delta}\); as a consequence, we see that \(v:=\overline{u}-u_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\) is a _weak supersolution_ (in the sense of Definition 3.1) of equation (3.1), with
\[a(x)=\gamma\lambda C(\delta,\underline{u})^{-\gamma-1}>0.\]
Since \(v>0\) a.e. on every ball \(B\Subset\Omega\) (as \(\overline{u}=u_{\lambda^{\prime}}\) and \(\lambda\neq\lambda^{\prime}\)), we can apply again Corollary 3.6, ensuring the existence of \(C_{1}=C_{1}(\delta,u_{\lambda},\overline{u})>0\) such that
\[v=\overline{u}-u_{\lambda}\geq C_{1}>0\text{ a.e.\,in }\Omega_{\delta}. \tag{4.25}\]
With (4.25) at hand, we can finally complete the proof of (4.23): in fact, recalling that \(u_{j}\to u_{\lambda}\) in \(\mathcal{X}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\) as \(j\to+\infty\), from (4.25) we obtain
\[|\overline{S}_{j}| \leq|\Omega\setminus\Omega_{\delta}|+|\Omega_{\delta}\cap \overline{S}_{j}|<\frac{\varepsilon}{2}+\frac{1}{C_{1}^{2}}\int_{\Omega_{ \delta}\cap\overline{S}_{j}}(\overline{u}-u_{\lambda})^{2}\,dx\] \[(\text{since }0\leq\overline{u}-u_{\lambda}\leq u_{j}-u_{ \lambda}\text{ a.e.\,in }\overline{S}_{j})\] \[<\frac{\varepsilon}{2}+\frac{1}{C_{1}^{2}}\|u_{j}-u_{\lambda}\|_{ L^{2}(\Omega)}^{2}<\varepsilon,\]
provided that \(j\) is large enough, and this proves that \(|\overline{S}_{j}|\to 0\) as \(j\to+\infty\). In a very similar fashion, one can prove that \(|\underline{S}_{j}|\to 0\) as \(j\to+\infty\).
Now that we have established (4.23), we can proceed toward the end of the proof of the lemma. To begin with, using identities b)-c) in (4.22) we write
\[I_{\lambda}(u_{j}) =I_{\lambda}(v_{j})+\frac{1}{2}\big{(}\rho(u_{j})^{2}-\rho(v_{j})^ {2}\big{)}\] \[\qquad-\frac{\lambda}{1-\gamma}\int_{\Omega}(|u_{j}|^{1-\gamma}- |v_{j}|^{1-\gamma})\,dx-\frac{1}{2^{*}}\int_{\Omega}(|u_{j}|^{2^{*}}-|v_{j}|^{ 2^{*}})\,dx\] \[=I_{\lambda}(v_{j})+\frac{1}{2}\int_{\overline{S}_{j}\cup \underline{S}_{j}}(|\nabla u_{j}|^{2}-|\nabla v_{j}|^{2})\,dx+\frac{1}{2} \big{(}[u_{j}]_{s}^{2}-[v_{j}]_{s}^{2}\big{)}\] \[\qquad-\frac{\lambda}{1-\gamma}\int_{\overline{S}_{j}\cup \underline{S}_{j}}(|u_{j}|^{1-\gamma}-|v_{j}|^{1-\gamma})\,dx-\frac{1}{2^{*}} \int_{\overline{S}_{j}\cup\underline{S}_{j}}(|u_{j}|^{2^{*}}-|v_{j}|^{2^{*}}) \,dx\] \[=I_{\lambda}(v_{j})+\frac{1}{2}\big{(}[u_{j}]_{s}^{2}-[v_{j}]_{s} ^{2}\big{)}+\mathcal{R}_{j}^{(1)}+\mathcal{R}_{j}^{(2)}=(\bigstar),\]
where we have introduced the shorthand notation
\[(*)\ \mathcal{R}_{j}^{(1)} =\frac{1}{2}\int_{\overline{S}_{j}}\big{(}|\nabla(\overline{u}+ \overline{w}_{j})|^{2}-|\nabla\overline{u}|^{2}\big{)}\,dx\] \[-\int_{\overline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \overline{u}+\overline{w}_{j}|^{1-\gamma}-|\overline{u}|^{1-\gamma})+\frac{1}{ 2^{*}}(|\overline{u}+\overline{w}_{j}|^{2^{*}}-|\overline{u}|^{2^{*}})\Big{\}}dx;\] \[(*)\ \mathcal{R}_{j}^{(2)} =\frac{1}{2}\int_{\underline{S}_{j}}\big{(}|\nabla(\underline{u} -\underline{w}_{j})|^{2}-|\nabla\underline{u}|^{2}\big{)}\,dx\]
\[-\int_{\underline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(|\underline{u}- \underline{w}_{j}|^{1-\gamma}-|\underline{u}|^{1-\gamma})+\frac{1}{2^{*}}(| \underline{u}-\underline{w}_{j}|^{2^{*}}-|\underline{u}|^{2^{*}})\Big{\}}dx.\]
Then, we exploit the estimate for the _purely nonlocal_ term \(J_{0}=[u_{j}]_{s}^{2}-[v_{j}]_{s}^{2}\) established in [37, Lemma 3.3]: this gives the following computation
\[(\bigstar) \geq I_{\lambda}(v_{j})+\frac{1}{2}\big{(}[\overline{w}_{j}]_{s} ^{2}+[\underline{w}_{j}]_{s}^{2}\big{)}+\langle\overline{u},\overline{w}_{j} \rangle_{s,\mathbb{R}^{2n}}-\langle\underline{u},\underline{w}_{j}\rangle_{s, \mathbb{R}^{2n}}+\mathcal{R}_{j}^{(1)}+\mathcal{R}_{j}^{(2)}\] \[=I_{\lambda}(v_{j})+\frac{1}{2}\Big{(}\int_{\Omega}|\nabla \overline{w}_{j}|^{2}\,dx+[\overline{w}_{j}]_{s}^{2}\Big{)}+\frac{1}{2}\Big{(} \int_{\Omega}|\nabla\underline{w}_{j}|^{2}\,dx+[\underline{w}_{j}]_{s}^{2} \Big{)}\] \[\qquad+\int_{\Omega}\nabla\overline{u}\cdot\nabla\overline{w}_{j }\,dx+\langle\overline{u},\overline{w}_{j}\rangle_{s,\mathbb{R}^{2n}}-\int_{ \Omega}\nabla\underline{u}\cdot\nabla\underline{w}_{j}\,dx-\langle\underline {u},\underline{w}_{j}\rangle_{s,\mathbb{R}^{2n}}\] \[\qquad-\int_{\overline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \overline{u}+\overline{w}_{j}|^{1-\gamma}-|\overline{u}|^{1-\gamma})+\frac{1} {2^{*}}(|\overline{u}+\overline{w}_{j}|^{2^{*}}-|\overline{u}|^{2^{*}})\Big{\}}dx\] \[\qquad-\int_{\underline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \underline{u}-\underline{w}_{j}|^{1-\gamma}-|\underline{u}|^{1-\gamma})+\frac{1 }{2^{*}}(|\underline{u}-\underline{w}_{j}|^{2^{*}}-|\underline{u}|^{2^{*}}) \Big{\}}dx\] \[=I_{\lambda}(v_{j})+\frac{1}{2}\rho(\overline{w}_{j})^{2}+\frac{ 1}{2}\rho(\underline{w}_{j})^{2}+\mathcal{B}_{\rho}(\overline{u},\overline{w} _{j})-\mathcal{B}_{\rho}(\underline{u},\underline{w}_{j})\] \[\qquad-\int_{\overline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \overline{u}+\overline{w}_{j}|^{1-\gamma}-|\overline{u}|^{1-\gamma})+\frac{1} {2^{*}}(|\overline{u}+\overline{w}_{j}|^{2^{*}}-|\overline{u}|^{2^{*}})\Big{\}}dx\] \[\qquad-\int_{\underline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \underline{u}-\underline{w}_{j}|^{1-\gamma}-|\underline{u}|^{1-\gamma})+\frac {1}{2^{*}}(|\underline{u}-\underline{w}_{j}|^{2^{*}}-|\underline{u}|^{2^{*}}) \Big{\}}dx.\]
Summing up, we obtain
\[I_{\lambda}(u_{j})=I_{\lambda}(v_{j})+A_{j}+B_{j},\]
where we have used the notation
\[(*)\ A_{j} =\frac{1}{2}\rho(\overline{w}_{j})^{2}+\mathcal{B}_{\rho}( \overline{u},\overline{w}_{j})\] \[\qquad-\int_{\overline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \overline{u}+\overline{w}_{j}|^{1-\gamma}-|\overline{u}|^{1-\gamma})+\frac{1} {2^{*}}(|\overline{u}+\overline{w}_{j}|^{2^{*}}-|\overline{u}|^{2^{*}})\Big{\}}dx;\] \[(*)\ B_{j} =\frac{1}{2}\rho(\underline{w}_{j})^{2}-\mathcal{B}_{\rho}( \underline{u},\underline{w}_{j})\] \[\qquad-\int_{\underline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \underline{u}-\underline{w}_{j}|^{1-\gamma}-|\underline{u}|^{1-\gamma})+\frac{1 }{2^{*}}(|\underline{u}-\underline{w}_{j}|^{2^{*}}-|\underline{u}|^{2^{*}}) \Big{\}}dx.\]
Now, since we have already recognized that \(v_{j}\in M\) and since, by construction, we know that \(I_{\lambda}(u_{\lambda})=\inf_{M}I_{\lambda}\) (see the proof of Lemma 4.6), we get
\[I_{\lambda}(u_{j})\geq I_{\lambda}(u_{\lambda})+A_{j}+B_{j}. \tag{4.26}\]
On the other hand, since \(\overline{u}=u_{\lambda^{\prime}}\) is a _weak supersolution_ of \((\mathrm{P})_{\lambda}\), we have
\[A_{j} =\frac{1}{2}\rho(\overline{w}_{j})^{2}+\mathcal{B}_{\rho}( \overline{u},\overline{w}_{j})\] \[\qquad-\int_{\overline{S}_{j}}\Big{\{}\frac{\lambda}{1-\gamma}(| \overline{u}+\overline{w}_{j}|^{1-\gamma}-|\overline{u}|^{1-\gamma})+\frac{1} {2^{*}}(|\overline{u}+\overline{w}_{j}|^{2^{*}}-|\overline{u}|^{2^{*}})\Big{\}}dx\] \[\qquad(\text{by the Mean Value Theorem, for some $\theta\in(0,1)$})\]
\[\geq\frac{1}{2}\rho(\overline{w}_{j})^{2}+\int_{\overline{S}_{j}}( \lambda\overline{u}^{-\gamma}+\overline{u}^{2^{*}-1})\overline{w}_{j}\,dx\] \[\qquad-\int_{\overline{S}_{j}}\big{\{}\lambda(\overline{u}+ \theta\overline{w}_{j})^{-\gamma}\overline{w}_{j}+(\overline{u}+\theta\overline {w}_{j})^{2^{*}-1}\overline{w}_{j}\big{\}}dx\] \[\geq\frac{1}{2}\rho(\overline{w}_{j})^{2}-\int_{\overline{S}_{j}} \big{(}(\overline{u}+\theta\overline{w}_{j})^{2^{*}-1}-\overline{u}^{2^{*}-1} \big{)}\overline{w}_{j}\,dx\] \[\text{(again by the Mean Value Theorem)}\] \[\geq\frac{1}{2}\rho(\overline{w}_{j})^{2}-C\int_{\overline{S}_{j}} (\overline{u}^{2^{*}-2}+\overline{w}_{j}^{2^{*}-2})\overline{w}_{j}^{2}\,dx,\]
where \(C>0\) is a suitable constant only depending on the dimension \(n\). From this, by exploiting Hölder's and Sobolev's inequalities (see Remark 2.2), we obtain
\[A_{j} =\frac{1}{2}\rho(\overline{w}_{j})^{2}-C\int_{\overline{S}_{j}} (\overline{u}^{2^{*}-2}+\overline{w}_{j}^{2^{*}-2})\overline{w}_{j}^{2}\,dx\] \[\geq\frac{1}{2}\rho(\overline{w}_{j})^{2}\Big{\{}1-\hat{C}\Big{(} \int_{\overline{S}_{j}}\overline{u}^{2^{*}}\,dx\Big{)}^{\frac{2^{*}-2}{2^{*}} }-\hat{C}\rho(\overline{w}_{j})^{2^{*}-2}\Big{\}}, \tag{4.27}\]
where \(\hat{C}>0\) is another constant depending on \(n\).
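For completeness, we sketch the elementary estimate behind the last application of the Mean Value Theorem above (here \(\xi\in(\overline{u},\overline{u}+\theta\overline{w}_{j})\) is the intermediate point provided by the theorem, and we use \((a+b)^{q}\leq 2^{q}(a^{q}+b^{q})\) for all \(a,b\geq 0\) and \(q>0\)):
\[(\overline{u}+\theta\overline{w}_{j})^{2^{*}-1}-\overline{u}^{2^{*}-1}=(2^{*}-1)\,\xi^{2^{*}-2}\,\theta\overline{w}_{j}\leq(2^{*}-1)\,2^{2^{*}-2}\big(\overline{u}^{2^{*}-2}+\overline{w}_{j}^{2^{*}-2}\big)\overline{w}_{j};\]
after multiplication by \(\overline{w}_{j}\) and integration over \(\overline{S}_{j}\), this yields the stated bound with \(C=(2^{*}-1)2^{2^{*}-2}\).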
With (4.27) at hand, we are finally ready to complete the proof. Indeed, taking into account the above (4.23), we have
\[\lim_{j\to+\infty}\Big{(}\int_{\overline{S}_{j}}\overline{u}^{2^{*}}\,dx\Big{)}^{\frac{2^{*}-2}{2^{*}}}=0;\]
moreover, since \(u_{j}\to u_{\lambda}\) in \(\mathcal{X}^{1,2}(\Omega)\) as \(j\to+\infty\), one also gets
\[0 \leq\rho(\overline{w}_{j})^{2}\leq\Theta\|\overline{w}_{j}\|_{H^{1}_{0}(\Omega)}^{2}=\Theta\int_{\overline{S}_{j}}|\nabla(u_{j}-\overline{u})|^{2}\,dx\] \[\leq 2\Theta\|u_{j}-u_{\lambda}\|_{H^{1}_{0}(\Omega)}^{2}+2\Theta\int_{\overline{S}_{j}}|\nabla(u_{\lambda}-\overline{u})|^{2}\,dx\] \[\leq 2\Theta\rho(u_{j}-u_{\lambda})^{2}+2\Theta\int_{\overline{S}_{j}}|\nabla(u_{\lambda}-\overline{u})|^{2}\,dx\to 0\qquad\text{as }j\to+\infty,\]
where we have also used the equivalence between \(\rho\) and \(\|\cdot\|_{H^{1}_{0}(\Omega)}\), see (2.6). Gathering these facts, we then infer the existence of some \(j_{0}\geq 1\) such that
\[A_{j}\geq\frac{1}{2}\rho(\overline{w}_{j})^{2}\Big{\{}1-\hat{C}\Big{(}\int_{ \overline{S}_{j}}\overline{u}^{2^{*}}\,dx\Big{)}^{\frac{2^{*}-2}{2^{*}}}-\hat {C}\rho(\overline{w}_{j})^{2^{*}-2}\Big{\}}\geq 0\quad\text{for all $j\geq j_{0}$}.\]
By arguing in a very similar way, one can prove that \(B_{j}\geq 0\) for every \(j\geq j_{0}\) (by possibly enlarging \(j_{0}\) if needed); as a consequence, from (4.26) we get
\[I_{\lambda}(u_{j})\geq I_{\lambda}(u_{\lambda})+A_{j}+B_{j}\geq I_{\lambda}(u_ {\lambda}),\]
but this contradicts property ii) of the sequence \(\{u_{j}\}_{j}\).
Thanks to Lemma 4.7, we are ready to establish the existence of a _second solution_ of problem \((\mathrm{P})_{\lambda}\) (when \(0<\lambda<\Lambda\)). To this end we observe that, if \(u_{\lambda}\) is the solution
of problem \((\mathrm{P})_{\lambda}\) constructed in Lemma 4.6, we know from Lemma 4.7 that \(u_{\lambda}\) is a _local minimizer for \(I_{\lambda}\)_, that is, there exists some \(0<\varrho_{0}\leq\rho(u_{\lambda})\) such that
\[I_{\lambda}(u)\geq I_{\lambda}(u_{\lambda})\quad\text{for every $u\in\mathcal{X}^{1,2}( \Omega)$ with $\rho(u-u_{\lambda})<\varrho_{0}$.} \tag{4.28}\]
As a consequence of (4.28), if we consider the cone
\[T=\{u\in\mathcal{X}^{1,2}(\Omega):\,u\geq u_{\lambda}>0\text{ a.e.\,in $\Omega$}\},\]
only one of the following two cases holds:
* \(\inf\{I_{\lambda}(u):\,u\in T\text{ and }\rho(u-u_{\lambda})=\varrho\}=I_{ \lambda}(u_{\lambda})\) for every \(0<\varrho<\varrho_{0}\);
* there exists \(\varrho_{1}\in(0,\varrho_{0})\) such that (4.29) \[\inf\{I_{\lambda}(u):\,u\in T\text{ and }\rho(u-u_{\lambda})=\varrho_{1}\}>I_ {\lambda}(u_{\lambda}).\]
Following [38], we then turn to consider the cases A)-B) separately.
**Proposition 4.8**.: _Let \(\lambda\in(0,\Lambda)\) be fixed, and assume that Case A) holds. Then, for every \(\varrho\in(0,\varrho_{0})\) there exists a solution \(v_{\lambda}\) of problem \((\mathrm{P})_{\lambda}\) such that_
\[\rho(u_{\lambda}-v_{\lambda})=\varrho.\]
_In particular, \(v_{\lambda}\not\equiv u_{\lambda}\)._
Proof.: Let \(0<\varrho<\varrho_{0}\) be arbitrarily fixed. Since we are assuming that Case A) holds, we can find a sequence \(\{u_{k}\}_{k}\subseteq T\) satisfying the following properties:
* \(\rho(u_{k}-u_{\lambda})=\varrho\) for every \(k\geq 1\);
* \(I_{\lambda}(u_{k})\to I_{\lambda}(u_{\lambda})=:\mathbf{c}_{\lambda}\) as \(k\to+\infty\).
We then choose \(\delta>0\) so small that \(\varrho-\delta>0\) and \(\varrho+\delta<\varrho_{0}\) and, accordingly, we consider the subset of \(T\) defined as follows:
\[X=\{u\in T:\,\varrho-\delta\leq\rho(u-u_{\lambda})\leq\varrho+\delta\}\subseteq T\]
(note that \(u_{k}\in X\) for every \(k\geq 1\), see a)). Since \(X\) is _closed_, it is a _complete metric space_ when endowed with the distance induced by \(\rho\); moreover, since \(I_{\lambda}\) is a _real-valued and continuous functional_ on \(X\), and since
\[\inf_{X}I_{\lambda}=I_{\lambda}(u_{\lambda})\]
we are entitled to apply the Ekeland Variational Principle (see [3]) to the functional \(I_{\lambda}\) on \(X\), providing us with a sequence \(\{v_{k}\}_{k}\subseteq X\) such that
\[\begin{split}\text{i)}&\;I_{\lambda}(v_{k})\leq I_{ \lambda}(u_{k})\leq I_{\lambda}(u_{\lambda})+1/k^{2},\\ \text{ii)}&\;\rho(v_{k}-u_{k})\leq 1/k,\\ \text{iii)}&\;I_{\lambda}(v_{k})\leq I_{\lambda}(u)+ 1/k\cdot\rho(v_{k}-u)\quad\text{for every $u\in X$.}\end{split} \tag{4.30}\]
We now observe that, since \(\{v_{k}\}_{k}\subseteq X\) and since the set \(X\) is _bounded_ in \(\mathcal{X}^{1,2}(\Omega)\), there exists \(v_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\) such that (as \(k\to+\infty\) and up to a subsequence)
\[\begin{split}\text{i)}&\;v_{k}\to v_{\lambda}\text{ weakly in }\mathcal{X}^{1,2}(\Omega);\\ \text{ii)}&\;v_{k}\to v_{\lambda}\text{ strongly in }L^{p}\text{ for every }1\leq p<2^{*};\\ \text{iii)}&\;v_{k}\to v_{\lambda}\text{ pointwise a.e.\,in $\Omega$.}\end{split} \tag{4.31}\]
Here we have also used the _compact embedding_ \(\mathcal{X}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\). To complete the proof, we then turn to prove the following two facts:
* \(v_{\lambda}\) is a weak solution of \((\mathrm{P})_{\lambda}\);
* \(\rho(v_{\lambda}-u_{\lambda})=\varrho>0\).
Proof of 1).: To begin with, we fix \(w\in T\) and we choose \(\varepsilon_{0}=\varepsilon_{0}(w,\lambda)>0\) so small that \(u_{\varepsilon}=v_{k}+\varepsilon(w-v_{k})\in X\) for every \(0<\varepsilon<\varepsilon_{0}\). We explicitly stress that the existence of such an \(\varepsilon_{0}\) easily follows from (4.30)-ii) and property a) above.
On account of (4.30)-iii) (with \(u=u_{\varepsilon}\)), we have
\[\frac{I_{\lambda}(v_{k}+\varepsilon(w-v_{k}))-I_{\lambda}(v_{k})}{\varepsilon }\geq-\frac{1}{k}\rho(w-v_{k});\]
from this, by letting \(\varepsilon\to 0^{+}\) and by arguing _exactly_ as in [38, Lemma 2.2] (see also identity (4.8) in the proof of Lemma 4.4), we obtain
\[-\frac{1}{k}\rho(w-v_{k})\leq\mathcal{B}_{\rho}(v_{k},w-v_{k})- \int_{\Omega}v_{k}^{2^{*}-1}(w-v_{k})\,dx\] \[\qquad\qquad\qquad\qquad-\lambda\int_{\Omega}v_{k}^{-\gamma}(w-v _{k})\,dx. \tag{4.32}\]
Now, given any \(\varphi\in\mathcal{X}^{1,2}(\Omega)\) and any \(\varepsilon>0\), we define
* \(\psi_{k,\varepsilon}=v_{k}+\varepsilon\varphi-u_{\lambda}\) and \(\phi_{k,\varepsilon}=(\psi_{k,\varepsilon})_{-}\);
* \(\psi_{\varepsilon}=v_{\lambda}+\varepsilon\varphi-u_{\lambda}\) and \(\phi_{\varepsilon}=(\psi_{\varepsilon})_{-}\).
Since, obviously, \(w=v_{k}+\varepsilon\varphi+\phi_{k,\varepsilon}\in T\), by exploiting (4.32) we get
\[-\frac{1}{k}\rho(\varepsilon\varphi+\phi_{k,\varepsilon})\leq \mathcal{B}_{\rho}(v_{k},\varepsilon\varphi+\phi_{k,\varepsilon})-\int_{ \Omega}v_{k}^{2^{*}-1}(\varepsilon\varphi+\phi_{k,\varepsilon})\,dx\] \[\qquad\qquad\qquad\qquad-\lambda\int_{\Omega}v_{k}^{-\gamma}( \varepsilon\varphi+\phi_{k,\varepsilon})\,dx. \tag{4.33}\]
Then, we aim to pass to the limit as \(k\to+\infty\) and \(\varepsilon\to 0^{+}\) in the above (4.33). To this end we first observe that, on account of (4.31)-iii), we have
\[\phi_{k,\varepsilon}\to\phi_{\varepsilon}\text{ pointwise a.e.\ in }\Omega\text{ as }k\to+\infty; \tag{4.34}\]
moreover, by the very definition of \(\phi_{k,\varepsilon}\) we also have the following estimate
\[|\phi_{k,\varepsilon}|=(u_{\lambda}-\varepsilon\varphi-v_{k})\cdot\mathbf{1}_ {\{u_{\lambda}-\varepsilon\varphi-v_{k}\geq 0\}}\leq u_{\lambda}+\varepsilon| \varphi|;\]
thus, since \(v_{k}\geq u_{\lambda}>0\) a.e. in \(\Omega\) (as \(v_{k}\in X\subseteq T\)), we get
\[\text{i) }v_{k}^{2^{*}-1}|\phi_{k,\varepsilon}|=v_{k}^{2^{*}-1}(u_{ \lambda}-\varepsilon\varphi-v_{k})\cdot\mathbf{1}_{\{u_{\lambda}-\varepsilon \varphi-v_{k}\geq 0\}}\leq(u_{\lambda}+\varepsilon|\varphi|)^{2^{*}};\] \[\text{ii) }v_{k}^{-\gamma}|\varepsilon\varphi+\phi_{k, \varepsilon}|\leq u_{\lambda}^{-\gamma}(u_{\lambda}+2\varepsilon|\varphi|). \tag{4.35}\]
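The bound in ii) simply combines \(v_{k}\geq u_{\lambda}\) (whence \(v_{k}^{-\gamma}\leq u_{\lambda}^{-\gamma}\)) with the triangle inequality and the previous estimate on \(|\phi_{k,\varepsilon}|\):
\[|\varepsilon\varphi+\phi_{k,\varepsilon}|\leq\varepsilon|\varphi|+|\phi_{k,\varepsilon}|\leq\varepsilon|\varphi|+u_{\lambda}+\varepsilon|\varphi|=u_{\lambda}+2\varepsilon|\varphi|.\]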
On account of (4.34)-(4.35), and since \(u_{\lambda}^{-\gamma}|\varphi|\in L^{1}(\Omega)\) (see Remark 2.4-2)), we can apply Lebesgue's Dominated Convergence Theorem, obtaining
\[\lim_{k\to+\infty}\int_{\Omega}v_{k}^{2^{*}-1}\phi_{k,\varepsilon }\,dx=\int_{\Omega}v_{\lambda}^{2^{*}-1}\phi_{\varepsilon}\,dx,\] \[\lim_{k\to+\infty}\int_{\Omega}v_{k}^{-\gamma}(\varepsilon\varphi +\phi_{k,\varepsilon})\,dx=\int_{\Omega}v_{\lambda}^{-\gamma}(\varepsilon \varphi+\phi_{\varepsilon})\,dx.\]
This, together with (4.31)-ii), allows us to conclude that
\[\begin{split}\lim_{k\to+\infty}\Big{(}&\int_{\Omega} v_{k}^{2^{*}-1}(\varepsilon\varphi+\phi_{k,\varepsilon})\,dx+\lambda\int_{ \Omega}v_{k}^{-\gamma}(\varepsilon\varphi+\phi_{k,\varepsilon})\,dx\Big{)}\\ &=\int_{\Omega}v_{\lambda}^{2^{*}-1}(\varepsilon\varphi+\phi_{ \varepsilon})\,dx+\lambda\int_{\Omega}v_{\lambda}^{-\gamma}(\varepsilon \varphi+\phi_{\varepsilon})\,dx.\end{split} \tag{4.36}\]
As regards the _operator term_\(\mathcal{B}_{\rho}(v_{k},\varepsilon\varphi+\phi_{k,\varepsilon})\) we first observe that, by using the computations already carried out in [4, Lemma 3.4] (for the local part) and in [37, Lemma 4.1] (for the nonlocal part), we have
\[\mathcal{B}_{\rho}(v_{k},\phi_{k,\varepsilon})\] \[\qquad=\int_{\Omega}\nabla v_{k}\cdot\nabla\phi_{k,\varepsilon}\, dx+\iint_{\mathbb{R}^{2n}}\frac{(v_{k}(x)-v_{k}(y))(\phi_{k,\varepsilon}(x)-\phi_{k, \varepsilon}(y))}{|x-y|^{n+2s}}\,dx\,dy\] \[\qquad=\int_{\{v_{k}+\varepsilon\varphi\leq u_{\lambda}\}}\nabla v _{k}\cdot\nabla(u_{\lambda}-\varepsilon\varphi-v_{\lambda})\,dx\] \[\qquad\qquad+\int_{\{v_{k}+\varepsilon\varphi\leq u_{\lambda}\}} \nabla v_{k}\cdot\nabla(v_{\lambda}-v_{k})\,dx\] \[\qquad\qquad+\iint_{\mathbb{R}^{2n}}\frac{(v_{k}(x)-v_{k}(y))( \phi_{\varepsilon}(x)-\phi_{\varepsilon}(y))}{|x-y|^{n+2s}}\,dx\,dy\] \[\qquad\qquad+\iint_{\mathbb{R}^{2n}}\frac{(v_{k}(x)-v_{k}(y))}{| x-y|^{n+2s}}\big{(}(\phi_{k,\varepsilon}-\phi_{\varepsilon})(x)-(\phi_{k, \varepsilon}-\phi_{\varepsilon})(y))\big{)}\,dx\,dy\] \[\leq\int_{\Omega}\nabla v_{k}\cdot\nabla\phi_{\varepsilon}\,dx\] \[\qquad\qquad+\iint_{\mathbb{R}^{2n}}\frac{(v_{k}(x)-v_{k}(y))( \phi_{\varepsilon}(x)-\phi_{\varepsilon}(y))}{|x-y|^{n+2s}}\,dx\,dy+o(1)\] \[=\mathcal{B}_{\rho}(v_{k},\phi_{\varepsilon})+o(1)\qquad\text{as $k \to+\infty$.}\]
This, together with the fact that \(v_{k}\to v_{\lambda}\) weakly in \(\mathcal{X}^{1,2}(\Omega)\), gives
\[\mathcal{B}_{\rho}(v_{k},\varepsilon\varphi+\phi_{k,\varepsilon})\leq\mathcal{ B}_{\rho}(v_{\lambda},\varepsilon\varphi+\phi_{\varepsilon})+o(1)\qquad\text{as $k\to+\infty$.} \tag{4.37}\]
Gathering (4.36) and (4.37), and taking into account that \(\rho(\phi_{k,\varepsilon})\) is _uniformly bounded_ with respect to \(k\) (as the same is true of \(v_{k}\)), we can finally pass to the limit as \(k\to+\infty\) in (4.33), obtaining
\[\mathcal{B}_{\rho}(v_{\lambda},\varepsilon\varphi+\phi_{\varepsilon})\geq\int _{\Omega}v_{\lambda}^{2^{*}-1}(\varepsilon\varphi+\phi_{\varepsilon})\,dx+ \lambda\int_{\Omega}v_{\lambda}^{-\gamma}(\varepsilon\varphi+\phi_{ \varepsilon})\,dx. \tag{4.38}\]
With (4.38) at hand, we can now exploit once again the computations carried out in [38, Lemma 2.6] and in [37, Lemma 4.1], getting
\[\begin{split}&\mathcal{B}_{\rho}(v_{\lambda},\varphi)-\lambda \int_{\Omega}v_{\lambda}^{-\gamma}\varphi\,dx-\int_{\Omega}v_{\lambda}^{2^{*}- 1}\varphi\,dx\\ &\geq-\frac{1}{\varepsilon}\Big{(}\mathcal{B}_{\rho}(v_{ \lambda},\phi_{\varepsilon})-\lambda\int_{\Omega}v_{\lambda}^{-\gamma}\phi_{ \varepsilon}\,dx-\int_{\Omega}v_{\lambda}^{2^{*}-1}\phi_{\varepsilon}\,dx \Big{)}\\ &\text{(since $u_{\lambda}$ is a solution of (P)}_{\lambda})\\ &=\frac{1}{\varepsilon}\Big{(}-\mathcal{B}_{\rho}(v_{\lambda}-u_ {\lambda},\phi_{\varepsilon})+\lambda\int_{\Omega}(v_{\lambda}^{-\gamma}-u_{ \lambda}^{-\gamma})\phi_{\varepsilon}\,dx\\ &\qquad+\int_{\Omega}(v_{\lambda}^{2^{*}-1}-u_{\lambda}^{2^{*}-1 })\phi_{\varepsilon}\,dx\Big{)}\\ &\text{(since $v_{\lambda}=\lim_{k\to+\infty}v_{k}\geq u_{ \lambda}$)}\\ &\geq\frac{1}{\varepsilon}\Big{(}-\int_{\{v_{\lambda}+\varepsilon \varphi\leq u_{\lambda}\}}\nabla(v_{\lambda}-u_{\lambda})\cdot\nabla(v_{ \lambda}-u_{\lambda}+\varepsilon\varphi)\,dx\\ &\qquad-\iint_{\mathbb{R}^{2n}}\frac{((v_{\lambda}-u_{\lambda})( x)-(v_{\lambda}-u_{\lambda})(y))(\phi_{\varepsilon}(x)-\phi_{\varepsilon}(y))}{|x-y| ^{n+2s}}\,dx\,dy\\ &\qquad+\lambda\int_{\{v_{\lambda}+\varepsilon\varphi\leq u_{ \lambda}\}}(v_{\lambda}^{-\gamma}-u_{\lambda}^{-\gamma})(v_{\lambda}-u_{ \lambda}-\varepsilon\varphi)\,dx\Big{)}\\ &\geq o(1)\text{ as $\varepsilon\to 0^{+}$};\end{split} \tag{4.39}\]
as a consequence, by letting \(\varepsilon\to 0^{+}\) in (4.39), we obtain
\[\mathcal{B}_{\rho}(v_{\lambda},\varphi)-\lambda\int_{\Omega}v_{\lambda}^{- \gamma}\varphi\,dx-\int_{\Omega}v_{\lambda}^{2^{*}-1}\varphi\,dx\geq 0.\]
This, together with the _arbitrariness_ of the fixed \(\varphi\in\mathcal{X}^{1,2}(\Omega)\), finally proves that the function \(v_{\lambda}\) is a weak solution of problem \((\mathrm{P})_{\lambda}\), as claimed.
Proof of 2).: To prove assertion 2) it suffices to show that
\[v_{k}\to v_{\lambda}\text{ strongly in $\mathcal{X}^{1,2}(\Omega)$ as $k\to+\infty$}. \tag{4.40}\]
In fact, owing to property b) of \(\{u_{k}\}_{k}\) we have
\[\varrho-\rho(u_{k}-v_{k})\leq\rho(v_{k}-u_{\lambda})\leq\rho(v_{k}-u_{k})+ \varrho;\]
this, together with (4.40) and (4.30)-ii), ensures that \(\rho(u_{\lambda}-v_{\lambda})=\varrho.\) Hence, we turn to prove (4.40), namely the _strong convergence_ of \(\{v_{k}\}_{k}\) to \(v_{\lambda}\).
First of all, since \(v_{k}\to v_{\lambda}\) weakly in \(\mathcal{X}^{1,2}(\Omega)\) as \(k\to+\infty\), we can proceed as in the proof of Lemma 4.1, obtaining the following analogs of (4.6)-(4.7):
\[\begin{split}&(*)\,\,\int_{\Omega}v_{k}^{1-\gamma}\,dx=\int_{ \Omega}v_{\lambda}^{1-\gamma}\,dx+o(1),\\ &(*)\,\,\|v_{k}\|_{L^{2^{*}}(\Omega)}^{2^{*}}=\|v_{\lambda}\|_{L^{ 2^{*}}(\Omega)}^{2^{*}}+\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}+o(1) ;\\ &(*)\,\,\rho(v_{k})^{2}=\rho(v_{\lambda})^{2}+\rho(v_{k}-v_{ \lambda})^{2}+o(1).\end{split} \tag{4.41}\]
Moreover, by (4.31)-ii), we also get
\[\lim_{k\to+\infty}\int_{\Omega}|v_{k}-v_{\lambda}|^{1-\gamma}\,dx=0. \tag{4.42}\]
Owing to (4.41), and choosing \(w=v_{\lambda}\in T\) in (4.32), we then get
\[\rho(v_{k}-v_{\lambda})^{2}=-\mathcal{B}_{\rho}(v_{k},v_{\lambda} -v_{k})+\mathcal{B}_{\rho}(v_{\lambda},v_{\lambda}-v_{k})\] \[\qquad\leq\frac{1}{k}\rho(v_{k}-v_{\lambda})+\lambda\int_{\Omega} v_{k}^{-\gamma}(v_{k}-v_{\lambda})\,dx\] \[\qquad\qquad+\int_{\Omega}v_{k}^{2^{*}-1}(v_{k}-v_{\lambda})\,dx+ \mathcal{B}_{\rho}(v_{\lambda},v_{\lambda}-v_{k})\] \[\text{(since $\{v_{k}\}_{k}$ is bounded and $v_{k}\to v_{\lambda}$ weakly in $\mathcal{X}^{1,2}(\Omega)$)}\] \[=\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx+\int_{\Omega}v_{k}^{2^ {*}-1}(v_{k}-v_{\lambda})\,dx-\lambda\int_{\Omega}v_{k}^{-\gamma}v_{\lambda} \,dx+o(1)\] \[=\lambda\int_{\Omega}v_{\lambda}^{1-\gamma}\,dx+\|v_{k}-v_{ \lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}+\|v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2 ^{*}}-\int_{\Omega}v_{k}^{2^{*}-1}v_{\lambda}\,dx\] \[\qquad-\lambda\int_{\Omega}v_{k}^{-\gamma}v_{\lambda}\,dx+o(1)\] \[\text{(since $v_{k}\to v_{\lambda}$ strongly in $L^{p}(\Omega)$ for all $1\leq p<2^{*}$)}\] \[=\lambda\int_{\Omega}v_{\lambda}^{1-\gamma}\,dx+\|v_{k}-v_{ \lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\lambda\int_{\Omega}v_{k}^{-\gamma}v_ {\lambda}\,dx+o(1).\]
On the other hand, since \(0\leq v_{k}^{-\gamma}v_{\lambda}\leq u_{\lambda}^{-\gamma}v_{\lambda}\in L^{1 }(\Omega)\) (recall that \(v_{\lambda}\geq u_{\lambda}\) and see Remark 2.4-ii)), by Lebesgue's Dominated Convergence Theorem we have
\[\int_{\Omega}v_{k}^{-\gamma}v_{\lambda}\,dx\to\int_{\Omega}v_{\lambda}^{1- \gamma}\,dx\qquad\text{ as $k\to+\infty$};\]
as a consequence, we obtain
\[\rho(v_{k}-v_{\lambda})^{2}\leq\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^ {*}}+o(1)\quad\text{as $k\to+\infty$}. \tag{4.43}\]
To proceed further, we now choose \(w=2v_{k}\in T\) in (4.32): this yields
\[\rho(v_{k})^{2}-\|v_{k}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx\geq-\frac{1}{k}\rho(v_{k})=o(1);\]
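In fact, with \(w=2v_{k}\) one has \(w-v_{k}=v_{k}\), so that (4.32) becomes (recall that \(\mathcal{B}_{\rho}(v_{k},v_{k})=\rho(v_{k})^{2}\))
\[-\frac{1}{k}\,\rho(v_{k})\leq\mathcal{B}_{\rho}(v_{k},v_{k})-\int_{\Omega}v_{k}^{2^{*}}\,dx-\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx=\rho(v_{k})^{2}-\|v_{k}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx,\]
and the left-hand side is \(o(1)\) as \(k\to+\infty\) because \(\{v_{k}\}_{k}\) is bounded in \(\mathcal{X}^{1,2}(\Omega)\).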
thus, recalling that \(v_{\lambda}\) is a weak solution of problem (P)\({}_{\lambda}\), we get
\[\begin{split}\rho(v_{k}-v_{\lambda})^{2}&=\rho(v_{k})^{2}-\rho(v_{\lambda})^{2}+o(1)\\ &\geq\left(\|v_{k}\|_{L^{2^{*}}(\Omega)}^{2^{*}}+\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx\right)-\mathcal{B}_{\rho}(v_{\lambda},v_{\lambda})+o(1)\\ &=\|v_{k}\|_{L^{2^{*}}(\Omega)}^{2^{*}}+\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx-\|v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\lambda\int_{\Omega}v_{\lambda}^{1-\gamma}\,dx+o(1)\\ &=\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}+o(1)\quad\text{as }k\to+\infty,\end{split} \tag{4.44}\]
where we have also used (4.41). Gathering (4.43)-(4.44), we then obtain
\[\rho(v_{k}-v_{\lambda})^{2}=\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}+o (1)\quad\text{as $k\to+\infty$}. \tag{4.45}\]
With (4.45) at hand, we can finally end the proof of (4.40). In fact, assuming (to fix the ideas) that \(I_{\lambda}(u_{\lambda})\leq I_{\lambda}(v_{\lambda})\), from (4.30) and (4.41)-(4.42) we get
\[\begin{split} I_{\lambda}(v_{k}-v_{\lambda})&=\frac{1}{2}\rho(v_{k}-v_{\lambda})^{2}-\frac{\lambda}{1-\gamma}\int_{\Omega}\left|v_{k}-v_{\lambda}\right|^{1-\gamma}dx-\frac{1}{2^{*}}\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\\ &=I_{\lambda}(v_{k})-I_{\lambda}(v_{\lambda})+o(1)\leq I_{\lambda}(u_{\lambda})-I_{\lambda}(v_{\lambda})+\frac{1}{k^{2}}+o(1)\\ &\leq o(1)\qquad\text{as }k\to+\infty;\end{split}\]
this, together with (4.42), gives
\[\begin{split}&\frac{1}{2}\rho(v_{k}-v_{\lambda})^{2}-\frac{1}{2^ {*}}\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\\ &\qquad=I_{\lambda}(v_{k}-v_{\lambda})+\frac{\lambda}{1-\gamma} \int_{\Omega}\left|v_{k}-v_{\lambda}\right|^{1-\gamma}dx\leq o(1).\end{split} \tag{4.46}\]
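To make the conclusion transparent, set \(a_{k}=\rho(v_{k}-v_{\lambda})^{2}\) and \(b_{k}=\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\) (a shorthand used only in this remark): then (4.45) and (4.46) read, respectively, \(a_{k}=b_{k}+o(1)\) and \(\frac{1}{2}a_{k}-\frac{1}{2^{*}}b_{k}\leq o(1)\), whence
\[\Big(\frac{1}{2}-\frac{1}{2^{*}}\Big)a_{k}\leq o(1)\quad\text{as }k\to+\infty;\]
since \(2^{*}>2\), the factor \(\frac{1}{2}-\frac{1}{2^{*}}\) is strictly positive, and therefore \(a_{k}\to 0\), which in turn forces \(b_{k}\to 0\) as well.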
Thus, by combining (4.45)-(4.46), we easily obtain
\[\lim_{k\to+\infty}\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}=\lim_{k \to+\infty}\rho(v_{k}-v_{\lambda})^{2}=0,\]
and this proves (4.40).
**Proposition 4.9**.: _Let \(\lambda\in(0,\Lambda)\) be fixed, and assume that Case B) holds. Then, there exists a solution \(v_{\lambda}\) of problem (P)\({}_{\lambda}\) such that \(v_{\lambda}\not\equiv u_{\lambda}\)._
Proof.: To begin with, we consider the set
\[\Gamma=\big{\{}\eta\in C([0,1];T):\,\eta(0)=u_{\lambda},\,I_{\lambda}(\eta(1) )<I_{\lambda}(u_{\lambda})\text{ and }\rho(\eta(1)-u_{\lambda})>\varrho_{1}\big{\}},\]
(where \(\varrho_{1}>0\) is as in (4.29)), and we claim that \(\Gamma\neq\varnothing\).
To prove this claim, we closely follow the approach in [38]: first of all, we arbitrarily fix a point \(y\in\Omega\) and we choose \(r>0\) such that \(B_{r}(y)\Subset\Omega\); we then choose a cut-off function \(\varphi\in C_{0}^{\infty}(\Omega)\) such that
* \(0\leq\varphi\leq 1\) in \(\Omega\);
* \(\varphi\equiv 1\) on \(B_{r}(y)\);
and we consider the one-parameter family of functions
\[U_{\varepsilon}=V_{\varepsilon}\,\varphi,\qquad\text{where }V_{\varepsilon}(x)=\frac{\varepsilon^{\frac{n-2}{2}}}{(\varepsilon^{2}+|x-y|^{2})^{\frac{n-2}{2}}}\quad\text{(with }\varepsilon>0).\]
We explicitly highlight that the above family \(\{V_{\varepsilon}\}_{\varepsilon}\) corresponds to the well-known _Aubin-Talenti functions_, which are the unique (up to translation) extremals in the (local) Sobolev inequality. This means, precisely, that
\[\frac{\||\nabla V_{\varepsilon}|\|_{L^{2}(\mathbb{R}^{n})}^{2}}{\|V_{ \varepsilon}\|_{L^{2^{*}}(\mathbb{R}^{n})}^{2}}=S_{n}, \tag{4.47}\]
where \(S_{n}>0\) is the _best constant_ in the Sobolev inequality.
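For the reader's convenience we also recall (although it is not restated in the text above) the classical variational characterization of \(S_{n}\), where \(\mathcal{D}^{1,2}(\mathbb{R}^{n})\) denotes the completion of \(C_{0}^{\infty}(\mathbb{R}^{n})\) with respect to the \(L^{2}\) norm of the gradient:

\[S_{n}=\inf_{u\in\mathcal{D}^{1,2}(\mathbb{R}^{n})\setminus\{0\}}\frac{\||\nabla u|\|_{L^{2}(\mathbb{R}^{n})}^{2}}{\|u\|_{L^{2^{*}}(\mathbb{R}^{n})}^{2}},\qquad 2^{*}=\frac{2n}{n-2}.\]

In particular, (4.47) expresses precisely that the Aubin-Talenti functions attain this infimum.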
We now recall that, owing to [15, Lemma 1.1], we have (as \(\varepsilon\to 0^{+}\))
\[\begin{split}\text{i) }\||\nabla U_{\varepsilon}|\|_{L^{2}(\Omega)}^{2}=\varepsilon^{n-2}\Big{(}\frac{K_{1}}{\varepsilon^{n-2}}+O(1)\Big{)}=K_{1}+o(\varepsilon^{\frac{n-2}{2}});\\ \text{ii) }\|U_{\varepsilon}\|_{L^{2^{*}}(\Omega)}^{2^{*}}=\varepsilon^{n}\Big{(}\frac{K_{2}}{\varepsilon^{n}}+O(1)\Big{)}=K_{2}+o(\varepsilon^{n/2});\end{split} \tag{4.48}\]
where the constants \(K_{1},K_{2}>0\) are given, respectively, by
\[K_{1}=\||\nabla V_{1}|\|_{L^{2}(\mathbb{R}^{n})}^{2},\qquad K_{2}=\|V_{1}\|_{L^{2^ {*}}(\mathbb{R}^{n})}^{2}\]
(we explicitly stress that, owing to (4.47), we have \(K_{1}/K_{2}=S_{n}\)).
On the other hand, by [8, Lemma 4.11] and (4.48)-ii), we also have
\[\begin{split}[U_{\varepsilon}]_{s}^{2}&=\|U_{ \varepsilon}\|_{L^{2^{*}}(\Omega)}^{2}\cdot O(\varepsilon^{2-2s})\\ &=\big{(}K_{2}+o(\varepsilon^{n/2})\big{)}^{2/2^{*}}\cdot O( \varepsilon^{2-2s})=o(\varepsilon^{1-s}).\end{split} \tag{4.49}\]
Gathering (4.48)-(4.49), and defining \(w=u_{\lambda}+tRU_{\varepsilon}\) (for \(R\geq 1\) and \(t\in[0,1]\)), we then obtain the following computation (as \(\varepsilon\to 0^{+}\)):
\[\begin{split} I_{\lambda}(w)&=I_{\lambda}(u_{\lambda})+\frac{t^{2}R^{2}}{2}\rho(U_{\varepsilon})^{2}+tR\,\mathcal{B}_{\rho}(u_{\lambda},U_{\varepsilon})\\ &\qquad-\frac{1}{2^{*}}\big{(}\|u_{\lambda}+tRU_{\varepsilon}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\|u_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\big{)}\\ &\qquad-\frac{\lambda}{1-\gamma}\Big{(}\int_{\Omega}|u_{\lambda}+tRU_{\varepsilon}|^{1-\gamma}\,dx-\int_{\Omega}|u_{\lambda}|^{1-\gamma}\,dx\Big{)}\\ &\text{(recalling that $u_{\lambda}$ solves (P)$_{\lambda}$)}\\ &=I_{\lambda}(u_{\lambda})+\frac{t^{2}R^{2}}{2}\rho(U_{\varepsilon})^{2}+tR\int_{\Omega}(\lambda u_{\lambda}^{-\gamma}+u_{\lambda}^{2^{*}-1})U_{\varepsilon}\,dx\\ &\qquad-\frac{1}{2^{*}}\big{(}\|u_{\lambda}+tRU_{\varepsilon}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\|u_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\big{)}\\ &\qquad-\frac{\lambda}{1-\gamma}\Big{(}\int_{\Omega}|u_{\lambda}+tRU_{\varepsilon}|^{1-\gamma}\,dx-\int_{\Omega}|u_{\lambda}|^{1-\gamma}\,dx\Big{)}\\ &=I_{\lambda}(u_{\lambda})+\frac{t^{2}R^{2}}{2}\rho(U_{\varepsilon})^{2}-\frac{t^{2^{*}}R^{2^{*}}}{2^{*}}\|U_{\varepsilon}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\\ &\qquad-t^{2^{*}-1}R^{2^{*}-1}\int_{\Omega}U_{\varepsilon}^{2^{*}-1}u_{\lambda}\,dx-\mathcal{R}_{\varepsilon}-\mathcal{D}_{\varepsilon}\\ &=I_{\lambda}(u_{\lambda})+\frac{t^{2}R^{2}}{2}K_{1}-\frac{t^{2^{*}}R^{2^{*}}}{2^{*}}K_{2}\\ &\qquad-t^{2^{*}-1}R^{2^{*}-1}\int_{\Omega}U_{\varepsilon}^{2^{*}-1}u_{\lambda}\,dx-\mathcal{R}_{\varepsilon}-\mathcal{D}_{\varepsilon}\\ &\qquad\qquad+\big{(}t^{2}R^{2}+t^{2^{*}}R^{2^{*}}\big{)}o(\varepsilon^{\kappa_{s,n}}),\end{split} \tag{4.50}\]
where we have introduced the notation
\[\begin{split}(*)\ \mathcal{R}_{\varepsilon}&=\frac{1}{2^{*}} \int_{\Omega}\Big{\{}|u_{\lambda}+tRU_{\varepsilon}|^{2^{*}}-u_{\lambda}^{2^{* }}-(tRU_{\varepsilon})^{2^{*}}\\ &\qquad\qquad-2^{*}u_{\lambda}(tRU_{\varepsilon})\big{(}u_{ \lambda}^{2^{*}-2}+(tRU_{\varepsilon})^{2^{*}-2}\big{)}\Big{\}}dx;\\ (*)\ \mathcal{D}_{\varepsilon}&=\frac{\lambda}{1-\gamma} \int_{\Omega}\Big{\{}|u_{\lambda}+tRU_{\varepsilon}|^{1-\gamma}-|u_{\lambda}|^{1 -\gamma}-tR(1-\gamma)u_{\lambda}^{-\gamma}U_{\varepsilon}\Big{\}}dx;\\ (*)\ \kappa_{s,n}&=\min\Big{\{}1-s,\frac{n-2}{2}\Big{\}}. \end{split}\]
With the above (4.50) at hand, we can now argue _exactly_ as in [38]: first of all, by exploiting [16, Lemma 4] we get (see [16, eq. (22)])
\[\mathcal{R}_{\varepsilon}=R^{\beta}o(\varepsilon^{\frac{n-2}{2}})\quad\text{ as }\varepsilon\to 0^{+}, \tag{4.51}\]
for some \(\beta\in(0,2^{*})\); moreover, by using [38, eq. (2.35)], we also get
\[-\mathcal{D}_{\varepsilon}\leq C(tR+t^{2}R^{2})\,o(\varepsilon^{\frac{n-2}{2} })\quad\text{as }\varepsilon\to 0^{+}, \tag{4.52}\]
for some constant \(C>0\). As a consequence, by combining (4.50) with (4.51)-(4.52), we obtain the following estimate, holding true as \(\varepsilon\to 0\):
\[I_{\lambda}(u_{\lambda}+tRU_{\varepsilon})\leq I_{\lambda}(u_{ \lambda})+\frac{t^{2}R^{2}}{2}K_{1}-\frac{t^{2^{*}}R^{2^{*}}}{2^{*}}K_{2}\] \[\qquad\qquad-t^{2^{*}-1}R^{2^{*}-1}\int_{\Omega}U_{\varepsilon}^ {2^{*}-1}u_{\lambda}\,dx\] \[\qquad\qquad+C(tR+t^{2}R^{2}+t^{2^{*}}R^{2^{*}}+R^{\beta})o( \varepsilon^{\kappa_{s,n}}).\]
From this, by arguing exactly as in [4, Lemma 3.3] (note that \(\kappa_{s,n}>0\)), we conclude that there exist \(\varepsilon_{0}>0\) and \(R_{0}\geq 1\) such that
\[\begin{cases}I_{\lambda}(u_{\lambda}+RU_{\varepsilon})<I_{\lambda}(u_{ \lambda})&\text{for all }\varepsilon\in(0,\varepsilon_{0})\text{ and }R\geq R_{0},\\ I_{\lambda}(u_{\lambda}+tR_{0}U_{\varepsilon})<I_{\lambda}(u_{\lambda})+\frac{ 1}{n}S_{n}^{n/2}&\text{for all }\varepsilon\in(0,\varepsilon_{0})\text{ and }t\in[0,1].\end{cases} \tag{4.53}\]
In particular, from (4.53) we easily see that
\[\eta_{0}(t)=u_{\lambda}+tR_{0}U_{\varepsilon}\in\Gamma\quad\text{for all } \varepsilon\in(0,\varepsilon_{0})\]
(by enlarging \(R_{0}\) if needed), and thus \(\Gamma\neq\varnothing\), as claimed.
Now that we have proved that \(\Gamma\neq\varnothing\), we can proceed towards the end of the proof. To this end, we first observe that, being non-empty, the set \(\Gamma\) is a _complete metric space_ when endowed with the distance
\[d_{\Gamma}(\eta_{1},\eta_{2}):=\max_{0\leq t\leq 1}\rho\big{(}\eta_{1}(t)-\eta _{2}(t)\big{)};\]
moreover, since \(I_{\lambda}\) is _real-valued and continuous_ on \(\mathcal{X}^{1,2}(\Omega)\), it is easy to recognize that the functional \(\Phi:\Gamma\to\mathbb{R}\) defined as
\[\Phi(\eta):=\max_{0\leq t\leq 1}I_{\lambda}(\eta(t)),\]
is (well-defined and) continuous on \(\Gamma\). In view of these facts, we are then entitled to apply the Ekeland Variational Principle to this functional \(\Phi\) on \(\Gamma\): setting
\[\gamma_{0}:=\inf_{\eta\in\Gamma}\Phi(\eta),\]
there exists a sequence \(\{\eta_{k}\}_{k}\subseteq\Gamma\) such that
\[\text{i)}\ \Phi(\eta_{k}) \leq\gamma_{0}+1/k,\] \[\text{ii)}\ \Phi(\eta_{k}) \leq\Phi(\eta)+1/k\,d_{\Gamma}(\eta_{k},\eta)\quad\text{for every }\eta\in\Gamma. \tag{4.54}\]
Now, starting from (4.54) and proceeding exactly as in the proof of [4, Lemma 3.5] (see also [37, Lemma 4.3]), we can find another sequence
\[v_{k}=\eta_{k}(t_{k})\in T\]
(for some \(t_{k}\in[0,1]\)) such that
a) \(I_{\lambda}(v_{k})\to\gamma_{0}\) as \(k\to+\infty\);
b) there exists some \(C>0\) such that, for every \(w\in T\), one has
\[\begin{split}\mathcal{B}_{\rho}(v_{k},w-v_{k})-\lambda\int_{ \Omega}v_{k}^{-\gamma}(w-v_{k})\,dx\\ -\int_{\Omega}v_{k}^{2^{*}-1}(w-v_{k})\,dx\geq-\frac{C}{k}(1+ \rho(w)).\end{split} \tag{4.55}\]
In particular, choosing \(w=2v_{k}\) in (4.55), we get
\[\rho(v_{k})^{2}-\lambda\int_{\Omega}v_{k}^{1-\gamma}\,dx-\int_{\Omega}v_{k}^{2^{*}}\,dx\geq-\frac{C}{k}(1+2\rho(v_{k})). \tag{4.56}\]
By combining (4.56) with assertion a), and exploiting Hölder's and Sobolev's inequalities, we then obtain the following estimate
\[\begin{split}\gamma_{0}+o(1)&=\frac{1}{2}\rho(v_{k})^{2}-\frac{\lambda}{1-\gamma}\int_{\Omega}v_{k}^{1-\gamma}\,dx-\frac{1}{2^{*}}\int_{\Omega}v_{k}^{2^{*}}\,dx\\ &\geq\Big{(}\frac{1}{2}-\frac{1}{2^{*}}\Big{)}\rho(v_{k})^{2}-\lambda\Big{(}\frac{1}{1-\gamma}-\frac{1}{2^{*}}\Big{)}\int_{\Omega}v_{k}^{1-\gamma}\,dx\\ &\qquad-\frac{C}{2^{*}\,k}(1+2\rho(v_{k}))\\ &\geq\Big{(}\frac{1}{2}-\frac{1}{2^{*}}\Big{)}\rho(v_{k})^{2}-C\big{(}\rho(v_{k})^{1-\gamma}+2\rho(v_{k})+1\big{)},\end{split} \tag{4.57}\]
where \(C>0\) is a constant depending on \(n\) and on \(|\Omega|\). Since, obviously,
\[c_{0}=\frac{1}{2}-\frac{1}{2^{*}}>0,\]
it is readily seen from (4.57) that the sequence \(\{v_{k}\}_{k}\) is _bounded in \(\mathcal{X}^{1,2}(\Omega)\)_ (otherwise, by possibly passing to a sub-sequence we would have \(\rho(v_{k})\to+\infty\), and hence the right-hand side of (4.57) would diverge as \(k\to+\infty\), which is not possible since the left-hand side remains bounded).
In view of this fact, we can thus proceed as in the proof of Lemma 4.8 to show that \(\{v_{k}\}_{k}\) weakly converges (up to a sub-sequence) to a _weak solution_\(v_{\lambda}\in\mathcal{X}^{1,2}(\Omega)\) of problem (P)\({}_{\lambda}\), further satisfying the identity
\[\rho(v_{k}-v_{\lambda})^{2}-\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*} }=o(1)\quad\text{as $k\to+\infty$}. \tag{4.58}\]
In view of these facts, to complete the proof we are left to show that \(v_{\lambda}\not\equiv u_{\lambda}\). To this end we first observe that, given any \(\eta\in\Gamma\), we have
\[\rho(\eta(0)-u_{\lambda})=0\quad\text{and}\quad\rho(\eta(1)-u_{\lambda})> \varrho_{1},\]
and hence there exists a point \(t_{\eta}\in[0,1]\) such that \(\rho(\eta(t_{\eta})-u_{\lambda})=\varrho_{1}\); as a consequence, since _we are assuming that_ Case B) holds, we obtain
\[\gamma_{0} =\inf_{\eta\in\Gamma}\Phi(\eta)\geq\inf\big{\{}I_{\lambda}(\eta(t _{\eta})):\,\eta\in\Gamma\big{\}}\] \[\geq\inf\{I_{\lambda}(u):u\in T\text{ and }\rho(u-u_{\lambda})= \varrho_{1}\}>I_{\lambda}(u_{\lambda}).\]
On the other hand, since we already know that \(\eta_{0}(t)=u_{\lambda}+tR_{0}U_{\varepsilon}\in\Gamma\), from (4.53) (and the very definition of \(\gamma_{0}\)) we derive the following estimate
\[\gamma_{0}\leq\Phi(\eta_{0})=\max_{0\leq t\leq 1}I_{\lambda}(\eta_{0}(t))<I_{ \lambda}(u_{\lambda})+\frac{1}{n}S_{n}^{n/2}.\]
Summing up, we have
\[I_{\lambda}(u_{\lambda})<\gamma_{0}<I_{\lambda}(u_{\lambda})+\frac{1}{n}S_{n}^{n/2}. \tag{4.59}\]
Now, since the sequence \(\{v_{k}\}_{k}\) weakly converges in \(\mathcal{X}^{1,2}(\Omega)\) to \(v_{\lambda}\) as \(k\to+\infty,\)_the same assertions_ in (4.41) hold also in this context; this, together with (4.59) and the above property a) of the sequence \(\{v_{k}\}_{k},\) gives
\[\begin{split}&\frac{1}{2}\rho(v_{k}-v_{\lambda})^{2}-\frac{1}{2^{*}}\|v_{k}-v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\\ &\qquad=\frac{1}{2}\big{(}\rho(v_{k})^{2}-\rho(v_{\lambda})^{2}\big{)}-\frac{1}{2^{*}}\big{(}\|v_{k}\|_{L^{2^{*}}(\Omega)}^{2^{*}}-\|v_{\lambda}\|_{L^{2^{*}}(\Omega)}^{2^{*}}\big{)}+o(1)\\ &\qquad=I_{\lambda}(v_{k})-I_{\lambda}(u_{\lambda})+o(1)=\gamma_{0}-I_{\lambda}(u_{\lambda})+o(1)\\ &\qquad<\frac{1}{n}S_{n}^{n/2}-\delta_{0},\end{split} \tag{4.60}\]
for some \(\delta_{0}>0\) such that \(1/n\,S_{n}^{n/2}-\delta_{0}>0\) (provided that \(k\) is large enough).
Gathering (4.58)-(4.60), arguing as in [46, Proposition 3.1], it is then easy to recognize that \(v_{k}\to v_{\lambda}\)_strongly in \(\mathcal{X}^{1,2}(\Omega)\)_; as a consequence, by the continuity of the functional \(I_{\lambda}\) and by (4.55)-(4.59), we get
\[I_{\lambda}(u_{\lambda})<\gamma_{0}=\lim_{k\to+\infty}I_{\lambda}(v_{k})=I_{ \lambda}(v_{\lambda}),\]
and this finally proves that \(v_{\lambda}\not\equiv u_{\lambda},\) as desired.
Using all the results established so far, we are finally ready to provide the
Proof (of Theorem 1.1).: Let \(\Lambda\) be as in (4.1), that is,
\[\Lambda:=\sup\{\lambda>0:\text{(P)}_{\lambda}\text{ admits a weak solution}\}.\]
Taking into account Lemma 4.1, we know that \(\Lambda\in(0,+\infty)\); moreover, by combining Lemma 4.6 with Propositions 4.8-4.9, we infer that
i) there exist _at least two distinct weak solutions_\(u_{\lambda},v_{\lambda}\) of (P)\({}_{\lambda}\) for \(\lambda\in(0,\Lambda)\);
ii) there exists _at least one weak solution_\(u_{\lambda}\) of (P)\({}_{\Lambda}\).
Hence, assertions a)-b) in the statement of the theorem are established.
Finally, by the very definition of \(\Lambda\) we derive that (P)\({}_{\lambda}\)_does not admit_ weak solutions when \(\lambda>\Lambda\); this establishes also assertion c), and the proof is complete.
---

2306.01102 | LLMatic: Neural Architecture Search via Large Language Models and Quality Diversity Optimization

Abstract: Large Language Models (LLMs) have emerged as powerful tools capable of accomplishing a broad spectrum of tasks. Their abilities span numerous areas, and one area where they have made a significant impact is in the domain of code generation. Here, we propose using the coding abilities of LLMs to introduce meaningful variations to code defining neural networks. Meanwhile, Quality-Diversity (QD) algorithms are known to discover diverse and robust solutions. By merging the code-generating abilities of LLMs with the diversity and robustness of QD solutions, we introduce LLMatic, a Neural Architecture Search (NAS) algorithm. While LLMs struggle to conduct NAS directly through prompts, LLMatic uses a procedural approach, leveraging QD for prompts and network architecture to create diverse and high-performing networks. We test LLMatic on the CIFAR-10 and NAS-bench-201 benchmarks, demonstrating that it can produce competitive networks while evaluating just 2,000 candidates, even without prior knowledge of the benchmark domain or exposure to any previous top-performing models for the benchmark. The open-sourced code is available at https://github.com/umair-nasir14/LLMatic.

Muhammad U. Nasir, Sam Earle, Christopher Cleghorn, Steven James, Julian Togelius | 2023-06-01T19:33:21Z | http://arxiv.org/abs/2306.01102v8

# LLMatic: Neural Architecture Search via Large Language Models and Quality-Diversity Optimization
###### Abstract
Large Language Models (LLMs) have emerged as powerful tools capable of accomplishing a broad spectrum of tasks. Their abilities span numerous areas, and one area where they have made a significant impact is in the domain of code generation. Here, we propose to use the coding abilities of LLMs to introduce meaningful variations to code defining neural networks. Meanwhile, Quality-Diversity (QD) algorithms are known to discover diverse and robust solutions. By merging the code-generating abilities of LLMs with the diversity and robustness of QD solutions, we introduce LLMatic, a Neural Architecture Search (NAS) algorithm. While LLMs struggle to conduct NAS directly through prompts, LLMatic uses a procedural approach, leveraging QD for prompts and network architecture to create diverse and high-performing networks. We test LLMatic on the CIFAR-10 image classification benchmark, demonstrating that it can produce competitive networks while evaluating just \(2,000\) candidates, even without prior knowledge of the benchmark domain or exposure to any previous top-performing models for the benchmark.
\({}^{1}\) University of the Witwatersrand
2New York University Tandon
[email protected], [email protected], [email protected],
[email protected], [email protected]
## Introduction
A major challenge in deep learning is designing good neural network architectures. Neural Architecture Search (NAS) is the generic term for various approaches to automating this design process. The idea is to formulate an objective, such as maximum accuracy on a classification problem with a given budget of parameters and training cycles, and cast the problem as a search for the architecture that maximizes the objective. This typically means that many thousands of architectures are tested and discarded in the process. Every test consists of training the candidate network architecture using some form of gradient descent on the chosen benchmark dataset to measure its performance.
Two common algorithmic approaches to NAS are reinforcement learning and evolutionary computation. Reinforcement learning approaches to NAS train a controller (typically another neural network) that outputs network architectures; these network architectures are tested and their performance is used as a reward signal. Evolutionary computation approaches to NAS, on the other hand, directly search the space of neural architectures. A population of architectures are kept, and their performance is used as a fitness score. Evolutionary NAS approaches are similar to neuroevolution, which has existed since the 1970s, and one might even see NAS as a form of neuroevolution. The main difference is that in NAS, the search process does not concern the parameters of the neural network, only its architecture.
One could argue that search by evolutionary computation or reinforcement learning is quite mindless and wasteful, given how many architectures need to be tested and how uninformed the changes that lead to each new architecture are. Is there some way we can inform the search by exploiting stored knowledge about how to design neural networks? This paper explores the idea that we can do exactly this using code-generating large language models (LLMs). More precisely, we propose using LLMs to generate new architectural variations.
The argument for this is simply that modern LLMs fine-tuned on code are very capable. Given the amount of machine learning code they have been trained on, it is not surprising that they can design good neural network architectures. However, an LLM by itself cannot in general find an optimal architecture for a given problem, as it cannot try architectures out and learn from its experiments. Therefore, we propose to combine the domain knowledge of code-generating LLMs with a robust search mechanism.
While generating a single architecture that maximizes a given objective is good for many use cases, there is in general more value to generating a set of architectures that vary across some relevant dimensions. For example, one might want to have a set of architectures that vary in their parameter counts or depths. This helps in understanding the trade-offs between various desirable metrics and could assist in making better-informed decisions about which architecture to use for a specific application. For example, one might want a range of networks for edge deployments to clients with different RAM sizes. To enable this, the solution proposed here leverages quality-diversity search, specifically a version of the MAP-Elites algorithm.
## Related Work
Designing good, learnable neural architectures can be an expensive and unintuitive process for human designers. Neural Architecture Search (NAS) aims to automatically find neural architectures capable of strong performance after training (Elsken, Metzen, and Hutter 2019). Bayesian methods are a popular choice given their low sample complexity and the fact that evaluating each architecture (by training it) can be computationally expensive (Kandasamy et al. 2018). Alternatively, Reinforcement Learning can be used to train an agent (usually another neural network) to output candidate architectures for a given task, with the performance after training of the candidate architecture acting as a reward signal (Jaafra et al. 2019). Evolutionary methods can also be used to search directly through the space of possible architectures (Liu et al. 2021). Similarly, Monte Carlo Tree Search has been used to search the space of possible architectures (Wistuba 2017). In all cases, a human designer must manually define a set of atomic network components or edit actions for use in network search/generation.
To avoid having the designer constrain the space of possible architectures prior to search, we turn to code-generating Large Language Models (LLMs), large models trained auto-regressively on massive datasets of code (e.g. public repositories in Github). Transformers (Vaswani et al. 2017) facilitated the explosion of LLMs (Radford et al. 2019; Brown et al. 2020). Apart from creating state-of-the-art models in core natural language processing tasks, they led to creating models for a wide variety of other tasks, such as generating video game levels and code (Chen et al. 2021; Todd et al. 2023).
EvoPrompting (Chen, Dohan, and So 2023) is a method that is somewhat similar to ours in that it uses code-LLMs as mutation and crossover operators in order to perform NAS. It is tested on the MNIST-1D classification task (Greydanus 2020) and the CLRS algorithmic reasoning benchmark (Velickovic et al. 2022). Since performance can generally be trivially increased by simply adding parameters to the model, an additional penalty is added to the fitness of a candidate neural architecture corresponding to its model size. This favours small models with effective architectures. In our method, we instead consider model complexity (in terms of FLOPS) as a diversity metric, searching for high-performing models of a variety of sizes.
Quality Diversity (QD) methods (Pugh, Soros, and Stanley 2016) are a family of evolutionary algorithms that, in addition to optimizing a fitness metric, search for a diversity of individuals according to some user-specified "behavioral descriptors". Instead of keeping a population of the most fit individuals, QD methods such as MAP-Elites (Mouret and Clune 2015) maintain an "archive" of individuals, where this archive is partitioned into cells, with each cell corresponding to individuals exhibiting a particular range of values along each behavioral descriptor.
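Since MAP-Elites is central to what follows, a minimal single-step sketch may help. This is our own illustrative code, not taken from any MAP-Elites library: individuals, the descriptor, and the fitness function are all placeholders, and the archive is a plain dictionary mapping cell indices to elites.

```python
import random

def map_elites_step(archive, descriptor, fitness, mutate, n_bins=10, bounds=(0.0, 1.0)):
    """One MAP-Elites iteration: select a random elite, mutate it, and keep
    the offspring only if it beats the current occupant of the descriptor
    cell it falls into (per-cell elitism)."""
    parent = random.choice(list(archive.values()))
    child = mutate(parent)
    # Discretize the behavioural descriptor into one of n_bins cells.
    lo, hi = bounds
    cell = min(n_bins - 1, int((descriptor(child) - lo) / (hi - lo) * n_bins))
    # Replace the cell occupant only on improvement.
    if cell not in archive or fitness(child) > fitness(archive[cell]):
        archive[cell] = child
    return archive
```

Running many such steps fills the archive with a diversity of fit individuals, one per descriptor cell, rather than a single best solution.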
QD methods are valuable in domains such as robot control, where it is useful to learn diverse high-quality trajectories, in case one solution should become unavailable during deployment because of a physical obstruction or mechanical malfunction. Another motivating factor is that greedily searching for the fittest individual may not be desirable in deceptive domains. Here, maintaining a diversity of fit individuals may protect the population from falling into local optima. Conversely, diverse, unorthodox solutions may provide valuable "stepping stones" on the path to globally fit individuals.
LLMatic begins its search with a very basic neural network, inspired by the work of Stanley and Miikkulainen (2002), which suggests that neuroevolution tends to perform better when starting with a small network. In LLMatic, we use dual-archive cooperative QD optimization, a method where two separate archives are used to store complementary components that can be combined to solve a given task. The first archive stores neural networks, with the width-to-depth ratio and Floating Point Operations per Second (FLOPS) of a network as the behavioural descriptors. The width-to-depth ratio is the width of the network divided by its depth: the width is the maximum of the output features over all layers, while the number of layers is taken as the depth of the network. FLOPS were chosen over parameter count because FLOPS correlates better with the actual time spent training a network [1]. We call this archive the "network archive". The fitness function for the networks in this archive is defined as the test accuracy of the network after training. The second archive, called the "prompt archive", contains the prompt and the temperature used for generating code, which together serve as its behavioural descriptors. The selection of prompt and temperature depends on a curiosity score [15], which in turn depends on whether the generated network was added to the network archive. The fitness of a prompt individual depends on whether the generated network was better than the previous generation's best score.
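The two behavioural descriptors for the network archive can be computed directly from a network's layer specification. The helper below is an illustrative sketch under our own naming (it is not taken from the LLMatic codebase); it assumes the FLOPS count is supplied by an external profiler.

```python
def network_descriptors(layer_out_features, flops):
    """Behavioural descriptors for the network archive: width is the
    maximum number of output features over all layers, depth is the
    number of layers, and the first descriptor is their ratio; the
    second descriptor is the FLOPS of the network."""
    width = max(layer_out_features)   # widest layer
    depth = len(layer_out_features)   # layer count
    return width / depth, flops
```

For the initial conv(1 channel) → fc(1024) → fc(10) network, `network_descriptors([1, 1024, 10], flops)` gives a width-to-depth ratio of 1024/3.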
In the first generation, a simple neural network with one convolutional and one fully connected layer initiates the evolution. A prompt is selected at random to generate an initial batch of networks. These networks are evaluated and an attempt is made to add them to the network archive as a random initialization for MAP-Elites. Concurrently, we mutate the temperature based on the fitness of the network, increasing it if the fitness increases and vice versa. By increasing the temperature, we encourage the LLM to explore, as it is performing better than before; by decreasing the temperature, we encourage the LLM to exploit and try to achieve better fitness than before. Once we calculate the fitness of the prompt individual, we add the score to a collective prompt fitness score, after which we try to populate the prompt archive. The collective prompt fitness score determines the overall fitness of each individual in the prompt archive, as it gives each prompt a fitness score.
Once either of the archives reaches a specified capacity, we introduce training of the neural networks and the evolutionary operators into the evolution process. With a certain probability at each generation, a decision is made on whether to perform crossover or mutation to produce a new batch of \(N\) offspring. If crossover is chosen, we select \(N\) random network individuals, locate their closest networks in the archive, and carry out a crossover operation instructed by a prompt. No individual is added to the prompt archive when a crossover is performed. If the mutation operation is selected, we pick the most curious prompt individual and a random network individual. For exploration, we also select random prompts. In both cases, each network is trained for a certain number of epochs and an attempt is made to add the network to the archive. Likewise, a prompt individual is added as previously described. This process continues for a pre-determined number of generations. Algorithm 1 shows the complete search process of LLMatic in pseudocode. Refer to the supplementary material for pseudocode of the mutation operators, crossover operators, temperature mutation, and addition to archives.
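The prompt-archive bookkeeping described above can be sketched in a few lines. The helper below is a hypothetical reconstruction under our own naming (dictionary keys and function name are ours), using the reward values the paper reports: curiosity changes of +1.0 / −0.5 / −1.0, temperature steps of ±0.05, and +1 to the collective prompt fitness when the new network beats the previous generation's best.

```python
def update_prompt_individual(prompt, outcome, prev_best_fitness, network_fitness):
    """Update one prompt individual after a network generation.
    `outcome` is 'added', 'rejected', or 'untrainable'; `network_fitness`
    is None when the generated network could not be trained."""
    deltas = {'added': 1.0, 'rejected': -0.5, 'untrainable': -1.0}
    prompt['curiosity'] += deltas[outcome]
    if network_fitness is not None and network_fitness >= prev_best_fitness:
        # At least as good as the previous generation's best: explore more.
        prompt['temperature'] = min(1.0, prompt['temperature'] + 0.05)
    else:
        # Worse than the previous best (or untrainable): exploit instead.
        prompt['temperature'] = max(0.0, prompt['temperature'] - 0.05)
    if network_fitness is not None and network_fitness > prev_best_fitness:
        prompt['fitness'] += 1  # collective prompt fitness score
    return prompt
```

The asymmetric rewards mean prompts that produce untrainable code are quickly deprioritized, while prompts whose offspring enter the network archive keep being selected.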
## Experiment Setup and Results
We choose the CIFAR-10 dataset1 for our experimentation, as it is a widely acknowledged dataset for NAS [14, 15]. The CIFAR-10 dataset is made up of 60,000 color images, each with a resolution of 32x32 pixels, divided into 10 categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. Each category contains 6,000 images. Of the total, 50,000 images are designated for training and 10,000 are set aside for testing.
Footnote 1: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
### Setting up LLMatic
LLMatic starts off with a simple neural network consisting of one convolutional layer that takes a 3-channel input, with a \(1\times 1\) kernel size and \(1\) output channel, connected to a dense layer with \(1024\) input neurons. Since the images in the dataset are \(32\times 32\) and the convolution produces a single output channel, the flattened feature map has \(1024\) entries. These hidden neurons are connected via another dense layer to \(10\) output neurons (as we have 10 classes). Rectified Linear Unit (ReLU) [15] is the activation function used in all layers. All of our networks are generated in PyTorch [15].
At each generation, we generate a batch of \(100\) new offspring. Each network generated is trained for \(50\) epochs. The networks are optimized by stochastic gradient descent [1] with the learning rate set to \(0.001\) and momentum set at \(0.9\) for all networks. We use Cross Entropy loss as our measure for the fitness of the trained network. For evolutionary operators, we set a probability of \(0.7\) for mutation and \(0.3\) for crossover as after experimentation, we found mutation to create consistently more trainable neural networks. We initialize the temperature parameter (used when sampling the code-generating LLM) to \(0.6\). For temperature mutation, half of the population is generated by the prompt individual temperature mutated randomly between \(-0.1\) to \(0.1\). The other half is generated by the temperature obtained from the prompt individual itself. If the fitness of the generated network is better than or equal to the best fitness of the previous generation, we increase the temperature by \(0.05\) and if it is worse than the best fitness of the previous generation, we decrease it by \(0.05\). For the crossover operator, we select \(10\) random networks and find their \(2\) or \(3\) nearest neighbours in the network archive to perform crossover. We set the temperature to be \(0.7\) for network generation.
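As a reminder of what those optimizer settings mean, the following is a minimal hand-rolled version of the update rule applied to every generated network; it mirrors the momentum formulation of `torch.optim.SGD` (v ← μ·v + g, then p ← p − lr·v). This sketch is ours, not the paper's code.

```python
def sgd_momentum_step(params, grads, velocity, lr=0.001, momentum=0.9):
    """One SGD-with-momentum update using the settings from the paper
    (learning rate 0.001, momentum 0.9), on flat lists of scalars."""
    velocity = [momentum * v + g for v, g in zip(velocity, grads)]
    params = [p - lr * v for p, v in zip(params, velocity)]
    return params, velocity
```

With a fresh velocity of zero, the first step reduces to plain gradient descent; momentum only accumulates over subsequent steps.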
For our QD archives, we use \(10\) niches per dimension, and we have \(2\) dimensions per archive. We set our number of random initial networks to \(10\); these random initial networks must be added to the archives before the evolutionary operators are introduced. For the network archive, we have the width-to-depth ratio of the network as our first dimension and the FLOPS of the network as the second dimension. The width-to-depth ratio has a lower limit of \(0\) and an upper limit of \(200\). The minimum FLOPS is set to \(200\) MegaFLOPS and the maximum is set to \(5\) GigaFLOPS; this range was set after experimentation. For the second archive, i.e. the prompt archive, we have the prompt encoded as an integer as the first dimension and the temperature as the second dimension. The maximum value of the prompt equals the number of prompts we have, which is \(16\) in total. The maximum value of the temperature is set to \(1\), as it can never increase beyond that for our LLM. The lower limit for all the dimensions is \(0\). From the network archive, we simply select a random network, while from the prompt archive, we select the most curious prompt individual, which depends on the curiosity score. This curiosity score is incremented by \(1.0\) if the selected prompt adds the generated network to the network archive, decreased by \(0.5\) if the network is not added, and reduced by \(1.0\) if the created network is untrainable. If the generated network has better fitness than the previous generation's best network, the collective prompt fitness score for the prompt in the prompt individual is increased by \(1\); otherwise, it is unchanged. We use prompts that are generalizable to any problem in any domain. The following are the prompts that we use for mutation:
```
"""Add a layer to improve the above network"""
"""Delete a layer to improve the above network"""
"""Improve the above network"""
"""Improve the above network by reducing the size drastically"""
"""Improve the above network by increasing the size drastically"""
"""Add fully connected layer to improve the above network"""
"""Add convolutional layer to improve the above network"""
"""Add pooling layer to improve the above network"""
"""Add residual connection to improve the above network"""
"""Add multiple residual connections to improve the above network"""
"""Add dropout layer to improve the above network"""
"""Add normalization layer to improve the above network"""
"""Add recurrent layer to improve the above network"""
```

An example of a prompt will look like this while the mutation operator is selected:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 1, 1)
        self.fc1 = nn.Linear(1024, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return x

"""Add a layer to improve the above network"""
```
Figure 1: The illustration of the best accuracy per generation for LLMatic and all ablation studies. Each experiment is conducted with 10 seeds. The shaded region is the standard deviation while the solid line represents the mean. EfficientNet-B0 is the best-performing EfficientNet on CIFAR-10.
When the crossover operator is selected, the prompt will look like this:
```
importtorch importtorch.nnasnn importtorch.nn.functionalasF classNet1(nn.Module): definit_(self): super()._init_() self.convl=nn.Conv2d(3,1,1) self.fcl=nn.Linear(1024,10) defforward(self,x): x=F.relu(self.convl(x)) x=torch.flatten(x,1) x=F.relu(self.fcl(x)) returnx classNet2(nn.Module): definit_(self): super()._init_() self.convl=nn.Conv2d(3,1,1) self.conv2=nn.Conv2d(1,2,1) self.fcl=nn.Linear(2048,10) defforward(self,x): x=F.relu(self.convl(x)) x=F.relu(self.conv2(x)) x=torch.flatten(x,1) x=F.relu(self.fcl(x)) returnx """Combinetheabovetwoneuralnetworks andcreateathirdneuralnetwork classthatalsoinheritsfromnn.Moduleandperformsbetterthantheabovetwoneuralnetworks"""
```
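The assembly of these prompts is plain string concatenation: the selected network source is followed by the instruction docstring. The sketch below is illustrative only — the function names and the subset of instructions shown are ours, not from the LLMatic codebase:

```python
import random

random.seed(0)

# A subset of the mutation instructions listed above.
MUTATION_PROMPTS = [
    "Add a layer to improve the above network",
    "Delete a layer to improve the above network",
    "Add dropout layer to improve the above network",
]

CROSSOVER_PROMPT = (
    "Combine the above two neural networks and create a third neural network "
    "class that also inherits from nn.Module and performs better than the "
    "above two neural networks"
)

def build_mutation_prompt(network_source: str) -> str:
    """Append a randomly chosen mutation instruction to one network's source."""
    instruction = random.choice(MUTATION_PROMPTS)
    return f'{network_source}\n"""{instruction}"""\n'

def build_crossover_prompt(source_a: str, source_b: str) -> str:
    """Concatenate two parent networks followed by the crossover instruction."""
    return f'{source_a}\n{source_b}\n"""{CROSSOVER_PROMPT}"""\n'

prompt = build_mutation_prompt("class Net(nn.Module): ...")
```

The resulting string is what is fed to the code-generation LLM as its left context.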
We use the pre-trained CodeGen (Nijkamp et al., 2022) LLM to generate neural networks. CodeGen is an autoregressive decoder-only transformer with left-to-right causal masking. CodeGen is first trained on The Pile dataset with random initialization and is called CodeGen-NL. CodeGen-Multi is initialized with CodeGen-NL and is trained on the BigQuery dataset. Lastly, CodeGen-Mono is initialized with CodeGen-Multi and is trained on BigPython. CodeGen comes in various parameter sizes, but we use the \(6.1\) billion parameter variant of CodeGen-Mono due to computational constraints.
CodeGen-6B has \(33\) layers and \(16\) heads with \(256\) dimensions per head. The context length is \(2048\) and the batch size is \(2\) million tokens. Weight decay is set to \(0.1\) and the learning rate to \(0.4e^{-4}\). Warm-up steps are set to \(3k\), while the total number of training steps is \(150k\).
For our QD optimization algorithm, we choose a variant of MAP-Elites called Centroidal Voronoi Tessellation MAP-Elites (CVT-MAP-Elites) (Vassiliades, Chatzilygeroudis, and Mouret, 2017), which was created to scale up MAP-Elites and was shown to do so in a comparison with other MAP-Elites variants by Nilsson and Cully (2021). CVT-MAP-Elites automates the creation of the archive by creating \(k\) cell centroid locations spread through the behavioural descriptor space. We use the _pymap_elites_2 implementation for our experimentation. We use a k-d tree (Bentley, 1975) to create and write centroids to the archive and find the nearest neighbors using a Euclidean distance metric (Dokmanic et al., 2015).
Footnote 2: [https://github.com/resibots/pymap_elites](https://github.com/resibots/pymap_elites)
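A toy re-implementation of the archive logic may help make this concrete. The sketch below approximates CVT centroids with Lloyd's algorithm and uses a linear nearest-centroid scan instead of a k-d tree; the class name, API, and the two-dimensional descriptor space are our illustrative assumptions — the paper itself relies on the pymap_elites package:

```python
import random

random.seed(0)

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cvt_centroids(k, dim, n_samples=2000, iters=10):
    """Approximate a centroidal Voronoi tessellation with Lloyd's algorithm."""
    samples = [[random.random() for _ in range(dim)] for _ in range(n_samples)]
    centroids = random.sample(samples, k)
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for s in samples:
            cells[min(range(k), key=lambda i: dist2(s, centroids[i]))].append(s)
        for i, cell in enumerate(cells):
            if cell:  # recenter each non-empty cell on its mean
                centroids[i] = [sum(c[d] for c in cell) / len(cell)
                                for d in range(dim)]
    return centroids

class CVTArchive:
    """Maps a behaviour descriptor to its nearest centroid; keeps elites."""
    def __init__(self, k, dim):
        self.centroids = cvt_centroids(k, dim)
        self.cells = {}  # centroid index -> (fitness, individual)

    def add(self, descriptor, fitness, individual):
        i = min(range(len(self.centroids)),
                key=lambda j: dist2(descriptor, self.centroids[j]))
        if i not in self.cells or fitness > self.cells[i][0]:
            self.cells[i] = (fitness, individual)
            return True   # the kind of signal used for a +1 curiosity reward
        return False

# dim 2 stands in for the network archive's FLOPS / width-to-depth descriptors.
archive = CVTArchive(k=8, dim=2)
archive.add([0.1, 0.9], fitness=0.7, individual="net-a")
```

A new individual replaces the occupant of its cell only if it has strictly better fitness, which is what "illuminating" the archive means in practice.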
As we have many components in LLMatic, we choose to do a thorough ablation study to determine the effect of each component on overall performance. The following are the components tested for the ablation study:
* Network-Archive-LLMatic: LLMatic with only the network archive. To achieve this, we create a population of prompt individuals. The population is fixed at \(100\) individuals, starting from randomly created individuals. We have only one fitness score for this population, calculated as \(+1\) if a network is added to the network archive, \(-0.5\) if the network is not added, and \(-1\) if the network is not trainable. After generating the network, we mutate the temperature by adding \(0.1\) if the network is added to the network archive and \(-0.1\) if it is not.
* Prompt-Archive-LLMatic: LLMatic with only the prompt archive. To achieve this, we create a population of networks. The fitness function for the population of networks is accuracy. We keep the population at \(100\) individuals. With the same probabilities as LLMatic, we select the mutation or crossover operator. For the crossover operator, we select the individual that is closest in structure to the selected network. For network similarity, we use cosine similarity and choose the networks with higher scores. For the mutation operator, as in LLMatic, we mutate half of the networks from the most curious prompt individuals and half from random individuals.
* Mutation-Only-LLMatic: LLMatic with only mutation operator.
* Crossover-Only-LLMatic: LLMatic with only crossover operator.
* Random-NN-Generation: Neural network generation without evolution. We generate \(100\) networks per generation for \(20\) generations to make it comparable, as this is the same number of networks generated per batch in LLMatic. We apply the prompt "Create a neural network that inherits from nn.Module and performs better than the above neural network" and we add the initial network to this prompt.
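The fitness and temperature bookkeeping described for Network-Archive-LLMatic can be sketched as follows (a hypothetical rendering: the function names and the clamping bounds on the temperature are our assumptions, not stated in the paper):

```python
def prompt_fitness(network_added: bool, trainable: bool) -> float:
    """Score a prompt individual by the fate of the network it generated."""
    if not trainable:
        return -1.0          # untrainable network
    return 1.0 if network_added else -0.5

def mutate_temperature(temperature: float, network_added: bool) -> float:
    """Nudge the sampling temperature by 0.1 toward what worked.

    The clamp to [0.1, 1.0] is an assumed safeguard, not from the paper.
    """
    step = 0.1 if network_added else -0.1
    return min(1.0, max(0.1, temperature + step))
```

Under this scheme a prompt is rewarded only when its offspring actually enters the network archive, which ties the prompt population's selection pressure to archive illumination.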
### Results and Discussion
In this section, we discuss the results of the experiments set up in the previous section. All our results compare LLMatic with its ablation variants. We first discuss the best loss per generation, illustrated in Figure 1. This leads our discussion to the number of trainable networks generated when changing the crossover and mutation probabilities (Figure 3). Then we discuss how the archives are illuminated (Figure 2). Some of the generated networks are shown in the supplementary material.
Looking at Figure 1, it is clear that each component of LLMatic is necessary. Mutation-Only-LLMatic and Network-Archive-LLMatic are the closest to LLMatic, which also supports our choice of giving mutation a higher probability of being selected. Crossover-Only-LLMatic is understandably the worst, as mutation provides more exploration (Ullah, Salam, and Masood, 2022). Together, the mutation and crossover operators give LLMatic the exploration and exploitation abilities that are necessary to find high-quality and diverse networks. Prompt-Archive-LLMatic does significantly worse, as the network archive is an important component for finding high-performing networks. Both archives together demonstrate competitive results.
We use EfficientNet-B0, the state-of-the-art network on CIFAR-10, from Tan and Le (2019) as an indicator of where our algorithm stands. EfficientNet-B0 was found via the search methods applied by Tan et al. (2019) and is slightly larger than in the original study, as they targeted more FLOPS. The original study required \(8000\) searches (we assume more for EfficientNet-B0); LLMatic, on the other hand, requires \(2000\) searches to find a competitive network. EfficientNet-B0 was first trained on the ImageNet dataset (Deng et al., 2009) and then on CIFAR-10 via transfer learning (Torrey and Shavlik, 2010). This is an advantage for EfficientNet-B0, as ImageNet has many classes and is an order of magnitude larger dataset.
Figure 2 demonstrates how each archive is being filled on average. We can see the prompt archive contains high-performing individuals who have the first few prompts and higher temperatures. Some of the good-performing individuals do have lower temperatures which suggest that sometimes it is good to pick deterministic layers. For network archives, we observe a diversity of high performing networks with respect to both FLOPS and width-to-depth ratio. More than \(20\) networks are competitive networks in this archive.
To examine in more depth why we chose probabilities of \(0.7\) for mutation and \(0.3\) for crossover, we observe the number of trainable networks generated per generation. It is common knowledge that the more working individuals we have, the better the chances of obtaining high-performing individuals. For this purpose, we also train LLMatic with uniform probabilities, and with \(0.3\) for mutation and \(0.7\) for crossover. We observe that uniform probabilities are still competitive with the original setting, while giving crossover more probability makes it worse. The results of these experiments, together with the ablation results for Mutation-Only-LLMatic and Crossover-Only-LLMatic, drive us to the conclusion that mutation should be given more
Figure 2: An illustration of archives generated by LLMatic. We have selected the archive with the median number of cells filled in experiments over 10 seeds. Figure 1(a) shows prompt archive. Figure 1(b) shows network archive. The lighter the colour of the filled cell, the better fitness of the individual.
probability of being selected.
## Conclusion and Future Work
To conclude, we present LLMatic: a novel Neural Architecture Search (NAS) algorithm that harnesses the power of Large Language Models (LLMs) and Quality-Diversity (QD) optimization algorithms. LLMatic successfully finds competitive networks that are diverse in architecture. We show empirically that LLMatic can find more than _20_ competitive networks, using only _2000_ searches. LLMatic decreases the max population size per generation to only _100_. LLMatic achieves this while relying on a _6.1B_ parameter language model. Furthermore, we show that each component in LLMatic is necessary. We do an extensive ablation study and find that LLMatic finds the network with the best accuracy among other variants.
LLMatic achieves this with many constraints in hand. Firstly, we use the CodeGen-6.1B code generation LLM, which is a small language model compared to existing LLMs. This demonstrates how computationally efficient LLMatic is, and how much value it could unlock with a larger language model. Secondly, due to limited computational resources, we keep our searches to \(2000\), and still find competitive networks.
In future work, LLMatic should be compared to other NAS methods on other computer vision and natural language processing tasks. As neuroevolution is similar to NAS, LLMatic needs to be compared to Reinforcement Learning benchmarks as well. With this, LLMatic can be used in tasks like Open-ended Learning as well [20].
|
2303.15971 | Polygon gluing and commuting bosonic operators | We construct two series of commuting Hamiltonians parametrized by a constant
matrix. The first series was my guess and the second one was known, and in our
approach follows from the first series. For the proof we use familiar facts
known from our previous consideration of the links between random matrices and
Hurwitz numbers, however the text is self-consistent. | A. Yu. Orlov | 2023-03-28T13:44:09Z | http://arxiv.org/abs/2303.15971v1 | # Polygon gluing and commuting bosonic operators
###### Abstract
Two families of commuting Hamiltonians are constructed, parametrized by a constant matrix. The first series was guessed and is new, while the second is known, and in our approach it follows from the first series. For the proof, I use facts known from our previous consideration of the relationships between random matrices and Hurwitz numbers, but the text is self-contained and does not require reference to these works.
To Leonid Chekhov and Nikita Slavnov in connection with their 60th anniversary
**Key words: Polygon gluing, commuting quantum Hamiltonians**
## 1 Introduction
The problem of finding a ring of commuting operators acting in an infinite-dimensional space is one of the classical problems of mathematics and mathematical physics. We found an example of such a ring. 1 Modern methods for obtaining such rings are described in various works, in particular works on quantum integrable systems. It is impossible to survey the huge collection of articles on these topics here; as entry points, I cite only the articles [12], [27], which contain extensive lists of references, and I apologize for the incomplete bibliography. Here I present a family of operators that are shown to commute among themselves. The proof is simple, but it uses some facts well known from the relationship between Ginibre multimatrix ensembles and polygon gluing, see [15], [16]. In fact, the steps of the proof are quite elementary. The text is self-contained: it is a direct calculation with a little trick to make it easier.
Footnote 1: This work was prompted by the articles [8]- [11], which are studies on a slightly different topic.
## 2 Commuting Hamiltonians I
Consider the following oscillator algebra:
\[[\phi^{\dagger}_{i,j},\phi^{\dagger}_{i^{\prime},j^{\prime}}]=[\phi_{i,j}, \phi_{i^{\prime},j^{\prime}}]=0,\quad 0\leq i,j,i^{\prime},j^{\prime}\leq N \tag{1}\]
\[[\phi^{\dagger}_{i,j},\phi_{j^{\prime},i^{\prime}}]=\delta_{i,i^{\prime}} \delta_{j,j^{\prime}},\quad 0\leq i,j,i^{\prime},j^{\prime}\leq N \tag{2}\]
The Fock space is generated by the action of the creation operators \(\phi_{i,j},\,0\leq i,j\leq N\) on the vacuum vector \(|0\rangle\), and
\[\phi^{\dagger}_{i,j}|0\rangle=0,\quad 0\leq i,j\leq N. \tag{3}\]
The matrices with entries \(\phi_{i,j},\,0\leq i,j\leq N\) and \(\phi^{\dagger}_{i,j},\,0\leq i,j\leq N\) are denoted by \(\phi\) and \(\phi^{\dagger}\), respectively.
Introduce
\[H_{n}(A)=\mbox{tr}\left((\phi^{\dagger}\phi A)^{n}\right) \tag{4}\]
If \(A\) is a Hermitian matrix, then the operators \(H_{n}(A)\) can be regarded as the Hamiltonians of the system of bosonic particles. It can be noted that for \(A=I_{N}\) the operator \(H_{2}(I_{N})\) can be associated with
the Laplace-Beltrami operator on the group \(GL_{N}\) and with the Hamiltonian of the quantum Calogero system at the special value of the coupling constant, when the eigenfunctions of the Hamiltonian turn into Schur functions:
\[H_{n}(I_{N})s_{\lambda}(\phi C)|0\rangle=E_{n}(\lambda)s_{\lambda}(\phi C)|0 \rangle,\quad C\in GL_{N} \tag{5}\]
where \(\lambda\) is a partition, \(E_{n}(\lambda)\) is an eigenvalue, and \(C\) is independent of \(\phi\).
**Remark 1**.: From the consideration in [16] it follows:
\[H_{n}(A)s_{\lambda}(\phi C)|0\rangle=E_{n}(\lambda)s_{\lambda}(\phi AC)|0\rangle, \tag{6}\]
where \(|\lambda|=n\). The relation (6) can be interpreted as an eigenvalue problem in two cases: (i) either \(A=I_{N}\), or (ii) \(A\neq I_{N}\), \(AC=C\), which implies that both \(A\) and \(C\) are degenerate. Case (i) is a specialization of the "generalized cut and join equation" introduced in [9], and for this case (6) is true without the restriction \(|\lambda|=n\).
A remarkable property of the operators \(H_{n}(A)\) is the fact that they commute with each other:
**Proposition 1**.: _We get_
\[[H_{n},H_{m}]=0 \tag{7}\]
_for any pair of \(n,m\in\mathbb{Z}_{\geq}\)._
Proof. Consider the multicomponent analogue of the operators \(\phi\) and \(\phi^{\dagger}\):
\[[\phi^{(a)\dagger}_{i,j},\phi^{(b)\dagger}_{i^{\prime},j^{\prime}}]=[\phi^{(a)}_{i,j},\phi^{(b)}_{i^{\prime},j^{\prime}}]=0,\quad 0\leq i,j,i^{\prime},j^{\prime}\leq N \tag{8}\]
\[[\phi^{(a)\dagger}_{i,j},\phi^{(b)}_{j^{\prime},i^{\prime}}]=\delta_{i,i^{ \prime}}\delta_{j,j^{\prime}},\quad 0\leq i,j,i^{\prime},j^{\prime}\leq N \tag{9}\]
\[\phi^{(a)\dagger}_{i,j}|0\rangle=0,\quad 0\leq i,j\leq N. \tag{10}\]
for \(a,b=1,\ldots,k\).
Let us represent the creation operator \(\phi^{(a)}_{i,j}\) as an arrow whose beginning is labeled \(i\) and whose end is labeled \(j\). We draw it as a white dotted arrow and assign the number \(a\) to the arrow itself. The annihilation operator \(\phi^{(a)\dagger}_{i,j}\) will be represented by a black dotted arrow with the same number \(a\), but now we assign the label \(j\) to the beginning of this arrow and the label \(i\) to its end.
Each entry \(B^{(a)}_{i,j}\) is then depicted by a solid black arrow with the number \(a\); we assign the label \(i\) to its beginning and the label \(j\) to its end. Similarly, each entry \(C^{(a)}_{i,j}\) is depicted as a solid white arrow with the number \(a\), with the label \(i\) attributed to its beginning and the label \(j\) to its end.
The trace of a product of several matrices can be represented by a polygon with directed edges, with summation over the labels at the vertices. We have two such products: the first comes from \(H_{n}(A)\) and the second from \(H_{m}(A)\); we call them the black and white polygons, respectively. The edges of the black polygon are directed positively (counterclockwise) and the edges of the white polygon are directed negatively. That is, the counterclockwise transition from one edge of the black polygon to another corresponds to reading the product under the trace sign from left to right, and the clockwise transition from one edge of the white polygon to another also corresponds to reading the product under the trace sign from left to right. If we draw a negatively oriented \(k\)-gon on a sphere, then its complement is a positively oriented \(k\)-gon. In what follows, gluing \(k\)-gons will be understood as gluing a black \(k\)-gon with the complement of a white \(k\)-gon, see Fig. 1 with triangles for a visualization.
We consider two oppositely oriented \(2k\)-gons: the black one generated by \(\mathrm{tr}\left(\phi^{(1)\dagger}B_{1}\phi^{(2)\dagger}B_{2}\cdots\phi^{(k)\dagger}B_{k}\right)\), whose edges are alternating dotted and solid black arrows, and the white one generated by \(\mathrm{tr}\left(\phi^{(k)}C_{k}\phi^{(k-1)}C_{k-1}\cdots\phi^{(1)}C_{1}\right)\), whose edges are alternating dotted and solid white arrows.
In fact, it is more convenient to first consider the case when all matrices \(B_{i},C_{i},\,i=1,\ldots,k\) are identity matrices. In this case, all edges of both polygons are shown as dotted arrows, see the top images in Fig. 1. Only then do we add to this drawing the matrices \(B_{i},C_{i},\,i=1,\ldots,k\) as crossbars (solid arrows) at the corners of the polygons, see the second image from the top in Fig. 1. Therefore, we will call the matrices \(B_{i}\) and \(C_{i}\) the black and white corner matrices, respectively.
One can verify
\[\langle 0|\mathrm{tr}\left(\phi^{(1)\dagger}B_{1}\phi^{(2)\dagger}B_{2}\cdots \phi^{(k)\dagger}B_{k}\right)\mathrm{tr}\left(\phi^{(k)}C_{k}\phi^{(k-1)}C_{ k-1}\cdots\phi^{(1)}C_{1}\right)|0\rangle=\mathrm{tr}(B_{1}C_{2})\mathrm{tr}(B_{2}C_{3}) \cdots\mathrm{tr}(B_{k}C_{1}) \tag{11}\]
This relation can be interpreted as a gluing of two \(2k\)-gons as follows. Each pair of oppositely directed dotted arrows (one black, one white) with the same number \(a\) (\(a=1,\dots,k\)) is glued according to the rule
\[\langle 0|\phi^{(a)\dagger}_{i,j}\phi^{(b)}_{j^{\prime},i^{\prime}}|0\rangle=\delta_{a,b}\delta_{i,i^{\prime}}\delta_{j,j^{\prime}}\]
One can see that (11) describes a sphere glued from two polygons, with \(k\) vertices, each of which is surrounded by a negatively oriented \(2\)-gon whose edges are given by the arrows associated with the matrices \((B_{1},C_{2}),(B_{2},C_{3})\cdots,(B_{k},C_{1})\). (This sphere can be thought of as a globe glued together from two hemispheres, on whose equator there are \(k\) vertices, each surrounded by a digon whose two edges belong to the northern and southern hemispheres, respectively.) The right hand side of (11) is then obvious.
However, which surface is obtained from the vacuum expectation depends on which edges of the white polygon are glued to which edges of the black polygon. If we renumber the edges of the white polygon as \((k,k-1,\dots,1)\to i_{k},i_{k-1},\dots,i_{1}\) and glue the polygons again along the dotted arrows with the same numbers, then another oriented surface can be obtained.
For instance, see the top part of Fig.1. We have
\[\langle 0|\mathrm{tr}\left(\phi^{(1)\dagger}B_{1}\phi^{(2)\dagger}B_{2}\phi^{( 3)\dagger}B_{3}\right)\mathrm{tr}\left(\phi^{(2)}C_{2}\phi^{(1)}C_{1}\phi^{(3 )}C_{3}\right)|0\rangle=\mathrm{tr}(B_{1}C_{2})\mathrm{tr}(B_{2}C_{3}) \mathrm{tr}(B_{3}C_{1}) \tag{12}\]
since the result is a sphere obtained from two triangles (two hemispheres: the southern (black) and northern (white) ones). Here the edges of the black triangle are numbered \(1,2,3\) if we go around it in the positive direction relative to the North Pole. The edges of the white triangle are numbered in the same way if we go around it in the positive direction relative to the South Pole. We get \(3\) vertices, each of which has two corners, and the monodromy of each vertex is the product of one black and one white corner matrix.
The expression
\[\langle 0|\mathrm{tr}\left(\phi^{(1)\dagger}B_{1}\phi^{(2)\dagger}B_{2}\phi^{ (3)\dagger}B_{3}\right)\mathrm{tr}\left(\phi^{(1)}C_{1}\phi^{(2)}C_{2}\phi^{( 3)}C_{3}\right)|0\rangle=\mathrm{tr}(B_{1}C_{2}B_{3}C_{1}B_{2}C_{3}) \tag{13}\]
describes the gluing of two triangles with edges numbered \(1,2,3\) and \(2,1,3\), resulting in a torus with one \(6\)-valent vertex and \(6\) neighboring corners of alternating colors, see the bottom half of the figure 1.
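As a sanity check, the gluing identities (12) and (13) can be verified numerically: by the Wick pairing \(\langle 0|\phi^{(a)\dagger}_{i,j}\phi^{(a)}_{j^{\prime},i^{\prime}}|0\rangle=\delta_{i,i^{\prime}}\delta_{j,j^{\prime}}\), each vacuum expectation reduces to an ordinary index contraction of the corner matrices. The following brute-force computation (ours, not part of the paper) confirms both right hand sides for random matrices:

```python
import random
from itertools import product

random.seed(1)
N = 2  # a small matrix size is enough for an exact check

def rnd():
    """Random N x N matrix as nested lists."""
    return [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]

def mat_tr(*ms):
    """Trace of a product of matrices given as nested lists."""
    total = 0.0
    for idx in product(range(N), repeat=len(ms)):
        term = 1.0
        for k, m in enumerate(ms):
            term *= m[idx[k]][idx[(k + 1) % len(ms)]]
        total += term
    return total

B1, B2, B3, C1, C2, C3 = (rnd() for _ in range(6))

# Gluing of (12): contract each phi(a) with phi(a)-dagger; the letters
# a..f are the six free matrix indices left after the Wick contractions.
lhs12 = sum(
    B1[a][b] * B2[c][d] * B3[e][f] * C2[b][a] * C3[d][c] * C1[f][e]
    for a, b, c, d, e, f in product(range(N), repeat=6)
)
rhs12 = mat_tr(B1, C2) * mat_tr(B2, C3) * mat_tr(B3, C1)

# Gluing of (13): the same black triangle glued to the white triangle
# tr(phi(1)C1 phi(2)C2 phi(3)C3), which yields the torus with one vertex.
lhs13 = sum(
    B1[a][b] * B2[c][d] * B3[e][f] * C1[f][c] * C2[b][e] * C3[d][a]
    for a, b, c, d, e, f in product(range(N), repeat=6)
)
rhs13 = mat_tr(B1, C2, B3, C1, B2, C3)
```

The first contraction factorizes into three separate traces (the sphere with three vertices), while the second closes into a single trace (the torus with one \(6\)-valent vertex), exactly as in (12) and (13).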
We get
**Lemma 1**.: \[\langle 0|\mathrm{tr}\left(\phi^{(1)\dagger}B_{1}\phi^{(2)\dagger}B_{2}\cdots\phi^{(k)\dagger}B_{k}\right)\mathrm{tr}\left(\phi^{(i_{k})}C_{i_{k}}\cdots\phi^{(i_{1})}C_{i_{1}}\right)|0\rangle=\mathrm{tr}(W_{1})\cdots\mathrm{tr}(W_{V})\] (14)
_where \(V\) is the number of vertices, and each \(W_{\alpha}\) is a monodromy obtained as the product of corner matrices around the vertex \(\alpha\) while traversing it clockwise._
**Remark 2**.: An important point is that the corner matrices associated with each vertex alternate between black and white as one goes around the vertex. A vertex \(\alpha\) of valence \(2v\) has a monodromy of the form \(W_{\alpha}=(B_{i_{1}}C_{j_{1}})\cdots(B_{i_{v}}C_{j_{v}})\), where the numbers of the corner matrices \(i_{1},\dots,i_{v}\) and \(j_{1},\dots,j_{v}\) belong to the set \(1,\dots,k\). For examples, see the right hand sides of equations (11), (12), (13), and Figure 1.
Now, one can write
\[:\mathrm{tr}\left(\left(\phi^{\dagger}\phi A\right)^{n}\right):\mathrm{tr} \left(\left(\phi^{\dagger}\phi A\right)^{m}\right):=\sum_{k=0}^{\min(n,m)}:Q_ {k}: \tag{15}\]
where \(k\) is the number of pairings \(\langle 0|\phi^{\dagger}\phi|0\rangle\) according to Wick's rule. Each \(:Q_{k}:\) is a sum that includes (i) the sum over the choice of a sample of \(k\) operators \(\phi^{\dagger}\) in the first factor in (15) and over the choice of a sample of \(k\) operators \(\phi\) in the second factor on the left hand side of (15), and (ii) the sum over the pairings of the chosen samples, which is a sum over the elements of the permutation group \(S_{k}\).
**Remark 3**.: To describe different pairings we number the chosen \(\phi^{\dagger}\) with the numbers \(1,\ldots,k\) in
\[\underbrace{\left(\phi^{\dagger}\phi A\right)\cdots\left(\phi^{\dagger}\phi A \right)}_{n} \tag{16}\]
in the order from the left to the right. In the product
\[\underbrace{\left(\phi^{\dagger}\phi A\right)\cdots\left(\phi^{\dagger}\phi A \right)}_{m} \tag{17}\]
we number the chosen \(\phi\) with the numbers \(\sigma(1),\ldots,\sigma(k)\) from the left to the right. We couple the \(\phi^{\dagger}\) to \(\phi\) with the same number.
One should keep in mind that a cyclic permutation of the product under the trace sign does not change the trace. Hence we can place \(\phi_{1}^{\dagger}\) and \(\phi_{1}\) in the leftmost position in (16) and (17), respectively.
Symbolically one can write
\[:Q_{k}:=\sum_{\text{samples}}\sum_{\sigma\in S_{k}}:q_{\sigma}(k\times k\, \text{samples}): \tag{18}\]
The terms \(:Q_{0}:\), \(:Q_{1}:\) and \(:Q_{2}:\) are the easiest ones:
\[:Q_{0}:=:\text{tr}\left(\left(\phi^{\dagger}\phi A\right)^{n}\right)\,\text{ tr}\left(\left(\phi^{\dagger}\phi A\right)^{m}\right):\]
\[:Q_{1}:=nm:\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n+m-1}\right):\]
\[:Q_{2}:=nm\sum_{\stackrel{{ n_{1},n_{2},\,n_{1}+n_{2}=n-2}}{{ m_{1},m_{2},\,m_{1}+m_{2}=m-2}}}:\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n_{1}+m_{2}+1}\right)\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n_{2}+m_{1}+1}\right):\]
For \(k=3\) we use (12) and (13):
\[nm\sum_{\stackrel{{ n_{1},n_{2},n_{3},\,n_{1}+n_{2}+n_{3}=n-3}}{{ m_{1},m_{2},m_{3},\,m_{1}+m_{2}+m_{3}=m-3}}}:\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n_{1}+m_{2}+1}\right)\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n_{2}+m_{3}+1}\right)\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n_{3}+m_{1}+1}\right):\]

\[+nm\sum_{\stackrel{{ n_{1},n_{2},n_{3},\,n_{1}+n_{2}+n_{3}=n-3}}{{ m_{1},m_{2},m_{3},\,m_{1}+m_{2}+m_{3}=m-3}}}:\text{tr}\left(A\left(\phi^{\dagger}\phi A\right)^{n_{1}+m_{2}+1}A\left(\phi^{\dagger}\phi A\right)^{n_{3}+m_{1}+1}A\left(\phi^{\dagger}\phi A\right)^{n_{2}+m_{3}+1}\right):\]
For each \(k\) we have
\[:Q_{k}:=n\sum_{\stackrel{{ n_{1},\ldots,n_{k}}}{{m_{1},\ldots,n_{k}}}}^{ \prime}\sum_{\sigma\in S_{k}}:q_{\sigma}(k): \tag{19}\]
where \(\sigma\) enumerates the different ways of coupling the \(k\) chosen samples of \(\phi\) with the \(k\) chosen samples of \(\phi^{\dagger}\). There are \(\binom{n}{k}\) and \(\binom{m}{k}\) ways to choose the samples, and there are \(|S_{k}|=k!\) different pairings (in fact \((k-1)!\) different pairings because of the cyclic permutations). Actually these numbers are not important for us.
When the choice of samples and the numbering are fixed one can write the left side of (15) in form (14) where
\[B_{i}=\phi A\left(\phi^{\dagger}\phi A\right)^{n_{i}},\quad C_{i}=A\left(\phi^ {\dagger}\phi A\right)^{m_{i}}\phi^{\dagger} \tag{20}\]
where the numbers \(n_{1},\ldots,n_{k}\) measure a 'distance' between neighboring \(\phi^{\dagger}\) chosen in (16). The following equalities hold:
\[\sum_{a=1}^{v}n_{i_{a}}=n-k \tag{21}\]
\[\sum_{a=1}^{v}m_{j_{a}}=m-k \tag{22}\]
Each way of pairing yields a certain embedded graph. Each graph gives rise to a product of traces of the monodromies around the vertices, see Lemma 1 where the structure of each monodromy is clarified by the Remark 2, therefore the trace of each monodromy \(W_{\alpha}\) has a form
\[\operatorname{tr}\!W_{\alpha}=\operatorname{tr}\left((B_{i_{1}}C_{j_{1}}) \cdots(B_{i_{v}}C_{j_{v}})\right)=\operatorname{tr}\left(A(\phi^{\dagger}\phi A )^{n_{i_{1}}+m_{j_{1}}+1}\cdots A(\phi^{\dagger}\phi A)^{n_{i_{v}}+m_{j_{v}}+1}\right) \tag{23}\]
with some \(i_{1},j_{1},\dots,i_{v},j_{v}\), where \(2v\) is the valence of the vertex \(\alpha\). Thus the trace of each monodromy is completely characterized by the ordered set
\[n_{i_{1}}+m_{j_{1}},\dots,n_{i_{v}}+m_{j_{v}} \tag{24}\]
given up to a cyclic permutation. Let us call it the spectrum of the vertex \(\alpha\).
Now we will compare these spectra.
Similarly for the second term in the commutator we write
\[:\operatorname{tr}\left(\left(\phi^{\dagger}\phi A\right)^{m}\right):: \operatorname{tr}\left(\left(\phi^{\dagger}\phi A\right)^{n}\right):=\sum_{k= 0}^{\min(n,m)}:Q_{k}^{*}: \tag{25}\]
and
\[:Q_{k}^{*}:=\sum_{\text{samples}^{*}}\sum_{\sigma\in S_{k}}:q_{\sigma}^{*}(k \times k\,\text{samples}^{*}): \tag{26}\]
We will compare the terms \(:q_{\sigma}(k\times k\,\text{samples}):\) and \(:q_{\sigma}^{*}(k\times k\,\text{samples}^{*}):\) for a given \(\sigma\). We also relate the samples as follows: for each chosen \(\phi^{\dagger}\) from a sample in (18) we choose the nearest right neighboring \(\phi\) for the sample in (26), and for each chosen \(\phi\) of a sample in (18) we choose the nearest left neighboring \(\phi^{\dagger}\) for the associated sample in (26). Such pairs of samples will be called dual.
Here \(k\) is again the number of pairings, and for each \(k\) we have
\[:Q_{k}^{*}:=\sum_{\sigma\in S_{k}}:q_{\sigma}^{*}(k): \tag{27}\]
\[\operatorname{tr}\!W_{\alpha}^{*}=\operatorname{tr}\left((B_{i_{1}}C_{j_{1}}) \cdots(B_{i_{v}}C_{j_{v}})\right)=\operatorname{tr}\left(A(\phi^{\dagger}\phi A )^{n_{i_{1}}+m_{j_{2}}+1}\cdots A(\phi^{\dagger}\phi A)^{n_{i_{v}}+m_{j_{1}}+ 1}\right) \tag{28}\]
which is characterized by the spectrum
\[n_{i_{1}}+m_{j_{2}},\dots,n_{i_{v}}+m_{j_{1}} \tag{29}\]
Now one compares this dual spectrum with the spectrum (24). The spectra (24) and (29) can be related one-to-one by a cyclic permutation of the numbers \(n_{i_{a}},\,a=1,\dots,v\) associated with the vertex \(\alpha\) under consideration. Namely, by
\[n_{i_{1}}\to n_{i_{2}},\dots,n_{i_{v}}\to n_{i_{1}} \tag{30}\]
and all \(m_{j_{a}},\,a=1,\dots,v\) in (29) are the same as in (24). These replacements do not violate conditions (21) and (22).
The same correspondence between the terms resulting from (15) and from (25) is obtained for all vertices of the resulting graph, and for all graphs. The proof is complete.
For a given partition \(\mu=(\mu_{1},\dots,\mu_{\ell})\), \(\ell=1,2,\dots\), we introduce
\[H_{\mu}(A)=:\prod_{i=1}^{\ell}\operatorname{tr}\left(\left(\phi^{\dagger}\phi A \right)^{\mu_{i}}\right): \tag{31}\]
In the same way as we proved Proposition 1, one can prove that
\[\left[H_{n}(A)\,,\,H_{\mu}(A)\right]=0 \tag{32}\]
for \(n=1,2,\dots\) and any partition \(\mu=(\mu_{1},\mu_{2},\dots)\).
In this case, for each given \(k\), we glue a single black \(k\)-gon, related to picking out the described samples in \(\operatorname{tr}\left(\left(\phi^{\dagger}\phi A\right)^{n}\right)\), to the set of white polygons obtained by picking out the bosonic operators from the product \(H_{\mu}(A)\). In this case we use the same cyclic permutation of the numbers \(n_{i}\) described above.
## 3 Commuting Hamiltonians II
Suppose \(A=1+\epsilon\textsc{A}\), where \(\textsc{A}\in\mathrm{Mat}_{N\times N}\), \(\epsilon\) is a parameter.
Let us introduce
\[h_{n}(a)=:\mathrm{tr}\left(a\left(\phi^{\dagger}\phi\right)^{n}\right): \tag{33}\]
and
\[h_{\lambda}(a)=:\prod_{i=1}^{\ell}\mathrm{tr}\left(a\left(\phi^{\dagger}\phi\right)^{\lambda_{i}}\right): \tag{34}\]

where \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\) is a partition. Then from Proposition 1 and from
**Lemma 2**.: \[[H_{n}(I_{N}),H_{m}(I_{N})]=0,\] (35)
_from_
**Lemma 3**.: \[[H_{n}(I_{N}),h_{\lambda}(a)]=0\] (36)
it follows
**Corollary 1**.: _Let \(\textsc{A}\in\mathrm{Mat}_{N\times N}\)_
\[[h_{n}(a),h_{\lambda}(a)]=0 \tag{37}\]
Each of Lemmas 2 and 3, and in fact (37), is proven in the same way as Proposition 1. We omit the details.
Written in the form of differential operators and without the normal ordering sign, Lemmas 2 and 3, and also (37), are quite well known; see, for example, [3].
## 4 Discussion
(i) We found a series of commuting Hermitian operators, which means that we have a quantum integrable system. We note that the eigenfunctions of the Hamiltonians \(h_{n}(A)\) are as yet unknown in the case of general Hermitian matrices. This is a problem to be solved, along with a number of problems of the kind solved in [26] for several quantum integrable systems.
(ii) It will be interesting to compare these results to the results obtained in [19], [20], [17], [18], [12], [13], [14], [24], [25], [2], [3].
(iii) Links with the quantum Calogero model are another interesting direction to work out. In this connection the following works should be pointed out: [8]- [11], [22], [23], [4]
## Acknowledgements
The author is grateful to George Shurygin for important discussions and to A.D.Mironov, D.Gurevich and A.Zheglov for inspiring talks on related subjects. I also thank A.E.Mironov and I.A.Taimanov for the invitations to Novosibirsk, where this work was finalized. The work was supported by the Russian Science Foundation (Grant No.20-12-00195).
|
2310.15849 | A Resilient Framework for 5G-Edge-Connected UAVs based on Switching
Edge-MPC and Onboard-PID Control | In recent years, the need for resources for handling processes with high
computational complexity for mobile robots is becoming increasingly urgent.
More specifically, robots need to autonomously operate in a robust and
continuous manner, while keeping high performance, a need that led to the
utilization of edge computing to offload many computationally demanding and
time-critical robotic procedures. However, safe mechanisms should be
implemented to handle situations when it is not possible to use the offloaded
procedures, such as if the communication is challenged or the edge cluster is
not available. To this end, this article presents a switching strategy for
safety, redundancy, and optimized behavior through an edge computing-based
Model Predictive Controller (MPC) and a low-level onboard-PID controller for
edge-connected Unmanned Aerial Vehicles (UAVs). The switching strategy is based
on the communication Key Performance Indicators (KPIs) over 5G to decide
whether the UAV should be controlled by the edge-based MPC or have a safe fallback
based on the onboard controller. | Gerasimos Damigos, Achilleas Santi Seisa, Sumeet Gajanan Satpute, Tore Lindgren, George Nikolakopoulos | 2023-10-24T14:04:26Z | http://arxiv.org/abs/2310.15849v1 | A Resilient Framework for 5G-Edge-Connected UAVs based on Switching Edge-MPC and Onboard-PID Control
###### Abstract
In recent years, the need for resources for handling processes with high computational complexity for mobile robots is becoming increasingly urgent. More specifically, robots need to autonomously operate in a robust and continuous manner, while keeping high performance, a need that led to the utilization of edge computing to offload many computationally demanding and time-critical robotic procedures. However, safe mechanisms should be implemented to handle situations when it is not possible to use the offloaded procedures, such as if the communication is challenged or the edge cluster is not available. To this end, this article presents a switching strategy for safety, redundancy, and optimized behavior through an edge computing-based Model Predictive Controller (MPC) and a low-level onboard-PID controller for edge-connected Unmanned Aerial Vehicles (UAVs). The switching strategy is based on the communication Key Performance Indicators (KPIs) over 5G to decide whether the UAV should be controlled by the edge-based controller or have a safe fallback based on the onboard controller.
Edge Robotics; 5G; UAV; MPC; Resiliency.
## I Introduction
Cloud and edge computing have emerged in the field of robotics, and the terms of cloud robotics [1, 2, 3] and edge robotics [4, 5, 6] are becoming a trend in the scientific world. At the same time, 5G is providing an ideal communication environment thanks to the increased performance and the additional available features like the Quality of Service (QoS) [7]. However, the safe utilization of remote cloud or edge computing resources requires the consideration of onboard safety fallback actions for mission-critical applications. Furthermore, communication Key Performance Indicators (KPIs) can provide useful information on the status of the system, that can be used to trigger a series of safety actions.
This article deals with the challenge of communication uncertainty within cloud or edge-connected robots over 5G. Even though edge computing and 5G networks can provide minimal latency, robust networking, and reliable access to external computational resources, still the need for onboard processing for safety reasons is essential. The edge-based algorithms can provide optimized behavior for the system, while the onboard backup actions can provide redundancy in case of degraded communication. Communication degradation can happen in cases such as out-of-coverage scenarios or overloaded network cells. In the authors' previous works [8, 9], edge-based architectures were presented for offloading time-critical applications, while these contributions were demonstrated with edge-based Model Predictive Control (MPC) schemes over 5G networks [10]. The need for edge resources for the execution of the edge-MPC was verified with a series of experiments in [11]. The previous works were focused on the Kubernetes (K8s) architecture, which has also been used for this article, and how the MPC could be offloaded to the edge in an optimal manner. However, no actions were considered in case of communication issues.
In the novel proposed framework, shown in Fig. 1, a switching strategy is developed that is responsible for deciding the input of the Low-Level Controller (LLC) of the Unmanned Aerial Vehicle (UAV). The modules shown in Fig. 1 run either offboard (edge), where they are mainly focused on optimizing the behavior of the system, or onboard (on the UAV's onboard computer), where they are mostly focused on the UAV operation and safety actions. In this framework, the switching strategy,
Fig. 1: System Overview. The safety fallback actions include the switching strategy on the UAV along with the fallback planner and the PID control. The optimized algorithms on the edge include the MPC, the optimizer, and the state estimation. Additionally, a 5G network has been established for communication
which is running onboard, can utilize either the edge-MPC or the onboard control commands, based upon the status of the communication link. The chosen onboard controller is a Proportional-Integral-Derivative (PID) controller because it is computationally light and can run on any UAV's onboard computer. To establish robust communication between the UAV and the edge, 5G networks have been used, while for the development of the source code and the messaging between the different modules, the Robotic Operating System (ROS) framework was utilized as in [12].
While some studies deal with communication issues with edge algorithms that can tolerate latency [13, 14, 15], switching mechanisms are essential for system redundancy. Even though switching from cloud/edge servers to local operation is a critical functionality for autonomous systems, there are not many works that are addressing this problem. As such, some articles are proposing a switching mechanism between the cloud and the edge operation. In [16], autonomous vehicles are controlled by the cloud, since multi-sensor data for multiple autonomous vehicles can be handled better by the cloud. However, when latency measurements are higher than a threshold, the switching method activates the edge controller. Other works, like [17], promote switching between different edge servers to handle communication issues for autonomous vehicles. These studies, though, do not consider cases when the latency is very high or when the communication is completely lost, thus making the need for onboard control crucial.
A self-reliant MPC and a replacement controller have been proposed in [18]. The self-reliant MPC operates on the cloud, while the replacement controller provides improved control performance when relatively long duty-standby transitions lead to degraded performance of the self-reliant MPC. In [19] and [20], edge-based and local controllers have been introduced for industrial control systems. The two controllers cooperate to enhance the performance based on a switching logic. In [19] the local controller outputs the edge controller's commands when they are delivered on time, and it outputs its own commands otherwise. The switching logic in [20] aims to guarantee both stability and optimal control. The edge controller provides optimal behavior when the system is operating in a performance region, but once the system exits the performance region, the system switches to the local controller to ensure stability. Many approaches have also considered switching schemes from a classical time-delay perspective, as in [21], but this article focuses only on edge-oriented architectures.
The authors in [22] proposed a mechanism where a cloud MPC can control independently a system, support a local controller or have the local controller operate independently when the cloud MPC fails, while in [23], a cloud MPC, a local MPC and a Linear Quadratic Regulator (LQR) are utilized to control the system. The cloud MPC is running at a high rate and is responsible for optimally controlling the system. This occurs when the control command originating from the cloud MPC can be available to the system within a decided sampling period. Otherwise, the local MPC, which is running at a lower rate, takes over. Since the computational power locally is limited, the local MPC might not be able to solve the optimization problem and generate control commands. In this case, the local LQR controls the system.
In comparison to all the previous works, our system does not switch from the remote controller to the local one based on latency metrics; instead, it uses the communication KPIs and estimates the position error (the difference between the actual position and the desired position of the UAV) produced by these metrics. By doing so, the sensitivity of the system to time delays is taken into consideration. Thus, the system can reactively switch to the local safe mode only when the communication is driving the system toward undesirable states.
The main contribution of this work is the development of a novel resilient strategy that can reactively switch from offboard to onboard controllers based on the availability of a fast and reliable communication link. The switching strategy optimizes the system's behavior by utilizing the advanced edge-based algorithms when the system respects the communication requirements, and switches to the onboard safety mode when communication is degraded and considered unstable, since relying on offloaded procedures under such conditions can lead to large position errors and destabilize the system. Furthermore, the switching mechanism provides redundancy when the connection is poor or lost, since it operates based on connectivity conditions such as signal strength and packet loss, or on end-to-end KPIs such as latency. These metrics are used by the strategy to estimate a position error, and thus, by switching between onboard and edge-based controllers, the position error can be kept bounded. The switching, along with additional computationally light components that were developed on the UAV's onboard computer, acts as a resilient safety layer for the overall system. Finally, the whole system goes through a series of experiments for the evaluation of the switching strategy.
The rest of the article is organized into three sections. Section II describes the overall system and its modules and components; the main modules and components are introduced in subsections, and the switching strategy is analyzed. In Section III, the experimental setup is presented, along with the results and the evaluation of the system. Finally, Section IV concludes with a summary of the article's contributions, and some interesting future directions and implementations are proposed.
## II System Overview and Safety Actions
In this work, we are focused on developing resilient fallback actions for 5G-edge-enabled UAVs based on latency, application layer dropped packets, and signal strength measurements. Depending on these communication KPIs, the system decides through a switching mechanism, whether the UAVs should be controlled by the edge algorithms, which are developed utilizing the Kubernetes PODs technology for optimized behavior (offboard control mode) or whether the onboard modules should take over (onboard control mode). To ensure the functionality of the switching mechanism, many
components were utilized and developed. Moreover, a UDP tunnel has been developed in this work in order to forward the ROS messages from the onboard computer to the PODs of the Kubernetes server.
### _Edge-MPC and Onboard-PID_
The proposed mechanism switches between an MPC, which is running on the edge, and a PID, which is running on the UAV's onboard computer. It receives as input the control command signals from both the MPC and the PID, and outputs one of the two. This output is the input of the UAV's LLC.
#### Ii-A1 Edge Model Predictive Control
A 5G-enabled UAV that uses an edge server, as an external computational unit, experiences minimal latency in the uplink and downlink communication direction. Fig. 2 depicts the observed latency in the uplink and downlink direction. For both the onboard controller and the remote controller on the edge server, the UAV is described as a robot with six degrees of freedom and a fixed body frame, as presented in [15] and described in Eq. 1.
\[\dot{p}(t) =v(t)\] \[\dot{v}(t) =R_{x,y}(\theta,\phi)\begin{bmatrix}0\\ 0\\ T_{ref}\end{bmatrix}+\begin{bmatrix}0\\ 0\\ -g\end{bmatrix}-\begin{bmatrix}A_{x}&0&0\\ 0&A_{y}&0\\ 0&0&A_{z}\end{bmatrix}v(t) \tag{1}\] \[\dot{\phi}(t) =\frac{1}{\tau_{\phi}}(K_{\phi}\phi_{ref}(t)-\phi(t))\] \[\dot{\theta}(t) =\frac{1}{\tau_{\theta}}(K_{\theta}\theta_{ref}(t)-\theta(t))\]
The position of the UAV is denoted as \(p=[p_{x},p_{y},p_{z}]^{T}\) and the linear velocity is denoted as \(v=[v_{x},v_{y},v_{z}]^{T}\). A rotation matrix that describes the attitude of the UAV in Euler form is denoted by \(R(\theta(t),\phi(t))\in SO(3)\), where \(SO(3)\) is the \(3D\) rotation group and the roll and pitch angles are denoted respectively as \(\phi\in[-\pi,\pi]\) and \(\theta\in[-\pi,\pi]\). The variables with the subscript '_ref_' represent the desired values. As can be derived from Eq. 1, the only parameters that affect the acceleration are the magnitude and the angle of the thrust vector produced by the motors, the linear damping terms \(A_{x},A_{y},A_{z}\in R\), and the gravity of earth \(g\). A first-order system is used to model the relationship between the attitude (roll/pitch) and the referenced terms \(\phi_{ref}\) and \(\theta_{ref}\in R\), with gains \(K_{\phi}\) and \(K_{\theta}\in R\) and time constants \(\tau_{\phi}\) and \(\tau_{\theta}\in R\). Additionally, a Low-Level attitude Controller (LLC) takes as input the thrust, roll, and pitch commands and generates the motor commands for the UAV. The control command values are saturated, as will be described in Section II-A3. Note here that the position and the linear velocity in the described setup are obtained through the sensing system of the UAV and sent to the edge server over the uplink channel.
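As a rough numerical illustration, the right-hand side of Eq. 1 can be written out directly. The damping terms, gains, and time constants below are placeholder values (the paper does not report its identified parameters), and the roll/pitch rotation of the thrust vector uses one common convention, which may differ in ordering from the paper's \(R_{x,y}(\theta,\phi)\):

```python
import numpy as np

# Illustrative constants; not the identified values from the paper.
G = 9.81                               # gravity [m/s^2]
A = np.diag([0.1, 0.1, 0.2])           # linear damping A_x, A_y, A_z
K_PHI, K_THETA = 1.0, 1.0              # attitude gains
TAU_PHI, TAU_THETA = 0.2, 0.2          # attitude time constants [s]

def uav_derivatives(p, v, phi, theta, T_ref, phi_ref, theta_ref):
    """Right-hand side of Eq. 1: returns (p_dot, v_dot, phi_dot, theta_dot)."""
    # Body-frame thrust [0, 0, T_ref] rotated into the world frame by
    # roll (phi) and pitch (theta) only.
    thrust_world = T_ref * np.array([
        np.cos(phi) * np.sin(theta),
        -np.sin(phi),
        np.cos(phi) * np.cos(theta),
    ])
    p_dot = v
    v_dot = thrust_world + np.array([0.0, 0.0, -G]) - A @ v
    phi_dot = (K_PHI * phi_ref - phi) / TAU_PHI
    theta_dot = (K_THETA * theta_ref - theta) / TAU_THETA
    return p_dot, v_dot, phi_dot, theta_dot
```

A quick sanity check of the model: at hover (zero attitude, zero velocity, thrust equal to \(g\)) every derivative vanishes.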
For the cost function, an optimizer is employed to find the optimal set of control actions, defined as the minimizer of the cost function \(J\) described in Eq. 2.
\[J =\sum_{j=1}^{N}\underbrace{(x_{d}-x_{k+j|k})^{T}Q_{x}(x_{d}-x_{k+j|k})}_{state\ cost}\] \[+\underbrace{(u_{d}-u_{k+j|k})^{T}Q_{u}(u_{d}-u_{k+j|k})}_{input\ cost} \tag{2}\] \[+\underbrace{(u_{k+j|k}-u_{k+j-1|k})^{T}Q_{\delta u}(u_{k+j|k}-u_{k+j-1|k})}_{control\ actions\ smoothness\ cost}\]
where \(N\) is the prediction horizon of the MPC, \(x=[p,v,\phi,\theta]^{T}\) is the UAV's state vector and \(u=[T,\phi_{d},\theta_{d}]^{T}\) is the control input. \(Q_{x}\in\mathbb{R}^{8\times 8}\) is the matrix for the state weights, \(Q_{u}\) is the matrix for the input weights, and \(Q_{\delta u}\in\mathbb{R}^{3\times 3}\) is the matrix for the input rate weights.
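To make the structure of Eq. 2 concrete, the cost of a candidate control sequence can be evaluated as below; the function and variable names are illustrative and not taken from the authors' implementation:

```python
import numpy as np

def mpc_cost(x_pred, u_pred, x_d, u_d, u_prev, Qx, Qu, Qdu):
    """Evaluate the MPC cost J of Eq. 2 over a horizon of N steps.

    x_pred: (N, 8) predicted states, u_pred: (N, 3) candidate inputs,
    u_prev: the input applied at step k-1 (for the smoothness term).
    """
    J = 0.0
    last_u = np.asarray(u_prev, dtype=float)
    for x, u in zip(np.asarray(x_pred, float), np.asarray(u_pred, float)):
        ex = x_d - x                   # state error
        eu = u_d - u                   # input error
        du = u - last_u                # input rate
        J += ex @ Qx @ ex              # state cost
        J += eu @ Qu @ eu              # input cost
        J += du @ Qdu @ du             # control-action smoothness cost
        last_u = u
    return J
```

A real MPC solver would minimize this quantity subject to the dynamics of Eq. 1; here the function only scores a given sequence, which is enough to see how the three weight matrices trade off tracking, effort, and smoothness.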
#### Ii-A2 State Estimation
The proposed switching strategy consists of multiple modules and components. The state estimation module is designed to account for the data that flow from the UAV to the edge server, i.e., the uplink, while the error estimation module on the UAV accounts for the data that flow from the edge server to the UAV, i.e., the downlink. The more complicated case of the uplink direction is handled by estimating the actual state of the UAV from the delayed state received over the uplink on the edge server. This method is inspired by the work described in [15].
As depicted in Fig. 2, the captured state of the robot is delayed by \(l_{u}=t^{\prime}_{j}-t_{j}\). In order to compensate for the \(l_{u}\), the state of the robot is estimated when the data arrive at the
Fig. 2: Control block diagram of the system
edge server. A timestamp field in the robot's state is used to calculate \(l_{u}\) and the model of the system to predict the actual state. Let \(p(t)\) be the UAV's position, and \(v(t)=\dot{p}(t)\) be the UAV's velocity; then the estimated position and velocity are formulated by Eq. 3.
\[\hat{p}(t)=p(t-l_{u})\] \[\implies\hat{v}(t)=v(t-l_{u}) \tag{3}\]
To track the future state, the uplink delay \(l_{u}\), has to be taken into account, thus, the latter expression for the velocity is expressed by Eq. 4.
\[\hat{v}(t+l_{u})=\hat{v}(t)+\int_{t}^{t+l_{u}}\dot{\hat{v}}(\tau)d\tau \tag{4}\]
The integral term in (4) is simplified using a Taylor series approximation and ignoring the higher order terms (since \(l_{u}^{2}\ll l_{u}\)), as denoted in Eq. 5.
\[\hat{v}(t+l_{u})=\hat{v}(t)+\dot{\hat{v}}(t)l_{u}. \tag{5}\]
Respectively, the expression for the estimated position is described by Eq. 6.
\[\hat{p}(t+l_{u})=\hat{p}(t)+v(t)l_{u}. \tag{6}\]
Finally, the delayed values regarding the roll and pitch can be derived in a similar manner.
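A minimal sketch of this uplink-delay compensation (the first-order Taylor predictions of Eqs. 5 and 6) could look as follows; the acceleration argument stands in for the derivative term of the expansion and would in practice be estimated, e.g., by differentiating the received velocity signal:

```python
def compensate_uplink_delay(p_delayed, v_delayed, a_delayed, l_u):
    """First-order prediction of the current state from a state that is
    l_u seconds old (Taylor expansion, higher-order terms dropped).

    p_delayed, v_delayed, a_delayed: last received position, velocity,
    and an acceleration estimate; l_u: measured uplink latency [s],
    computed from the timestamp field of the robot's state.
    """
    v_hat = v_delayed + a_delayed * l_u   # velocity prediction (Eq. 5)
    p_hat = p_delayed + v_delayed * l_u   # position prediction (Eq. 6)
    return p_hat, v_hat
```

The same one-step prediction can be applied per axis, and, as the text notes, analogously to the delayed roll and pitch values.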
#### Iii-A3 Onboard PID Control
In the onboard control mode, the PID controller takes over based on the fallback switching signal and generates the corresponding control actions to continue the UAV mission. These control actions are produced by less advanced algorithms in comparison to the ones produced by the edge controller, but their generation requires much less computational effort. The controller is a standard PID controller with gains \(K_{P},K_{I},K_{D}\) for the proportional, integral, and derivative terms respectively, which takes as input the odometry data \(x_{i}\) (position of the UAV \(p_{i}\)) from the UAV sensors and generates roll, pitch, yaw, and thrust commands \(u_{i}^{PID}\). These commands are saturated to an upper \(u^{th}\) and lower \(-u^{th}\) value, as expressed by Eq. 7, so that the LLC will not receive extreme control commands.
\[\begin{split}& u_{i}=u^{th},\quad\ \text{if}\ \ u_{i}^{PID}\geq u^{th}\\ & u_{i}=-u^{th},\ \ \text{if}\ \ u_{i}^{PID}\leq-u^{th}\\ & u_{i}=u_{i}^{PID},\ \ \text{otherwise}\end{split} \tag{7}\]
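A minimal per-axis version of this saturated PID is sketched below; the gains, the limit, and the simple backward-difference derivative are placeholders, not the tuned values used onboard:

```python
class SaturatedPID:
    """PID controller whose output is clamped to [-u_th, u_th] (Eq. 7)."""

    def __init__(self, kp, ki, kd, u_th):
        self.kp, self.ki, self.kd, self.u_th = kp, ki, kd, u_th
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measurement, dt):
        err = setpoint - measurement
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Saturation of Eq. 7: the LLC never receives extreme commands.
        return max(-self.u_th, min(self.u_th, u))
```

One such controller per output channel (roll, pitch, yaw, thrust) suffices, since the saturation is applied independently to each command.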
### _Switching Strategy_
To ensure the UAV's autonomy, the switching strategy, which can also be utilized as a resilient fallback mechanism, is deployed onboard the UAV. Unlike the most common approaches in the literature, which usually employ mechanisms based on the round trip time (RTT) delay [24], the presented framework utilizes a switching mechanism that is triggered on the estimated error, i.e., how much the acquired trajectory will deviate from the reference trajectory, based on the downlink latency and the measured dropped packets in the application layer. Finally, a radio signaling KPI is employed to account for the non-linear relationship between signal coverage and various latency KPIs.
#### Iii-B1 Error Estimation based on the Downlink Latency
Due to the downlink latency, we can assume that the UAV's actual position \(p_{i}\) may deviate from the reference position \(p_{i}^{ref}\), given that the aerial robot has a linear velocity \(v_{i}\) different from zero. The estimated error \(\hat{e}\), based on the downlink latency (\(l_{c}\) and \(l_{d}\)), does not reflect the overall position error (the difference between the actual position and the desired position of the UAV), but it provides an estimate of how the error can vary due to the latency. Of course, there are other factors that affect the error, such as model uncertainty, control design imperfections, and disturbances, but these are not considered in the design of the switching strategy, since the error from these factors will occur whether we use onboard or offboard controllers.
In Fig. 3 the downlink latency is depicted, where \(u_{i}\) describes the command that is sent from the edge at time \(t_{i}\) and arrives at the UAV at time \(t_{i}^{\prime}\). The frequency of the generated commands is considered fixed, with the MPC rate denoted as \(f_{exec}\) and the MPC execution time \(t_{exec}\) treated as constants. Thus, by measuring the time at which a control command was generated and the time it arrived at the UAV, we can calculate the downlink latency. With this information and the measurements of the UAV's velocity, based on the last control command that was sent from the remote controller (MPC), we can estimate the position error \(\hat{e}\). Hence, the estimated error \(\hat{e}_{i}\) is calculated from the distance \(d=v(t_{i-(k+1)})\cdot l(t_{i})\) the UAV covered between two consecutive commands (\(u_{i}\) and \(u_{i-(k+1)}\)).
During this work, two ways of downlink latency formulations and, respectively, two ways of error estimations are used to capture the UAV's behavior. The first estimated error \(\hat{e}_{c}(t_{i})\), is based on the latency between two consecutive commands (\(u_{i}\), \(u_{i-(k+1)}\)) that arrive to the UAV and is described by Eq. 8.
Fig. 3: Latency plot based on downlink time (\(l_{c}\) and \(l_{d}\))
\[\hat{e}_{c}(t_{i})=v(t_{i-(k+1)})\cdot l_{c}(t_{i})\] \[l_{c}(t_{i})=t^{\prime}_{i}-t^{\prime}_{i-(k+1)} \tag{8}\]
where \(k\) is the number of dropped packets, \(v(t_{i-(k+1)})\) is the velocity of the UAV, based on the previous valid command (the command that was generated from the correct corresponding states of the UAV), and \(l_{c}(t_{i})\) is the latency between two consecutive commands at the UAV (\(u_{i}\) and \(u_{i-(k+1)}\)), for time stamps \(t_{i},i=1,2,..,n\).
The second estimated error \(\hat{e}_{d}(t_{i})\) is based on the latency that is introduced by the time the command (\(u_{i-k}\)) was created on the edge server to the time the command (\(u_{i}\)) reached the UAV, and is described by Eq. 9.
\[\hat{e}_{d}(t_{i})=v(t_{i-(k+1)})\cdot l_{d}(t_{i})\] \[l_{d}(t_{i})=t^{\prime}_{i}-t_{i}+t_{exec}\cdot k \tag{9}\]
where \(l_{d}(t_{i})\) is the latency that is introduced by the time the command \(u_{i-k}\) was generated by the edge and the time the valid command (\(u_{i}\)) arrived to the UAV.
When the downlink delay \(t_{down}=t_{exec}\), then \(\hat{e}_{c}=\hat{e}_{d}\); when \(t_{down}>t_{exec}\), then \(\hat{e}_{c}<\hat{e}_{d}\); and when \(t_{down}<t_{exec}\), then \(\hat{e}_{c}>\hat{e}_{d}\). Thus, to estimate the error \(\hat{e}\) that is introduced to the system due to the latency, we calculate the mean of \(\hat{e}_{c}\) and \(\hat{e}_{d}\), Eq. 10.
\[\hat{e}=\frac{\hat{e}_{c}(t_{i})+\hat{e}_{d}(t_{i})}{2} \tag{10}\]
Once we have estimated the error \(\hat{e}\) that the latency can add to the system, we can impose a threshold \(e_{th}\) on the accepted error.
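The two latency formulations and their combination (Eqs. 8-10) reduce to a few lines; the timestamp argument names below are illustrative:

```python
def estimate_latency_error(v_prev, t_arr_i, t_arr_prev, t_gen_i, t_exec, k):
    """Combined latency-induced position-error estimate (Eqs. 8-10).

    v_prev: speed at the last valid command; t_arr_i / t_arr_prev:
    arrival times of two consecutive commands at the UAV; t_gen_i:
    generation time of the command on the edge; t_exec: MPC execution
    period; k: packets dropped in between.
    """
    l_c = t_arr_i - t_arr_prev             # Eq. 8: inter-arrival latency
    l_d = t_arr_i - t_gen_i + t_exec * k   # Eq. 9: generation-to-arrival
    e_c = v_prev * l_c
    e_d = v_prev * l_d
    return (e_c + e_d) / 2.0               # Eq. 10: combined estimate
```

For instance, with no dropped packets, a 100 ms inter-arrival gap, a 50 ms generation-to-arrival delay, and a UAV moving at 1 m/s, the combined estimate is 7.5 cm, well below the 0.15 m threshold used in the experiments.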
#### Ii-C2 Signal Strength
In addition, other KPIs that can inform us about the status of the communication are the ones that describe the channel conditions. Even though in some cases the latency might be low, there is the possibility of suddenly losing communication. The signal strength, and more specifically the Signal-to-Interference-plus-Noise Ratio (\(SINR\)), can alert the system before the communication is lost and trigger the switch to activate the onboard control mode. A threshold \(s^{th}\), based on \(SINR\) studies, has been set so that the latency and throughput requirements for the offloaded processes are met.
#### Ii-C3 Switch Formulation
The system requirement is to keep the position error bounded and to ensure a reliable communication channel. It is impossible to eliminate the error completely, because the system itself introduces some error, as mentioned above. Our goal is to keep the error bounded within values acceptable for the safety of the UAV. Since we know that a portion of the error depends on the end-to-end latency of the system as well as the dropped-packet count in the application layer, we can estimate whether the latency error \(\hat{e}\) will exceed a threshold \(e_{th}\) that has been defined ad hoc. Thus, we can predict whether the overall error (\(|p_{i}-p_{i}^{ref}|\)) will exceed a predefined desired bound (defined by use-case characteristics), in which case the switch will be triggered and turn the system into the onboard control mode. Once the latency is low and the estimated error \(\hat{e}\) is less than \(e_{th}\) (\(\hat{e}<e_{th}\)), the switch will be triggered and set the system back to the offboard control mode. Although the onboard-PID controller performs worse and the position error is overall larger than with the offboard-MPC, for safety reasons the operation of the system using the PID controller is preferable when the latency is high.
In series with the error switch, a \(SINR\) switch is placed, as depicted in Fig. 4. This switch is triggered when the signal \(s\), based on the \(SINR\) metrics, falls below \(s^{th}\). This switch is connected directly to the LLC of the UAV; thus, the \(SINR\) can switch the system to the onboard control mode regardless of the output of the error switch.
To avoid undesired continuous changes in the control mode, a sliding window has been utilized. The switching strategy can be characterized as a two-level switch (one error switch and one \(SINR\) switch). The overall system is shown in Fig. 4 with all the described components and modules.
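Putting the pieces together, the two-level switch with the sliding-window average might be sketched as follows; the default window size and thresholds mirror the values reported in Section III, but the class itself is an assumption, not the authors' code:

```python
from collections import deque

class ModeSwitch:
    """Two-level switch: a SINR switch in series with an error switch.

    Falls back to the onboard PID when the SINR drops below s_th, or
    when the sliding-window average of the estimated error exceeds
    e_th; otherwise the edge-MPC commands are forwarded to the LLC.
    """

    def __init__(self, e_th=0.15, s_th=6.0, window=50):
        self.e_th, self.s_th = e_th, s_th
        self.errors = deque(maxlen=window)  # sliding window of e_hat

    def update(self, e_hat, sinr_db):
        self.errors.append(e_hat)
        avg_err = sum(self.errors) / len(self.errors)
        # The SINR switch is connected directly to the LLC, so it can
        # force the onboard mode regardless of the error switch.
        if sinr_db < self.s_th or avg_err > self.e_th:
            return "onboard-PID"
        return "edge-MPC"
```

The sliding window is what prevents the high-frequency fluctuations of \(\hat{e}\) from causing rapid mode chattering between the two controllers.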
## III Experimental Evaluation
For the experimental evaluation, the following equipment was utilized: a real-life 5G network operating in mid-band frequencies (3.7 GHz) at the premises of the Luleå University of Technology. The used 5G network system provides an
Fig. 4: Overall block diagram of the system with all the modules. The switching mechanism is highlighted with all the described components
indoor 5G Ericsson DOT base station system that was chosen for the corresponding experiments. The utilized edge server is located near the local core breakout of the 5G network, thus achieving optimal low latency, which has been thoroughly demonstrated and documented in previous works [10]. Further, a Kubernetes-enabled subset of the available resources was used for the edge server component. Finally, the used UAV is a Crazyflie 2 model, which is assisted by a Vicon motion capture system that captures the robot's states and further provides ground-truth accuracy.
To demonstrate the switching solution, network traffic that exceeds the UE's uplink capabilities was initiated by the 5G-edge-enabled UAV itself. This approach demonstrates realistic scenarios where various design components may affect the UAV's performance. The reader can find a comprehensive explanation of such scenarios in [7].
A characteristic mission designed to test an essential component required in most full-scale UAV missions was considered to evaluate the proposed architecture. The UAV takes off and has to execute a circular trajectory; trajectory following is a commonly applicable building block of most complex UAV missions. The UAV executes the circular mission while offloading all the required processing to the edge server. During the mission, the system is disturbed so that the connectivity conditions deteriorate. More specifically, an attempt is made to transmit data that severely exceed the uplink capabilities of the 5G-edge-enabled UAV, significantly affecting the latency performance of the 5G-edge-enabled UAV system. Further, in a separate experiment, severe signal interference is induced at consecutive times. The architecture is tested in the aforesaid scenarios.
Fig. 5 depicts the system's behavior when it is examined for different \(SINR\) values, and Fig. 6 depicts the top view of the described experiment. Each row shows the reference or "desired" trajectory and the real or captured trajectory. Low \(SINR\) values trigger the switch at a selected threshold of \(s^{th}=6\,dB\). The additional interference that causes the \(SINR\) drop is enabled and disabled three times. This experiment seeks to validate the system's behavior and the switching controllers' performance. Additionally, the chosen threshold expresses frequent scenarios, such as near-cell-edge conditions or conditions where the UAV experiences severe interference, e.g., high-altitude flights [25]. Note that transmitting the robot's state, i.e., the states captured by the Vicon system and the control commands sent by the edge server to the UAV, requires small data rates compared to the available capabilities. For quantification purposes, the UAV's states require \(\sim 30\,Kbps\), the control commands require \(\sim 15\,Kbps\), and the uplink and downlink capabilities of the considered system are \(\sim 94\,Mbps\) and \(\sim 1402\,Mbps\) respectively. Consequently, the system's latency is not linearly correlated with the \(SINR\) values. However, if the channel conditions deteriorate, this would yield the selection of a lower modulation scheme and consequently lower the achievable throughput; for example, 64 QAM (Quadrature Amplitude Modulation) would be selected in relatively good channel conditions, whereas 16 QAM would be selected in worse channel conditions. In conclusion, when the channel conditions deteriorate enough that the enabled modulation scheme, considering the remaining combined traffic, can no longer accommodate the overall uplink or downlink transmission, a latency rise will be observed in the control and command packets as well as in the robot's state packets.
Additionally, if the \(SINR\) values drop significantly enough, the system's connectivity is completely discontinued. Finally, it is important to note that the system's tracking accuracy decreases when the PID controller takes over, and transient
Fig. 5: UAV trajectory in 3 axes. The switching functionality to the onboard PID controller is triggered in challenging connectivity conditions. The UAV initially operates with the 5G-edge-enabled MPC controller. During this experiment, three switches are observed. The UAV alternates between estimated “_safe_” and “_unsafe_” conditions. The switching decision is depicted with the red color signal.
Fig. 6: Top view of the real and reference trajectories of the Crazyflie. The Crazyflie is commanded to do a circular trajectory when controlled by the offboard-MPC and is commanded to go to the home position \(p_{home}=[0,4,0.8]\) (center of the circle) and hover when controlled by the onboard-PID
effects on the switching state of the two controllers are visible. Such challenges can be addressed with extensive tuning, additional controllers that target the transient phase, and other measures; nevertheless, this is not within the scope of this work. Overall, the proposed switching strategy successfully demonstrated that the system is able to fall back to the onboard controller when the \(SINR\) values are below the chosen threshold.
Regarding the latency aspect of the proposed safety mechanism, a situation where the UAV was performing the autonomous mission of executing a circular trajectory at a set height was examined. During this scenario, a remote MPC controller was operating in a K8s pod hosted on the edge server. The communication between the UAV and the edge server was established over a 5G network. Subsequently, to test the performance of the switching mechanism, an attempt was initiated to transmit sensor data that significantly surpassed the 5G-edge-enabled UAV's uplink capabilities; hence, increased latency was induced into the end-to-end system's data packet flows. Similar real-life scenarios commonly occur when the design process regarding the participating data-flow profiles fails. A common factor that could produce such scenarios revolves around the unique characteristics of applications that exhibit large fluctuations in the produced data rates (e.g., image processing algorithms). Another one relates to the unique character of UAV communications; for example, the mobility of the UAV might strongly affect the communication channel conditions. The combination of the two latter paradigms can create scenarios in which, without a large enough safety margin, the requested data rates on the UAV side might exceed the communication system's capabilities.
Fig. 7 depicts the measured latency \(l_{c}\) and the corresponding estimated error \(\hat{e}_{c}\), while Fig. 8 depicts the measured latency \(l_{d}\) and the corresponding error \(\hat{e}_{d}\). In both figures, it is visible that the latency was initiated approximately at the \(18.5\,s\) mark of the mission. The increase in both latency formulations is directly observed, which is also captured in the corresponding errors. However, each formulation presents different sensitivity and, thus, different error estimations. For example, it is evident that the dropped-packet rate significantly affects the \(\hat{e}_{d}\). Another contributing factor to the large fluctuations is the velocity of the UAV, which also fluctuates strongly. This is mostly an outcome of the downlink delay, which causes the UAV to be in incorrect positions that it then tries to correct aggressively. For those reasons, even though both formulations capture the expected error, the combined average metric is preferred for the initiation of the switching action.
The combined error \(\hat{e}\), which is used to calculate the switching condition, is depicted in Fig. 9. It is evident that \(\hat{e}\) inherits the strong fluctuations of \(\hat{e}_{c}\) and \(\hat{e}_{d}\).
Fig. 7: Estimated error \(\hat{e}_{c}\) along with the corresponding measured latency \(l_{c}\). Note that the induced latency is initiated at approximately \(18.5\,s\).
Fig. 8: Estimated error \(\hat{e}_{d}\) along with the corresponding measured latency \(l_{d}\). Note that the induced latency is initiated at approximately \(18.5\,s\).
Fig. 9: Estimated error of the UAV. The gray curve shows the UAV's raw estimated error time series with its high-frequency fluctuations. The black curve shows the sliding-window error estimate, which is utilized for the switching functionality. The red curve depicts the error threshold, i.e., the switching condition; here, the error threshold is set to 0.15 m.
These fluctuations make the raw metric too unstable to be used directly: it would yield multiple switches within the duration of the increased latency and thus would risk the system's stability. To address this, as mentioned in Section II-B3, a sliding-window average is applied to \(\hat{e}\). Both \(\hat{e}\) and the corresponding sliding-window formulation are depicted in Fig. 9. For this experiment, the window size is set to 50 samples and the error threshold to \(e_{th}=0.15\,m\). Note that the estimated error of 0.15 m refers only to the error produced by the latency effect. The system identifies the expected error and switches to the onboard PID controller. Then, when the latency is disabled and the sliding-window average error drops below the threshold, the UAV switches back to the optimal remote controller. Overall, the system performed as expected, and the switching strategy ensured its stability.
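The sliding-window switching logic described above can be sketched as follows. This is a hypothetical Python illustration (the function name and structure are ours, not the authors' implementation): the combined error estimate is averaged over a 50-sample window and compared against the threshold \(e_{th}\) to decide between the remote MPC (offboard) and the onboard PID.

```python
import numpy as np

def sliding_window_switch(errors, window=50, e_th=0.15):
    """Decide the control mode per sample: average the combined error
    estimate over a sliding window and compare to the threshold.
    Returns a boolean array: True -> onboard PID, False -> remote MPC."""
    errors = np.asarray(errors, dtype=float)
    onboard = np.empty(errors.size, dtype=bool)
    for i in range(errors.size):
        lo = max(0, i - window + 1)            # window start (clipped)
        onboard[i] = errors[lo:i + 1].mean() > e_th
    return onboard
```

Averaging before thresholding suppresses the rapid mode oscillation that the raw error would otherwise cause during a latency burst.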
## IV Conclusions and Future Work
In this article, a novel switching strategy was presented. This strategy ensures that a UAV operates in one of two modes, based on a resilient reactive mechanism that uses the available communication KPIs. The first mode is the offboard mode (edge-based mode), which utilizes an MPC, offloaded to the edge for optimized performance and communicating over 5G, to control the trajectory of the UAV. The offboard mode remains active as long as the switching mechanism does not detect any issue on the communication channel. Once the channel is considered non-reliable, or the communication link is unstable, the switching mechanism turns the system to the onboard mode for safety and redundancy reasons. The system stays in the onboard safety mode as long as required; once the metrics indicate that the channel is reliable again, the system returns to the offboard mode. The proposed switching strategy was thoroughly validated in laboratory experiments.
The field of edge robotics leaves room for many future directions. These include investigating mobile edge computing for migrating from one edge cluster to another based on communication conditions, and developing algorithms to ensure the safe, rapid, and reliable redeployment of the mission-critical application at the edge through a Kubernetes cluster. Finally, an interesting study would be the task allocation, management, and control of multiple collaborative robots through the edge for real-time applications.
# Wildland Fire Mid-story: A generative modeling approach for representative fuels

Grant Hutchings, James Gattiker, Braden Scherting

2023-07-18, http://arxiv.org/abs/2307.09677v1
###### Abstract
Computational models for understanding and predicting fire in wildland and managed lands are increasing in impact. Data characterizing the fuels and environment is needed to continue improvement in the fidelity and reliability of fire outcomes. This paper addresses a gap in the characterization and population of mid-story fuels, which are not easily observable either through traditional survey, where data collection is time consuming, or with remote sensing, where the mid-story is typically obscured by forest canopy. We present a methodology for populating a mid-story using a generative model for fuel placement that captures key concepts of spatial density and heterogeneity that vary by regional or local environmental conditions. The advantage of using a parameterized generative model is the ability to calibrate (or 'tune') the generated fuels based on comparison to limited observation datasets or with expert guidance, and we show how this generative model can balance information from these sources to capture the essential characteristics of the wildland fuels environment. In this paper we emphasize the connection of terrestrial LiDAR (TLS) as the observations used to calibrate the generative model, as TLS is a promising method for supporting forest fuels assessment. Code for the methods in this paper is available.
**Keywords:** Wildfire Modeling, Cox Process, Bayesian Model Calibration, Gaussian Process, Prescribed Fire, Environmental Assessment, Spatial Generative Model
## 1 Introduction
Wildland fire has historically been modeled as a physically-inspired and empirically calibrated spread rate model applied to bulk fuel assessments (Rothermel, 1972; Andrews, 2018). Computational models of wildland fire have been explored, and range from fire perimeter spread approximations to spatially resolved models of fuels, atmospheric dynamics, and physical phenomenology (Linn et al., 2002, 2005, 2020). These computational simulation models have been employed extensively in the study of the behavior and impacts of wildfire, for example Linn et al. (2012). Modeling and simulation of prescribed fire introduces additional challenges. In this report, we focus on the need for detailed layout of fuels, which is a key aspect of achieving reliable predictions of fire outcome (Linn et al., 2020, 2021; Hiers et al., 2009).
Plot-level summaries have been the basis in the past for characterization of mid-story fuels, that is, the fuels under the main forest canopy. For specific prescribed fire planning, direct surveys consistent with the protocols of the Forest Inventory and Analysis (FIA) program (Toney et al., 2007) are currently used in operational settings. The use of reference plots to impute broad-area coverage (Riley et al., 2016) has made estimated plot-level statistics available with broad coverage. Manual survey is time-consuming and expensive, and summarizes plots of hundreds of square meters in a relatively small number of metrics, sufficient for some purposes of large-scale environmental monitoring (Tinkham et al., 2018). However, the spatial resolution of technologies for analyzing the interactions between forest composition and wildland fire now exceeds the spatial resolution sought by typical existing survey protocols. Computational models such as FIRETEC currently accept input on the meter scale, allowing for the specification of explicit tree positions and the spatial layout of heterogeneous mid-story fuels. There is growing cognisance of the quantitative connection between heterogeneity in fuels and fire outcomes (Knapp and Keeley, 2006; Linn et al., 2013), and of the influence of small-scale (meter to tens of meters) heterogeneity in fuel structure on salient fire behavior (Parsons et al., 2017; Atchley et al., 2021). Further, prescribed fire planning targets marginal conditions, where outcomes have been shown to be sensitive (Jonko et al., 2021). Survey protocols leveraging remote sensing technologies are emerging to satisfy the need for spatial resolution in fuels for wildland fire.
Light detection and ranging (LiDAR) has found a strong and growing role in environmental characterization. Aerial LiDAR survey (ALS) achieves resolution sufficient to inventory individual trees (e.g., Li et al. (2012)). This, coupled with the large spatial extent of ALS deployed from aircraft, enables exact representation of overstory fuels for large swaths, although currently not for complete coverage over time. Recent work examines the ability of _in-situ_ terrestrial LiDAR scanning (TLS) to estimate plot-level summaries (Pokswinski et al., 2021), including tree identification and bulk characterization of the mid-story (Anderson et al., 2021; Silva et al., 2016; Rowell et al., 2020; Loudermilk et al., 2023). This report will describe results based on information
extracted from TLS, but is agnostic to the method of mid-story observation, which could be based on manual survey, remote or _in-situ_ sensing imagery, multi-modal LiDAR, or other means.
Trees can be automatically extracted from broad-scale remote sensing survey, and this abstraction of trees and canopy can be used to populate the corresponding fuels in simulation models. Recent work (McDanold et al., 2023) develops an approach to generating grass and litter surface fuels using a physically-motivated model that includes dependence on overstory and canopy. The mid-story is not typically characterized in a complete format that allows exact layout, although it is correlated with canopy and other spatial covariates. The perspective taken in this report is that mid-story fuels will be characterized from limited relevant observations of an ecosystem, and generated for the purposes of analysis and simulation models as _representative_, rather than exact, fuel distribution.
This report addresses the interpretation and generation of heterogeneous mid-story fuels with a spatial statistical modeling approach. The primary goal is the definition and demonstration of a statistical model that can generate representative mid-story fuels. A spatial statistical model can be calibrated to observed data; that is, the settings of the model that make it most consistent with observations can be discovered, perhaps also informed by prior constraints on the settings that control the characteristics of the generated spatial patterns. There is a large family of spatial statistical models that might be used to define generative processes, and setting reasonable values for the parameters of any generative model is known to be a considerable challenge, especially as the model grows in complexity. This work investigates a model that is complex enough to capture features of heterogeneous fuel distributions, while simple enough that its parameters can be inferred from observations, or from a practical combination of expertise and observational constraints.
The next section describes the family of methodologies applied for representing and generating heterogeneous fuels layouts. The approach to model calibration is then presented, along with a description of the metrics used in the calibration process. A review of our datasets and goals leads to a demonstration validating the ability to infer generative model settings using instances from the model, supporting an understanding and validation of the capability in principle; finally, the corresponding process of determining model settings from observed data and the application of the method to fuels generation is shown. We conclude by describing how this model can fit into a framework of fuel layouts that includes additional spatial covariates, specifically the dependence of the mid-story on canopy and on urban features, i.e., roads.
## 2 Point-Process Model
The goal of this work is to demonstrate a model that can generate synthetic layouts of mid-story fuels that are representative of an environment's characteristics. By _representative_ fuels, we mean that the model output is similar to the target environment in established metrics that are chosen to inform characteristics of interest in fuels layouts, including structure and heterogeneity. We now describe some background of spatial models that lead to the capability of generating representative fuels.
Spatial statistical models have a rich history in the description and analysis of environmental data (Banerjee et al., 2003). The methodology here comes from two strands of this literature: spatial point processes and binary models.
### Spatial Point Processes
There is a deep literature on point processes; this section introduces the topics relevant to the application. Data that contain the locations of events of interest (in space and/or time) are known as a point pattern. The standard term _event_ corresponds here to the siting of a fuel element, which from different points of view could represent a simple spatial sample of fuel, or be interpreted as a distinct feature such as a shrub. Suppose we observe the locations of \(n\) plants \(\{s_{i}\}_{n}\) in some plot or domain \(D\subset\mathbb{R}^{2}\). The natural model for these data is a point process \(\mathcal{P}\), and we write \(\{s_{i}\}_{n}\sim\mathcal{P}\). A commonly used point process is the Poisson process, with spatial variability controlled by an _intensity function_ \(\gamma(\mathbf{s})\). The intensity function determines the density of points throughout the domain. Specifically, the expected number of points in a region \(B\subset D\) (i.e., the random counting measure of \(B\)) is \(\int_{B}\gamma(\mathbf{s})d\mathbf{s}\). When the intensity is constant, \(\gamma(\mathbf{s})=\gamma\), we obtain a homogeneous Poisson process (HPP); the more general nonhomogeneous Poisson process (NHPP) is controlled by a non-constant intensity function; see Moller and Waagepetersen (2007) for further explanation and examples.
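To make the sampling mechanics concrete, a homogeneous Poisson process on a rectangular domain can be simulated by drawing the count from a Poisson distribution with mean \(\gamma\,|D|\) and then placing that many points uniformly on the domain. A minimal Python sketch (numpy assumed; this is our illustration, not the paper's released code):

```python
import numpy as np

def sample_hpp(gamma, width, height, rng=None):
    """Sample a homogeneous Poisson process with constant intensity `gamma`
    (expected points per unit area) on the rectangle [0,width] x [0,height]."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(gamma * width * height)   # random counting measure of D
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    return np.column_stack([xs, ys])
```

Averaged over repeated draws, the point count matches \(\gamma\,|D|\), as the counting-measure definition above requires.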
### Binary Mosaic Models
The mid-story fuel spatial pattern can be represented as a binary mosaic model (BMM). In this approach, the observational data and corresponding model output are a 2-dimensional binary present/absent pattern over the domain. Here, the BMM is defined as a point process laying out the centers of disks, with the disk radii following a distribution. The layout of centers is spatially heterogeneous according to an underlying intensity distribution.
Approximating the map of mid-story fuels by a binary continuum constructed from the union of overlapping compact sets, for example in Fig. 6, is known generally as a _germ-grain model_: germs (centers) follow a stochastic point process, and grains are given by compact sets which emanate from the germs and may depend on the point process. The _boolean model_ is a special case of the germ-grain model that arises when the germs follow a homogeneous Poisson process, parameterized by a constant intensity \(\gamma\), and the compact sets are mutually independent (Molchanov, 1997), forming the binary mosaic. In previous work, inference on this model typically targets
three parameters: the first two moments of the radii distribution and the scalar intensity value, which is assumed to be constant across the domain. This model can be extended to incorporate spatial heterogeneity through the use of an NHPP on the germs, yielding a spatially-indexed intensity surface \(\gamma(s)\). Such a surface may be a parametric or nonparametric function of spatial locations. Nonparametric NHPPs can be difficult to estimate, but they obviate the problematic task of positing a parametric model for intensity _a priori_. See ch. 8 of Banerjee et al. (2003) for an overview.
Pielou (1964) was perhaps the first to draw attention to the use of binary mosaics in ecology. However, it was Diggle (1981) who first proposed an explicit generative model for binary mosaics in vegetation coverage map settings. Using a now widely-studied binary mosaic map of heather shrubs, he proposed a minimum contrast estimation strategy for inferring the homogeneous Poisson intensity and three Weibull parameters controlling radii. This analysis succeeded in estimating the parameters and generating realizations from the fitted models.
Inference on the boolean model for binary mosaics is expanded by Moller and Helisova (2010). This simulation-based likelihood inference method relies on a pre-specified Poisson process; the authors consider only homogeneous intensities. Our approach extends both results by permitting the underlying process intensity to vary over the domain, thereby enabling the modeling of more complicated vegetation structures. The approach is naturally extended to allow intensities that vary as a function of observed covariates, that is, spatial maps that indicate higher or lower expectation for observing fuel.
### Spatial Generative Model
We next present the details of the non-homogeneous Boolean process used to model the binary map, which is in the Gaussian Cox process family. The model is presented in a component-wise manner that reflects the generative process; deviations between the formal statistical model and the generative/sampling procedures are noted where relevant.
The specification of a union of disks used to approximate the binary map requires the specification of locations of disk centers and the radii of the respective disks. We first specify the model for the number of disks \(n\) and their respective center locations \(s_{1},\ldots,s_{n}\), henceforth "points". The total number of points is sampled
\[n\sim\mathrm{Pois}(\lambda A_{D}),\]
where \(A_{D}\) is the area of the domain and \(\lambda\) represents points per unit area. We assume the layout of points in the domain \(D\) is controlled by a smooth and continuous relative intensity function \(\omega(s):D\rightarrow\mathbb{R}^{+}\). This is achieved by modeling \(\omega\) as an appropriately-transformed Gaussian Process (GP),
\[W(s)\sim\mathcal{GP}(\mathbf{0},\mathcal{C}_{\rho}) \tag{1}\]
\[\gamma(s)=f(W(s)) \tag{2}\]
where \(C_{\rho}\) is a covariance matrix generated by an appropriate covariance function with hyperparameter \(\rho\), \(f\) is a function mapping from \(\mathbb{R}\rightarrow\mathbb{R}^{+}\). Taking \(f(s)=\exp(s)\) corresponds to a Log-Gaussian Cox process.
The proposed generative process takes \(f(s)=logistic(s)\), which allows the interpretation of the relative intensity function as a probability. We model the relative intensity function \(\omega(s)\), rather than the intensity function \(\gamma(s)\) for convenience in the generative procedure. For the relative intensity function, \(E[n_{B}]\neq\int_{B}\omega(s)ds\), rather \(\forall s\in D,\ P(s_{i}=s)=\omega(s)\).
Given a value \(\rho=\hat{\rho}\), a process realization \(W\) may be sampled from the GP. A process realization \(W\) is function-valued, but a discrete approximation is obtained by computing the covariance function on discrete grid \(d\times d\), yielding covariance matrix \(\Sigma_{\hat{\rho}}\), and sampling from the multivariate normal distribution \(\mathcal{N}(\mathbf{0},\Sigma_{\hat{\rho}})\). Transforming this draw by \(f\) gives the intensity function. Conditional on the relative intensity function, the joint density of the number of points and point locations is
\[s_{1},\ldots,s_{n},n\sim\prod_{i}\omega(s_{i})\times\frac{(\lambda A_{D})^{n}\exp(-\lambda A_{D})}{n!}. \tag{3}\]
The points are collected in the set \(\Psi=\{s_{1},\ldots,s_{n}\}\), which is a sample from the NHPP.
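The discrete approximation described above (compute the covariance function on a \(d\times d\) grid, draw from the multivariate normal \(\mathcal{N}(\mathbf{0},\Sigma_{\hat{\rho}})\), and transform by \(f\)) can be sketched as follows. A squared-exponential covariance is assumed here purely for illustration; the paper only requires an "appropriate" covariance function, and this is not the released code.

```python
import numpy as np

def sample_relative_intensity(d=24, rho=0.2, rng=None):
    """Discrete approximation of omega(s) = logistic(W(s)) on a d x d grid
    over the unit square, with W drawn from a zero-mean GP. A squared-
    exponential covariance with lengthscale rho is assumed for illustration."""
    rng = np.random.default_rng(rng)
    g = (np.arange(d) + 0.5) / d                       # grid cell centers
    X, Y = np.meshgrid(g, g)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    cov = np.exp(-0.5 * sq / rho**2) + 1e-8 * np.eye(d * d)  # jitter for PD
    W = rng.multivariate_normal(np.zeros(d * d), cov, method="cholesky")
    omega = 1.0 / (1.0 + np.exp(-W))                   # logistic link f
    return omega.reshape(d, d)
```

The logistic link keeps every grid value in \((0,1)\), so the surface can be read directly as an acceptance probability in the thinning step of Section 3.1.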
We assume the locations and sizes are independent, so disk radii may be sampled from a distribution on \(\mathbb{R}^{+}\). Spatial dependence in disk radii is plausible, but we will show that inference with even a single spatial field can be challenging; adding such complexity to the model would introduce significant challenges in inference, and information at this level of detail is not typically available for prior specification of the distribution. We choose the Normal distribution truncated below at zero:
\[r_{1},\ldots,r_{n}\sim N^{+}(\mu,\sigma^{2}) \tag{4}\]
Conditional on \(r_{i}\), disk \(i\) is the set \(\xi_{i}=\{s\in\mathbb{R}^{2}:||s-s_{i}||\leq r_{i}\}\) and the binary map is
\[\Xi=\bigcup_{i:s_{i}\in\Psi}\xi_{i}.\]
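Given centers and radii, the binary map \(\Xi\) can be rasterized by marking each grid cell whose center lies inside some disk. A minimal sketch (our Python illustration, not the released code):

```python
import numpy as np

def rasterize_bmm(centers, radii, d=128):
    """Rasterize the binary map Xi (union of disks) on a d x d grid over
    the unit square: a pixel is occupied if its center lies in some disk."""
    g = (np.arange(d) + 0.5) / d
    X, Y = np.meshgrid(g, g)
    occupied = np.zeros((d, d), dtype=bool)
    for (cx, cy), r in zip(centers, radii):
        occupied |= (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    return occupied
```

The resulting boolean raster is the form in which image-based metrics such as pixel area and perimeter (Section 3.2) are computed.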
## 3 Generative Model Calibration
This section describes operational details of calibrating the generative model, including: generating spatial realizations from a model with specified parameters, metrics used for summarizing and ultimately comparing a spatial pattern, and the discovery of model parameters using the Bayesian model calibration framework.
Generative modeling implements a stochastic map from unobservable parameters and observable covariates to data. These models are useful for encoding an approximation to the true data-generating mechanism and for systematically characterizing uncertainty, and they are able to simulate new data. They also enable specification and estimation of parameters that control general data features relevant to the research question. This is useful when we wish to generate synthetic data with particular properties, while integrating over or ignoring other characteristics such as specific location or orientation. Developing appropriate feature sets capturing, or ignoring, qualities of the data is an application-driven development process.
### Generating Model Realizations
A stochastic outcome of the model, that is, a spatial layout, is referred to as a _realization_ of the model conditional on the parameter settings. In the generative setting, we use the Gaussian Process model intensity function in Eq. 1, referenced on a \(d\times d\) grid \(\mathcal{X}_{D}\), to predict the intensity function at randomly proposed test points. To produce a binary map, candidate points are proposed and accepted or rejected based on the predicted intensity function. Given a candidate location \(\tilde{s}=(x,y)\) drawn uniformly on \(D\), the intensity function value \(W(\tilde{s})|\mathcal{X}_{D},\mathcal{C}_{\rho}\) is predicted from the Gaussian Process model. The point \(\tilde{s}\) is accepted with probability \(p=\omega(\tilde{s})=f(W(\tilde{s})|\mathcal{X}_{D},\mathcal{C}_{\rho})\). This process is repeated until \(n\) candidate points are accepted, where \(n\) is a Poisson random variable. Each accepted point is the center of a disk in the BMM, with radius drawn from the truncated Normal distribution of Eq. 4. The outcome is the BMM shown, for example, in Fig. 2, where the top row shows realizations for different parameters drawn from their ranges. In practice, \(d\) should be large enough that the points in \(\mathcal{X}_{D}\) are able to capture the heterogeneity in the field observations.
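The accept/reject (thinning) procedure above can be sketched as follows, taking a precomputed relative-intensity grid as input; radii are drawn by rejection from the zero-truncated Normal of Eq. 4. Function names are ours, and a nearest-grid-cell lookup stands in for GP prediction at the candidate location.

```python
import numpy as np

def generate_bmm(omega, lam, mu, sigma, rng=None):
    """Sample disk centers and radii for the binary mosaic model on the
    unit square. omega: d x d relative-intensity grid; lam: expected number
    of disks (lambda * A_D with A_D = 1); mu, sigma: parameters of the
    zero-truncated Normal radius distribution (Eq. 4)."""
    rng = np.random.default_rng(rng)
    d = omega.shape[0]
    n = rng.poisson(lam)                      # total number of disks
    centers = []
    while len(centers) < n:
        s = rng.uniform(0.0, 1.0, 2)          # uniform candidate location
        i = min(int(s[1] * d), d - 1)         # grid row (y)
        j = min(int(s[0] * d), d - 1)         # grid column (x)
        if rng.uniform() < omega[i, j]:       # thinning: accept w.p. omega(s)
            centers.append(s)
    radii = np.empty(n)
    for k in range(n):                        # rejection sampling, r > 0
        r = rng.normal(mu, sigma)
        while r <= 0.0:
            r = rng.normal(mu, sigma)
        radii[k] = r
    return np.asarray(centers).reshape(n, 2), radii
```

Regions where \(\omega\) is near zero reject almost all candidates, which is how the heterogeneity of the intensity surface is imprinted on the generated layout.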
### Descriptive metrics
Observational data and model output can be quantified and compared through extracted metrics describing features of the data relevant to the application: in this case, spatial qualities such as area and perimeter, and topological features such as counts of connected components and holes. We summarize the geospatial object by a vector of relevant summaries of the binary mosaic, \(y=\eta(\Xi)\).
The metrics used are described in Table 1, with some comments on procedural details. This is not intended to be a final or exhaustive list of useful features, but rather a sufficient set for demonstration. Extensions are possible; one potentially useful example is metrics related to transect segment summaries from manual survey.
**Table 1** Descriptive metrics used to summarize a binary mosaic.

- **area:** From a binary image, observed or derived from observations, area is the proportion of occupied pixels in the domain. For a generated representation composed of disks on the domain, we use a Monte Carlo estimate for the area of the union of the \(n\) disks; this provides the flexibility to compute local areas and the statistics of spatial auto-correlation described below.
- **perimeter:** From a binary image, observed or derived from observations, perimeter is the number of occupied pixels adjacent to unoccupied pixels, with a Pythagorean correction for boundary pixels that abut occupied pixels diagonally. For a generated representation, the following approximation is used: arrange \(n_{i}\) points along the circumference of each disk; discard any circumference points that lie within another disk; calculate the approximate perimeter as the proportion of points retained times the analytic total disk perimeters. A circumference point lies within another disk if the distance between the point and the center of another disk, \(s_{j}\), is less than \(r_{j}\).
- **NCC:** The number of distinct connected components is easily calculated from raster data. For a generated representation, we construct a graph object using the _igraph_ R package: graph nodes are given by disk centers, with edge \(e_{ij}=1\) if \(d(s_{i},s_{j})\leq r_{i}+r_{j}\) and \(e_{ij}=0\) otherwise. Given a graph, the package provides efficient functionality for computing the number of connected components.
- **holes:** From a binary image, the number of holes is calculated by performing the same NCC operation on unoccupied cells and subtracting the number of connected components on the boundary. To compute the number of holes given only disk locations and radii, we use tools from the _TDA_ (topological data analysis) R package.
- **grid areas:** Computing local areas on subsets of the domain enables us to investigate how area is distributed or correlated within the domain by computing metrics of these sub-domains. For this work we use a regular grid of sub-domains of size \(1\,m^{2}\). Derived statistics include:
  - Moran's I and Geary's C, measures of spatial auto-correlation based on adjacency; several versions of each statistic may be obtained by changing the size of the sub-area and by changing how adjacency is defined (e.g., Moore vs. Von Neumann neighborhood, neighborhood range, weights, etc.);
  - the sum of sub-domain areas (an approximation to total area);
  - the number of full sub-domains;
  - the number of empty sub-domains;
  - the sample variance of sub-domain-specific areas.
- **empirical parameter estimates:** Empirical estimates of some model parameters are available from disk-based data, with the number of observed fuel elements \(n\) and the vector of observed disk radii \(\mathbf{r}\):
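Two of the disk-based approximations in Table 1, the Monte Carlo area of the disk union and the circumference-point perimeter approximation, can be sketched as follows (a Python illustration of the described procedures on the unit square; not the released code).

```python
import numpy as np

def mc_area(centers, radii, n_mc=20000, rng=None):
    """Monte Carlo estimate of the area of the union of disks on the unit
    square: the fraction of uniform test points covered by any disk."""
    rng = np.random.default_rng(rng)
    pts = rng.uniform(0.0, 1.0, (n_mc, 2))
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    covered = (d2 <= radii[None, :] ** 2).any(axis=1)
    return covered.mean()

def approx_perimeter(centers, radii, n_circ=200):
    """Perimeter approximation from Table 1: place n_circ points on each
    disk's circumference, discard points falling strictly inside another
    disk, and scale the analytic total perimeter by the fraction retained."""
    total, kept = 0, 0
    ang = 2.0 * np.pi * np.arange(n_circ) / n_circ
    for i, (c, r) in enumerate(zip(centers, radii)):
        ring = c + r * np.column_stack([np.cos(ang), np.sin(ang)])
        d2 = ((ring[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        inside = d2 < radii[None, :] ** 2
        inside[:, i] = False                 # ignore the disk's own interior
        total += n_circ
        kept += (~inside.any(axis=1)).sum()
    return (kept / total) * (2.0 * np.pi * radii).sum() if total else 0.0
```

For an isolated disk, the perimeter approximation recovers the analytic value \(2\pi r\) exactly, since no circumference points are discarded.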
### Bayesian Model Calibration
The calibration process is represented graphically in Fig. 1. Given a parametric generative model or forward simulation, model calibration refers to the process of estimating the model parameter settings that give the most consistent output, or in other terms _fitting_ the model. Accounting for uncertainty, _calibration_ of the generative model to data means determining a posterior distribution of the model parameters given the observation data. New predictions from the calibrated model are made with the inferred parameter distributions for \(\theta\).
We cast this in a standard Bayesian calibration framework (Santner et al., 2018). A posterior distribution with respect to the metrics \(y=\eta(\Xi)\) is defined. The metric set \(y^{obs}\) is extracted from observations of the natural environment. The corresponding set of metrics \(y^{gen}\) can be extracted from a model generated example. These can then be compared through a likelihood function. Here, we assume a likelihood function associated with a multivariate normal distribution on the metrics, given the values of the parameters \(\theta\):
\[\text{L}(y^{obs}|\theta)=(2\pi)^{-k/2}det(\Sigma_{y|\theta})^{-1/2}e^{-\frac{1}{2}\sum_{i}\sum_{j}(y^{obs}_{i}-y^{gen}_{j})^{T}\Sigma^{-1}_{y|\theta}(y^{obs}_{i}-y^{gen}_{j})}, \tag{5}\]
where \(k\) is the length of the metrics vector. The covariance matrix \(\Sigma_{y|\theta}\) is not known in practice and must be estimated, which is discussed in Section 4. The next step in Bayesian calibration is to specify priors on the parameters \(\theta\). In this implementation the default prior on the lengthscale of the GP intensity function is Uniform, with bounds chosen based on the domain size. The prior on the disk-radius mean is \(N(1.5,0.5^{2})\), truncated to \([0,3]\), and the prior on the variance is \(\Gamma(1,0.001)\), which has a peak near zero. These priors are reasonable defaults for many of the datasets we have worked with, but may need to be adjusted for different ecosystems. Together we denote the prior on the parameters as \(\pi(\theta)\). The Bayesian expression of the posterior distribution for the
Figure 1: Model calibration concept. a) Simulations from a generative model across a prior range of control parameters \(\theta\) are compared to observations, resulting in refined knowledge of \(\theta\) distributions, and then b) the posterior distributions for \(\theta\) lead to predictions with appropriate uncertainty / variability.
parameters for a given set of data-derived metrics is:
\[P(\theta|y^{obs})\propto L(y^{obs}|\theta)*\pi(\theta) \tag{6}\]
The above expressions are un-normalized, and are presented as such since the normalization constants are not necessary operationally for inference. In principle, Eq. 6 can be optimized to find the maximum _a posteriori_ value of the parameters, or it can be sampled with Markov chain Monte Carlo (MCMC) to characterize the posterior distribution (we will not rehearse the MCMC algorithm here; it is readily available in references such as Gelman et al. (2013), which also includes substantial advice on practice).
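The two ingredients of Eqs. 5 and 6, a Gaussian likelihood on metric vectors and an MCMC sampler, can be sketched generically as follows. This is a simplified random-walk Metropolis illustration, not the authors' calibration code; for the stochastic likelihood of Section 4, `logpost` would internally average over several model realizations.

```python
import numpy as np

def gaussian_metric_loglik(y_obs, y_gen, cov):
    """Log of the Gaussian likelihood of Eq. 5 on metric vectors: y_obs is
    a (k,) observed metric vector, y_gen a (J, k) array of metrics from J
    model realizations, cov the (k, k) covariance Sigma_{y|theta}."""
    y_gen = np.atleast_2d(y_gen)
    k = cov.shape[0]
    cinv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    diffs = y_gen - np.asarray(y_obs)
    quad = np.einsum("jk,kl,jl->", diffs, cinv, diffs)  # sum of quad. forms
    return -0.5 * (k * np.log(2.0 * np.pi) + logdet) - 0.5 * quad

def metropolis(logpost, theta0, n_iter=2000, step=0.1, rng=None):
    """Random-walk Metropolis sampler for the un-normalized posterior of
    Eq. 6; logpost(theta) = log L(y_obs | theta) + log pi(theta)."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        prop = theta + rng.normal(0.0, step, theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain
```

Because only the log-posterior difference enters the acceptance step, the missing normalization constants of Eqs. 5 and 6 cancel, as noted in the text.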
## 4 Model Calibration Process and Results
An initial step in verifying the soundness of the proposed procedure is to infer the model from so-called _perfect data_: data generated from the model but treated as observations. Through that process the capability can be assessed quantitatively as well as qualitatively, albeit for the ideal case of data known to be consistent with model assumptions. We then proceed to outcomes of the same model inference procedure applied to real-world observations. There are several aspects to the statistical calibration process with some procedural complexity, some of which are unique to this application. This section gives insight into practical considerations for calibrating this generative model to data, and into how prior domain information can be introduced to aid inference.
### Model Inference Process with Generated Data
We will first show the outcome of the calibration process, then fill in additional underlying technical and procedural details. Recall that this generative model has four parameters: the length scale of the Gaussian process intensity function \(\rho\), the point density per square meter \(\lambda\), and the mean \(\mu\) and standard deviation \(\sigma\) of the Normal distribution for disk radius. Generated patterns using values of these parameters drawn from the parameter prior distributions are shown in the panels on the top row of Fig. 2. There are clear qualitative differences in the character of the fuels layouts generated with different parameter settings. To validate that the metric set used allows the identification of these settings through comparison with observation, the panels in the middle row of Fig. 2 show an example of model-generated data used as synthetic observations in model calibration, where all five are generated using the same parameter set. The lower-row panels show fuels generated from samples of the calibrated parameter distributions, which are qualitatively similar to the training data; we will show the quantitative connection in model parameters.
#### 4.1.1 Stochastic Likelihood Evaluation
In this application, finding settings of the parameters \(\theta=\{\rho,\lambda,\mu,\sigma\}\) is complicated by the model output being non-deterministic: the generated spatial binary map is a stochastic outcome for a given setting of parameters. As defined previously, we denote a generated binary map as \(\Xi\), and the metrics generated from that binary map as \(y=\eta(\Xi)\). The observation is \(y^{obs}\) (as well as the model-generated pseudo observation where used), and a generated instance from the model is \(y^{gen}_{\theta}\). For a deterministic model, MCMC samples the posterior distribution of parameters, summarized in Eq. 6, with the sole model outcome (\(J=1\)) \(y_{gen}\). For a stochastic model, each realization has a different spatial outcome and corresponding metric set, resulting in a sample from a corresponding stochastic likelihood. The likelihood in Eq. 5 acts to estimate an expectation of the likelihood over multiple realizations, although it still results in a stochastic estimate. The need for multiple realizations from a Cox process for robust parameter estimation has been documented (Micheas, 2019). This model has the same need for multiple realizations, especially for estimating the intensity function length scale \(\rho\).
There is a trade-off between the cost associated with the number of model realizations used and the algorithmic performance associated with the noisy estimate of \(E[L(y^{obs}|\theta)]\). For such estimates of the likelihood, algorithmic methods such as MCMC can encounter optimistic tail values through random variation, making it difficult to efficiently sample the posterior. Our approach to addressing this issue with
Figure 2: Impact of parameters on generative binary layouts. Top row: five parameter sets from their prior ranges demonstrate the breadth of generative behavior; middle row: synthetically generated layouts from one setting of parameters; bottom row: the constrained behavior of the calibrated model, showing 5 generated outcomes from parameter sets drawn from the calibrated posterior distribution.
MCMC is to re-calculate the current value for \(L(y^{obs}|\theta)\) for each proposed MCMC step. In practice, we have found that, over a range of test problems with different parameter settings, 25 realizations are sufficient to effectively constrain the posterior while keeping the computational cost as low as possible. Some parameter values may require more samples due to larger variability in \(y^{gen}\); this will ultimately depend on the details of the data and domain.
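A minimal sketch of this scheme — averaging a normal log-density over \(J\) realizations and re-estimating the current state's value at every Metropolis-Hastings proposal, so a lucky noisy estimate cannot make the chain stick — might look as follows. The `simulate` callback, the quadratic-form likelihood, and the random-walk proposal are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def log_lik_hat(theta, y_obs, simulate, Sigma_inv, J=25):
    """Monte Carlo estimate of the multi-realization likelihood: average a
    normal log-density over J stochastic model realizations (cf. Eq. 5)."""
    total = 0.0
    for _ in range(J):
        r = y_obs - simulate(theta)   # residual for one realization's metrics
        total += -0.5 * r @ Sigma_inv @ r
    return total / J

def mh_step(theta, y_obs, simulate, Sigma_inv, log_prior, rng, step=0.05):
    """One Metropolis-Hastings step. The current state's (noisy) likelihood
    is re-computed every step rather than cached."""
    lp_curr = log_lik_hat(theta, y_obs, simulate, Sigma_inv) + log_prior(theta)
    prop = theta + step * rng.standard_normal(theta.size)
    lp_prop = log_lik_hat(prop, y_obs, simulate, Sigma_inv) + log_prior(prop)
    if np.log(rng.random()) < lp_prop - lp_curr:
        return prop, True    # accept
    return theta, False      # reject
```

With `J=25` this matches the realization count found sufficient above; a larger `J` reduces estimator noise at proportional computational cost.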
Preliminary studies have shown that optimization is less tractable than MCMC when working with stochastic model estimates, with optimizers struggling to converge. For this reason, optimization is not explored as an inference tool in this work.
#### 4.1.2 Covariance Estimation
In Eq. 5 we defined a normal likelihood on the \(m\) observed plots \(y^{obs}_{i}\), \(i=1,...,m\), and explicitly denote that the covariance matrix depends on \(\theta\) because of the stochastic nature of the data generating process. As \(\Sigma_{y|\theta}\) is unknown, an obvious plug-in estimate is the empirical covariance matrix of \(y^{obs}_{i}\), \(i=1,...,m\). This works well when \(m\) is reasonably large. We have found that \(m=25\) is sufficient for estimation on domains consistent with Fig. 2, resulting in posterior uncertainty that is significantly reduced from prior uncertainty for all parameters, and posterior modes consistent with their true settings. For smaller observed datasets of around \(m\leq 10\) we find that, all else equal, posterior uncertainty can be quite large and posterior modes can miss the true parameter settings. A representative example is shown in Fig. 3. Posterior distributions are much more sharply peaked with \(m=25\) observations, and for all four parameters the posterior modes are near the truth. For \(m=10\), however, posterior uncertainty is significantly greater and inference for \(\sigma\) and \(\lambda\) may be biased towards smaller estimates.
There are two factors at play when comparing posterior estimates from the \(m=10\) and \(m=25\) cases. First, the summed likelihood carries more information for larger \(m\). Second, the estimate of \(\Sigma_{y|\theta}\) improves with larger \(m\). The former is not to be dismissed; however, we have found that the latter may be the more significant effect. For our representative example in Figs. (2, 3), we found that inference with \(m=10\) using the covariance matrix estimated with \(m=25\) is nearly identical to inference using \(m=25\) for both covariance estimation and calibration. Improving covariance estimation can therefore significantly improve inference without collecting more data.
To improve covariance estimation for these smaller data applications, we have observed benefit in augmenting the observed dataset \(y_{obs}\) with simulated data generated from random samples of \(\theta\) drawn from informed prior distributions. The augmented dataset is used only for covariance estimation, not for likelihood calculations. Informed prior distributions can be developed naturally from the available observations, with empirical parameter estimates \(\hat{\lambda}=n/(d_{X}\,d_{Y})\), \(\hat{\mu}=\mathrm{mean}(r)\), \(\hat{\sigma}=\mathrm{sd}(r)\).
To augment the observed data: draw \(m^{*}\) samples \(\mathbf{\lambda}_{s}\sim N(\hat{\lambda},.1\hat{\lambda})\), \(\mathbf{\mu}_{s}\sim N(\hat{\mu},.1\hat{\mu})\), \(\mathbf{\sigma}_{s}\sim N(\hat{\sigma},.1\hat{\sigma})\). The 10% standard error is designed to account for the fact that our empirical estimates are subject to the randomness of the data \(y^{obs}\). We cannot define a simple empirical estimate for \(\rho\), so we sample \(\mathbf{\rho}_{s}\) from a weakly informative distribution such as a Uniform, or a Gamma with large variance. For each of these \(m^{*}\) samples \(\mathbf{\theta}_{s}=[\mathbf{\lambda}_{s}|\mathbf{\mu}_{s}|\mathbf{\sigma}_{s}|\mathbf{\rho}_{s}]\), generate \(K\) sets of random metrics \(y^{k}_{gen}(\mathbf{\theta}_{s});\ k=1,...,K\). The covariance matrix is then estimated as the empirical covariance over these \(m^{*}\times K\) sampled metrics together with the \(m\) observations.
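The augmentation recipe above can be sketched as follows; the `simulate` callback (returning the metric vector for a parameter set) and the pooling of per-plot disk radii into the empirical estimates are our assumptions for illustration.

```python
import numpy as np

def augmented_covariance(y_obs, r_obs, plot_area, simulate, rng,
                         m_star=25, K=25,
                         rho_prior=lambda rng: rng.uniform(0.0, 10.0)):
    """Covariance estimate from the m observed metric vectors plus
    m_star * K simulated metric vectors drawn around empirical estimates."""
    radii = np.concatenate(r_obs)
    # Empirical plug-in estimates from the observed plots.
    lam_hat = len(radii) / (len(r_obs) * plot_area)   # points per m^2
    mu_hat, sig_hat = radii.mean(), radii.std()

    sims = []
    for _ in range(m_star):
        # 10% relative standard error around the empirical estimates;
        # rho has no simple empirical estimate, so sample its prior.
        theta = (rng.normal(lam_hat, 0.1 * lam_hat),
                 rng.normal(mu_hat, 0.1 * mu_hat),
                 rng.normal(sig_hat, 0.1 * sig_hat),
                 rho_prior(rng))
        for _ in range(K):
            sims.append(simulate(theta))
    stacked = np.vstack([np.asarray(y_obs), np.array(sims)])
    return np.cov(stacked, rowvar=False)
```

The returned matrix is used only inside the likelihood of Eq. 5; the simulated metrics never enter the likelihood sum itself.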
Fig. 4 shows 95% posterior confidence intervals over three values of \(m\in\{1,10,25\}\) for different covariance matrix estimation approaches. In orange we have posterior intervals where only the \(m\) available observations are used for covariance estimation. Blue and green intervals have incorporated simulated data in the estimation procedure. Both of these intervals use samples from the normal distributions defined above for \(\lambda,\mu,\sigma\) but vary the prior for \(\rho\). We find that sensitivity to the prior on \(\rho\) is relatively small, with similar results from a Uniform distribution on \([0,10]\) and a more informative \(\Gamma(3,1)\) distribution which has a mean at the true value. For this example, the conditional likelihood for \(\rho\) with other parameters fixed at the truth is fairly flat.
This comparison demonstrates the connection between observation data availability and the ability to identify and constrain the parameter posterior distributions. It also shows that information regarding domain details, e.g., supplied by a domain expert, can yield good outcomes even when data are relatively limited.
Figure 3: Model prior distributions (dotted) and calibrated posterior distributions for the intensity function length scale, disk radius mean and variance, and overall average density. The parameter values used to generate the synthetic observations are shown as red vertical lines. Two posteriors are shown to demonstrate the value of observed data.
Figure 4: Posterior 95% intervals over different observed dataset sizes and different priors for simulated data used to improve covariance estimation. Orange intervals are generated using only the data for covariance estimation. Green and Blue intervals use the observed data and simulated data to estimate the covariance. Augmenting observations with simulations for covariance estimation can significantly reduce posterior uncertainty when data is limited.
### Calibration with Field Data
Point patterns in spatial domains may arise either from direct observation or through post-processing of technical measurement. Direct observation involves recording the locations of events of interest, often aided by GPS. When feasible, this form of survey produces high-fidelity measurements; however, it requires considerably more effort than either plot-level summaries or remotely-sensed surveys. A LiDAR scan produces a point cloud (i.e., a set of points) in 3D, but the points do not have the context of higher-level features such as plants, so the point clouds are processed to identify features of interest. For mid-story fuels, features of interest can be thought of as units of fuel organization, which, at different scales, could be bunch grass, bushes, or trees, or, more generally, regions of high fuel density in the environment. The locations of these features form a point pattern.
There are several possible pathways for extracting a heterogeneous spatial layout of fuels from observations of the environment. Monoculture ecosystems of chaparral or bunch grasses might be easily extracted from remotely sensed imagery or ALS as a coverage map that can be used directly as a binarized domain. A similar analysis is the extraction of the location and size of trees from imagery and ALS, for example (Sterenczak et al., 2020). However, the extraction of mid-story fuels is in general more difficult in an environment including an overstory. This section shows a simple method for extracting fuel elements from TLS scans, which is then used for calibration of the fuels model.
#### 4.2.1 Terrestrial LiDAR Survey (TLS)
TLS generates a pointcloud of the scanned forest environment. The data here were collected with the Leica BLK360 instrument, deployed to sites in the Santa Fe National Forest. The demonstration site is a ponderosa forest, which includes aspen at various maturities, Gambel oak, and other typical mid-story flora. In this demonstration, the generated pointcloud is clipped to a 15m square domain centered at the scanner. In order to extract mid-story from the example scan, the ground is normalized to a ground plane, and the domain is segmented to the region from 0.1m to 3m. Fig. 5 is a visualization of the resulting mid-story pointcloud data. This report focuses on methodological detail rather than the environmental conditions or data collection protocol, and so we present this example dataset as representative data without defending its environmental properties.
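As a rough sketch of this clipping and height segmentation (the actual ground-normalization workflow is not detailed here; the per-cell minimum-z ground model below is our own simplification), one might process a raw \((N, 3)\) pointcloud as:

```python
import numpy as np

def extract_midstory(points, half_width=7.5, z_lo=0.1, z_hi=3.0):
    """Clip a (N, 3) pointcloud to a square centered on the scanner and keep
    returns between z_lo and z_hi above ground. Ground is approximated by
    the minimum z within 1 m grid cells (a crude stand-in for normalization)."""
    x, y = points[:, 0], points[:, 1]
    # Clip to the square domain centered at the scanner (origin).
    in_sq = (np.abs(x) <= half_width) & (np.abs(y) <= half_width)
    p = points[in_sq]
    # Coarse ground model: minimum z per 1 m cell, subtracted from each return.
    ij = np.floor(p[:, :2] + half_width).astype(int)
    n_cells = int(2 * half_width) + 1
    ground = np.full((n_cells, n_cells), np.inf)
    np.minimum.at(ground, (ij[:, 0], ij[:, 1]), p[:, 2])
    hag = p[:, 2] - ground[ij[:, 0], ij[:, 1]]   # height above ground
    return p[(hag >= z_lo) & (hag <= z_hi)]
```

The retained slice excludes both ground returns (below 0.1 m) and canopy (above 3 m), leaving the mid-story band visualized in Fig. 5.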
#### 4.2.2 Data representation
We follow a simple reduction of the pointcloud data to spatially compact fuel units, which is the concept that motivated the spatial model representation. Since we are working in 2D for this demonstration, the z value of the data is disregarded, leaving the marginal projection to the ground plane. A Gaussian mixture model (GMM)
of the 2D data is built using the Python scikit-learn library's BayesianGaussianMixture procedure, which fits a Dirichlet process model. The result is a representation of the data as a number of Gaussian distributions, in this case constrained to be circular. A prior on the GMM fit keeps the inferred fuel elements within a reasonable range of radii. The inferred 2D fuel density is then discretized into a binary model of disks, each with radius equal to 2 fitted GMM standard deviations and centered at the corresponding GMM mean. The fitted disks are shown in Fig. 6. Although this is an incomplete representation of the TLS mid-story information, and there are alternative approaches for automating inventory, for example (Loudermilk et al., 2023), and creating a characteristic representation, this description does reflect the variability of the major fuel elements in the domain, in a format consistent with characterizing the mid-story as distinct units of fuel in the conceptual framework of the generative model presented. Specifics of the algorithm will be highly dependent on the particulars of an application context for fuels generation; this background is a brief overview to introduce a path to generating the data shown in Fig. 6.
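The reduction just described can be sketched with scikit-learn's `BayesianGaussianMixture`; the component cap, the spherical covariance type, and discarding unoccupied components are illustrative choices here, and the paper's prior constraining radii is not reproduced.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def pointcloud_to_disks(xy, max_components=40, seed=0):
    """Reduce projected (x, y) LiDAR returns to circular fuel elements:
    fit a Dirichlet-process GMM with circular (spherical) components, keep
    the occupied components, and report disks of radius 2 fitted sd."""
    gmm = BayesianGaussianMixture(
        n_components=max_components,
        covariance_type="spherical",   # circular components, as in the text
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
    ).fit(xy)
    # Keep only components that actually claim points.
    used = np.unique(gmm.predict(xy))
    centers = gmm.means_[used]
    radii = 2.0 * np.sqrt(gmm.covariances_[used])   # 2-sigma disk radius
    return centers, radii
```

The Dirichlet-process weighting lets the effective number of disks adapt to the data, so `max_components` only needs to be a generous upper bound.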
Figure 5: Mid-story pointcloud used for example data extraction. This shows a ponderosa forest LiDAR scan, normalized for ground level, and segmented to exclude ground and canopy.
Figure 6: Pointcloud data projected down to the x-y plane, and the Gaussian mixture model fit to that data used to create a binary dataset. For presentation clarity, color distinguishes distributions and point membership.
#### 4.2.3 Model Calibration and Fuels Generation
Calibrating the generative model with this disk-based GMM data corresponds directly to the calibration done with generated data. Calibration to the data in Fig. 6 results in the posterior distributions shown in Fig. 7. What we can see from these views of the 4-parameter joint distribution is that a significant degree of learning of the parameter posteriors occurs from the presented example dataset, particularly for \(\mu,\sigma,\lambda\). Covariance estimation for this calibration example was done using the data augmentation approach outlined in Section 4.1.2 with \(K=25\), \(m^{*}=25\), and \(\boldsymbol{\rho}_{s}\) drawn from a Uniform distribution on \([0,10]\).
There is no parameter ground truth in this case of data extracted from real TLS; instead, the evaluation is of the qualitative performance of the fuel layouts. This is shown in Fig. 8 which, as before, shows some examples of the prior behavior of the model before calibration, the data used in calibration, and examples of the generative output from the calibrated model. Qualitative examination shows that the layouts of the calibrated model are reasonably representative, particularly with respect to the goal of capturing the heterogeneity of the binarized maps.
Figure 7: Parameter posterior distributions from model calibration against GMM representation of TLS mid-story. Empirical estimates from the single observation of \(\mu,\sigma^{2},\lambda\) are shown in red vertical lines.
## 5 A Framework for Multiple Spatial Influences
The model developed can be used along with other spatial covariates to provide a more contextually relevant layout. This can be accomplished by considering the generative model as only one component of a more general intensity function. Revisiting Eq. 1, we can expand the intensity function heterogeneity component \(W(s)\) to include other controlling terms, as:
\[\omega(s)=f\big(\beta_{0}W(s)+\sum_{k}\beta_{k}X_{k}(s)\big) \tag{7}\]
That is, additional terms are added, where the \(X_{k}\) are spatial covariate fields and the \(\beta_{k}\) are weights controlling the relative impact of the spatial field components. We briefly show a generative example where \(X_{1}\) is a canopy height model (CHM) over a domain, which we take as an indicator of greater likelihood of finding a mid-story fuel element, i.e., a shrub. The canopy height model used for this example is shown in Fig. 9a. In addition to the canopy height covariate, we also incorporate an \(X_{2}\) encoding the location of a road, where fuel probability is zero. This is shown in Fig. 9b.
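A direct reading of Eq. 7 with a logistic link \(f\) can be sketched as follows; the grids and \(\beta\) values are hypothetical, chosen to illustrate the road-masking behavior discussed in this section.

```python
import numpy as np

def combined_intensity(W, X, beta):
    """Blend the GP heterogeneity field W(s) with covariate fields X_k(s)
    through a logistic link, per Eq. 7: omega = f(b0*W + sum_k bk*Xk).
    X is a list of arrays on the same grid as W, each scaled to [-1, 1]."""
    lin = beta[0] * W
    for bk, Xk in zip(beta[1:], X):
        lin = lin + bk * Xk
    return 1.0 / (1.0 + np.exp(-lin))   # logistic link f
```

With \(\beta_{2}\) large and \(X_{2}=-1\) on the road, the linear term is driven strongly negative and the logistic output is effectively zero there, while moderate positive covariate values (e.g., canopy height) bias placement upward.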
One important consideration in this model is the need for a convention for the semantics of the \(X\) datasets. We specify \(X_{k}\in[-1,1]\), interpreted as proportional to a likelihood of fuel. \(\beta_{k}\) is a positive multiplier interpreted as a relative scaling of the impact of indicator \(k\). We do not investigate the ability to infer the \(\beta_{k}\) parameters, and instead take them as supplied by a user guiding the system. To estimate \(\beta\), data requirements would be substantially increased, and expert-supplied guidance would likely become more critical as well as more complex. It is important to recognize that while
Figure 8: In the top row, five generated fuel layouts using parameters drawn from their prior distributions. The observed calibration data is shown in the middle row. In the bottom row, five generated fuel layouts from the calibrated model posterior distribution.
the relative scaling of the \(\beta_{k}\) controls the impact of the indicators \(X\) in the generated outcome, the overall sum of the \(\beta_{k}\) also impacts the layout, as the total is projected through the logistic function.
Fig. 9c shows an example of placing fuel element centers in the domain of the CHM with the heterogeneity model only. In Fig. 9d, canopy heights are used to bias fuel placement toward areas of higher canopy. By setting \(X_{2}\) to \(-1\) at road locations and \(0\) elsewhere, an effectively zero fuel probability can be given to those locations by setting \(\beta_{2}\) large.
This section is a demonstration of a topic requiring further development through field experience with these tools. It seems clear that a number of (potentially) available spatial covariates may be important to fuel layout, such as slope, aspect, canopy density, etc. We have seen preliminary results indicating that the \(\beta_{k}\) can be identified in some circumstances, but in general increasing the problem complexity will make this difficult, likely greatly increasing the need for observation data and/or expert influence. Instead, our recommendation is to infer the heterogeneity from controlled observations, and carry those settings forward for expert tuning.
Figure 9: In panel (a) a canopy height model is shown, scaled to [-1,1]. Panel (b) shows the locations of the road, which should be set to zero probability of fuel generation. Panel (c) shows a realization from the data generating process given parameters \(\tilde{\theta}\) with \(\beta=(1,0,0)\), indicating no effect from the canopy. Panel (d) is a realization using the same \(\tilde{\theta}\) but with \(\beta=(1,1,10)\), giving a strong correlation between canopy height and fuel placement. \(\beta_{2}=10\) gives effectively zero probability of fuel placement at the road locations indicated by \(X_{2}\).
## 6 Discussion and Conclusion
This study shows the potential of a connection from LiDAR remote sensing of mid-story to generating representative examples of the mid-story that manifest key characteristics. This is accomplished through the definition of a parameterized generative model that can capture these characteristics of the layout, and the ability to calibrate, or learn, the parameters of this model. The result is an interpretable method that can be used to compactly summarize the relevant information from collected datasets, and to populate unknown areas with characteristic patterns of fuels. Although we emphasize the potential for generative model calibration, in a practical application it is reasonable to also include knowledge of the environment, e.g., as a more constrained prior parameter distribution.
A key issue is the definition of the relevant features of fuels characteristics. This work is motivated by prior investigations into environmental generative models, and reinforced by the information considered useful as embodied in the historical plot-survey approach to summarizing the spatial heterogeneity of fuels. Further, our approach is a reasonable method for generating heterogeneous fuel layouts for 3D fire simulations by interpreting the layout of disks as locations and scaling factors for a template 3D fuel example, i.e., a shrub. The model presented is consistent with those motivations.
The model presented should be regarded as an initial method investigated for both its flexibility and its ability to be calibrated against both perfect data and data collected in the field. There are clear extensions of this model, as well as the method of connecting the model to observations (i.e., pointcloud processing, matching metrics, and likelihood evaluation) that would be interesting, although their properties must be the subject for further investigations. Interesting extensions could include fuel type (e.g., species or coarse woody debris type) and a description of 3D properties including shape and density. In addition, further exploration of spatial covariates for fuel placement is appropriate for enhancing the ability to capture environmental conditions related to, e.g., terrain, overstory, and moisture/water. Finally, although we focus on mid-story, this generative approach can be applied to fuels from surface to canopy. However, although the model can be extended in many ways, one of the key findings is that model complexity must be controlled to avoid models that are not feasible to infer, to specify with respect to known environment principles, and/or to interpret.
The method of describing shape or topology, and its connection to kernels in the stochastic intensity function, appears to be an open area for investigation. In application, there is limited guidance on what properties of heterogeneity and shape are relevant to environmental assessment goals, including the functional impact on wildland fire outcomes. This suggests several research directions, both for the underlying statistical methodology and for the connection to pragmatic application outcomes.
## Acknowledgements
_This research was supported by the Los Alamos National Laboratory (LANL) through its Laboratory Directed Research and Development (LDRD) program under project number 20220024DR._
|
2301.12867 | Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and
Toxicity | Recent breakthroughs in natural language processing (NLP) have permitted the
synthesis and comprehension of coherent text in an open-ended way, therefore
translating the theoretical algorithms into practical applications. The large
language models (LLMs) have significantly impacted businesses such as report
summarization software and copywriters. Observations indicate, however, that
LLMs may exhibit social prejudice and toxicity, posing ethical and societal
dangers of consequences resulting from irresponsibility. Large-scale benchmarks
for accountable LLMs should consequently be developed. Although several
empirical investigations reveal the existence of a few ethical difficulties in
advanced LLMs, there is little systematic examination and user study of the
risks and harmful behaviors of current LLM usage. To further educate future
efforts on constructing ethical LLMs responsibly, we perform a qualitative
research method called ``red teaming'' on OpenAI's ChatGPT\footnote{In this
paper, ChatGPT refers to the version released on Dec 15th.} to better
understand the practical features of ethical dangers in recent LLMs. We analyze
ChatGPT comprehensively from four perspectives: 1) \textit{Bias} 2)
\textit{Reliability} 3) \textit{Robustness} 4) \textit{Toxicity}. In accordance
with our stated viewpoints, we empirically benchmark ChatGPT on multiple sample
datasets. We find that a significant number of ethical risks cannot be
addressed by existing benchmarks, and hence illustrate them via additional case
studies. In addition, we examine the implications of our findings on AI ethics
and harmful behaviors of ChatGPT, as well as future problems and practical
design considerations for responsible LLMs. We believe that our findings may
give light on future efforts to determine and mitigate the ethical hazards
posed by machines in LLM applications. | Terry Yue Zhuo, Yujin Huang, Chunyang Chen, Zhenchang Xing | 2023-01-30T13:20:48Z | http://arxiv.org/abs/2301.12867v4 | # Exploring AI Ethics of ChatGPT:
###### Abstract
Recent breakthroughs in natural language processing (NLP) have permitted the synthesis and comprehension of coherent text in an open-ended way, therefore translating the theoretical algorithms into practical applications. Large language models (LLMs) have significantly impacted businesses such as report summarization software and copywriting. Observations indicate, however, that LLMs may exhibit social prejudice and toxicity, posing ethical and societal dangers of consequences resulting from irresponsibility. Large-scale benchmarks for accountable LLMs should consequently be developed. Although several empirical investigations reveal the existence of a few ethical difficulties in advanced LLMs, there is no systematic examination and user study of the ethics of current LLM use. To further educate future efforts on constructing ethical LLMs responsibly, we perform a qualitative research method on OpenAI's ChatGPT1 to better understand the practical features of ethical dangers in recent LLMs. We analyze ChatGPT comprehensively from four perspectives: 1) Bias 2) Reliability 3) Robustness 4) Toxicity. In accordance with our stated viewpoints, we empirically benchmark ChatGPT on multiple sample datasets. We find that a significant number of ethical risks cannot be addressed by existing benchmarks, and hence illustrate them via additional case studies. In addition, we examine the implications of our findings on the AI ethics of ChatGPT, as well as future problems and practical design considerations for LLMs. We believe that our findings may shed light on future efforts to determine and mitigate the ethical hazards posed by machines in LLM applications.
Footnote 1: In this paper, ChatGPT refers to the version released on Dec 15th.
## I Introduction
The recent advancements in NLP have demonstrated their potential to positively impact society and successful implementations in data-rich domains. LLMs have been utilized in various real-world scenarios, including search engines [1, 2], language translation [3, 4], and copywriting [5]. However, these applications may not fully engage users due to a lack of interaction and communication [6]. As natural language is a medium of communication used by all human interlocutors, conversational language model agents, such as Amazon Echo [7] and Google Home [8], have the potential to significantly impact people's daily lives. Despite their potential benefits, unforeseen negative effects on human-computer interaction have also emerged as NLP transitions from theory to reality. This includes issues such as the toxic language generated by Microsoft's Twitter bot Tay [9] and the privacy breaches of Amazon Alexa [10]. Additionally, during the unsupervised pre-training stage, language models may inadvertently learn bias and toxicity from large, noisy corpora [11], which can be difficult to mitigate.
While studies have concluded that LLMs can be used for social good in real-world applications [12], the vulnerabilities described above can be exploited unethically for unfair discrimination, automated misinformation, and illegitimate censorship [13]. Consequently, numerous research efforts have been undertaken on the AI ethics of LLMs, ranging from discovering unethical behavior to mitigating bias [14]. Weidinger et al. [15] systematically structured the ethical risk landscape with LLMs, clearly identifying six risk areas: 1) Discrimination, Exclusion, and Toxicity, 2) Information Hazards, 3) Misinformation Harms, 4) Malicious Uses, 5) Human-Computer Interaction Harms, 6) Automation, Access, and Environmental Harms. Although their debate serves as the foundation for NLP ethics research, there is no indication that all hazards will occur in recent language model systems. Empirical evaluations [16, 17, 18, 19, 20] have revealed that language models face ethical issues in several downstream activities. Using exploratory studies via model inference, adversarial robustness, and privacy, for instance, early research revealed that dialogue-focused language models posed possible ethical issues [21]. Several recent studies have demonstrated that LLMs, such as GPT-3, have a persistent bias against genders [22] and religions [23]. Expectedly, LLMs may also encode toxicity, which results in ethical harms. For instance, Si et al. [24] demonstrated that BlenderBot[25] and TwitterBot [26] can easily trigger toxic responses, though with low toxicity.
Despite current studies on NLP and ethical risks and effects, the following gaps in earlier research exist:
* Practice: Many studies on AI ethics have been conducted theoretically and may not accurately reflect the real-world ethical risks.
* Timeliness: The rapid advancements in NLP have resulted in a lack of examination of more recent language models from an ethical perspective.
* Agreement: There is a lack of consensus among |
2306.08640 | AssistGPT: A General Multi-modal Assistant that can Plan, Execute,
Inspect, and Learn | Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks. | Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou | 2023-06-14T17:12:56Z | http://arxiv.org/abs/2306.08640v2 | # AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
###### Abstract
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
## 1 Introduction
Large language models (LLMs) [1; 2; 3; 4], especially ChatGPT [5], have made remarkable progress in recent months, significantly advancing the field of developing AI assistants. Despite these advances, a single LLM serving as an AI assistant still exhibits inherent limitations in certain abilities, such as understanding visual environments and comprehending complex tasks, which restrict their utility in real-world applications. To address these shortcomings, a promising solution is to explore the integration and collaboration of multiple domain experts _e.g._, pretrained models or APIs, to tackle complex tasks. Numerous efforts have been made in this direction. Some works [6; 7; 8] utilize language as a bridge and transform the visual input into pure texts using foundational visual models, such as captioner [9; 10; 11], object detectors [12; 13; 14], and OCR models [15; 16]. Subsequently, the extracted texts are fed into LLMs for reasoning tasks like question-answering. Nonetheless, as for complex
visual scenarios such as a long-form video with complicated scene switching, as shown in Fig. 1, the generated texts may go well beyond the query requirements. This can lead to an abundance of superfluous information while crucial details relevant to the query may be omitted.
Some other concurrent works propose decomposing user queries into subtasks and planning to sequentially call external models or APIs to answer them. Currently, there are two branches of methods. The first one is language-based planning [17; 18; 19; 20]. For instance, HuggingGPT and Chameleon [17; 19] propose using an LLM as a controller, managing and organizing the cooperation of expert models. The other branch of work is code-based planning [21; 22; 23]. ViperGPT [21] proposes to use Codex to write Python code that calls visual-related APIs for handling multi-modal tasks. These approaches invoke models only when necessary, so the models output only useful information and computational resources are used efficiently.
Despite this progress, addressing high-level queries is still challenging. Specifically, current questions in existing benchmarks usually directly imply how to plan the reasoning. For example, for questions like "What is the red object used for?", no matter what the image is, the reasoning steps are relatively fixed, i.e., recognize the red object, then figure out its function. However, for more complex questions, there can be diverse reasoning paths. For example, for the question _"How much black pepper should I use for 700g beef?"_ in Fig. 2, variations in how the relevant information is presented, whether in subtitles, actions, text within videos, or a combination of these, can result in distinct reasoning paths. Therefore, as shown in Fig. 2, once a reason-only approach makes a mistake, it becomes difficult for it to self-correct.
Similar approaches are already proposed in the NLP field, such as ReAct [24] and ToolFormer [25]. However, there is a unique challenge in multimodal tasks: _How to handle non-textual intermediate results?_ For ReAct and ToolFormer, the outputs of external models can be directly fed into the Planner and passed to subsequent models. In contrast, the intermediate results obtained in multimodal tasks are typically cropped regions from the image grounding module or segmented video clips from the temporal grounding module, as shown in Fig. 1. In complex cases, it is hard for the Planner to manage which information should be fed into the next module.

Figure 1: **In-the-wild example of AssistGPT.** AssistGPT can reason in an interleaved language and code format. Given a query input and visual inputs, AssistGPT plans the problem-solving path in language, using structured code to call upon various powerful tools. The Inspector, part of the system, can manage visual inputs and intermediate results, assisting the Planner to invoke tools. Meanwhile, the Learner can assess the reasoning process and collect in-context examples.
In this paper, we propose a multi-modal AI assistant system, named AssistGPT [1] (the design of our model's icon is inspired by HAL 9000 from the movie "2001: A Space Odyssey", a fictional artificial intelligence character), with an interleaved language and code reasoning method that inherits the advantages of ReAct's flexible reasoning and program-based planning's robust tool invocation. Specifically, our system consists of four parts: Planner, Executor, Inspector, and Learner. We show how our system works in Fig. 1. Similar to ReAct, the Planner decides what needs to be done next based on the current reasoning progress and invokes external models. What sets our method apart is the use of formatted code to invoke external models. The Executor wraps external tools into a uniform input and output format, allowing a tool to be invoked with structured commands. We also propose an Inspector, which manages visual inputs and intermediate results during the reasoning process and provides the Planner with summaries and metadata of all currently available visual materials. The combination of the Inspector and the Executor allows the model to efficiently carry out complex reasoning. Moreover, it is challenging for the model to guarantee correct reasoning in a zero-shot scenario: the Planner might output invalid code or unreasonable paths. To enable the system to continuously improve, we propose the Learner, which checks whether the reasoning process is sound, or judges the correctness of the predicted results against annotations when they are available. It allows the system to try multiple times and record successful examples as in-context examples.
The current version of AssistGPT integrates 10+ tools for different functions, including image detection, captioning, region grounding, temporal grounding, OCR Module, object enumeration, speech-to-text, etc. By combining these functionalities, AssistGPT can accomplish a wide range of multi-modal tasks which are still hard for existing systems.
In summary, our contributions are as follows: 1) We construct a general multimodal AI assistant that can accomplish diverse visual-related tasks through the cooperation of multiple models. 2) We propose a new compositional reasoning method that reasons in an interleaved language and code manner. A simple learning mechanism is also proposed to improve AssistGPT's planning ability. 3) We showcase AssistGPT's capabilities not only through benchmark results but also through realistic applications that process complex images and long-form videos, understand high-level queries, and handle flexible inputs.
## 2 Related Work
**Multi-modal Systems.** Prior to the advent of LLM, remarkable works were done to design multi-modal models for one or several specific tasks, such as focusing on visual appearance [26; 27; 28; 29; 30; 31], visual-related knowledge [32; 33; 34; 35; 36; 37], action [38; 39; 40], ego-centric videos [41; 42; 43; 44], instructional videos [45; 46; 47], scene text [48; 49; 50; 51], etc. They have achieved commendable results in specific tasks, however, their
Figure 2: Comparison of PEIL and two mainstream reasoning methods in multi-modal tasks.
generalizability is relatively limited, making it challenging to address more complex and diverse questions in real-world scenarios.
Recently, two types of strategies have been proposed for developing a general multi-modal system. One is pre-training LLMs to support visual features as conditional inputs. The representative models are GPT-4 [52], PaLM-E [53], BLIP-2 [54], and Mini-GPT4 [55]. Despite these methods being capable of directly processing multi-modal input, they still exhibit limitations in addressing advanced functional needs, such as image spatial grounding, long-form video grounding, and audio comprehension. Additionally, the computational cost of scaling these models can be extremely high. The alternative strategy combines multiple models or APIs to accomplish complex multi-modal reasoning. For instance, models like the Socratic model [6] and Visual ChatGPT [8] achieve this by connecting ChatGPT with image generation models. HuggingGPT [17] combines a variety of Huggingface models with LLMs. ViperGPT [21] employs Codex [56] to call visual APIs via Python programming. Our AssistGPT falls into the second category by combining and invoking various modules for multi-modal reasoning, but we propose a new framework, PEIL, for integrating external tools and models.
**Compositional Reasoning.** Compositional reasoning methods in the field of visual question answering usually decompose questions into several subtasks, each addressed by a specific module. This kind of method offers strong interpretability due to its modular structure and the clear division of responsibilities among the individual components. This idea was initially put forward by [57]. Subsequently, [58; 59] introduced an end-to-end variant based on LSTM and CNN. Traditional compositional reasoning methods are limited by language models' parsing capabilities, often requiring ground-truth question decomposition or reinforcement learning for optimal module usage.
With the advent of LLMs, question decomposition can be accomplished remarkably well in a zero-shot manner. Chain-of-thought prompting [60], Toolformer [25], and ReAct [24] enable models to plan how to solve an NLP problem. HuggingGPT [17] and ViperGPT [21] are multi-modal systems that use an LLM to parse a question into a series of reasoning steps. However, for complex queries, the model needs to determine the subsequent steps based not only on the question but also on the visual inputs or the feedback from previously executed modules. MM-ReAct [61] introduced the idea of ReAct to a multi-modal system to address this, but it is still under development and has not demonstrated its effectiveness on benchmarks. Previous methods reason over either language or code, and as stated in the introduction, both have certain shortcomings. Our work is the first to propose an interleaved language and code reasoning manner that can better handle general queries and complex visual inputs.
**Learning Schemes for Modular System.** Early modular models primarily employed end-to-end Reinforcement Learning (RL) to train each module's planning and acting from scratch. While this approach is practical for lightweight models, RL can introduce substantial overhead for systems where each module is an LLM. Toolformer [25] proposes a self-supervised technique that optimizes planning requiring only a handful of demonstrations for each API. Specifically, Toolformer attempts various APIs to find successful examples and then fine-tunes the model. In contrast, we propose a straightforward mechanism in the multi-modal field, which can guide the system to retry and preserve the successful explorations as in-context examples.
## 3 AssistGPT
**Overview.** AssistGPT is a general multi-modal AI assistant system that can dynamically engage various tools in an interleaved language and code manner. Specifically, given a general language query and reference images or videos as inputs, the goal of AssistGPT is to generate the desired answer. As shown in Fig. 3, AssistGPT works through the cooperation of four core modules: **(a)** Planner, **(b)** Executor, **(c)** Inspector, and **(d)** Learner. The Planner (§3.1) controls the whole reasoning process, with the Executor (§3.2) supplying valuable feedback to the Planner by executing external tools. The Inspector (§3.3) manages the input and intermediate results and assists the Planner in feeding the proper content to the Executor. The Learner (§3.4) is capable of assessing the system's performance and recording successful explorations as in-context examples. In the following sections, we will go through each module in detail.
### Planner
The **Planner** employs a highly intelligent LLM _i.e.,_ GPT-4 [52] as the central brain to control the global reasoning planning. It begins the planning process by taking inputs from three types of information: an Instruction Prompt consisting of the [Tool Set Illustration] and [In-Context Example]2, Input Query, and the Summary of Visual Inputs created by **Inspector**.
Footnote 2: The successful trials recorded by Learner, will be introduced later.
Then it generates the appropriate output for the next step, which consists of two parts. Thought: a language phrase indicating what should be done next; while it doesn't affect the module or API call directly, it aids the LLM planning procedure. Action: a structured string obeying the pre-defined template provided in the instructions; it specifies which external tool to call and what arguments to input, _e.g.,_ [Object
- objects, attributes, actions, events, and so on. However, the abundance of information can sometimes obstruct our problem-solving process. One crucial solution is to pinpoint the most crucial and valuable information from the rich sea of visual, textual, and audio data. This part incorporates several modules such as the (f) Region Ground, (g) Narration Ground, (h) Text Ground, and (i) Subtitle Ground.
* **Reasoner**: The initial two sets of tools primarily deal with collecting and identifying data, whereas the third set focuses on reasoning, utilizing the extracted information and external knowledge. This part incorporates modules such as the (j) Knowledge Reason, (k) Narration Reason, (l) Subtitle Reason, and (m) Temporal Reason modules. These modules primarily use an LLM at their core, taking different types of information and prompts as inputs, or a simple program.
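Conceptually, each Planner step can be split into its Thought and its Action before being dispatched to the Executor. The following sketch illustrates this with a hypothetical template and tool name (`subtitle_ground` and the exact string format are our illustrative assumptions, not the paper's actual prompt):

```python
import re

def parse_planner_step(output: str):
    """Split one Planner step into its Thought (free-form language) and its
    Action (a structured tool call, e.g. tool_name(arg1, arg2))."""
    thought = re.search(r"Thought:\s*(.*?)\s*Action:", output, re.S)
    action = re.search(r"Action:\s*(\w+)\((.*)\)", output, re.S)
    if thought is None or action is None:
        raise ValueError("output does not follow the Thought/Action template")
    tool = action.group(1)
    args = [a.strip() for a in action.group(2).split(",") if a.strip()]
    return thought.group(1), tool, args

step = ("Thought: locate where black pepper is discussed. "
        "Action: subtitle_ground('black pepper', video_1)")
thought, tool, args = parse_planner_step(step)
print(tool, args)  # tool name plus its two argument strings
```

Because the Thought is free-form language while the Action follows a fixed template, only the Action needs to parse; the Thought is kept solely to aid the LLM's own planning.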
### Executor
The **Executor** takes the code generated by the **Planner** as input and calls a module to produce the output, carrying out three steps to obtain the final result: validation check, module execution, and post-processing, as shown in Fig. 3.
* **Validation Check**: Even powerful LLMs like GPT-4 can sometimes generate invalid code, for example, asking an image caption module to accept a long video as input. We have designed a legality check for each module to determine whether the code is executable. Moreover, if the code includes errors, we do not interrupt the entire reasoning process. Instead, we return an error message as the output to the **Planner**, allowing it to optimize the planning process in real-time.
* **Module Execution**: We standardize various modules and APIs into a unified interface using the code-style template _i.e.,_ [Module_Name](<text_query>, <visual_index>). Each module is designed to accept multiple text queries and visual data (images or videos) as input. In each standardized module, we provide instructions on its function and the requirements of each argument, which are used for the [Tool Set Illustration] in the **Planner**. Additionally, for the sake of simplicity and accuracy in planning, the generated code is simplified; a simple rule-based function later maps it to executable code, which is then executed to obtain the final result.
* **Post-processing**: For all modules, the generated results are translated into a language format to inform the **Planner** about the outcome, as the **Observation** part illustrated above. For instance, the Narration Ground module returns whether it has found the relevant segment and, if so, outputs the start and end times of the segment. Additionally, many grounding-related modules send their segmented video or cropped image region to the subsequent visual outcome manager, _i.e.,_ the **Inspector**.
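The three steps above can be sketched as a thin wrapper around each tool. Everything below (the class names and the toy captioner standing in for BLIP-2) is our illustrative simplification of the described behavior, not the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Visual:
    kind: str              # "image" or "video"
    description: str
    duration: float = 0.0

@dataclass
class Module:
    name: str
    accepts: set           # which visual kinds this tool can take
    run: callable          # the wrapped model or API

    def execute(self, query: str, visual: Visual) -> str:
        # 1) Validation check: return an error message rather than crash,
        #    so the Planner can re-plan on the fly.
        if visual.kind not in self.accepts:
            return f"Error: {self.name} cannot take a {visual.kind} as input."
        # 2) Module execution.
        result = self.run(query, visual)
        # 3) Post-processing: translate the raw result into language feedback.
        return f"Observation: {result}"

# A toy captioner that only accepts images (stand-in for a real caption model).
caption = Module("image_caption", {"image"},
                 lambda q, v: f"caption of {v.description}")
print(caption.execute("describe", Visual("video", "cooking video", 44.0)))
```

Returning the validation error as an ordinary observation string is what lets the Planner treat failed calls as just another piece of feedback to plan around.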
### Inspector
The objective of the **Inspector** is to manage the visual inputs provided by the user and the intermediate results produced by our system, to assist the **Planner** in deciding which source should be directed to which module. Specifically, the **Inspector** records the metadata of each visual element, which includes its type (image or video), source (provided by the user or generated by the system), and a brief description of its content (obtained from the caption model, or the title of an online video). For videos, there is additional metadata, such as the duration of the video and whether it contains audio and subtitles. The **Inspector** monitors the inputs from the user and the outputs from the **Executor**. As soon as a new visual element is received, it appends the metadata, noted as the **Summary** above, to the reasoning history of the **Planner**. With the cooperation of
| Module Usage | Core Model | Input | Output |
| --- | --- | --- | --- |
| (a) Image Caption | BLIP series [9; 10; 11] | T, I | T |
| (b) Video Narration | BLIP series [9; 10; 11] | T, V | T |
| (c) Object Detection | G. Dino [64] / GLIP [13] | T, I | T |
| (d) Text Detection | Google OCR | I | T |
| (e) ASR Translation | Whisper [65] | A | T |
| (f) Region Ground | GPA [66] | T, I | T, I |
| (g) Narration Ground | GPT / CLIP [67] | T, Nar. | T, V |
| (h) Text Ground | Program + SSA [62; 63] | T, I | T, I |
| (i) Subtitle Ground | GPT [5] | T, Sub. | T, V |
| (j) Knowledge Reason | GPT [5] | T | T |
| (k) Narration Reason | GPT [5] | T, Nar. | T |
| (l) Subtitle Reason | GPT [5] | T, Sub. | T |
| (m) Temporal Reason | Rule-based | T, V | T, V |

Table 1: Modules used in AssistGPT. A module may use different core models, separated by a slash (/).
the **Planner**, **Executor**, and **Inspector**, our system can generate answers to difficult queries with complex visual inputs.
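A minimal sketch of the metadata registry described above (the field names and the summary string format are our assumptions for illustration, not the system's actual representation):

```python
class Inspector:
    """Tracks user-provided and intermediate visuals so the Planner can
    reference them by index when building a tool call."""
    def __init__(self):
        self.visuals = []

    def add(self, kind, source, description, **extra):
        meta = {"index": len(self.visuals), "kind": kind,
                "source": source, "description": description, **extra}
        self.visuals.append(meta)
        return self.summary(meta)   # appended to the Planner's history

    def summary(self, meta):
        line = (f"visual_{meta['index']}: {meta['kind']} ({meta['source']}) "
                f"- {meta['description']}")
        if meta["kind"] == "video":
            # videos carry extra metadata such as duration and subtitles
            line += f", {meta.get('duration', '?')}s, subtitles={meta.get('subtitles', False)}"
        return line

insp = Inspector()
print(insp.add("video", "user", "steak recipe", duration=44, subtitles=True))
print(insp.add("image", "system", "cropped region of black pepper"))
```

Keeping only metadata in the Planner's history, while the heavy visual data stays with the Inspector, is what allows the Planner to route, say, a segmented clip into the next tool by index alone.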
### Learner
Despite the robust generalization capabilities of LLMs, they can still easily encounter errors when dealing with multi-modal queries. Thus, it is essential for an AI assistant to have a self-evaluation mechanism. To achieve this goal, we hope that the model can self-check the reasonableness of its output. On the other hand, when ground truth is available, we intend to gather successful prediction instances as in-context examples. Specifically, AssistGPT will repeatedly attempt to provide the answer when the response is not satisfactory, until it either passes the self-check, gives the correct answer (when ground truth is available), or reaches a predefined maximum number of attempts. The **Learner** includes an evaluator implemented by the LLM, which operates in two modes: self-assessment and ground-truth comparison. These modes are activated depending on the availability of ground truth, and we discuss the two of them separately.
* **Self-assessment mode** is activated when there is no user feedback or ground truth available. It takes the reasoning trace and the results of each step as input, allowing GPT to assess whether the reasoning is complete, consistent, and adheres to the required format.
* **Ground-truth comparison mode** is activated when annotators provide ground truth. In this mode, GPT evaluates whether AssistGPT's prediction is semantically consistent with the provided ground truth.
Furthermore, the **Learner** encourages the system to keep trying until it receives positive feedback or reaches the maximum number of attempts. After conducting \(N\) explorations, several outcomes may arise:
* **No adjustments required**: If the model delivers the correct answer on its initial attempt, this suggests that AssistGPT can solve the current question effectively. Therefore, no improvement is required.
* **Plan Revision**: If the model produces the correct answer after making \(n\) attempts, where \(1<n\leq N\), this implies that there is room for improving the model's planning capabilities. Therefore, we save the successful reasoning trace to [In-Context Memory Bank]. Consequently, when the model comes across a similar query in the future, it can use this as an in-context example.
* **Function Updates**: If the model still fails to provide the correct answer even after \(N\) attempts, it is highly probable that the problem resides in a specific module or API rather than the planning process. It may necessitate incremental updates to the module. We will leave this for future work.
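The retry loop and the three outcomes above can be sketched as follows (the function names and the toy attempt/evaluator are hypothetical; the real Learner uses an LLM as the evaluator):

```python
def solve_with_learner(question, attempt, evaluate, memory, max_tries=3):
    """Retry until the evaluator accepts an answer, mapping the attempt
    count onto the three outcomes described above."""
    for n in range(1, max_tries + 1):
        trace, answer = attempt(question, in_context=memory)
        if evaluate(trace, answer):
            if n == 1:
                return answer, "no adjustment required"
            # Plan Revision: keep the successful trace as an in-context example.
            memory.append(trace)
            return answer, "plan revision"
    # Repeated failure likely points at a module, not the planning.
    return None, "function update needed"

# Toy attempt that only succeeds on its second try.
calls = {"n": 0}
def attempt(question, in_context):
    calls["n"] += 1
    return f"trace{calls['n']}", "right" if calls["n"] >= 2 else "wrong"

memory = []
answer, outcome = solve_with_learner("q", attempt, lambda t, a: a == "right", memory)
print(answer, outcome, memory)  # right plan revision ['trace2']
```

Note that only traces that needed more than one attempt are worth memorizing: first-try successes add nothing the Planner does not already do.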
## 4 Experiments
### Experimental Setting
**Datasets.** Our system is evaluated on the A-OKVQA [33] and NExT-QA [68] benchmarks, designed to test comprehensive multimodal capabilities, including visual facts, commonsense, temporal sequences, causality, etc. **A-OKVQA** [33] is an innovative benchmark for knowledge-aware visual question answering with 25K questions that demand a high-level comprehension of commonsense and world knowledge. The questions in A-OKVQA go beyond the information contained in the image and cannot be answered solely by querying a knowledge base. Besides, the questions are diverse, spanning a wide range of domains such as commonsense reasoning, visually-grounded, knowledge-based, and physical understanding. In our experiments, we assess the model performance under the in-context learning setting on the validation set, which consists of 1,145 questions. **NExT-QA** [68] is a benchmark for evaluating an AI system's causal reasoning, temporal action reasoning, and rich object interactions in video question answering. NExT-QA has a total of 5,440 videos, averaging 44 seconds in length, and approximately 52K manually annotated question-answer pairs. In our experiments, we assess the model performance under the in-context learning setting on the validation set, which consists of 4,996 questions.
**Implementation Details.** In the following experiments, we use the GPT-4 API provided by OpenAI [52] as the Planner. In the A-OKVQA experiments, we set the Caption Module to BLIP2 or InstructBLIP (abbreviated as Ins.BLIP), use Grounding DINO as the Object Detection model, and Google
OCR for Text Detection. For the NExT-QA experiments, our Video Narration Module is based on InstructBLIP Vicuna-7B [11]. Our experiments are performed on 4 A5000 GPUs.
### Quantitative Results
**Comparison with State-of-the-arts.** From the results in Table 2, it can be seen that in the multi-choice track, our two versions of AssistGPT (i.e., with the light-weight BLIP2 FlanT5-XL and the more powerful Ins.BLIP Vicuna-7B) achieve the best results among all current methods under the in-context learning setting. It's worth noting that we use a pre-trained version of InstructBLIP, which performs at 53.3%, as shown in Table 3. When integrated into our system, its performance is lifted to the level of the fine-tuned model. For direct-answer questions, while our performance may not match that of recently proposed models, it is still comparable to the previous supervised SOTA, like GPV-2 [69].
Our performance on direct answers did not surpass previous methods. The main reason is that for open-ended questions, models relying on an LLM tend to output complete phrases rather than a single word as the final answer, even when we prompt them to provide as concise an answer as possible. For instance, for the question "What flag is represented on the wall?", AssistGPT output the answer "United States flag", but the correct answer does not include the word "flag", so the prediction is deemed incorrect. This type of error is very common in AssistGPT. In the appendix, we show more examples to analyze the failure cases. Moreover, the SOTA method PromptCap [70] specifically trained a caption model toward generating captions for A-OKVQA, which also explains its good performance, while our system is more general.
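To make this failure mode concrete, consider a simplified direct-answer check that credits a prediction only on a normalized exact match against the annotator answers (our hedged simplification of such a metric, not the benchmark's exact scoring code):

```python
def normalize(ans: str) -> str:
    # Light normalization: case, surrounding whitespace, trailing period.
    return ans.lower().strip().rstrip(".")

def direct_answer_hit(prediction: str, gold_answers) -> bool:
    # Exact match after normalization: a longer-but-correct phrasing
    # such as "United States flag" earns no credit.
    return normalize(prediction) in {normalize(g) for g in gold_answers}

gold = ["united states", "usa", "america"]  # hypothetical annotator answers
print(direct_answer_hit("United States", gold))       # True
print(direct_answer_hit("United States flag", gold))  # False: extra word "flag"
```

Under such a metric, the LLM's tendency to answer in full phrases systematically depresses the direct-answer score even when the content is right.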
From the results in Table 4, AssistGPT achieved higher performance than recently proposed supervised methods, demonstrating the effectiveness of our approach. Our model's improvement is most promising on Causal and Descriptive questions, mainly because the model continuously obtains detailed, question-related information from the videos. Moreover, our method does not perform well on temporal questions. The main reason is that relatively few open-world temporal grounding models are available, and mainstream work still involves fine-tuning on closed-world datasets. Therefore, we have to use the image captioner InstructBLIP with GPT-4 to achieve temporal grounding; the effect is not as good as that of fine-tuned models, but it generalizes better. Furthermore, our performance is also very close to the recent concurrent work ViperGPT [21]. ViperGPT is slightly superior to ours, possibly because it has designed a sophisticated rule-based method that iteratively checks whether objects appear in the frame to perform temporal grounding.
**Ablation Study.** We designed several variants of AssistGPT to test the effectiveness of our proposed method. The most basic baseline is **InstructBLIP** (note that all the following variants use the Vicuna-7B version), which is the main source of visual information in AssistGPT. Since **InstructBLIP** cannot necessarily output the answer in the required format, we design a variant, **InstructBLIP + GPT-4**, that allows GPT-4 to further refine the output of InstructBLIP. The **Reason-only** model directly plans all the steps the models need to run, similar to previous works [17]. The **ReAct** model executes language-based ReAct; however, without the Inspector and code-like invocation forms, a subsequent model can only accept the output of the previous model, which is similar to [61]. We also ablate the Learner, which yields three versions: **PIE** (i.e., w/o Learner), **PEIL w. Self-Check**, and **PEIL w. GT-Check**.
From the results in Table 3, we can see that the Reason-only model, which plans all the steps the models need to execute, showed a notable improvement in D.A. and M.C. This indicates that
| Model | D.A. | M.C. |
| --- | --- | --- |
| LXMERT [29] | 30.7 | 51.4 |
| KRISP [36] | 33.7 | 51.9 |
| GPV-2 [69] | 48.6 | 60.3 |
| InstructBLIP Vicuna-7B [11] | 64.0 | 75.7 |
| PromptCap [70] | **56.3** | 73.2 |
| AssistGPT (BLIP2 FlanT5-XL [54]) | 42.6 | 73.7 |
| AssistGPT (Ins.BLIP Vicuna-7B) | 44.3 | **74.7** |

Table 2: Comparison with SOTAs on the A-OKVQA dataset. The first four rows are fine-tuned models; the last three use in-context learning. D.A. and M.C. indicate direct answer and multi-choice. ICL: In-context Learning. ZS: Zero-shot inference.
| LLM | Model | D.A. | M.C. |
| --- | --- | --- | --- |
| - | Ins.BLIP | 13.4 | 53.8 |
| GPT-4 | Ins.BLIP + GPT-4 | 27.9 | 55.2 |
| GPT-4 | Reason only | 28.8 | 65.9 |
| GPT-4 | ReAct | 30.1 | 68.2 |
| GPT-4 | PIE | 32.4 | 72.4 |
| GPT-4 | PEIL w. Self-Check | 41.2 | 74.2 |
| GPT-4 | PEIL w. GT-Check | 44.3 | 74.7 |

Table 3: Ablation study of our AssistGPT on the A-OKVQA dataset. Ins.BLIP used here is the pre-trained version.
integrating multiple models can enhance performance. The ReAct model, despite not having the Inspector and code-like invocation forms, showed a further improvement in both metrics, surpassing the Reason-only model, which suggests the effectiveness of the ReAct manner. Introducing our interleaved language and code reasoning, i.e., PIE, brings a more significant improvement on M.C. Finally, the two Learner-equipped variants, PEIL w. Self-Check and PEIL w. GT-Check, scored the highest on both tracks, showing the effectiveness of the Learner. The Learner helps more on the D.A. track because models there often fail to output the extremely short answers required by A-OKVQA, which the Learner mitigates by collecting in-context examples.
### Qualitative Results
In Fig. 4, we visualize some prediction cases from A-OKVQA (Question 1) and NExT-QA (Question 2). From both examples, it can be seen that AssistGPT can decompose the question into reasonable sub-tasks and then complete them step by step, ultimately obtaining the final answer. Moreover, thanks to the interleaved code and language reasoning method, the model can effectively pass the necessary content as input. The reasoning process of Question 1 also shows AssistGPT's self-correction ability: when the visual model outputs unsatisfactory results, AssistGPT can dynamically invoke other modules, like the ground module, to reason over another path. In addition, for Question
| Method | Causal | Temporal | Descriptive | All |
| --- | --- | --- | --- | --- |
| HGA | 44.22 | 52.49 | 44.07 | 49.74 |
| VQA-T [71] | 49.60 | 51.49 | 63.19 | 52.32 |
| ATP [72] | 53.10 | 50.20 | 66.80 | 54.30 |
| VGT [73] | 52.28 | 55.09 | 64.09 | 55.02 |
| MIST [74] | 54.62 | 56.64 | 66.92 | 57.18 |
| ViperGPT [21] | - | - | - | 60.00 |
| AssistGPT | 60.02 | 51.38 | 67.26 | 58.36 |

Table 4: Comparison of our AssistGPT with SOTAs on the NExT-QA dataset.
Figure 4: Qualitative results on A-OKVQA (Question 1) and NExT-QA dataset (Question 2).
1, the model's first attempt did not yield effective results, and it autonomously optimized the plan because it did not pass the self-check. We also present the result of the reason-only baseline: it first calls InstructBLIP to output a caption, then uses GPT-4 for inference. Since the information in the caption does not meet the requirements, it produces incorrect results; moreover, once the prediction fails, the model has no way to self-optimize. It's worth mentioning that the most significant feature of our method is that it can solve problems more complex than those in the benchmark, as in the example in Fig. 1. We show more in-the-wild examples in the Appendix.
## 5 Conclusions and Limitations
In this paper, we propose a novel multi-modal assistant system named AssistGPT that leverages an interleaved code and language reasoning approach, namely Plan, Execute, Inspect, and Learn (PEIL). This innovative system integrates LLMs with various tools to address the challenges posed by complex visual-based tasks. Our experimental results on the A-OKVQA and NExT-QA benchmarks demonstrate AssistGPT's effectiveness. Furthermore, we showcase our system's ability to handle diverse and intricate real-world scenarios. Our system also has some limitations. Our approach does not propose an end-to-end updating solution, which is crucial when the tools used make mistakes. Another limitation is that the planning process requires an extensive explanation of tools, resulting in a relatively large overhead; this could be improved by distilling a smaller planner.
## Appendix
In the appendix, we provide additional details for the main paper:
* More discussion with existing modular systems in Sec. A.
* More details of AssistGPT in Sec. B.
* More qualitative results of A-OKVQA in Sec. C.
* More in-the-wild examples in Sec. D.
## Appendix A Discussion with Existing LLM-driven Modular Systems
In Table 5, we compare the existing LLM-driven modular systems with our AssistGPT along four dimensions. From the perspective of Task Focus, there are currently three works that can handle videos: HuggingGPT [17], MM-ReAct [61], and ViperGPT [21]. HuggingGPT and MM-ReAct merely demonstrate their capabilities in handling videos through a few simple examples (thus we mark them with orange checkmarks). For instance, HuggingGPT exhibits its video generation feature, while MM-ReAct showcases its ability to perform tasks such as summarization and localization based
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Task Focus} & \multicolumn{2}{c}{Reasoning} & \multicolumn{2}{c}{Source Management} & \multirow{2}{*}{Learning} \\ \cline{2-4} \cline{5-6} \cline{7-8} & NLP & Image & Video & Format & ReAct & Input format & Method & \\ \hline Toolformer [25] & ✓ & ✗ & ✗ & lang. \& prog. & ✓ & text-only & - & ✓ \\ WebGPT [75] & ✓ & ✗ & ✗ & program & ✓ & text-only & - & ✓ \\ \hline Visual ChatGPT [8] & ✗ & ✓ & ✗ & language & ✗ & multi V. & Filename & ✗ \\ ViperGPT [21] & ✗ & ✓ & ✓ & program & ✗ & single V. & Variable & ✗ \\ ViSProg [22] & ✗ & ✓ & ✗ & program & ✗ & single V. & Variable & ✗ \\ MM-ReAct [61] & ✗ & ✓ & ✓ & language & ✓ & multi V. & Filename & ✗ \\ Chameleon [19] & ✓ & ✓ & ✗ & language & ✗ & single V. & Cache update & ✗ \\ HuggingGPT [17] & ✗ & ✓ & ✓ & language & ✗ & multi V. & Filename & ✗ \\ \hline AssistGPT (ours) & ✗ & ✓ & ✓ & lang. \& prog. & ✓ & multi V. & Inspector & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Comparison of existing LLM-driven modular systems. We compare existing methods from four dimensions: Task Focus, Reasoning Method, Source Management (how they manage input and intermediate results), and whether they have learning capabilities. The term "ReAct" in the table does not strictly refer to using the ReAct [24], but rather it denotes planning and executing concurrently.**
on subtitles. However, these methods have not been validated on any benchmark. ViperGPT can handle questions based on visual content. Compared to these works, AssistGPT is capable of dealing with more complex and general video question-answering tasks, including understanding subtitles, visual content, and OCR, and demonstrating long video comprehension capabilities.
**Reasoning.** In terms of reasoning, existing Multi-modal models primarily adopt a reason-only style, that is, directly deriving the solution steps based on the question. This approach struggles with handling complex visual inputs, and when the intermediate results don't meet expectations, the model also finds it hard to self-correct. MM-ReAct introduces the original ReAct for reasoning in Multi-modal tasks, but due to the original ReAct's inadequacy in dealing with complex non-text intermediate results, its current planning scheme for addressing video-related issues is basically two steps: extracting all information from the video and then having an LLM answer the question. In contrast, this paper proposes a more general Plan, Execute, Inspect, and Learn (PEIL) reasoning scheme. In the case of complex videos, our interleaved language and code reasoning approach allows for flexible language planning for the next step, and structured code for invoking input and intermediate results, thereby facilitating the handling of complex questions and visual content.
**Source Management.** Handling complex input and a large number of intermediate results is often crucial in complex reasoning processes. Current language-based reasoning methods mainly use filenames to label resources. Chameleon proposes an update mechanism with a cache that constantly updates the current reasoning results. Program-based reasoning, on the other hand, uses variables to store intermediate results. A deficiency of these methods is the inability of the language-based Planner to quickly comprehend the content of visual sources, which impedes the effective use of different sources to complete different subtasks. As a result, existing work struggles to handle flexible inputs and intermediate results. Even though some works support multiple visual sources as input, these sources are more often batch-processed for similar tasks, with each source requiring similar operations. For instance, in HuggingGPT, the task of calculating the sum of the number of zebras in several images involves counting the number of zebras in each image. In contrast, our work introduces the Inspector, which records the metadata and summary of each visual source and provides them to the Planner for reasoning. This design can support complex inputs. For example, given a user-view image that describes the current user's problem and a reference video as a source of knowledge, AssistGPT can use these two different types of sources to jointly answer the user's question.
**Learning.** Most multi-modal modular systems lack the capability for continuous optimization. This paper proposes a simple update mechanism that allows the model to self-check the reasonableness of its output and ultimately continues to collect in-context learning examples.
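The self-check-and-collect mechanism described above can be sketched as a simple retry loop. All names below (`plan`, `execute`, `self_check`) are hypothetical stand-ins for the Planner, Executor, and Learner roles, not the paper's actual implementation:

```python
# Hedged sketch of a self-checking reasoning loop that collects
# in-context examples from successful runs. `plan`, `execute`, and
# `self_check` are assumed callables supplied by the caller.

def solve_with_learning(question, plan, execute, self_check, examples, max_retries=2):
    """Retry planning until the output passes a self-check; keep passing
    traces in `examples` as in-context demonstrations for future queries."""
    feedback = None
    answer = None
    for _ in range(max_retries + 1):
        trace = plan(question, examples, feedback)   # language + code plan
        answer = execute(trace)                      # run the planned tool calls
        ok, feedback = self_check(question, trace, answer)
        if ok:
            examples.append((question, trace))       # collect in-context example
            return answer
    return answer  # best effort after exhausting retries
```

Successful (question, plan) pairs accumulate in `examples` and could be prepended to future planning prompts as in-context demonstrations.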
## Appendix B More details of AssistGPT
In Table 6, we show the invoke commands and illustration of each module in AssistGPT. We provide more details of how each module is implemented.
* **Image Caption**: The core model of this module is a text-conditioned captioning model, e.g., BLIP2 [54], InstructBLIP [11], similar to an open-ended Visual Question Answering model.
[Table 6: Invoke command and illustration of each module in AssistGPT — columns: Module, Invoke Command, Illustration; the table body was garbled in the source.]
* **Video Narration**: As the general video captioning models are not yet mature, we currently use the image captioning model [54, 11] to accomplish this function. Specifically, we sample image frames (1/3 FPS for current implementation) and perform text-conditioned captioning on each frame. We employ text-conditioned captioning because, if we use dense captioning, the output text will be excessively abundant, making it difficult for subsequent models to utilize. The Video Narration feature can also optionally read the OCR content within the frames. The extracted OCR will be appended to the caption of each frame.
* **Object Detection**: The main function of this module is to determine whether the image contains the objects mentioned in the query and to address counting-related questions. Thus, it contains an open-set object detection model, e.g., Grounding DINO [64], which can output the bounding boxes of relevant objects based on the query. We also let the module calculate the number of related objects.
* **Text Detection**: This model is used to extract OCR from images, and the extracted text is returned to the Planner. We use Google OCR to achieve this purpose.
* **ASR Translation**: This model is used to convert audio from a video into text. We use OpenAI's open-source ASR (Automatic Speech Recognition) model, Whisper [65], to accomplish this. The detected ASR organizes timestamps and text in a manner similar to subtitles. In the implementation, we automatically run this module as soon as we receive a video with audio.
* **Region Ground**: The purpose of this module is to find a specific area of an image based on the query. We use the OFA-Large [66], which is fine-tuned on RefCOCO, to achieve it.
* **Narration Ground**: This module's function is to find time segments related to the query based on the video's narration. We propose two implementations: 1) We use GPT-4 [5], taking the video's narration and the query as the prompt, to output the timestamps of the relevant time segments. 2) Another solution is to use CLIP [67]: we split the video into several segments, calculate the similarity between the frames in each segment and the query, and output the timestamps of the segment with the highest similarity. In our preliminary experiments, the first solution showed better interpretability and generalization ability, so it was adopted in the benchmark evaluation.
* **Text Ground**: The purpose of this model is to locate specific areas of an image that correspond to a certain text. This capability can guide users in identifying crucial information in complex, text-rich images, such as user interfaces. The query format is text[:object_name], wherein text signifies the text to be located, and object_name (which is optional) is used to locate the text on a specific object, for instance, "menu: button". Specifically, the model operates in two stages: 1) Based on the Optical Character Recognition (OCR) detection results, the model identifies areas of the image that match the text segment of the query. This is achieved by calculating the distance between the query and the OCR extracted, and when the edit distance is below a particular threshold, it is considered a match. 2) If more than one textual area is identified, we further refine the results based on the object's name. We employ the Semantic Segment Anything (SSA) [63] to segment the image semantically, identifying regions that match the object's name mentioned in the query.
* **Subtitle Ground**: This model is similar to the narration grounding model, but it uses the video's subtitles as input instead of the narration. Thus, we also use GPT-4 to achieve it.
* **Knowledge Reason**: The purpose of this model is to enable the model to apply external knowledge to answer questions. We currently do not connect to the internet to retrieve knowledge, but use the knowledge that GPT-4 has itself learned. Specifically, this model enables GPT-4 to use its own knowledge to infer the answer based on the question and results of all previous reasoning steps.
* **Narration Reason**: The aim of this module is to infer some information based on the visual content of the video. This module also uses GPT-4, taking the query and the input video's narration as prompts, to infer the answer.
* **Subtitle Reason**: The aim of this module is to infer some information based on the subtitle of the video. It is similar to Narration Reason, but takes the input video's subtitle and query as prompts, to infer the answer.
* Temporal relation words include two types. The first is absolute temporal relation words, such as "in the middle/beginning/end of the video". The second is relative temporal relation words, such as "before" and "after". For the first type, we divide the video into 5 segments and output the timestamps of the segment corresponding to the temporal_word. For the second type, we divide the video into 8 segments and, according to the input timestamps, output the timestamps of the segment before or after it. The current hyperparameters for dividing video clips are still preliminary; it would be better to segment the video semantically with a model and then perform temporal reasoning in the future.
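The OCR-matching stage of the Text Ground module can be illustrated with a plain edit-distance filter. The threshold value and function names here are our assumptions; the paper does not specify them:

```python
# Illustrative sketch of Text Ground stage 1: match a query text against
# OCR-detected regions by edit distance. Threshold and names are assumed.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def match_ocr_regions(query: str, ocr_results, threshold=2):
    """ocr_results: list of (text, bbox) pairs. Keep regions whose text is
    within `threshold` edits of the query (case-insensitive)."""
    q = query.lower()
    return [bbox for text, bbox in ocr_results
            if edit_distance(q, text.lower()) <= threshold]
```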
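The segment-based heuristic for temporal relation words can be sketched as follows; the exact word lists and boundary handling are our assumptions, not the paper's:

```python
# Minimal sketch of the temporal-word heuristic: absolute words map to one
# of 5 equal segments; relative words step one segment in an 8-way split.

def ground_temporal_word(word, duration, ref_span=None):
    """Return a (start, end) span in seconds for a temporal relation word.
    `ref_span` is the (start, end) of the reference event for relative words."""
    absolute = {"beginning": 0, "middle": 2, "end": 4}  # index into 5 segments
    if word in absolute:
        seg = duration / 5
        i = absolute[word]
        return (i * seg, (i + 1) * seg)
    # relative words: divide into 8 segments, step before/after the reference
    seg = duration / 8
    start, end = ref_span
    if word == "before":
        return (max(0.0, start - seg), start)
    if word == "after":
        return (end, min(duration, end + seg))
    raise ValueError(f"unknown temporal word: {word}")
```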
## Appendix C Qualitative Results in A-OKVQA
In Figure 5, we showcase a successful instance along with several failure examples, illustrating the most frequent error patterns in A-OKVQA. As is evident, AssistGPT can produce highly interpretable answer processes. Moreover, even in cases where the questions are answered incorrectly, there are relatively reasonable explanations provided. In the following, we illustrate the common error patterns in detail:
* **Undesired output format**: For Direct Answer questions, like Q2, the results of AssistGPT are the same as the correct answers in meaning, but the expression is different, which would be considered as incorrect under the existing metrics.
* **Fine-grained recognition**: The recognition of fine-grained categories of some objects is still not well done by existing visual models, resulting in the incorrect final answer. For example, AssistGPT didn't successfully recognize cough drops in Q3.
* **Pose-to-text**: Currently, there are very few models that can map the fine-grained poses or actions of people or animals to natural language. For example, capturing the upward jump of the cat in Q4 is a challenge. AssistGPT currently does not incorporate a related model to grasp such information; instead, it makes predictions based on the objects surrounding the cat.
* **Inconsistent reasoning**: Despite AssistGPT having some self-error correction mechanisms, it occasionally exhibits inconsistencies in its reasoning process, which can lead to final inaccuracies. For instance, in Q5, the model initially identifies the orange vehicle as a truck, but in subsequent steps, it is referred to as a shuttle bus. Unfortunately, AssistGPT fails to detect this inconsistency and does not proceed to make necessary corrections.
## Appendix D In-the-wild Prediction Examples
We show some examples of AssistGPT handling in-the-wild scenarios in Figure 6 and Figure 7. From various in-the-wild examples, it's clear that AssistGPT can adeptly handle a range of video types, be it dense, subtitled instructional videos (Q2, Q3) or those featuring rich visual content with sporadic on-frame text (Q1, Q4, Q5). Impressively, when faced with high-level queries (Q2 and Q3), the model exhibits a capacity to strategically locate useful content, accurately identify the correct responses, and offer comprehensive, multimodal answers. A notable self-error-correction capability is also evident during its reasoning process, as demonstrated in Q2: the narration model was unable to generate meaningful narrations, so AssistGPT opted to utilize the subtitles to answer the question.
Moreover, in Q5, we highlight that our model can effectively process multiple video inputs serving different functions. This includes a User view image and a couple of reference videos. It's important to note that our model can accommodate any number of inputs. Consequently, with the incorporation of a YouTube video search function, the model could autonomously seek out several reference videos and then cross-reference them to discern the user's intent.
In summary, we want to emphasize that AssistGPT is a comprehensive multi-modal assistant system, capable of managing a wide array of real-world application queries that are far more complex and comprehensive than the samples provided in benchmarks.
Figure 5: **Reasoning process of AssistGPT on A-OKVQA.** The choice colored green in the question indicates the ground truth.
Figure 6: **The reasoning process of AssistGPT when handling in-the-wild questions.**
Figure 7: **The reasoning process of AssistGPT when handling in-the-wild questions.** |
2310.04944 | Beyond Text: A Deep Dive into Large Language Models' Ability on
Understanding Graph Data | Large language models (LLMs) have achieved impressive performance on many
natural language processing tasks. However, their capabilities on
graph-structured data remain relatively unexplored. In this paper, we conduct a
series of experiments benchmarking leading LLMs on diverse graph prediction
tasks spanning node, edge, and graph levels. We aim to assess whether LLMs can
effectively process graph data and leverage topological structures to enhance
performance, compared to specialized graph neural networks. Through varied
prompt formatting and task/dataset selection, we analyze how well LLMs can
interpret and utilize graph structures. By comparing LLMs' performance with
specialized graph models, we offer insights into the strengths and limitations
of employing LLMs for graph analytics. Our findings provide insights into LLMs'
capabilities and suggest avenues for further exploration in applying them to
graph analytics. | Yuntong Hu, Zheng Zhang, Liang Zhao | 2023-10-07T23:25:22Z | http://arxiv.org/abs/2310.04944v1 | # Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data
###### Abstract
Large language models (LLMs) have achieved impressive performance on many natural language processing tasks. However, their capabilities on graph-structured data remain relatively unexplored. In this paper, we conduct a series of experiments benchmarking leading LLMs on diverse graph prediction tasks spanning node, edge, and graph levels. We aim to assess whether LLMs can effectively process graph data and leverage topological structures to enhance performance, compared to specialized graph neural networks. Through varied prompt formatting and task/dataset selection, we analyze how well LLMs can interpret and utilize graph structures. By comparing LLMs' performance with specialized graph models, we offer insights into the strengths and limitations of employing LLMs for graph analytics. Our findings provide insights into LLMs' capabilities and suggest avenues for further exploration in applying them to graph analytics.
## 1 Introduction
In recent years, there have been unprecedented advancements in large language models (LLMs) [30; 21] such as Transformers [35], BERT [9], GPT [5], and their variants. LLMs can be treated as foundation models that can be readily applied to diverse downstream tasks with little adaptation [5; 18; 21]. These models have achieved state-of-the-art results on many natural language processing tasks including text classification, machine translation, sentiment analysis, and text summarization [47]. Significantly, advancements in architectures and training methodologies have given rise to emergent capabilities, setting state-of-the-art models like GPT-3.5 [5], GPT-4 [28], Claude-2 [3], BARD [12], LlaMA [33], and LlaMA-2 [34] apart from their predecessors. For instance, in-context learning [24] and zero-shot capabilities [18; 39] enable these models to generalize across tasks for which they were not explicitly trained. This is confirmed by their excellent performance in complex activities such as mathematical reasoning and Question Answering (QA) systems.
However, most of the tasks that Large Language Models (LLMs) surpassed previous benchmarks are Natural Language Processing (NLP) tasks involving sequential data. Graph-structured data presents additional complexity beyond sequences as it contains rich topological connections between entities that must be modeled along with node, edge, and graph attributes. Graph-structured data is ubiquitous across many domains, including social networks [26; 20], knowledge graphs [29], molecular structures [41; 45; 37], and transportation networks [4; 46]. While LLMs have shown powerful reasoning and generalization capabilities in sequential data, it remains unclear if they can handle structural information beyond context when applied to graph-structured data. This raises a compelling research question: Can the strengths of LLMs be extended to graph-structured data, enabling them to exhibit significant predictive ability? Further, can they compete with state-of-the-art models specialized for graph data, such as Graph Neural Networks (GNNs)?
To comprehensively study the capabilities of LLMs on graph-structured data, we conduct a series of empirical experiments with leading LLMs on diverse graph-based tasks that span node-, edge-, and graph-level predictions. By comparing their performance to specialized graph models like GNNs, we aim to assess the potential strengths and limitations of LLMs in this domain. Critically, by altering the input prompt formats, we aim to evaluate how effectively LLMs can extract and leverage the underlying structural information from the graph to enhance their performance in subsequent tasks. Additionally, we explore the importance of the structural data across different task dimensions spanning node, edge, and graph levels as well as diverse dataset domains such as citation networks, social networks, and chemical networks.
Broadly, this paper focuses on studying the central question of investigating the capabilities of LLMs on graph-structured data from three perspectives:
* **Can LLMs effectively process graph analytics tasks even without explicit graph structure?** Given that LLMs have already shown the capability to leverage contextual information for human-like reasoning in many NLP tasks, it becomes intriguing to assess whether they can attain substantial predictive performance on graph data tasks, even in the absence of structural information.
* **How well can LLMs interpret graph structures to enhance downstream task performance?** It is essential to investigate to what extent LLMs can perceive and interpret important graph structures. Furthermore, it is imperative to understand whether such recognition can influence and enhance performance in subsequent tasks.
* **How do task dimensions and dataset domains affect LLMs' ability to handle structured data?** LLMs' ability to identify pivotal structural information for predictions can be influenced by the specific task and data domain. For example, node-level tasks may rely heavily on interpreting entity attributes, while graph-level tasks may demand a comprehensive understanding of intricate inter-node interactions. Also, the distinct topological properties of various dataset domains, whether derived from intricate social networks or sophisticated molecular structures, further influence how proficiently LLMs decipher and manage structured data.
The subsequent sections of this paper are structured as follows: We initiate with an extensive literature review, highlighting the recent advancements of LLMs within graph domains. Subsequent to this, we present our comprehensive findings on benchmarking LLMs on graph data, aiming to address the aforementioned research questions. This is accompanied by a detailed discussion, delving into the depth of our discoveries across varied experimental setups. We conclude by summarizing the key points and proposing ideas for future explorations.
## 2 Related Works
Large language models for graph-structured data.In recent literature, a few preliminary studies [44; 7; 40; 13] have made attempts to uncover the potential of LLMs in handling graph-structured data. Unfortunately, a comprehensive examination of LLMs' capacity to extract and harness crucial topological structures across diverse prompt settings, task levels, and datasets remains underexplored. Both Chen et al. [7] and Guo et al. [13] proposed to apply LLMs directly to graph data. Their research primarily focuses on the node classification task, is constrained to a few selected datasets within the citation network domain, and thereby fails to offer a thorough exploration of LLMs' abilities over diverse task levels and datasets. In addition, Ye et al. [44] fine-tuned LLMs on a designated dataset to outperform GNNs, underscoring a distinct research objective compared to our study, which emphasizes the intrinsic proficiency of LLMs in understanding and exploiting graph structures. Meanwhile, Wei et al. [40] treated LLMs as autonomous agents within graph data, which is less relevant to the core focus of our paper.
Graph neural networks.In recent years, graph neural networks (GNNs) [16; 8; 27; 11; 14; 42; 25; 45; 20; 6; 38; 2] have emerged as a powerful deep learning approach for graph analysis and learning. GNNs operate by propagating information along edges of the graph and aggregating neighborhood representations for each node. The expressive power of GNNs to learn from graph structure makes them well-suited for analyzing complex relational data [42; 48; 22]. Unlike standard deep neural networks which operate on regular grids, GNNs can leverage the topological structure of graphs and have achieved state-of-the-art performance on tasks such as node classification [16], link prediction [19], and graph classification [10].
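The neighborhood-aggregation idea behind GNNs can be illustrated with a minimal, framework-free sketch of one propagation step. Real GNNs additionally apply learned weight matrices and non-linearities; this only shows the aggregation:

```python
# One message-passing step: each node averages its neighbors' feature
# vectors and combines the result with its own features (simple average
# here, in place of a learned update function).

def message_passing_step(features, adjacency):
    """features: {node: [float, ...]}, adjacency: {node: [neighbor, ...]}."""
    updated = {}
    for node, feat in features.items():
        neighbors = adjacency.get(node, [])
        if neighbors:
            dim = len(feat)
            agg = [sum(features[n][d] for n in neighbors) / len(neighbors)
                   for d in range(dim)]
        else:
            agg = [0.0] * len(feat)
        # combine self and neighborhood information
        updated[node] = [(s + a) / 2 for s, a in zip(feat, agg)]
    return updated
```

Stacking k such steps lets each node's representation depend on its k-hop neighborhood, which is the structural signal the benchmarked LLMs must instead read from a textual prompt.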
Experiments
Datasets.We conducted the experiments on 5 commonly used graph benchmark datasets for node-level, edge-level and graph-level tasks: Cora [32], Pubmed [32], OGBN-arxiv [15], WordNet18 [23] and Reddit [14]. Brief descriptions of the datasets are shown in Table 1.
We selected these five datasets for our preliminary experiments due to the rich contextual information present in the attributes of their nodes, edges, and graphs. Specifically, Cora, Pubmed and OGBN-arxiv are citation networks, where each node represents a research paper and an edge between two nodes indicates a citation relationship between them. An edge in WordNet18 links two synsets, which are regarded as nodes. Reddit is built from Reddit posts, in which each node represents a post and two nodes are connected if the same user comments on both posts. The specifics regarding their textual features are as follows:
* Cora: Each node represents a paper in the domain of Artificial Intelligence, containing the information about its title and abstract. Each paper belongs to one of the following 7 categories: ['Case_Based', 'Theory', 'Genetic_Algorithms', 'Probabilistic_Methods', 'Neural_Networks', 'Rule_Learning', 'Reinforcement_Learning']. An edge from one node to another indicates the first paper cited the second one.
* Pubmed: Each node represents a scientific publication from PubMed database pertaining to diabetes. The node textual information contains keywords from its abstract and text body. Each paper belongs to one of the following 3 categories: ['Diabetes Mellitus, Experimental', 'Diabetes Mellitus Type 1', 'Diabetes Mellitus Type 2']. An edge from one node to another indicates the first paper cited the second one.
* OGBN-arxiv: Each node represents a research paper, containing the information about its title and abstract. Each paper belongs to one of 40 categories on arxiv.cs such as 'AI' (Artificial Intelligence). An edge leading from one node to another signifies that the first paper cites the second one.
* WordNet18: Each node represents a synset, containing a description. An edge between two nodes indicate their relation such as 'furniture', 'includes', or 'bed'. Each edge belongs to one of 18 relationships.
* Reddit: Each node corresponds to a post made by a user, which contains descriptions or discussions about a particular topic. Each graph symbolizes a subreddit (or community), belonging to one of 29,651 distinct communities, for instance, 'math'.
Choices of LLMs.We opted to utilize OpenAI's state-of-the-art models, GPT-3.5 (GPT) and GPT-4, via their API, balancing performance and cost considerations. We adopted the latest versions (_gpt-3.5-turbo-16k_ and _gpt-4_) in our experiments.
Implementation Details.For node classification task, we follow the same train-test split of Cora, Pubmed and OGBN-arxiv as established in semi-supervised GNN methods [16, 42]. For link prediction on Cora, Pubmed and WordNet18, a random 15% of the links from the graph and the same number of negative-edge node pairs are packed into the test sets. For graph classification,
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Dataset** & **\#Node** & **\#Edge** & **\#Task** & **Metric** \\ \hline Cora & 2,708 & 5,278 & 7-class node classifi. \&Link Prediction & Accuracy \\ Pubmed & 19,717 & 44,324 & 3-class node classifi. \&Link Prediction & Accuracy \\ OGBN-arxiv & 169,343 & 1,166,243 & 40-class node classifi. & Accuracy \\ \hline
**Dataset** & **\#Entity** & **\#Relation** & **\#Task** & **Metric** \\ \hline WordNet18 & 40,943 & 18 & 18-class link classifi. & Accuracy \\ \hline
**Dataset** & **\#Node** & **\#Subgraph** & **\#Task** & **Metric** \\ \hline Reddit & 3,848,330 & 29,651 & 70-class subgraph classifi. & Accuracy \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the datasets. Reddit actually contains 29,651 subreddits (classes); here we randomly sampled 70 communities for the graph classification task in each run.
in each run, we randomly selected 70 communities. Experiments conducted on WordNet18 and the retrieval test for Cora employed few-shot prompts, while all other experiments used zero-shot prompts. We executed each experiment three times and averaged the results.
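The link-prediction test-set construction described above (a random 15% of edges as positives plus an equal number of negative node pairs) can be sketched as follows; seeding and the undirected treatment of edges are our assumptions:

```python
import random

def make_link_prediction_test(nodes, edges, frac=0.15, seed=0):
    """Sample `frac` of the edges as positive test links and an equal
    number of non-adjacent node pairs as negatives (undirected view).
    Assumes the graph is sparse enough that negatives are easy to find."""
    rng = random.Random(seed)
    edges = list(edges)
    k = max(1, int(len(edges) * frac))
    positives = rng.sample(edges, k)
    existing = {frozenset(e) for e in edges}
    negatives = set()
    while len(negatives) < k:
        u, v = rng.sample(list(nodes), 2)
        if frozenset((u, v)) not in existing:
            negatives.add(tuple(sorted((u, v))))
    # label 1 for real edges, 0 for sampled non-edges
    return [(u, v, 1) for u, v in positives] + [(u, v, 0) for u, v in negatives]
```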
Comparison GNN Methods.On node-level tasks, we choose the semi-supervised results from Graph Neural Network (GNN) [31], Graph Convolutional Network (GCN) [17] and Graph Attention Network (GAT) [36] to compare with the performance of LLMs. On edge-level tasks, we consider Graph Auto-Encoder (GAE) [1] and Graph InfoClust (GIC) [43]. It is worth noting that this is not an absolutely fair comparison, since LLMs operate under zero-shot or few-shot settings, whereas GNNs require a training set for parameter optimization. Additionally, potential data leakage during the LLMs' training process remains a concern. However, these studies aim to offer a foundational understanding of LLMs' proficiency in understanding graph data structures and forecasting downstream tasks.
### Node-level task
Driven by the goal of investigating LLMs' capabilities in discerning patterns within textual graphs and leveraging this for downstream tasks, we crafted three distinct prompts for our node-level prediction task experiments: (1) absence of graph topology descriptions; (2) straightforward presentation of all neighborhood data to the LLM; and (3) a retrieval-based prompt guiding the LLM to extract task-centric structural details. Examples of these prompts can be found in Table 6.
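As a rough illustration of the three prompt styles, the snippet below builds (1) a structure-free prompt, (2) one with plain neighborhood information, and (3) one with a retrieval-style instruction. The wording is ours; the paper's exact templates (its Table 6) are not reproduced here:

```python
# Hedged sketch of the three node-classification prompt variants.
# Template wording is illustrative, not the paper's actual prompts.

def build_prompt(title, abstract, labels, neighbors=None, retrieval=False):
    """Construct a node-classification prompt for a citation-network paper."""
    prompt = (f"Paper title: {title}\nAbstract: {abstract}\n"
              f"Classify the paper into one of: {', '.join(labels)}.\n")
    if neighbors:  # (2) plain neighborhood information
        prompt += "It cites the following papers:\n"
        prompt += "".join(f"- {t}\n" for t in neighbors)
    if retrieval:  # (3) retrieval-style instruction
        prompt += ("First list which of the cited papers are most relevant "
                   "to deciding the category, then answer.\n")
    return prompt
```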
**LLMs' zero-shot or few-shot ability on node classification tasks is still usually weaker than the semi-supervised GNN performance**. This arguably suggests that general LLMs still cannot surpass specialized graph models on the node classification task. As indicated in Table 2, GPT-4 outperforms the GNN models in zero-shot and few-shot settings on Pubmed, but this superiority isn't observed on Cora and OGBN-arxiv. We hypothesize three potential reasons for this
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{8}{c}{Number of labels} \\ \cline{2-10} & 1 & 5 & 10 & 15 & 20 & 30 & 40 & 50 & 60 & 70 \\ \hline
**GPT-3.5** & 0.773 & 0.662 & 0.618 & 0.594 & 0.628 & 0.604 & 0.638 & 0.536 & 0.618 & 0.507 \\ \hline
**GPT-4** & 0.957 & 0.886 & 0.895 & 0.843 & 0.795 & - & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average performance of GPT-3.5 and GPT-4 on Reddit when possible labels increase from 1 to 70. Results on GPT-4 with more labels are not available due to input limit of prompt length.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{Edge-level} \\ \cline{2-5} & Cora & Pubmed & WordNet \\ \hline
**GAE** & 0.793 & **0.923** & - \\ \hline
**GIC** & 0.812 & 0.775 & - \\ \hline
**GNN** & 0.739 & 0.528 & - \\ \hline
**GPT-3.5** & 0.512 & 0.116 & 0.134 \\ \hline
**GPT-4** & 0.578 & 0.132 & 0.169 \\ \hline
**GPT-3.5\({}^{\circ}\)** & 0.646 & 0.114 & - \\ \hline
**GPT-4\({}^{\circ}\)** & **0.967** & 0.143 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average accuracy on link prediction tasks. **GPT-3.5\({}^{\circ}\)** means adding structural information into the prompt as in Table 6. WordNet18 contains only triples, so there is no connected graph structure for it.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{Node-level} \\ \cline{2-4} & Cora & Pubmed & Arxiv \\ \hline
**GCT-64\({}^{*}\)** & 0.814 & 0.790 & 0.731 \\ \hline
**GAT\({}^{*}\)** & **0.832** & 0.790 & **0.742\({}^{*}\)** \\ \hline
**GNN** & 0.829 & 0.738 & 0.721 \\ \hline
**GPT-3.5** & 0.627 & 0.673 & 0.516 \\ \hline
**GPT-4** & 0.432 & 0.821 & 0.642 \\ \hline
**GPT-3.5\({}^{*}\)** & 0.647 & 0.712 & 0.509 \\ \hline
**GPT-4\({}^{*}\)** & 0.531 & **0.833** & 0.673 \\ \hline
**GPT-3.5\({}^{\circ}\)** & 0.656 & 0.704 & 0.445 \\ \hline
**GPT-4\({}^{\circ}\)** & - & 0.814 & - \\ \hline
**GPT-3.5\({}^{\bullet}\)** & 0.054 & - & - \\ \hline
**GPT-4\({}^{\bullet}\)** & 0.047 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average accuracy on node classification tasks. **[No structure information]** GPT-3.5 and GPT-3.5\({}^{*}\) denote the zero-shot and few-shot prompt strategies, respectively. **[With structure information]** GPT-3.5\({}^{\circ}\) and GPT-3.5\({}^{\bullet}\) denote prompts containing neighbors' information and retrieval requirements, respectively.
discrepancy: 1. Fewer categories; 2. Less semantic overlap between categories; 3. Questionable ground-truth categories. GPT's 'wrong' predictions on citation networks are mostly reasonable. In particular, papers in OGBN-arxiv labeled Computation and Language are often classified into other categories such as Artificial Intelligence and Machine Learning; the misclassified papers frequently mention many machine learning concepts in their abstracts. We also questioned whether the datasets are 'out of fashion' relative to the information in GPT's corpus. We prompted GPT to use pre-2000 categorization criteria on Cora, but this did not lead to improvements. Intriguingly, GPT-4 did not consistently surpass GPT-3.5 in predictive accuracy, hinting that the extent of pre-trained knowledge could significantly influence predictions in zero-shot scenarios.
**Incorporating structural information can slightly enhance the performance of GPT on node-level tasks.** As evidenced in Table 2, including neighborhood information improves node classification performance in certain instances. However, this improvement is not consistent across LLM choices and datasets, suggesting that structural information might not be a pivotal factor in node classification. Additionally, we assessed GPT's ability to retrieve information by incorporating retrieval requirements into the prompts for Cora. Regrettably, both GPT-3.5 and GPT-4 struggled significantly with this, rendering them largely unable to provide accurate predictions.
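To make the prompt templates of Table 5 concrete, the following minimal sketch assembles the zero-shot prompt and its neighbor-augmented variant programmatically. The helper names and example inputs are illustrative assumptions, not the code used in the experiments.

```python
def zero_shot_prompt(title, abstract, categories):
    """Zero-shot node-classification prompt (no structural information)."""
    return (
        f"The title of one paper is <{title}> and its abstract is <{abstract}>. "
        f"Here are possible categories: <{', '.join(categories)}>. "
        "Which category does this paper belong to? You can only output one category."
    )

def with_neighbors_prompt(title, abstract, categories, cited_by_titles):
    """Variant appending the titles of citing papers (structural information)."""
    return (
        zero_shot_prompt(title, abstract, categories)
        + f" This paper is cited by the following papers: <{'; '.join(cited_by_titles)}>."
    )

# Hypothetical example inputs, for illustration only.
prompt = zero_shot_prompt(
    "A Study of Graph Prompts",
    "We study prompting for graphs.",
    ["Machine Learning", "Computation and Language"],
)
print(prompt)
```

The returned string is then sent to the LLM; the few-shot variant of Table 5 simply prepends labeled (title, abstract, category) examples before the query.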
### Edge-level task
**Contrary to node-level tasks, the structural information of a graph seems to be more crucial for edge-level tasks.** When only node data, excluding neighborhood information, is presented to GPT, the link prediction accuracy on Cora stands at 51.2% for GPT-3.5 and 57.8% for GPT-4. Remarkably, these figures increase significantly, to 64.3% and 96.7% respectively, when we incorporate the nodes' neighbor information. Notably, the zero-shot result of GPT-4 even surpasses the performance of all trained GNN models. Some wrongly predicted links can be attributed to the absence of neighbor information for these nodes in the dataset. Table 6 illustrates the difference between the prompts used for link prediction. Interestingly, when we further increase the information about neighboring nodes (e.g., adding the abstract of an article), the prediction accuracy becomes worse than with titles alone. For the link prediction task on Word
\begin{table}
\begin{tabular}{l l l} \hline \hline Node-level Task & Structure? & Prompt to GPT \\ \hline
**Zero-shot** (Cora \& Pubmed \& OGBN-arxiv) & No & The title of one paper is \textless{}Title\textgreater{} and its abstract is \textless{}Abstract\textgreater{}. Here are possible categories: \textless{}Categories\textgreater{}. Which category does this paper belong to? You can only output one category. \\ \hline
& Yes & The title of one paper is \textless{}Title\textgreater{} and its abstract is \textless{}Abstract\textgreater{}. This paper is cited by the following papers: \textless{}Titles\textgreater{}. Each of these papers belongs to one of the categories in: \textless{}Categories\textgreater{}. You need to 1. Analyse the paper's topic based on the given title and abstract; 2. Analyse the pattern of the citation information based on their titles, and retrieve the citation information you think is important to help you determine the category of the first given paper. Now you need to combine the information from 1 and 2 to predict the category of the first given paper. You should only output one category. \\ \hline
**Few-shot** & Yes & Here is a list of labeled papers: The title and abstract of the first paper are \textless{}Title1\textgreater{} and \textless{}Abstract1\textgreater{} respectively, and this paper belongs to \textless{}Category1\textgreater{} \(\cdots\) Here is a new paper whose title is \textless{}Title\textgreater{} and its abstract is \textless{}Abstract\textgreater{}. Here are possible categories: \textless{}Categories\textgreater{}. Which category does this paper belong to? You can only output one category. \\ \hline \hline \end{tabular}
\end{table}
Table 5: Examples of different prompts used in node classification experiments. For few-shot tasks, we randomly sampled two papers for each category due to the limit of input length.
Net18, we randomly selected 5 entries for each relationship and presented this information to GPT. Unfortunately, both GPT-3.5 and GPT-4 struggled to achieve high predictive accuracy based on the provided data. A plausible explanation is GPT's heavy reliance on its pre-trained knowledge, especially when not fine-tuned for specific tasks.
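The two citation-link prompt variants of Table 6 differ only in whether the first paper's citation list is included. The following sketch makes that contrast explicit; the function and argument names are ours, for illustration.

```python
def link_prompt(title1, abs1, title2, abs2, cited_titles=None):
    """Link-prediction prompt; passing cited_titles adds the structural variant."""
    base = (
        f"There are two papers. Title of the first paper is <{title1}> and its "
        f"abstract is <{abs1}>. Title of the second paper is <{title2}>, and its "
        f"abstract is <{abs2}>. "
    )
    if cited_titles:  # structural information: the first paper's citation list
        base += f"The first paper cited the following papers: <{'; '.join(cited_titles)}>. "
    base += ("You need to predict whether the first paper cites the second paper "
             "or not. Answer 'YES' or 'NO'.")
    return base

# Hypothetical inputs for illustration.
p = link_prompt("Paper A", "abstract A", "Paper B", "abstract B",
                cited_titles=["Paper C", "Paper D"])
print(p)
```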
### Graph-level task
For graph-level tasks, we only conducted experiments on Reddit due to its text richness and semantic ambiguity. Only GPT-3.5 was tested, since the information for one community is large even after we summarize the information from each user. We selected the _top-k_ post summaries of the most-replied users as representative information for a community. As shown in Table 4, when GPT must choose from the full list of 70 communities, the accuracy is 50.7%; it decreases from 77.3% to 50.7% as the number of possible communities in the <SubReddits> list increases from 1 to 70.
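The representative-information step above can be sketched as follows; the data layout (tuples of user, reply count, post summary) and the function name are assumptions for illustration, not the experimental pipeline.

```python
from collections import Counter

def top_k_user_summaries(posts, k=3):
    """posts: iterable of (user, n_replies, summary).
    Returns the post summaries of the k users with the highest reply counts."""
    replies = Counter()
    summary_of = {}
    for user, n_replies, summary in posts:
        replies[user] += n_replies          # accumulate replies per user
        summary_of.setdefault(user, summary)
    top_users = [u for u, _ in replies.most_common(k)]
    return [summary_of[u] for u in top_users]

# Hypothetical community data.
posts = [("alice", 40, "summary A"), ("bob", 12, "summary B"),
         ("carol", 25, "summary C"), ("dave", 3, "summary D")]
print(top_k_user_summaries(posts, k=2))  # ['summary A', 'summary C']
```

The selected summaries are then concatenated into the <Posts> slot of the Table 7 prompt.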
## 4 Conclusion and Future Works
This research presented a systematic empirical evaluation of leading LLMs on diverse graph learning tasks spanning node, edge, and graph levels. Through careful variation of prompt design and dataset selection, we assessed the capabilities of models such as GPT-3.5 and GPT-4 in interpreting and leveraging graph structural information to enhance predictive performance. Our results demonstrate that while LLMs exhibit reasonable node classification capabilities even without explicit graph data, likely by relying on contextual clues, their zero-shot performance continues to lag behind state-of-the-art GNNs specialized for this domain. However, incorporating graph topology information can significantly boost performance on edge-level link prediction tasks, with GPT-4 even surpassing certain GNNs in select cases. On more complex graph classification tasks, limitations emerge in handling increased output complexity. In summary, this research provides valuable evidence that LLMs have promising capabilities on graph analytics, while also revealing clear areas for improvement compared to specialized graph models.
Our future work should explore more rigorous benchmarking of LLMs against specialized graph models, novel prompt designs that focus on topological structure, evaluation on additional graph tasks, and fine-tuning open-sourced LLMs on graphs. By exploring these
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} \hline Edge-level Task & Structure? & Prompt to GPT \\ \hline
**Zero-shot** (Cora \& Pubmed) & No & There are two papers. Title of the first paper is \textless{}Title1\textgreater{} and its abstract is \textless{}Abstract1\textgreater{}. Title of the second paper is \textless{}Title2\textgreater{}, and its abstract is \textless{}Abstract2\textgreater{}. You need to predict whether the first paper cites the second paper or not. Answer 'YES' or 'NO'. \\ \hline
& Yes & There are two papers. Title of the first paper is \textless{}Title1\textgreater{} and its abstract is \textless{}Abstract1\textgreater{}. Title of the second paper is \textless{}Title2\textgreater{}, and its abstract is \textless{}Abstract2\textgreater{}. The first paper cited the following papers: \textless{}Titles\textgreater{}. You need to predict whether the first paper cites the second paper or not. Answer 'YES' or 'NO'. \\ \hline
**Zero-shot** (WordNet18) & No & We define that two descriptions can have a relationship, such as furniture \textless{}includes\textgreater{} bed. There are some samples: \textless{}Entries\textgreater{}. Here are possible relations: \textless{}Relationships\textgreater{}. The first entry is \textless{}Entry1\textgreater{} and the second entry is \textless{}Entry2\textgreater{}. You must output only one relationship between these two entries. \\ \hline \end{tabular}
\end{table}
Table 6: Examples of different prompts used in link prediction experiments. Structural information plays an important role in link prediction task.
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} \hline Graph-level Task & Structure? & Prompt to GPT \\ \hline
**Zero-shot** & Yes & There are texts from representative users of one Reddit community: \textless{}Posts\textgreater{}. There are the following communities: \textless{}SubReddits\textgreater{}. Which community do these texts belong to? You should only output one community from the given communities. \\ \hline \end{tabular}
\end{table}
Table 7: Example prompt used in graph classification experiments. Structural information is given by a list of the _top-k_ important nodes of a graph.
avenues, the full potential of large language models for advancing graph representation learning and analytics can be realized.
# Resistive Hose Modes in Tokamak Runaway Electron Beams

A. P. Sainterme, C. R. Sovinec. 2023-08-23. arXiv:2308.11865 (http://arxiv.org/abs/2308.11865v1)
###### Abstract
Beams of energetic runaway electrons are generated during disruptions in tokamaks, and fluid models are used to study their effects on macroscale dynamics. Linear computations of a massless, runaway electron beam coupled to MHD plasma show that resistive hose instabilities grow faster than tearing modes at large resistivity. Eigenvalue results with reduced models of the resistive hose instability are compared with results from the full MHD and beam system, showing that the resistive hose decouples from any plasma response. An estimate of the plasma temperature at which growth of the resistive hose dominates tearing for post-disruption DIII-D plasma parameters is in a physically relevant regime.
The tokamak is the leading candidate for a magnetic confinement fusion reactor. It confines plasma in a toroidal configuration with an externally generated toroidal magnetic field and a poloidal magnetic field supported by toroidal electrical current that is driven through the plasma. Tokamaks are prone to disruptions--sudden dynamic events in which confinement of the plasma is lost and the plasma current is terminated. A typical disruption begins with a rapid loss of plasma thermal energy, a thermal quench (TQ), followed by a current quench (CQ). The decrease in temperature during the TQ increases the electrical resistivity of the plasma, increasing the electric field along magnetic fieldlines and accelerating a small population of electrons to high energies. Since the effective collision frequency decreases at higher energies, the resulting beam-like population of supra-thermal electrons is largely unimpeded by collisional drag. These runaway electrons (REs) can carry a substantial fraction of the initial plasma current and have caused considerable damage to plasma-facing components[1]. Understanding the dynamics of REs during disruptions is critical for the development of tokamak fusion reactors. For the ITER experiment, which will carry a plasma current of 15 MA, being able to mitigate RE-induced damage will be essential[2]. To that end, the development and characterization of computational tools to simulate RE dynamics in tokamaks is required.
One common method for modeling the macroscale effects of REs in tokamaks treats them as a separate cold beam-like fluid species in extended MHD simulations. The RE beam is treated as a source of resistance-free current density whose direction depends on the time-evolving magnetic field that interacts with a plasma consisting of thermal electrons and ions governed by single-fluid MHD equations. The consistency of this approximation presumes negligible inertia of both electron species, \(m_{e}\ll m_{i}\), \(\gamma_{r}m_{e}\ll m_{i}\), and a small particle density of REs \(n_{r}\ll\rho/m_{i}\). Here, \(\gamma_{r}\) is the relativistic factor associated with the runaway parallel velocity: \(\gamma_{r}=1/\sqrt{1-(c_{r}/c)^{2}}\). Since the RE current is essentially resistance free[1], the typical resistive MHD Ohm's law is modified:
\[\mathbf{E}+\mathbf{V}\times\mathbf{B}=\eta\mathbf{J}\;\rightarrow\;\mathbf{E}+ \mathbf{V}\times\mathbf{B}=\eta\left(\mathbf{J}-\mathbf{J}_{r}\right).\]
The runaway current density is subtracted from the total current density so that only the fraction of current carried by the bulk plasma is affected by resistivity.
Because the resistivity is modified, a natural question is the effect of the inclusion of REs on resistive MHD instabilities. Prior work with this model has explored the effect of a fluid runaway current on the linear behavior of the tearing mode in a sheet pinch [3], and tearing and resistive kink modes in a large aspect ratio cylinder [4][5]. Helander et al. additionally consider nonlinear saturation of the tearing mode[6].
The stability of high-energy, relativistic particle beams propagating through resistive, neutralizing background plasma has also been studied. The earliest analytical description of an instability in such a system is due to Rosenbluth [7]. Rosenbluth identifies a kink-type instability of a self-pinched beam of relativistic particles that couples the motion of the beam particles with the resistive response of the magnetic field. Weinberg generalizes the work of Rosenbluth first to a spatially modulated beam [8], where the instability is dubbed a 'hose' mode, and later, to a more general dispersion relation for the self-pinched equilibrium [9]. The hose mode has long been recognized in the accelerator community as an impediment to the propagation of beams, and controlling its growth with properly tailored beam profiles has been a subject of study[10].
In the resistive hose case, it is assumed that the plasma column surrounding the beam is generated by ionization of a dense gas by the beam particles. The beam is self-pinched, meaning that the self-induced magnetic field is azimuthal, and the equilibrium force balance condition requires that this field is sufficiently strong to provide the centripetal acceleration of the beam particle orbits
around the axis. Some relatively recent work includes the effect of an axial magnetic field, which modifies the equilibrium orbits of the beam particles[11]. In contrast, in the case of the RE beam generated in a tokamak, the background plasma carries current density before the RE population is generated. There is an existing MHD equilibrium with both axial (toroidal) and azimuthal (poloidal) magnetic field, and after its generation, the RE beam carries some substantial fraction of the equilibrium current density. Despite their differences, we show that important aspects of the resulting mathematical descriptions of the particle-beam and tokamak RE systems are equivalent.
In this letter, we draw the connection between the earlier results on resistive hose instabilities of particle beams and resistive instabilities in the context of RE modeling in tokamaks. It is notable that the dynamics of the RE beam in this work are treated under the assumption that the electrons have negligible mass and are guided by the tokamak magnetic fields in contrast to the particle-beam analysis, where inertia is important for the particle orbits. No analysis of the resistive hose in the literature has, heretofore, explored the massless limit. To check its relevance, we estimate that the resistive hose mode that appears in this work would grow faster than the tearing mode at post-TQ plasma temperatures of \(T_{e}\approx 1.5\) eV or less in DIII-D [12].
The model consists of a background plasma whose dynamics are governed by single-fluid, resistive MHD. In the present discussion, the effects of pressure are presumed to be negligible, as appropriate for post-TQ conditions. The MHD equations are augmented with a second fluid species that represents the collective low-frequency behavior of REs. The model for REs is an electron beam with a large constant speed parallel to the magnetic field. In the present work, the \(\mathbf{E}\times\mathbf{B}\) and curvature drifts are neglected. The RE beam dynamics are coupled to the resistive MHD equations via the modification to the Ohm's law noted earlier. Quasi-neutrality between the species is assumed so that there are no space-charge effects from the beam.
The dynamical equations are linearized about a steady-state solution, and equations are derived for the time-evolution of small perturbations from the equilibrium.
\[\partial_{t}n_{r}+\nabla\cdot\left(n_{r}\mathbf{U}_{r}+N_{r} \mathbf{u}_{r}\right)=0, \tag{1}\] \[\rho\partial_{t}\mathbf{v}=\mathbf{j}\times\mathbf{B}+\mathbf{J} \times\mathbf{b},\] (2) \[\partial_{t}\mathbf{b}=-\nabla\times\mathbf{e},\] (3) \[\nabla\times\mathbf{b}=\mathbf{j},\] (4) \[\mathbf{e}=-\mathbf{v}\times\mathbf{B}+\eta\left(\mathbf{j}- \mathbf{j}_{r}\right),\] (5) \[\mathbf{U}_{r}=-c_{r}\mathbf{B}/B,\] (6) \[\mathbf{u}_{r}=-c_{r}\mathbf{b}_{\perp}/B,\] (7) \[\mathbf{j}_{r}=-e\left(n_{r}\mathbf{U}_{r}+N_{r}\mathbf{u}_{r} \right),\] (8) \[\nabla\cdot\mathbf{b}=0. \tag{9}\]
Equations 1-9 describe the time dependence of the perturbations to the magnetic field, \(\mathbf{b}\), the bulk plasma velocity, \(\mathbf{v}\), and the runaway density, \(n_{r}\). The variables with capital letters and the mass density \(\rho\) represent the equilibrium fields, and the lowercase variables are the perturbations.
As mentioned previously, the impetus for this model is to capture the effect that a presence of a substantial amount of runaway electron current has on instabilities that are present in resistive MHD. An early analytic application of this model to tearing modes in slab geometry found that in the limit of zero inertia, stability of the tearing mode is unaffected[3]. More recent results using reduced MHD in cylindrical geometry also suggest that the runaway current does not impact the stability of the tearing mode, but that the nonlinear saturation is affected[6]. The analytic and numerical work in [4] finds an additional correction to the linear dispersion relation that results in overstability of the tearing mode.
We have implemented equations 1-9 in the NIMROD extended MHD code[13]. The linear equations are Fourier analyzed in the axial direction, and a high-order spectral element representation is used in the poloidal (\(r-\theta\)) plane. For a single axial Fourier harmonic, \(e^{ikz}\), the initial value problem is solved by numerically integrating the linear equations in time from an arbitrary initial condition. As time \(t\rightarrow\infty\), the solution approaches the most unstable mode, from which the growth rate and frequency are determined.
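The initial-value approach can be illustrated on a toy linear system: integrate forward in time from an arbitrary initial condition and extract the growth rate from the late-time slope of the log-norm. In the sketch below the \(2\times 2\) matrix is an arbitrary stand-in for the discretized operator, not the actual MHD system, and the forward-Euler stepping is a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 2.0],
              [0.0, -3.0]])        # toy operator with eigenvalues 0.5 and -3.0
b = rng.standard_normal(2)         # arbitrary initial condition

dt, n_steps = 1e-3, 20000
norms = []
for _ in range(n_steps):
    b = b + dt * (A @ b)           # forward-Euler step of db/dt = A b
    norms.append(np.linalg.norm(b))

# As t -> infinity the dominant mode wins; its growth rate is the slope of
# log|b| over the late part of the run.
t = dt * np.arange(1, n_steps + 1)
gamma = np.polyfit(t[-5000:], np.log(norms[-5000:]), 1)[0]
print(gamma)  # approximately 0.5, the largest eigenvalue of A
```

A complex frequency can be extracted analogously from the phase of a single component; the eigenvalue code described next provides an independent check.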
For cylindrical problems, the equations can be reduced to a set of coupled ordinary differential equations in the radial coordinate by also assuming a Fourier representation in the azimuthal angle, \(e^{im\theta}\) and a complex exponential dependence in time \(e^{\gamma t}e^{-i\omega t}\). To verify NIMROD results, a 1D eigenvalue code is used to numerically solve the resulting ordinary differential equations in \(r\) to determine \(\omega\) and \(\gamma\) given \(k\) and \(m\). The eigenvalue solver also uses spectral elements, albeit for the single spatial dimension \(r\)[14].
The plasma equilibrium considered here is a zero-\(\beta\) cylindrical screw pinch with a peaked current density profile. The equilibrium current density is assumed to be entirely carried by the RE beam. This equilibrium approximates the situation observed in tokamaks during the current-plateau phase that follows a TQ[15]. The safety factor \(q\) is given by \(q(r)=1.15(1+1.54(r/a)^{2})\). Note that \(q(0)>1\) and that the profile is monotonically increasing. The only MHD instabilities expected in the absence of runaway effects are tearing modes with \(m\geq 2\).
The cylinder is considered periodic in the axial coordinate \(z\). It has unit minor radius, \(a=1\); a perfectly conducting wall is placed at \(r=a\); and the RE beam radius is equal to the plasma radius. The axial field strength at \(r=0\) is \(B_{0}=1\). The density \(\rho\) is chosen so that the Alfven velocity \(V_{A}=1\) when \(B_{0}=1\), and \(c_{r}=20V_{A}\). This particular value of \(c_{r}\) is chosen as a representative example of \(c_{r}\gg V_{A}\). The resistivity is normalized to \(aV_{A}\). In the results that follow, the axial wavenumber is \(k=-0.1\).
Figure 1 plots the growth rate and frequency of the
fastest growing mode from NIMROD calculations of equations 1-9 as a function of varying resistivity values. Also plotted are the growth rates of the fastest growing mode in the absence of the RE beam, growth rates and frequencies computed with the M3D-C1 code[4], and growth rates and frequencies computed with our 1D eigenvalue code with \(m=2\). For values of \(\eta\lesssim 10^{-4}\), the fastest growing mode is the \(m=2\) tearing mode, which displays the expected localization around the \(q=2\) surface. For \(\eta>10^{-4}\), the fastest growing mode computed in NIMROD is a hose-like mode with \(m=1\). The appearance of this hose mode accounts for the distinct growth rate and frequency at \(\eta=10^{-3}\) reported by NIMROD. Figure 2 plots contours of the radial and azimuthal components of the solution for \(\eta=10^{-2}\).
The transition from the tearing mode to the hose mode is explicated by calculating each \(m\) separately with the eigenvalue code. Figure 3 plots the frequencies and growth rates for \(m=1\) and \(m=2\) as a function of resistivity. The shaded region denotes the range of \(\eta\) values where the hose mode grows faster than the tearing mode. It is clear from the eigenvalue code results that the hose mode scales linearly with resistivity for small \(\eta\), whereas the tearing mode scales as \(\eta^{3/5}\). The difference in scaling suggests that for equilibria that are unstable to both modes, there is a value of \(\eta\) at which the fastest growing mode transitions from tearing to the hose. The exact details of the transition depend on the equilibrium current profile, and quantifying this will require an analytic expression for the hose dispersion relation in this model. Deriving such an expression remains a task for future work.
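To illustrate the crossover implied by the two scalings, suppose \(\gamma_{hose}=c_{h}\eta\) and \(\gamma_{tear}=c_{t}\eta^{3/5}\); equating the two rates gives \(\eta^{*}=(c_{t}/c_{h})^{5/2}\). The prefactors in the sketch below are arbitrary illustrative values, not fits to the computed modes.

```python
# Illustrative prefactors (assumed, not fitted): gamma_hose = c_h * eta,
# gamma_tear = c_t * eta**(3/5).
c_h, c_t = 50.0, 0.1

# Crossover resistivity where the two growth rates are equal.
eta_star = (c_t / c_h) ** 2.5
print(eta_star)

# Check: below eta* the tearing mode is faster, above it the hose wins.
for eta, hose_faster in [(eta_star / 10, False), (eta_star * 10, True)]:
    assert (c_h * eta > c_t * eta ** 0.6) == hose_faster
```

The actual location of the transition depends on the equilibrium-dependent prefactors, which is why an analytic hose dispersion relation is needed.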
Since the resistive hose mode is driven by the interaction of the beam with the magnetic field, we can isolate the instability from the full system by simply neglecting the MHD velocity perturbation \(\mathbf{v}\). This is a good approximation when the background plasma response is on a much longer time scale than the hose mode dynamics. In making this simplification we also redefine the RE density variables as
\[\lambda\equiv\frac{ec_{r}n_{r}}{B}, \tag{10}\]
and
\[\Lambda\equiv\frac{ec_{r}N_{r}}{B}. \tag{11}\]
Then, when \(\mathbf{v}=0\), we can consider only the equations
\[\partial_{t}\lambda-c_{r}\frac{\mathbf{B}}{B}\cdot\nabla\lambda= \frac{c_{r}}{B}\nabla\cdot(\Lambda\mathbf{b}_{\perp}), \tag{12}\] \[\partial_{t}\mathbf{b}-\eta\nabla^{2}\mathbf{b}=\eta\nabla\times (\lambda\mathbf{B}+\Lambda\mathbf{b}_{\perp}). \tag{13}\]
Eigenvalues for equations 12 and 13 are computed with the same 1D eigenvalue code. Note that in this system of equations, \(V_{A}\) is not a relevant parameter.
For the particular equilibrium considered, both with the full MHD system and with \(\mathbf{v}=0\), the polarization of the perturbed magnetic field for the hose-like mode is primarily transverse to the axis of the cylinder, _i.e._, the axial component of the solution for the perturbed magnetic field is negligible: \(\mathbf{b}\cdot\hat{\mathbf{z}}\approx 0\). This observation motivates the introduction of a Clebsch representation for the perturbed magnetic field, \(\mathbf{b}=\nabla\psi\times\hat{\mathbf{z}}\). In this approximation, again assuming the dependence \(\psi=\psi(r)\exp(im\theta+ikz)\), equations 12 and 13 reduce to two scalar equations for the complex quantities \(\psi\) and \(\lambda\):
\[\partial_{t}\psi-\eta\nabla^{2}\psi = \eta\left(B_{z}\lambda+\Lambda\frac{B_{\theta}B_{z}}{B^{2}}\psi^ {\prime}\right) \tag{14}\] \[+\eta\frac{kr}{m}\left(1-\frac{B_{\theta}^{2}}{B^{2}}\right)\psi^ {\prime},\]
\[\partial_{t}\lambda-ic_{r}F\lambda = \frac{c_{r}}{B}\Lambda^{\prime}\frac{im}{r}\psi+ic_{r}F\Lambda \frac{B_{\theta}}{B^{2}}\psi^{\prime}, \tag{15}\]
where \(F\equiv(mB_{\theta}/r+kB_{z})/B\). Assuming an exponential time dependence results in a coupled eigenvalue problem with a second-order radial derivative operator acting on \(\psi\). As shown in figure 4, numerical solutions of these equations confirm that they are sufficient to reproduce the features of the hose instability observed in the solutions of equations 1-9.
The agreement between these three calculations confirms that the flow of the background plasma is unimportant and that the polarization of the magnetic field is predominantly transverse. Moreover, the form of equation 14 is equivalent to the field equation derived by Rosenbluth and Weinberg to describe the resistive hose mode[7][9].
The hose mode in these calculations does not extend beyond a given radius. Figure 5 shows radial profiles of the hose mode. It is clear that there is some radius \(\tilde{r}\) outside which the profiles of the solution for \(\lambda\) and \(\psi\) drop to zero. The value of \(\tilde{r}\) appears to be determined by the solution to \(\omega+c_{r}F(\tilde{r})=0\). Although there is no rational surface in the tearing sense of \(F(r)=0\), the equation for \(\lambda\) becomes singular in the rotating frame of the eigenmode at the radial location where \(\omega+c_{r}F(r)=0\).
We have shown that resistive hose-type modes associated with a RE beam interacting with a thermal background plasma grow faster than the typical resistive MHD tearing mode at large resistivity. The scaling of the hose mode is linear with the resistivity in the low resistivity regime, and the frequency of the mode is much larger than the growth rate. Since the hose mode is dominant when the background plasma resistivity is high, it may appear in post-TQ tokamak plasmas.
For the RE beam profile considered here, the hose mode grows faster than the tearing mode at a normalized resistivity of \(\eta/aV_{A}\approx 2\times 10^{-4}\). Ignoring for now whether or not this current density profile is representative of the experiment, we may use the plasma parameters from a DIII-D discharge reported in Reference [12] to estimate the temperature where this transition
occurs. Using \(a=0.5,V_{A}\approx 6.6\times 10^{6}\), and \(Z_{eff}=3\), the Spitzer formula produces a temperature estimate of \(T_{e}\approx 1.5\) eV. This is in the regime of temperatures reported for the post-TQ plasmas observed in DIII-D. Even in cases where the hose mode is subdominant to resistive MHD instabilities, it can introduce an \(m=1\) component into the fluctuation spectrum that would otherwise be unexpected from resistive MHD alone in cases where the minimum \(q\)-value is larger than unity.
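The arithmetic behind this estimate can be reproduced directly, under assumptions that are ours: the resistivity normalization includes \(\mu_{0}\) (i.e., \(\eta/\mu_{0}aV_{A}\)), the Coulomb logarithm is \(\ln\Lambda\approx 10\), and the parallel Spitzer resistivity is taken in the standard approximate form \(\eta\approx 5.2\times 10^{-5}\,Z_{eff}\ln\Lambda/T_{e}^{3/2}\ \Omega\,\mathrm{m}\) with \(T_{e}\) in eV.

```python
import math

eta_norm = 2e-4            # normalized resistivity at the hose/tearing transition
a, V_A, Z_eff = 0.5, 6.6e6, 3.0
lnLambda = 10.0            # assumed Coulomb logarithm

mu0 = 4e-7 * math.pi
eta = eta_norm * mu0 * a * V_A                       # de-normalize to Ohm-m
# Invert the approximate parallel Spitzer resistivity for T_e (in eV).
T_e = (5.2e-5 * Z_eff * lnLambda / eta) ** (2.0 / 3.0)
print(T_e)  # roughly 1.5 eV
```

Within the uncertainty of \(\ln\Lambda\), this recovers the \(T_{e}\approx 1.5\) eV figure quoted above.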
Future work remains to find cases in which an analytic dispersion relation may be found to compare with numerical results. We also intend to address the nonlinear dynamics of the hose mode and more thoroughly characterize its influence on RE beam formation and termination in tokamaks.
Figure 1: Comparison of the growth rates and frequencies of the fastest growing mode computed by NIMROD, M3D-C1, and the 1D eigenvalue code (\(m=2\)). In the NIMROD result, the resistive hose mode is the fastest growing at \(\eta=10^{-3}\).
Figure 5: Radial profiles of the most unstable eigenfunction of equations 14 and 15 for (\(m=1,k=-0.1\)) and \(\eta=10^{-2}\).
Figure 2: Contours of the radial and azimuthal components of the perturbed magnetic field associated with the growing mode observed at \(\eta=10^{-2}\).
###### Acknowledgements.
We wish to thank Dr. Chang Liu for providing data on the tearing mode results with runaways. This work is supported by the US DOE through grant DE-SC00180001.
# Federated Alternate Training (FAT): Leveraging Unannotated Data Silos in Federated Segmentation for Medical Imaging

Erum Mushtaq, Yavuz Faruk Bakman, Jie Ding, Salman Avestimehr. 2023-04-18. arXiv:2304.09327 (http://arxiv.org/abs/2304.09327v1)
###### Abstract
Federated Learning (FL) aims to train a machine learning (ML) model in a distributed fashion to strengthen data privacy with limited data migration costs. It is a distributed learning framework naturally suitable for privacy-sensitive medical imaging datasets. However, most current FL-based medical imaging works assume silos have ground truth labels for training. In practice, label acquisition in the medical field is challenging as it often requires extensive labor and time costs. To address this challenge and leverage the unannotated data silos to improve modeling, we propose an alternate training-based framework, Federated Alternate Training (FAT), that alters training between annotated data silos and unannotated data silos. Annotated data silos exploit annotations to learn a reasonable global segmentation model. Meanwhile, unannotated data silos use the global segmentation model as a target model to generate pseudo labels for self-supervised learning. We evaluate the performance of the proposed framework on two naturally partitioned Federated datasets, KiTS19 and FeTS2021, and show its promising performance.
Erum Mushtaq\({}^{\star}\) Yavuz Faruk Bakman\({}^{\star}\) Jie Ding\({}^{\dagger}\) Salman Avestimehr\({}^{\star}\)
\({}^{\star}\)University of Southern California, \({}^{\dagger}\)University of Minnesota
_Index terms_: Medical Image Federated Segmentation, Federated Semi-Supervised Learning, Semi-supervised Segmentation, Tumor Segmentation Learning
## 1 Introduction
In recent years, Federated Learning (FL) has been widely explored for medical applications [1]. However, most current works focus on supervised federated learning, where all silos have pixel-wise annotations available. In practical scenarios, pixel-level label acquisition for massive medical imaging datasets requires an expert radiologist and, therefore, can be time-consuming and expensive, so not all silos can afford it. Examples are silos from rural regions with limited expert resources. This has motivated us to study the research question: _How can a server leverage unannotated data silos, which have no labeled data, along with a few labeled data silos in a realistic non-independent and identically distributed (non-IID) FL regime, to improve the global model performance?_ Further, we focus on a more realistic scenario where the number of unannotated data silos can be larger than the number of annotated data silos.
Recently, the work of [2] studied this research problem and proposed a threshold-based self-supervised learning method to leverage unannotated data silos to segment COVID-19-affected regions. This work considered two data silos (one annotated and one unannotated). The work of [3] used the model bank approach to extract pseudo labels from all supervised silos' models at unannotated data silos. Given the large model sizes for the 3D medical datasets, the computation of pseudo labels using several models at unannotated silos can be computationally infeasible. Another related work [4] studied semi-supervised federated learning in a different setting where a server has labeled data and silos have unlabeled data.
To leverage unannotated data silos, we propose a new Federated Learning framework, Federated Alternate Training (FAT). We show that a straightforward application of centralized semi-supervised methods in FL may not yield optimal results. Also, alternate training of annotated data silos and unannotated data silos is more efficient than standard FedAvg training [5] of all silos in terms of aggregation cost per round. Finally, we compare our method with the state-of-the-art method [2] and show significant improvements over it.
## 2 Proposed Method
Federated Optimization focuses on a distributed optimization task where K nodes collaborate with each other to learn a global model with parameters \(\theta\) as shown below,
\[\min_{\theta}G(\mathcal{L}_{1}(\theta;X_{1},Y_{1}),...,\mathcal{L}_{K}(\theta ;X_{K},Y_{K})), \tag{1}\]
where \(\mathcal{L}_{i}(\theta;X_{i},Y_{i})\) represents node i's local loss function, \(X_{i}\) denotes the training data and \(Y_{i}\) represents the labels at node \(i\). \(G(.)\) can be any function, for example, \(G(.)\) aggregates the local objectives \((\sum_{k=1}^{K}\frac{N_{k}}{N}\cdot\mathcal{L}_{k}(\theta;X_{k},Y_{k}))\) in Federated Averaging algorithm [5], where \(N_{k}\) is the total number of training data samples at node \(k\) and \(\sum_{k=1}^{K}N_{k}=N\).
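The size-weighted aggregation used by Federated Averaging can be sketched as follows (an illustrative sketch, not the authors' implementation; the function name is ours):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg aggregation: theta = sum_k (N_k / N) * theta_k, where each
    client contributes its parameters weighted by its local sample count."""
    total = sum(client_sizes)
    # Accumulate into zero-initialized arrays shaped like one client's params.
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n_k in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n_k / total) * w
    return agg
```

A client holding three times as much data pulls the average three times as strongly toward its local parameters.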
### Problem Formulation
In a typical federated averaging setting, each node consists of annotated data (\(X_{i}\), \(Y_{i}\)). However, it is very unlikely that all
nodes have labeled data. Some nodes may not have any labels at all. In such a setting, the optimization problem becomes,
\[\begin{split}&\min_{\theta\in\mathbb{R}^{d}}G(\mathcal{L}_{1}( \theta;X_{1},Y_{1}),\mathcal{L}_{2}(\theta;X_{2},Y_{2}),...,\mathcal{L}_{S}( \theta;X_{S},Y_{S}),\\ &\mathcal{L}_{S+1}(\theta;X_{S+1}),\mathcal{L}_{S+2}(\theta;X_{S+ 2}),...,\mathcal{L}_{K}(\theta;X_{K})),\end{split} \tag{2}\]
where \(S\) denotes the number of supervised nodes and \(K\) the total number of nodes. The supervised nodes \(\{1,2,..,S\}\) contain annotated data, i.e., both \(X\) and \(Y\). The remaining nodes, \(\{S+1,S+2,..,K\}\), are unsupervised and therefore contain only unlabeled data \(X\). Hence, the objective is to learn a global model such that learning from the unsupervised nodes \(\{S+1,S+2,..,K\}\) along with the supervised nodes \(\{1,2,..,S\}\) increases the global model performance compared to a global model learned from the supervised nodes alone.
### Federated Alternate Training (FAT)
We propose alternate training between the supervised and unsupervised silos to solve the objective in eq. (2). In the first round, we initialize the global model with a model pre-trained on other medical datasets. We send this model to the supervised silos, which fine-tune it using their labeled data. The server aggregates the model weights obtained from the supervised silos, \(\sum_{k=1}^{S}\frac{N_{k}}{\sum_{i=1}^{S}N_{i}}\cdot\theta_{k}\), and sends the result to the unsupervised silos, where it is used to obtain pseudo-labels for learning. After this round, the server aggregates the model weights sent by the unannotated data silos, \(\sum_{k=S+1}^{K}\frac{N_{k}}{\sum_{i=S+1}^{K}N_{i}}\cdot\theta_{k}\). Hence, training alternates between aggregating the supervised silos' model weights for a few rounds and the unsupervised silos' model weights for the next few rounds. Next, we explain how we obtain pseudo labels at the unsupervised silos.
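The alternation between supervised and unsupervised rounds reduces to a simple modular test on the round index (a minimal sketch; the function name is ours):

```python
def is_supervised_round(t, A):
    """FAT round scheduling: with A rounds per phase, rounds 0..A-1 are
    supervised, rounds A..2A-1 are unsupervised, and the pattern repeats."""
    return (t % (2 * A)) < A
```

With A = 5 (as in the experiments), rounds 0-4 aggregate supervised silos, rounds 5-9 aggregate unsupervised silos, and so on.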
### Bootstrapping
We perform self-supervised learning in unsupervised silos. During self-supervised training, we aim to learn from the global model without forgetting what the global model learned from the supervised silos. To that end, we bootstrap the learned labels. Instead of maintaining one neural architecture, as in the previous work [2], we maintain two models, referred to as the online model with parameters \(\xi\) and the target model with parameters \(\theta\). Unsupervised silos initialize both models with the global model at the start of each round. For self-supervised training, we use the mixup approach [6] to augment the input data: we feed the perturbed version \(x^{\prime}=\lambda x_{1}+(1-\lambda)x_{2}\) of two randomly selected input data points \(x_{1}\) and \(x_{2}\), where \(\lambda\in(0,1)\) is a hyperparameter, to the online network \(f_{\xi}\). The online model outputs each class's prediction probabilities \(p\) for the perturbed input \(x^{\prime}\). In parallel, we feed the unperturbed data points \(x_{1}\) and \(x_{2}\) to the target model \(f_{\theta}\) and perturb their corresponding prediction probabilities \(p_{1}\) and \(p_{2}\) via the mixup logic \(p^{\prime}=\lambda p_{1}+(1-\lambda)p_{2}\). The pseudo label \(y\) is obtained
Figure 1: The proposed Federated Alternate Training (FAT) framework where we alternate training between Annotated Data Silos and Unannotated Data Silos. The Annotated Data Silos follow a supervised training module where they have ground truth labels available. The Unannotated Data Silos follow a bootstrapping-based self-supervised training module where the target model generates pseudo labels, y, for the self-supervised learning and uses exponential moving average (EMA) for the model updates.
by applying the argmax operation to the perturbed output \(p^{\prime}\). These pseudo labels are used to train the online model via the Dice loss and cross-entropy loss between the pseudo label \(y\) and \(p\). After each training step of the online model, the target model is updated by the exponential moving average \(\theta=\tau\theta+(1-\tau)\xi\), where \(\tau\in(0,1)\) is the decay rate of the target model. At the supervised silos, we do not need pseudo labels. Thus, we train only one model with parameters \(\theta\) and use the Dice loss and cross-entropy loss between the ground truth label \(y\) and the predicted probabilities \(p\). The overall framework is shown in Figure 1 and Algorithm 1.
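The pseudo-label generation and EMA target update described above can be sketched as follows (an illustrative NumPy sketch, not the authors' implementation; function names and array shapes are assumed):

```python
import numpy as np

def pseudo_label(p1, p2, lam):
    """Mix the target model's per-class probabilities for the two
    unperturbed inputs, then take argmax over the class axis:
    y = argmax(lambda * p1 + (1 - lambda) * p2)."""
    p_mix = lam * p1 + (1 - lam) * p2
    return np.argmax(p_mix, axis=-1)

def ema_update(theta, xi, tau):
    """Exponential moving average of the target parameters toward the
    online parameters: theta = tau * theta + (1 - tau) * xi."""
    return [tau * t + (1 - tau) * x for t, x in zip(theta, xi)]
```

With \(\tau\) close to 1 the target model changes slowly, which stabilizes the pseudo labels across online-model updates.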
```
Initialization: θ₀: pretrained model weights; s ∈ {1, 2, ..., S}; u ∈ {S+1, ..., K};
E: number of local epochs; A: number of rounds of supervised-silo training before
alternating; DL: soft Dice loss; CE: cross-entropy loss; τ: decay rate of the
target model; Nᵢ: number of samples at client i.

Server runs:
for each round t = 0, 1, 2, ..., T−1 do
    if (t mod 2A) < A then                        ▷ supervised round
        for each supervised client s in parallel do
            θˢ_{t+1} ← SupervisedTraining(s, θ_t)
        end for
        θ_{t+1} ← Σ_{s=1}^{S} (N_s / N_S) θˢ_{t+1},    N_S = Σ_{s=1}^{S} N_s
    else                                          ▷ unsupervised round
        for each unsupervised client u in parallel do
            θᵘ_{t+1} ← UnsupervisedTraining(u, θ_t)
        end for
        θ_{t+1} ← Σ_{u=S+1}^{K} (N_u / N_U) θᵘ_{t+1},  N_U = Σ_{u=S+1}^{K} N_u
    end if
end for

SupervisedTraining(s, θ):                         ▷ supervised client s
for e = 1, ..., E do
    for minibatch x in training data do
        L_tr(θ) = DL(p, y) + CE(p, y),   p = f_θ(x)
        update θ ← θ − γ_θ ∇_θ L_tr(θ)
    end for
end for
return θ to server

UnsupervisedTraining(u, θ):                       ▷ unsupervised client u
ξ ← θ
for e = 1, ..., E do
    for two sampled batches (x₁, x₂) in training data do
        p = f_ξ(x′),   x′ = λx₁ + (1−λ)x₂
        p₁ = f_θ(x₁),  p₂ = f_θ(x₂)
        y = argmax(p′),  p′ = λp₁ + (1−λ)p₂
        L_tr(ξ) = DL(p, y) + CE(p, y)
        update ξ ← ξ − γ_ξ ∇_ξ L_tr(ξ)
        update θ ← τθ + (1−τ)ξ
    end for
end for
return θ to server
```
**Algorithm 1** FAT Algorithm.
### Datasets and Experimental Setup
We evaluate the performance of the proposed framework on two public, naturally partitioned medical datasets, KiTS19 [7, 8] and FeTS2021 [9, 10, 11]. We follow [1] to obtain the federated version of KiTS19, which gives us 6 training silos; the remaining silos are used as test silos. Since we focus on federated semi-supervised learning, we further split the training silos into two supervised silos (S) and four unsupervised silos (U), as shown in Figure 2(a). For the FeTS2021 dataset, we use 13 silos as training silos and 4 silos as test silos, and further split the training silos into four supervised silos and nine unsupervised silos, as shown in Figure 2(b). The task for FeTS2021 is to segment the whole tumor (WT), enhancing tumor (ET), and tumor core (TC), whereas the task for KiTS19 consists of segmenting the kidney and tumor in abdominal CT scans. We use the DICE score as our evaluation metric.
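For reference, the DICE score on binary segmentation masks is the standard overlap ratio 2|P ∩ G| / (|P| + |G|); the sketch below is ours, not the authors' evaluation pipeline:

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """DICE = 2 * |pred AND target| / (|pred| + |target|) for binary masks.
    eps guards against division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score ~1, disjoint masks score ~0; multi-class tasks (e.g. WT/ET/TC) are typically scored per class and averaged.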
For preprocessing and training, we use the nnUNet pipeline and model architecture [12]. For model initialization, we use an nnUNet pretrained on the LiTS [13] and ACDC [14] datasets for KiTS19 and FeTS2021, respectively. For all FL experiments, we use 3000 rounds with 5 local epochs. For FAT, we alternate training after every 5 rounds. To evaluate the SOTA method [2], we followed their approach and first trained the model at the supervised silos for 500 rounds; for the remaining 2500 rounds, the unsupervised silos also participate. To keep the comparison fair, we used the pretrained-model-based initialization for both the SOTA method and our method FAT at round 0. Further, we used random-intensity-shift data augmentation with a level of 0.9, as given in their work.
Figure 2: Data distribution in terms of Supervised (S) and Unsupervised (U) Train Silos, and Test Silos.
## 3 Results
## 5 Acknowledgements
This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-19-2-1005. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. This work is also supported by research gifts from Intel and Konica Minolta.
## 6 Compliance with Ethical Standards
This research study was conducted retrospectively using human subject data made available in open access. Ethical approval was not required as confirmed by the license attached with the open-access data.
|
2307.10326 | Introduction to Drone Detection Radar with Emphasis on Automatic Target
Recognition (ATR) technology | This paper discusses the challenges of detecting and categorizing small
drones with radar automatic target recognition (ATR) technology. The authors
suggest integrating ATR capabilities into drone detection radar systems to
improve performance and manage emerging threats. The study focuses primarily on
drones in Group 1 and 2. The paper highlights the need to consider kinetic
features and signal signatures, such as micro-Doppler, in ATR techniques to
efficiently recognize small drones. The authors also present a comprehensive
drone detection radar system design that balances detection and tracking
requirements, incorporating parameter adjustment based on scattering region
theory. They offer an example of a performance improvement achieved using
feedback and situational awareness mechanisms with the integrated ATR
capabilities. Furthermore, the paper examines challenges related to one-way
attack drones and explores the potential of cognitive radar as a solution. The
integration of ATR capabilities transforms a 3D radar system into a 4D radar
system, resulting in improved drone detection performance. These advancements
are useful in military, civilian, and commercial applications, and ongoing
research and development efforts are essential to keep radar systems effective
and ready to detect, track, and respond to emerging threats. | Jiangkun Gong, Jun Yan, Deyong Kong, Deren Li | 2023-07-19T09:05:39Z | http://arxiv.org/abs/2307.10326v1 | Introduction to Drone Detection Radar with Emphasis on Automatic Target Recognition (ATR) technology
###### Abstract
This paper discusses the challenges of detecting and categorizing small drones with radar automatic target recognition (ATR) technology. The authors suggest integrating ATR capabilities into drone detection radar systems to improve performance and manage emerging threats. The study focuses primarily on drones in Group 1 and 2. The paper highlights the need to consider kinetic features and signal signatures, such as micro-Doppler, in ATR techniques to efficiently recognize small drones. The authors also present a comprehensive drone detection radar system design that balances detection and tracking requirements, incorporating parameter adjustment based on scattering region theory. They offer an example of a performance improvement achieved using feedback and situational awareness mechanisms with the integrated ATR capabilities. Furthermore, the paper examines challenges related to one-way attack drones and explores the potential of cognitive radar as a solution. The integration of ATR capabilities transforms a 3D radar system into a 4D radar system, resulting in improved drone detection performance. These advancements are useful in military, civilian, and commercial applications, and ongoing research and development efforts are essential to keep radar systems effective and ready to detect, track, and respond to emerging threats.
_Cognitive radar, drone detection, Micro-Doppler, radar automatic target recognition (ATR)._
## 1 Introduction
Small drones possess distinctive characteristics, including a low radar cross-section (RCS), slow speeds, and low altitudes [1]. These drones generally fall under Group 1 &2 (refer to Table 1), as designated by the U.S. Department of Defense, which mandates the use of rotating blades for aerial flight [2]. Group 1 & 2 drones typically exhibit an RCS ranging from 0.01 to 0.1 m\({}^{2}\), making them approximately 1/10,000 to 1/1,000 the size of a typical airplane. In 2008, the concept of Low, Small, Slow (LSS) radar targets was introduced to describe small airborne objects with a general RCS value below 2 m\({}^{2}\), flying at speeds below 200 km/h, and operating at altitudes below 1000 m. This classification is based on the drones' physical attributes, particularly their propulsion systems. Group 1 & 2 drones are commonly lightweight, compact, and employed for short-range reconnaissance and surveillance missions. Furthermore, they find extensive use in civilian applications such as aerial photography, mapping, and inspection. Operated remotely, these drones can be equipped with diverse sensors, cameras, and payloads to gather data and accomplish specific tasks. Overall, Group 1 & 2 drones assume a vital role in contemporary military and civilian operations, offering a cost-effective and versatile platform for a wide array of applications.
The proliferation of drone threats in both civil and military applications has become a significant concern, evident in various incidents. One such incident occurred in 2018 and 2019, when drones infiltrated Gatwick Airport in London, UK, resulting in severe flight disruptions [3]. The Gatwick drone incident served as a wake-up call for airports and aviation authorities worldwide, highlighting the critical need for robust security measures and strategies to counter potential drone threats. Drones have also played significant roles in conflicts such as the Nagorno-Karabakh conflict in 2021 and the Ukraine war in 2022 [4][5][6]. Consequently, the development of Counter Unmanned Aerial System (C-UAS) solutions has rapidly gained momentum in many influential nations.
China launched the "Invisible Sword" anti-unmanned aircraft challenge in 2018, which has been conducted annually for several years. The organizer has rigorously tested the
| Category | Maximum Gross Takeoff Weight (Pounds) | Normal Operating Altitude (ft) | Airspeed (Knots) |
| --- | --- | --- | --- |
| Group 1 | <20 | <1,200 AGL | <100 |
| Group 2 | 21–55 | <3,500 AGL | <250 |
| Group 3 | <1,320 | <18,000 MSL | <250 |
| Group 4 | >1,320 | <18,000 MSL | Any airspeed |
| Group 5 | >1,320 | >18,000 MSL | Any airspeed |

1. Source: "Eyes of the Army", U.S. Army Roadmap for UAS 2010–2035 [2]. [https://home.army.mil/rucker/index.php](https://home.army.mil/rucker/index.php), accessed on 8 May 2023.
2. If the drone has even one characteristic of the next level, it is classified in that level.
3. AGL = Above Ground Level.
4. MSL = Mean Sea Level.

Table 1: Drone classification according to the U.S. Department of Defense (DoD)\({}^{2}\).
solutions provided by over 100 domestic anti-unmanned aircraft manufacturers involved in technology and product development. Through these assessments, they have gained a comprehensive understanding of the technical landscape in the domestic anti-unmanned aircraft technology field. These efforts have helped debunk false claims made by certain manufacturers, rectify the industry atmosphere, and lay a solid foundation for the advancement of anti-unmanned aircraft technology. The U.S. established the Joint Counter-Small Unmanned Aircraft Systems Office (JCO) with the U.S. Army taking the lead in December 2019. The primary responsibility of the JCO is to oversee all anti-unmanned aircraft development projects within the U.S. military. It works in collaboration with various combatant commands and the Office of the Deputy Secretary of Defense, responsible for procurement, to conduct testing and evaluation of deployed anti-unmanned aircraft projects. These evaluations help determine the future development direction and standards for anti-unmanned aircraft projects within the U.S. military. The establishment of this joint leadership organization has provided a conducive environment for the rapid advancement of the U.S. military's anti-unmanned aircraft system. According to the latest report on "Department of Defense Counter-Unmanned Aircraft Systems" by the Congressional Research Service of the United States [7], published on May 31, 2022, the U.S. Department of Defense has planned to invest a minimum of 668 million U.S. dollars in research and development of counter-unmanned aircraft systems (C-UAS) technology for the fiscal year 2023. Additionally, the budget allocated for the procurement of anti-unmanned aircraft weapons is expected to exceed 78 million U.S. dollars.
In the meantime, numerous commercial C-UAS solutions are currently available in the market. Among these solutions, two prominent examples are the AUDS (Anti-UAV Defense System) and the Drone Dome.
The AUDS system, developed in 2015 by a consortium of UK defense companies, exemplifies a comprehensive C-UAS solution. It integrates various components, including radar, electro-optic sensors, and directional RF suppression/interference systems. Specifically, Blighter's A400 series Ku-band electronic scanning radar employs a modular design incorporating a high-efficiency passive electronic scanning array (PESA) and frequency-modulated continuous wave (FMCW) technology. The system's RF suppression capabilities selectively or simultaneously activate within the 400MHz to 6GHz spectrum, targeting five common threat bands used by drones. With a detection range of approximately 9.66 km (six miles), the AUDS system employs micro-Doppler radar, high-precision infrared and daylight cameras, and advanced video tracking software to detect, locate, and confirm the presence of drones. Subsequently, it employs directional high-power RF interference to disrupt the communication link between the drone and its remote control, effectively forcing the drone to land. Remarkably, this entire process of detection, tracking, identification, and forced landing occurs within a rapid 15-second timeframe. The AUDS system has been successfully deployed by esteemed organizations like the British Army and the Metropolitan Police in the UK. Furthermore, it has gained international recognition with deployments in the United States and the Middle East, safeguarding critical infrastructures such as airports, power plants, and government buildings, as well as supporting military operations.
Another notable C-UAS system is the Drone Dome, developed by Rafael, an Israeli company. Introduced in April 2016 at the Brazil Defense Exhibition, the Drone Dome system offers a comprehensive set of capabilities for countering drone threats. It incorporates the PRS-42 tactical airborne radar, the MEOS electro-optical sensor, and the C-Guard broadband signal jammer. The system's radar, in conjunction with micro-Doppler classification and EO/IR sensors, enables the detection and identification of enemy drones. Upon detection, the system correlates and analyzes the collected data, issuing warnings to enemy drones. In scenarios where warnings are ignored and drones enter the designated kill zone, the operator can employ either hard-kill or soft-kill methods to neutralize the threat. Recent reports indicate that the Drone Dome system can be further enhanced with a high-energy laser, referred to as the Drone Dome-L, thereby providing an additional capability to destroy targets. With its ability to operate 24/7 under diverse weather conditions, the Drone Dome system has been recognized for its efficacy. In response to the Gatwick Airport drone incident in 2018, the British Army procured six sets of Drone Dome anti-drone systems for £16 million. Additionally, the system has been acquired by Japan to enhance security measures during the Tokyo Olympics in 2021. Notably, the US Army's Joint Counter-Small Unmanned Aircraft Systems Office has recently approved and recommended the inclusion of Drone Dome anti-drone systems in the C-UAS as a Service whitelist,
Figure 1: Radar is the core detection sensor of a C-UAS system. (a) the AUDS solution1. (b) the Drone Dome system2.
thereby enabling authorized provision of anti-drone services and hardware to the US government.
A general C-UAS solution encompasses three key components: (1) the detection unit, (2) the decision-making unit, and (3) the confrontation unit. These components work in a coordinated manner. The detection unit, primarily utilizing radar technology, is responsible for the initial detection and recognition of potential threats. Concurrently, electro-optical/infrared systems validate and confirm the identified targets. Subsequently, the decision-making unit processes the detection information and selects appropriate countermeasures tailored to the specific type of target. Finally, the confrontation unit executes the chosen countermeasures to neutralize the unmanned aircraft threat.
Radar technology holds a pivotal role in C-UAS solutions, providing distinct advantages such as active detection capabilities, long-range coverage, all-weather functionality, detectability, and trackability. Radar performs two fundamental functions within the C-UAS system. Firstly, it scans the designated area, enabling early detection and providing crucial information to the decision-making unit. Additionally, radar facilitates precise target measurement, guiding the confrontation unit in executing effective countermeasures. As a result, radar serves as a cornerstone sensor in C-UAS solutions, contributing significantly to their operational effectiveness.
## II Drone detection radar systems
Traditional air defense radars and air traffic control radars are not well-suited for detecting small drones. These radar systems, such as the Russian S300 air defense system, primarily focus on high-speed large aircraft and ballistic missiles, rendering them inadequate for effectively detecting and tracking small drones. While these systems excel at detecting, tracking, and engaging fast-moving, large targets, they are often inefficient when dealing with small, slow-moving, low-altitude drones. Attempting to adapt traditional air defense radars to address the specific challenges posed by small drones would be akin to using excessive force for a minor task.
Traditional radars used in air defense applications employ strategies to minimize interference from small and slow-moving targets. These measures include raising the beam elevation angle to prioritize high-altitude targets, raising the speed threshold to focus on fast-moving targets, and increasing the signal detection amplitude threshold to emphasize larger targets with strong scattering echoes. Consequently, these adjustments inadvertently disregard small drones, which are constructed from composite materials and exhibit temperatures that closely resemble the surrounding atmosphere. Their relatively low speeds, approximately 150 kilometers per hour, further contribute to the challenges of detection, as they are comparable to the speed of clouds. Consequently, when air defense systems attempt to eliminate cloud interference, they inadvertently filter out low-speed targets like drones, as these targets are not considered within the intended scope of the radar's design. Such filtering is essential to prevent false alarms caused by various non-threatening objects, such as birds. To address the shortcomings of traditional radar systems in detecting drones effectively, the development of dedicated drone detection radars is imperative. These specialized radars should be designed to account for the unique characteristics of small, slow-moving targets, enabling reliable detection and tracking of drones in diverse operational environments.
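The weak dependence of detection range on target size helps explain why small drones are still detectable at useful ranges. By the standard radar range equation (a textbook relation, not stated explicitly in this paper; the 60 km airplane figure below is an assumed illustration):

```latex
R_{\max} = \left( \frac{P_t\, G^2\, \lambda^2\, \sigma}{(4\pi)^3\, S_{\min}} \right)^{1/4}
\qquad \Longrightarrow \qquad R_{\max} \propto \sigma^{1/4},
```

reducing the target RCS \(\sigma\) by a factor of \(10^{4}\) (a typical airplane down to a Group 1 drone) shrinks the detection range by only \((10^{4})^{1/4} = 10\): a radar that detects the airplane at 60 km would detect the drone at roughly 6 km, on the order of the ranges reported for commercial drone detection radars.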
Many countries worldwide are actively advancing the development, procurement, and deployment of anti-drone radar systems to tap into the expanding anti-drone market. While variations exist in terms of radar frequency bands, system designs, and other technical aspects, these systems generally adhere to fundamental radar principles. Table II provides an overview of some prominent drone detection radar systems available in the market. Despite variations in radar parameters, their overall detection performance remains largely consistent. The X-band radar band emerges as the most widely used across these systems. Additionally, micro-Doppler analysis combined with kinetic features serves as the primary basis for Automatic Target Recognition (ATR) of drones. Moreover, these radar systems demonstrate a detection range of up to 6 km for drones with a RCS of approximately 0.01-0.1 m\({}^{2}\), as observed in the case of the DJI Phantom drone.
| Model | Country | Band | Range (km) | Identification |
| --- | --- | --- | --- | --- |
| Meteksan Retinar FAR-AD | Turkey | Ku | 4.4 | Micro-Doppler |
| HENSOLDT Spexer 2000 3D MkII | Germany | X | 6 | Micro-Doppler, trace, etc. |
| Aveillant Gamekeeper 16U | UK | L | 5 | Micro-Doppler, trace, etc. |
| QinetiQ Obsidian | UK | X | 2 | Micro-Doppler, etc. |
| Blighter A800 | UK | Ku | 3 | Micro-Doppler |
| Weibel XENTA-M | Denmark | X | 10 | Micro-Doppler |
| Retia ReGUARD | Czech Republic | X | 6 | RCS, etc. |
| ART Midrange 3D | NA | NA | NA | Micro-Doppler, etc. |
| IAI ELM-2026BF | Israel | X | 5.2 | Trace, etc. |
| Elbit Systems DAiR | NA | NA | NA | Trace, micro-Doppler, etc. |
| Robin ELVIRA | Netherlands | X | 2.7 | Micro-Doppler, etc. |
| Thales GO20 MM | NA | NA | NA | Micro-Doppler, etc. |
| Thales SQUIRE | NA | NA | NA | Micro-Doppler, etc. |
| SAAB Giraffe 1X | Sweden | X | 4 | Trace, micro-Doppler, etc. |
| Leonardo DRS RPS-42 | Italy/U.S. | S | 10 | Micro-Doppler, etc. |
| Teledyne FLIR R20SS-3D | U.S. | X | 4 | NA |
| Raytheon KuRFS | U.S. | Ku | NA | NA |

1. These data can be found on their official websites.
2. The update rate is the typical value; some are selectable.
3. The detection range is for drones with an RCS of ~0.01 m\({}^{2}\), like the DJI Phantom 4. The classification range is normally shorter than the detection range.
4. The specific identification signatures are not available; the listed terms are those reported in the official brochures.

TABLE II: Some commercial drone detection radars\({}^{1}\).
In a realistic scenario, the effectiveness of a drone detection radar relies on its ability to distinguish radar signals emitted by drones within a complex background. Designed primarily for monitoring airspace below 1000 meters Above Ground Level (AGL), the drone detection radar is capable of detecting clutter from both ground-based and upper air objects (Fig. 2). Some drone detection radars attempt to mitigate ground clutter by adjusting the radar's elevation angle. However, this approach poses the risk of overlooking potential targets since most small drones typically operate within the super-low-altitude airspace below 100 meters. Consequently, it is important to recognize that a drone detection radar primarily serves as a surface surveillance radar rather than an air defense radar.
The utilization of drone detection radar systems poses various challenges that affect the radar's signal processing steps, as depicted in Fig. 2. In this context, we will consider the example of a pulse-Doppler radar. Firstly, the "Low, Small, and Slow" (LSS) characteristics of drones significantly influence radar system operations. Specifically, the "low altitude" of drones necessitates the drone detection radar system to contend with clutter from ground backgrounds and moving objects, such as people, vehicles, and notably, birds. Consequently, an effective drone detection radar system must possess ATR capabilities. Secondly, the "small size" of drones requires the radar detector to exhibit sufficient agility in order to extract weak signals and attain an extensive detection range. This prerequisite demands a high level of recognition confidence. Thirdly, the "slow speed" of drones poses challenges for radar tracking, potentially resulting in misguidance of the confrontation unit within the C-UAS solution. This situation necessitates a high update rate for radar scanning and fast recognition speeds.
In summary, a reliable and efficient drone detection radar system requires a robust ATR function. By addressing these challenges and incorporating the appropriate capabilities, drone detection radar systems can enhance their performance and contribute significantly to the field of counter-drone operations.
## III Automatic target recognition (ATR)
ATR technology is widely recognized as being at the forefront of radar advancements. However, due to its sensitive nature, many technical details pertaining to ATR are not publicly available. Even so, many commercial radar solutions offer ATR functions. For instance, Fig. 3 illustrates a representative ATR solution offered by the SAAB Giraffe radar system. The SAAB Giraffe radar is a highly sophisticated system renowned for its capability to detect and track aerial targets. Specifically, its Enhanced Low, Slow and Small (ELSS) feature is tailored to identify and monitor low-flying, slow-moving objects such as unmanned aerial vehicles (drones), cruise missiles, and other small airborne entities.
The ELSS technology, as reported by the manufacturer, incorporates advanced signal processing techniques, high-resolution radar signatures including radar cross-section (RCS), kinetic features, and micro-Doppler, as well as 3D mapping capabilities. These features empower the system to detect and track targets even in complex and cluttered environments. Notably, the SAAB Giraffe radar can differentiate drones amidst a flock of birds, as depicted in Fig. 3(a), where the green traces represent birds and the yellow traces represent drones. The recognition outcomes are then displayed on the radar screen, as exemplified in Fig. 3(b). The illustration in Fig. 3 emphasizes that birds present a significant challenge as clutter for drone detection, and that the recognition process involves a certain level of probability. With reference to the recognition tiers, the SAAB radar may achieve the **Identification** tier.
Fig. 3: The detection and recognition results for drones provided by the SAAB Giraffe radar, (a) multi-object traces, (b) the recognition results.
Fig. 2: The detection and recognition problem for a drone detection radar, with the signal processing chain in a drone detection scenario.
### **Scattering regions**
In general, ATR function is typically regarded as an additional module that is integrated into an existing radar system. This implies that the ATR module operates within the constraints imposed by the radar parameters. Consequently, the performance of the ATR module may be limited by certain radar parameters that are not optimally suited for the specific ATR solution. Among these parameters, one of the most overlooked aspects is the radar band, or more precisely, the scattering regions associated with it, as shown in Fig. 4. The scattering of a target can be categorized into three distinct regions based on the ratio of its size to the radar wavelength: the Rayleigh region, the resonance region, and the optical region.
(1) Rayleigh region:
Within this region, the target's size perpendicular to the wavefront is significantly smaller than the wavelength of the incident wave. Consequently, the incident wave does not undergo substantial phase changes when interacting with the target. The RCS of the target varies smoothly with frequency (for a sphere it is proportional to the fourth power of frequency), and the target can be approximated as a point source. Echo data obtained from this region provides only rudimentary information such as target size and volume.
(2) Resonance region:
In the resonance region, the target's size perpendicular to the wavefront is comparable to the incident wavelength. The scattering behavior of the target primarily arises from surface waves. The RCS of the target becomes a function of both the target size and the wavelength, leading to echo characteristics that exhibit alternating peaks and valleys due to interference between the scattering field components. The resonance region reveals the inherent frequency structure of the target, and a radar system employing multiple polarization states can provide a comprehensive description of the target's poles. Analyzing these poles enables determination of the natural frequencies, and hence the material composition and the identification of the target type. Accordingly, the "pole" theory related to the natural frequency is fundamental in the resonance region.
(3) Optical region:
Within the optical region, the target's size perpendicular to the wavefront greatly exceeds the incident wavelength. The RCS of the target tends to remain relatively constant. Scattering primarily occurs through specular reflection and edge diffraction, which are determined by the strong scattering points present on the illuminated surface of the target. The total scattering field can be approximated as the sum of the contributions of these strong scattering points. Radar echoes received from this region encompass detailed geometric and structural information about the target, rendering them valuable for ATR purposes. The "scattering centers" theory related to the target geometry is fundamental in the optical region.
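To make the three regions concrete, the boundaries can be expressed through the circumference-to-wavelength ratio \(ka = 2\pi a/\lambda\). The sketch below uses the conventional sphere rule-of-thumb thresholds (\(ka < 1\) Rayleigh, \(ka > 10\) optical); both the thresholds and the example target sizes are illustrative assumptions, not values from the text.

```python
import math

def scattering_region(size_m, freq_hz, c=3e8):
    """Classify the scattering region from the size-to-wavelength ratio.

    Uses the conventional sphere thresholds ka < 1 (Rayleigh) and
    ka > 10 (optical); both boundaries are rule-of-thumb assumptions.
    """
    wavelength = c / freq_hz
    ka = 2 * math.pi * (size_m / 2) / wavelength  # a = radius of a sphere-like target
    if ka < 1:
        return "Rayleigh"
    if ka > 10:
        return "optical"
    return "resonance"

# A ~0.4 m quad-rotor body sits in the resonance region at L-band
# but in the optical region at X-band; a 2 cm part stays Rayleigh at L-band.
print(scattering_region(0.4, 1.3e9))   # resonance
print(scattering_region(0.4, 9.4e9))   # optical
print(scattering_region(0.02, 1.3e9))  # Rayleigh
```

This is why the band choice discussed later in the paper matters: the same sub-metre drone can straddle two regions depending on the carrier frequency.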
Different scattering regions necessitate the consideration of distinct ATR methods. Notably, Peter Tait has contributed significantly to the field of radar ATR and has authored a book on the subject that outlines the fundamental principles governing radar ATR solutions [10]. Similarly, David Blacknell and Hugh Griffiths have published a comprehensive book on radar ATR [11], delving into various aspects of target categorization, encompassing ground targets, air targets, and maritime targets. Over the course of several decades, multiple ATR schools have recognized the significance of radar features in achieving successful ATR applications in specific cases.
### **Recognition signatures**
In recent years, there have been significant advancements in radar ATR technologies, mainly due to the emergence of Deep Learning (DL) methods. However, it is important to note that DL methods primarily serve as a technique rather than as signatures themselves. ATR processing operates on both radar images and non-image signals (Fig. 5). Radar imagery encompasses Synthetic Aperture Radar (SAR) images and Inverse Synthetic Aperture Radar (ISAR) images. SAR is commonly used in air-based radar systems, whereas ISAR is prevalent in sea-based radar systems. In this context, our focus is on ground-based radar systems.
Traditional ATR solutions typically employ a template-based matching approach that involves two key steps: feature extraction and pattern recognition. Consequently, the radar signatures play a crucial role in the initial stages. The features extracted from these radar signatures can be categorized into two distinct groups: long-term characteristics and transient ones.
(1) Radar Cross Section (RCS)
One category of features used in ATR is statistical features. Radar can detect a target and measure various values related to it, which can be divided into two parts: Radar Cross Section (RCS) related values and kinetic values. RCS, which represents the scattering power or electric size of the target, serves as a common signature for classification. The RCS of a target, \(\sigma\), is given as:

\[\sigma=\lim_{r\to\infty}4\pi r^{2}\frac{\left|E_{s}\right|^{2}}{\left|E_{i}\right|^{2}} \tag{1}\]

where \(r\) = the range, \(E_{s}\) = the scattered field, and \(E_{i}\) = the incident field. Thereby, RCS is the area intercepting that amount of power which, if radiated isotropically, would produce the
Fig. 4: The variation of the RCS of spheres within the three scattering regions, where \(a\) is the radius of the sphere and \(\lambda\) is the wavelength [8][9]
Fig. 5: The flowchart of designing a radar driven by different ATR methods (the red dotted circle marks the one for drone detection radar)
same received power at the radar. It is a statistical mean value. For example, a jet usually has an RCS level of 100 m\({}^{2}\), while a small quad-rotor drone's RCS lies around 0.1 m\({}^{2}\). The measured RCS can fluctuate over time, and it can be represented as a waveform or time series. Additionally, the statistical features derived from RCS values can provide valuable information about the target, such as its attitude, which can aid in target recognition.
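Because detection range scales with the fourth root of RCS, the 30 dB gap between the jet and drone figures above translates into a large range penalty. A minimal sketch of the conversion, using the example values quoted in the text:

```python
import math

def rcs_dbsm(sigma_m2):
    """Convert an RCS in square metres to dBsm."""
    return 10 * math.log10(sigma_m2)

jet, drone = 100.0, 0.1  # representative mean RCS values quoted in the text
print(rcs_dbsm(jet))     # 20.0 dBsm
print(rcs_dbsm(drone))   # -10.0 dBsm

# All else equal, detection range scales as sigma**(1/4),
# so the 30 dB RCS gap shrinks the usable range to ~18%.
print(round((drone / jet) ** 0.25, 3))  # 0.178
```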
(2) Kinetic features
Another category of features is kinetic features, which include speed and trajectory. Consider a target whose speed, \(V_{b}\), is determined through Doppler measurements in the radar system. The Doppler shift can be given as:
\[\overline{f_{bd}}=-\frac{2V_{b}}{\lambda} \tag{2}\]
where \(\overline{f_{bd}}\) = the Doppler shift and \(\lambda\) = the radar wavelength. Subsequently, the trace function of the target, denoted as \(L(t)\), can be defined as:
\[L(t)=V_{b}t \tag{3}\]
where, \(t\)= the measured time. The essential kinetic features encompass differential functions of the measured trace, taking into account factors such as velocity, time, and range. Mathematically, this can be represented as,
\[D(r,t)=f(\frac{dL}{dt},\frac{dL}{dr},\frac{dV_{b}}{dt}...) \tag{4}\]
where \(D(r,t)\) = the general kinetic features, \(L\) = the measured trace of the target, and \(r\) = the range resolution of the radar. These features, also referred to as trace classification, have long been employed in radar systems, especially for classifying aerial targets. When a radar system detects an aerial target, it generates a trace, which is a chronological record of the target's movement. Analyzing these traces using trace classification algorithms allows for the determination of various target characteristics, including size, speed, and flight pattern [12]. This information plays a crucial role in classifying the target as a drone, airplane, helicopter, or another type of aerial vehicle. However, there are certain considerations when utilizing kinetic features. Firstly, successful target detection is necessary for extracting kinetic features. Since drones typically return weak radar signals, the detection probability significantly affects the performance of trace classification. Additionally, the presence of birds with trace patterns similar to those of drones poses additional challenges to ATR performance. As a result, trace classification requires longer processing time and has a shorter recognition range, as kinetic features do not exhibit robust signature characteristics.
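Equations (2) and (3) are straightforward to evaluate; the sketch below uses an assumed X-band wavelength of 0.03 m and an assumed 15 m/s drone speed, neither of which comes from the text:

```python
def doppler_shift(v_radial, wavelength):
    """Body Doppler shift of eq. (2): f_bd = -2*v/lambda, in Hz."""
    return -2.0 * v_radial / wavelength

def trace(v, t):
    """Straight-line trace L(t) = v*t of eq. (3), in metres."""
    return v * t

lam = 0.03      # assumed X-band wavelength (10 GHz carrier)
v_drone = 15.0  # assumed quad-rotor cruise speed, m/s
print(doppler_shift(v_drone, lam))  # close to -1000 Hz
print(trace(v_drone, 10.0))         # 150.0 m covered after 10 s
```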
(3) The range profiles
Geometry information can be explained using the scattering center theory in the optical region. The High Range Resolution Profile (HRRP) technology is widely used to separate the scattering centers of a target [13]. Radar systems transmit ultra-wideband signals and capture target profiles in the range direction. The HRRP provides a representation of the target's shape, which is then processed using template matching techniques against a dataset to determine the target class. The U.S. Air Force Research Laboratory (AFRL) conducted a notable project called the Systems-Oriented High Range Resolution (HRR) Automatic Recognition Program (SHARP) [14]. In essence, the HRRP maps the target's shape information projected onto the range domain, which is governed by the range resolution equation, given by
\[R_{e}=\frac{c}{2B} \tag{5}\]
where \(R_{e}\) = the range resolution, \(c\) = the speed of light, and \(B\) = the transmitted bandwidth.
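Equation (5) can be checked numerically. The 12.5 MHz case below happens to match the bandwidth of the WHU radar described in Section IV, while the 500 MHz case shows why sub-metre HRRP requires hundreds of MHz of bandwidth:

```python
def range_resolution(bandwidth_hz, c=3e8):
    """Range resolution of eq. (5): R_e = c / (2B), in metres."""
    return c / (2.0 * bandwidth_hz)

print(range_resolution(12.5e6))  # 12.0 m (the bandwidth quoted for the WHU radar)
print(range_resolution(500e6))   # 0.3 m -- sub-metre HRRP needs very wide bands
```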
(4) The cross-range profiles
Since the Doppler resolution is directly proportional to the cross-range resolution, the cross-range profile of an object is also a separation of scattering centers of the target. The application is the ISAR technology [8], which is described by
\[\Delta f_{d}=\frac{2\Delta R_{c}\omega}{\lambda} \tag{6}\]
where \(\Delta f_{d}\) = the Doppler resolution, \(\Delta R_{c}\) = the distance between the two scattering centers, \(\omega\) = the angular rotation rate, and \(\lambda\) = the radar wavelength. The cross-range profile technology can also be used for classifying small targets. For example, it can be used for classifying radar echoes from vehicles and helicopters [15], or from small birds and large birds [16]. Cross-range resolution improves with shorter wavelengths and over larger rotation angles. It is interesting that the cross-range resolution is independent of the range of the target, unlike SAR, in which the synthetic aperture has to be increased at long ranges to maintain cross-range resolution.
(5) The micro-Doppler signature
The micro-Doppler approach can extract specific geometry from a target. Micro-Doppler refers to the additional Doppler component generated by the micro-motions of a target, such as the rotational movement of helicopter blades or the flapping of bird wings [17]. Micro-Doppler analysis enables the extraction of embedded kinematic and structural information, which can be used for target recognition. While micro-Doppler signatures are typically treated as kinetic features, in this context we consider some specific micro-Doppler signatures as structure/geometry signatures. Dr. Chen highlights a key challenge associated with the micro-Doppler method, emphasizing the need to interpret the extracted features effectively and correlate them with the structural aspects of the target under scrutiny [17]. In the context of drones, the micro-Doppler phenomenon primarily stems from coherent rotational motion or the presence of rotating structures, as exemplified in Fig. 6b. This phenomenon therefore enables the extraction of distinctive radar signatures of drones, such as the number and rotational rate of blades, which greatly facilitate drone recognition. Consequently, as depicted in Table II, a multitude of drone detection radar systems employ micro-Doppler analysis for the identification of drone-related radar signals. The observed micro-Doppler effect is a consequence of the classic Doppler effect. Assuming a drone velocity of \(V_{b}\), the micro-Doppler shift can be approximated by considering the radial component of the blade velocity projected onto the radar's line-of-sight. The relationship can be expressed as:
\[\overline{f_{md}}(t)=\frac{L}{\lambda}\,\omega\cos\alpha\,\cos\beta\,\cos(\omega t)+\overline{f_{bd}} \tag{7}\]
where \(L\) = the blade length, \(\omega\) = the rotation rate of the blades, \(\alpha\) = the azimuth angle, and \(\beta\) = the elevation angle. Based on the aforementioned equation, the micro-Doppler shift is sinusoidally modulated in time. Consequently, a comprehensive description of micro-Doppler signatures encompasses the modulation function, which is inherently dependent on the observation time.
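Equation (7) can be evaluated numerically to visualize the sinusoidal modulation it describes. All parameter values in the sketch below (blade length, rotor rate, wavelength, aspect angles, body Doppler) are illustrative assumptions, not measurements from the text:

```python
import math

def micro_doppler(t, blade_len, lam, omega, alpha, beta, f_bd):
    """Instantaneous micro-Doppler shift of eq. (7), in Hz."""
    return (blade_len / lam) * omega * math.cos(alpha) * math.cos(beta) \
        * math.cos(omega * t) + f_bd

# Illustrative (assumed) parameters
lam, blade_len = 0.03, 0.12  # X-band wavelength, blade length (m)
omega = 2 * math.pi * 100.0  # 100 rev/s rotor
alpha = beta = 0.0           # radar in the rotor plane
f_bd = -1000.0               # body Doppler of the airframe (Hz)

# Sample 20 ms at 10 kHz: the shift swings sinusoidally around f_bd
shifts = [micro_doppler(n / 1e4, blade_len, lam, omega, alpha, beta, f_bd)
          for n in range(200)]
peak_dev = max(abs(s - f_bd) for s in shifts)
print(round(peak_dev))  # the peak excursion (L/lam)*omega, about 2513 Hz
```

The peak excursion \((L/\lambda)\,\omega\) is what sets the sampling-rate requirement discussed below.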
(6) The natural frequency
The natural frequency of a target is a function of its shape and material composition. Since the natural frequency is independent of range and attitude, it exhibits a robustness that can enhance target recognition [18][19][20][21]. It is important to note that utilizing this method requires, as a basic precondition, an understanding of the scattering pole theory in the resonance region. However, extracting the natural frequency and linking it to specific objects is challenging, and practical radars utilizing this method are rare on the market. Most of the research on this topic remains in laboratories due to the lack of clarity surrounding the scattering mechanism in the resonance region.
### _Drone ATR methodologies_
As for drone detection radar applications, it is worth noting that the detection and identification of small drones using high range resolution profiles (HRRPs) pose challenges, since sub-centimeter resolution would be needed to capture the longitudinal structure of targets measuring less than 100 cm in length [22][23]. Consequently, many radar systems employed for bird and drone detection use low-resolution range profiles, making micro-Doppler and kinetic features the primary choices for identification purposes.
However, both kinetic features and micro-Doppler have limitations when it comes to drone automatic target recognition (ATR). The main issue with kinetic features is their vulnerability to variations in time. As shown in equation (4) above, because range and velocity are influenced by the sampling time, the resulting kinetic features lack robustness and fail to provide distinctive signatures. On the other hand, micro-Doppler has become increasingly popular in the drone recognition field. Equation (7) emphasizes the importance of observation time for effectively utilizing micro-Doppler.
Fig. 6: Two recognition method examples, **(a)** an example of trace classification with segments of 5 plots and 2 overlapping plots [24], **(b)** the simulated micro-Doppler spectrogram of one blade [25].
Micro-Doppler classification, despite its advantages, also faces certain challenges. Firstly, micro-Doppler is more of a phenomenon than a signature. Fig. 7 illustrates three forms of the micro-Doppler phenomenon: Jet Engine Modulation (JEM) or Helicopter Rotor Modulation (HERM) lines modulation in the spectrum, the "blade flash" patterned spectrogram obtained through the Short-time Fourier Transform (STFT), and the range-micro-Doppler profile. JEM spectrum refers to spectral peaks with certain adjacent intervals and similar amplitudes in the spectrum [15], while the "blade flash" pattern describes the sinusoidal trace on the spectrogram. The instant micro-Doppler signature represents the stable mapping of scattering centers of rotating blades in the cross-range profile, capturing the scattering characteristics or structures of the rotating component rather than the rotational pattern. Quantifying the micro-Doppler signature poses a challenge that needs to be addressed. Secondly, enhancing micro-Doppler signals comes at a cost. Not all radar dwell times are suitable for detecting micro-Doppler signals produced by the rotating blades in drone radar signals. The radar dwell time should be neither too long nor too short for effective micro-Doppler detection [25]. Additionally, a sufficiently high sampling frequency is required to obtain an adequate amount of micro-Doppler information for separating the micro-Doppler signals. Insufficient micro-Doppler data can lead to incorrect estimates of blade numbers [29]. Moreover, achieving a high signal-to-noise ratio (SNR) of micro-Doppler images using machine learning algorithms is essential for accurate drone recognition [30]. In summary, the micro-Doppler method has a short detection range and recognition range but a long detection response time (DRT). 
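Two of the dwell-time constraints above can be captured with back-of-the-envelope checks: how many blade flashes a single CPI captures, and whether the PRF satisfies the Nyquist criterion for the peak micro-Doppler shift of eq. (7). The rotor and radar parameters below are assumptions, chosen only to echo the 2.7 ms vs. 89 ms dwell-time contrast discussed in Section IV:

```python
import math

def flashes_per_dwell(n_blades, rot_rate_hz, dwell_s):
    """Number of blade flashes captured during one radar dwell (CPI)."""
    return n_blades * rot_rate_hz * dwell_s

def micro_doppler_nyquist_ok(prf_hz, blade_len, lam, rot_rate_hz):
    """True if the PRF samples the peak micro-Doppler shift of eq. (7)
    without aliasing (peak ~ (L/lambda)*omega, with a 2x Nyquist margin)."""
    peak = (blade_len / lam) * 2 * math.pi * rot_rate_hz
    return prf_hz > 2 * peak

# Assumed 2-blade rotor at 100 rev/s, 0.12 m blades, X-band (0.03 m)
print(round(flashes_per_dwell(2, 100, 0.0027), 2))  # 0.54: a 2.7 ms dwell is too short
print(round(flashes_per_dwell(2, 100, 0.089), 1))   # 17.8: an 89 ms dwell is ample
print(micro_doppler_nyquist_ok(5000, 0.12, 0.03, 100))  # False: peak exceeds PRF/2
```

The sketch only formalizes the qualitative point of [25]: the dwell must span at least one flash, and the sampling rate must cover the micro-Doppler bandwidth.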
Despite these challenges, micro-Doppler recognition remains a crucial component of modern radar systems, enabling the accurate identification and classification of small aerial targets like drones. With the increasing use of drones, the importance of micro-Doppler recognition in drone detection and monitoring systems is expected to grow.
## IV Guiding Principles by ATR and Case Studies
### _The design principles_
The design of traditional radar systems is guided by the radar equation, which assumes that a target can be represented as a point object with a mean RCS. This equation allows for the calculation of the signal-to-noise ratio (SNR), a measure of a radar system's ability to detect a specific target at a given range, by comparing the target's scattering power with the background noise. The radar equation is mathematically expressed as follows [31]:
\[R=\sqrt[4]{\frac{P_{t}G_{t}G_{r}\lambda^{2}\sigma}{(4\pi)^{3}kT_{s}B_{n}L\,(SNR)}} \tag{8}\]
where \(T_{s}\) = the system noise temperature, \(B_{n}\) = the noise bandwidth of the receiver, \(L\) = the total system losses, \(k\) = Boltzmann's constant, \(P_{t}\) = the transmitted power, \(G_{r}\) = the receive gain, \(G_{t}\) = the transmit gain, \(R\) = the detection range, \(\sigma\) = the RCS of the target, and \(\lambda\) = the radar wavelength. The radar equation illustrates that the fundamental rule of radar design is based on radar detection principles. For single-pulse detection, if the detection probability exceeds 50%, the SNR of the target should be at least 13.1 dB. To achieve a 95% probability of detection, the SNR should be 16.8 dB [31]. In simpler terms, smaller targets with lower RCS values will have lower SNRs, resulting in a reduced detection range. Consequently, radar systems often encounter the challenge of a "Missed Target" when detecting drones and other objects with small RCS values.
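As a sanity check on the fourth-root behaviour of the radar equation, the sketch below evaluates eq. (8) with Boltzmann's constant \(k\) and the system noise temperature \(T_s\) in the denominator. Every parameter value is an arbitrary assumption; only the RCS scaling of the result is meaningful:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def detection_range(p_t, g_t, g_r, lam, sigma, t_s, b_n, loss, snr):
    """Maximum detection range from the radar equation (eq. 8), linear units."""
    num = p_t * g_t * g_r * lam**2 * sigma
    den = (4 * math.pi)**3 * K_BOLTZMANN * t_s * b_n * loss * snr
    return (num / den) ** 0.25

# All parameter values below are arbitrary illustrative assumptions
args = dict(p_t=1e3, g_t=1e3, g_r=1e3, lam=0.03, t_s=500.0,
            b_n=1e6, loss=10**0.5, snr=10**1.31)  # SNR = 13.1 dB
r_jet = detection_range(sigma=100.0, **args)   # 100 m^2 aircraft
r_drone = detection_range(sigma=0.01, **args)  # 0.01 m^2 small drone
print(round(r_jet / r_drone, 2))  # 10.0: a 40 dB RCS gap costs a 10x range loss
```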
The design of drone detection radar systems should be guided by the ATR method. Fig. 5 provides a comparison between two types of drone detection radars that utilize different ATR methods. In this discussion, we will use the pulse-Doppler radar as an example.
(1) The ATR method based on kinetic features primarily addresses the detection problem.
To increase the recognition probability, it is beneficial to improve the detection probability. Firstly, the resonance region is crucial for kinetic features as it amplifies the RCS of the target and enhances the detection probability. Given that small drones typically fall within the submeter size range, S-band or L-band radars are more suitable for detecting small drones and achieving higher detection probabilities. Moreover, a higher detection probability contributes to better radar tracking, which, in turn, improves ATR using kinetic features. Secondly, increasing the efficiency of ATR can be achieved by accelerating the tracking rate to enhance the kinetic features. Kinetic features are characterized by time and range variations. Therefore, a faster inspection rate translates to a higher update rate of kinetic features but requires shorter radar dwell time.
(2) The ATR method based on signal signatures addresses the classification problem.
To increase the recognition probability, it is preferable to enhance the signal signatures. Taking the micro-Doppler signature as an example, although it falls under the category of kinetic features, it also captures specific target structures. The micro-Doppler "flash" or JEM spectra reflect the rotating blades of drones, which can be utilized for ATR. Firstly, the optical region is more favorable than the resonance region. The micro-structures are attached structures on the target body and are smaller in size compared to the body itself. Consequently, the resonance effect in the resonance region may amplify the scattering power of the body but suppress the micro-Doppler scattering power. Additionally, a shorter wavelength results in a better Doppler shift and a larger difference between the body Doppler shift and the micro-Doppler shifts. As illustrated in Fig. 8a, L-band radar offers higher SNR values (over 20 dB) compared to X-band radar, but its JEM signature is weaker. Conversely, X-band radar can detect almost 100% of JEM spectra, making it more effective in drone detection. Secondly, since micro-Doppler is a result of the Doppler effect, a higher Doppler resolution enhances the micro-Doppler signatures. Therefore, longer radar dwell times and higher frequency resolutions are favorable for achieving better Doppler resolution. In most cases, this entails a slower inspection rate or tracking rate. As depicted in Fig. 8b, there exists an optimal radar dwell time for micro-Doppler of drones. In our analysis, we examined radar data detected by three radar systems with different radar dwell times but similar frequency and velocity resolutions, including Radar-\(\alpha\) and
Fig. 7: The presentation forms of micro-Doppler of quad-rotor drones, **(a)** the JEM-like spectrum detected by an X-band pulse-Doppler radar [26], **(b)** the “blade flash” patterned spectrogram detected by an X-band radar [27], **(c)** the micro-Doppler range-Doppler profile detected by an X-band FMCW radar [28].
Radar-\(\gamma\), with radar dwell times of 2.7 ms and 89 ms, respectively. Radar-\(\alpha\) barely detected any micro-Doppler, while Radar-\(\gamma\) captured weak micro-Doppler signals with a magnitude of only 10% of the body Doppler's. A proper radar dwell time is crucial for micro-Doppler detection. This research provides insight into designing a cognitive micro-Doppler radar that adjusts the radar dwell time to detect and track the micro-Doppler signals of drones.
In summary, when designing a drone detection radar system guided by the ATR method, it is important to consider not only the traditional detection unit but also the recognition unit. Several key aspects should be taken into account. Firstly, careful selection of the radar wavelength is essential due to the influence of scattering regions. The choice of wavelength should be made with consideration of the specific scattering characteristics of the target being detected. Secondly, the transmitted radar parameters, including radar dwell time (Coherent Processing Interval or CPI), sampling rate (Pulse Repetition Frequency or PRF), and others, play a crucial role in enhancing both the kinetic features and signal signatures. Optimal values for these parameters can improve the detection and classification capabilities of the radar system. Thirdly, the tracking strategy must be carefully considered within the ATR method. It is important to note that a higher tracking rate corresponds to a shorter radar dwell time. However, a shorter dwell time results in a lower SNR, which can potentially lower the detection probability. This contradiction between the detection and tracking units must be carefully managed, as it ultimately impacts the performance of the ATR unit. Therefore, it is imperative to adopt a comprehensive ATR approach when designing a drone detection radar system, taking into account all these factors and striking a balance between the conflicting requirements of detection and tracking. By doing so, the effectiveness and efficiency of the radar system can be maximized for accurate and reliable drone detection.
### **The drone detection radar systems with ATR function**
In this section, we present our drone detection radar system, referred to as the WHU system, along with the results obtained using the ATR function. Fig. 9 illustrates the diverse clutter backgrounds encountered by our X-band drone detection radar, and the performance metrics are summarized in Table III. The radar operates at a frequency of approximately 9 GHz, with a CPI of about 20 ms and a PRF of 5 kHz. The transmitted bandwidth is approximately 12.5 MHz, providing a range resolution of approximately 12 m. Equipped with an active electronically scanned array (AESA) antenna, the radar system is mounted on a rotating table to achieve 360-degree azimuthal scanning coverage. A comparative analysis of the detection performance reveals that our radar system exhibits an extended drone detection range, faster recognition speed, and higher recognition accuracy, which can be attributed to the implementation of the ATR function.
Several factors contribute to the enhanced performance of our radar system. Firstly, being a pulse-Doppler radar, it is capable of detecting moving targets while effectively
| Contents | General ones\({}^{1}\) | WHU one |
| --- | --- | --- |
| Systems | FMCW, pulse-Doppler | Pulse-Doppler |
| Bands | S, L, X, Ku, etc. | X, Ku, etc. |
| Antenna | Phased array antennas | Phased array antennas |
| Scattering region | Resonance region, optical region | Optical region |
| Detection method | SNR based | SNR & SCR based |
| Recognition method | Micro-Doppler, trace, imaging, etc. | Scattering & Doppler, etc. |
| Detection range\({}^{2}\) | \(<\) 6 km | \(>\) 12 km |
| Recognition range | \(<\) 3 km | \(>\) 12 km |
| Detection response time (DRT)\({}^{3}\) | \(\sim\)1 s | \(<\) 10 ms |
| Data Inspection Frequency (DIF)\({}^{4}\) | Adjustable | Adjustable |
| Automatic Target Recognition (ATR) | Birds, drones, pedestrians, vehicles, clutter | Birds (large birds, small birds), drones (fixed-wing drones, multi-rotor drones), vehicles, ships, pedestrians, helicopters, jets, etc. |

* 1. The general ones refer to the most typical drone detection radars on the market, such as those in Table I.
* 2. Here, the detection range and recognition range refer to those for a common quad-rotor drone, the DJI Phantom series.
* 3. The DRT represents the lag between an echo return, detection, and eventual display.
* 4. The drone detection radar systems must be capable of continuous operation to allow for continuous inspection of the surveillance areas; the DIF is the time interval between successive position updates for a tracked object.

Table III: The comparison of drone detection radars
Figure 8: Examples of micro-Doppler of a quad-rotor drone, **(a)** a comparison of detection using L-band and X-band radar, **(b)** a comparison of detection using different X-band radars with different CPIs, where Radar-\(\alpha\) and Radar-\(\gamma\) have radar dwell times of 2.7 ms and 89 ms, respectively [25].
Figure 9: The example of our drone detection radar (WHU one) in the different clutter backgrounds.
suppressing static clutter present in the background. The inherent raw velocity resolution of 0.75 m/s enables the detection of drones even at very low speeds. Secondly, operating in the X-band frequency range ensures that the radar scattering data from small drones originates from the optical region, facilitating the extraction of shape information from the radar signatures. Thirdly, the adoption of a narrow band and a range resolution of 12 m ensures that small drones cannot simultaneously occupy more than three range bins, thereby mitigating range migration issues. Fourthly, using both the traditional SNR detector and our newly developed Dynamic Signal-to-Clutter Ratio (DSCR) detector [32][33], the detection range for small drones with RCS levels ranging from 0.01 to 0.1 m\({}^{2}\) extends up to 12 km. Lastly, leveraging phased-array technology, our radar system has the capacity to simultaneously track up to 1000 targets. It can accurately classify and identify various targets, including birds, drones, vehicles, ships, humans, and helicopters, among others, and present the detection results visually through graphical icons (Fig. 9). The tracking numbers associated with each icon provide a clear tracking reference.
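The quoted velocity figures can be cross-checked from the stated CPI, PRF, and carrier frequency using the standard pulse-Doppler relations \(\Delta v = \lambda/(2\,\mathrm{CPI})\) and \(v_{max} = \lambda\,\mathrm{PRF}/4\). At exactly 9 GHz and 20 ms the resolution comes out near 0.83 m/s, so the 0.75 m/s figure in the text presumably reflects slightly different operating values; the sketch below is purely illustrative:

```python
def velocity_resolution(lam, cpi_s):
    """Raw velocity resolution: delta_v = lambda / (2 * CPI), in m/s."""
    return lam / (2.0 * cpi_s)

def unambiguous_velocity(lam, prf_hz):
    """Maximum unambiguous radial velocity: v_max = lambda * PRF / 4, in m/s."""
    return lam * prf_hz / 4.0

lam = 3e8 / 9e9  # wavelength at the stated ~9 GHz carrier, about 0.033 m
print(round(velocity_resolution(lam, 0.02), 2))   # 0.83 m/s with a 20 ms CPI
print(round(unambiguous_velocity(lam, 5000), 1))  # 41.7 m/s with a 5 kHz PRF
```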
The utilization of the ATR function can significantly enhance the performance of radar systems in both the detection and tracking units. The effectiveness of this function is exemplified in Fig. 10, which showcases radar display cases referenced from our previous publications. The experimental tests were conducted in the coastal region of Qidong city, situated along the Yellow Sea coast in China. This area is characterized by a cluttered sea environment. The radar system was installed on the rooftop of a 12-meter tall building, enabling horizontal scanning of the sea surface. Initially, our project aimed to develop a prototype drone detection radar. Throughout the project, an infrared sensor and an optical camera were incorporated to support and validate the recognition results obtained from the ATR function. The project spanned several months in 2020, and this paper presents some of the data extracted from these experiments. The test area exhibited sea states ranging from Degree 3 to Degree 5. At Degree 3, the height of the waves ranged from 0.5 to 1.25 meters, while at Degree 5 it reached 2.50 to 4.00 meters.
During the tests, we collected radar signals from different types of drones, including a quad-rotor drone, a fixed-wing drone, and a hybrid Vertical Take-off and Landing (VTOL) fixed-wing drone. These drones were considered cooperative targets, and their key parameters can be found in our earlier publications [34][35]. The Albatross 1 is a homemade fixed-wing drone with dimensions of 1.08 m \(\times\) 0.80 m and a mass of 0.3 kg. The DJI Phantom 4 is a well-known quad-rotor drone equipped with four lifting blades, measuring 0.40 m \(\times\) 0.40 m and weighing 1.38 kg. Finally, the TX25A is a large hybrid VTOL fixed-wing drone featuring one pusher blade and four lifting blades, with dimensions of 3.6 m \(\times\) 1.97 m and a mass of 26 kg. Fig. 10 displays images of these drones. Referring to Table I, both the Albatross 1 and DJI Phantom 4 belong to Group 1, while the TX25A belongs to Group 2. These drones represent the primary targets for the current C-UAS solution. Additionally, local fishing ships and birds were also included as test targets, and their radar data was collected for analysis.
(1) The Beyond-Range capability enabled by the ATR function
The ATR function offers a remarkable capability to extend the detection range of radar systems beyond their theoretical limits. The theoretical range of a target can be determined from the radar equation, which relates the range to the target's Radar Cross Section (RCS) \(\sigma\) and the Signal-to-Noise Ratio (SNR) threshold:
\[R\propto\sqrt[4]{\frac{\sigma}{SNR}} \tag{9}\]
Thus, for instance, if a radar system can detect an aircraft with an RCS of 100 m\({}^{2}\) at a range of 60 km, it may only detect a small drone with an RCS of 0.01 m\({}^{2}\) at a range of 6 km. To double the drone's detection range, the SNR threshold must be reduced by 12 dB while keeping other factors constant. However, if the radar's SNR threshold is decreased from 16.8 dB [31] to 4.8 dB, the detection process suffers a significant increase in false alarms, cluttering the radar display [33]. Fig. 10a illustrates the excessive number of false alarms on the radar screen, with only a few large ships being detected.
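The fourth-root scaling of Eq. (9) can be sketched in a few lines; the reference values (60 km at 100 m\({}^{2}\), a 12 dB threshold change) come from the text, while the function itself is an illustrative helper, not part of any radar product's API.

```python
def scaled_range(r_ref, rcs_ref, rcs, snr_db_delta=0.0):
    """Scale a reference detection range to a new RCS and SNR threshold
    using Eq. (9): R ~ (sigma / SNR)^(1/4).
    snr_db_delta is the change of the SNR threshold in dB (negative = lowered)."""
    snr_ratio = 10.0 ** (snr_db_delta / 10.0)  # dB -> linear power factor
    return r_ref * (rcs / rcs_ref / snr_ratio) ** 0.25

# Aircraft with RCS 100 m^2 at 60 km -> small drone with RCS 0.01 m^2:
r_drone = scaled_range(60.0, 100.0, 0.01)         # 6 km
# Lowering the SNR threshold by 12 dB doubles the drone range:
r_doubled = scaled_range(60.0, 100.0, 0.01, -12)  # ~12 km
```

This reproduces the numbers in the text: a factor of \(10^{4}\) in RCS shrinks the range tenfold, and a 12 dB threshold reduction restores a factor of two.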
In contrast, by employing the ATR function to process these false alarms, numerous clutter objects can be rejected, enabling the detection of both small and large objects within the area. Methodologically, the target identification provided by the ATR function is fed back to the detection unit, integrating detection with recognition (IDR). Fig. 10b demonstrates the detection and recognition of various targets, including ships, birds, drones, and others [34][35]. Moreover, the red sector represents the theoretically expected range of 14 km for drones. However, the ATR function can achieve a Beyond-Range ability for the radar system, surpassing the theoretical limits. For instance, it can detect and track birds at ranges exceeding 16 km. In summary, the ATR function enables the radar system to extend the detection range of a target and even exhibit a Beyond-Range capability, as long as the ATR function is operational.
(2) The sub-class of targets recognized by the ATR function
When the geometric information of a target is extracted from its radar echoes, the sub-class of the target can be recognized by the ATR function. Firstly, we propose using flight morphology to classify the radar signals of large and small birds. Large birds habitually fly with their feet stretched out behind them, away from the body, making the feet recognizable to both human observation and radar detection, whereas small birds tend to carry their feet drawn up in front and clinging to the body, hiding them from detection [16]. The visibility of a bird's feet thus registers distinct signatures in the bird echoes, and these signatures allow radar echoes from different-sized birds to be classified within a relatively short data sampling time. Secondly, we categorize small drones into three types based on their blades: fixed-wing drones with only puller blades, multi-rotor drones with only lifting blades, and hybrid vertical take-off and landing (VTOL) fixed-wing drones with both lifting and puller blades (Fig. 10b) [35]. The micro-Doppler signals of the puller blades are weaker and more stable than those of the lifting blades; the detailed micro-Doppler signatures modulated by the different blade types can therefore be used to improve drone detection and to identify the drone type.
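The three blade-based categories above amount to a simple lookup. The function and its boolean inputs below are hypothetical, standing in for whatever blade signatures a micro-Doppler classifier would actually extract.

```python
def drone_class(has_lifting_blades, has_puller_blades):
    """Map a blade configuration (as inferred from micro-Doppler
    signatures) to the three drone categories described in the text."""
    if has_lifting_blades and has_puller_blades:
        return "hybrid VTOL fixed-wing"
    if has_lifting_blades:
        return "multi-rotor"
    if has_puller_blades:
        return "fixed-wing"
    return "unknown (no blade signature)"

# DJI Phantom 4: lifting only; Albatross 1: puller only; TX25A: both
print(drone_class(True, False))  # multi-rotor
print(drone_class(False, True))  # fixed-wing
print(drone_class(True, True))   # hybrid VTOL fixed-wing
```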
(3) The Situation-Awareness enhanced by the ATR function
The ATR function plays a crucial role in enhancing the situational awareness of radar systems. The recognition of target attributes by the ATR function can be effectively correlated with tracking information, enabling the acquisition of comprehensive situational awareness in the radar monitoring area. The classify-while-scan (CWS) technology [36] employed in this context processes the raw data within a single radar resolution cell, leading to the identification of objects. The obtained target ID can serve various purposes, including recording, display on the radar screen, and assisting the tracking unit. By connecting the same ID across consecutive tracking data, a track-after-identify (TAI) process is achieved, facilitating seamless tracking of targets. Consequently, the CWS function processes radar data in each radar cell, providing outputs of targets along with the corresponding range cells in the radar beam. The continuous scanning of the radar beam captures the movements and trajectories of active targets, thus presenting a comprehensive situational awareness of the entire scenario on the radar display [36].
Fig. 10(c) illustrates a typical application of situational awareness, where birds chase ships to seize fish stirred up by the ships' propellers. The white dotted lines depict the tracking traces of objects, while the numbers beside the icons are the tracking numbers. Specifically, the birds with track No. 6264 are observed flying around the ship with track No. 6283, so that radar echoes from the birds interfere with the detection of the ship; when both appear in the same radar cell, the recognition process sometimes categorizes the data as either birds or ship. This situational awareness thus reveals an interesting behavior of sea birds. The detection response time (DRT) is approximately 10 ms, meaning that the lag between echo return, detection, and eventual display is only 10 ms. Consequently, situational awareness can be continuously updated in real time, enabling the radar system to function as a WYSIWYG (What You See Is What You Get) system or a real-time sense-and-alert system.
### **Lessons learned from real-world implementations**
In recent times, numerous C-UAS solutions and drone detection radar systems have emerged, each claiming to possess exceptional performance in detecting radar signals emitted by drones. Some of these systems have been procured by clients and deployed in critical facilities. However, despite these advancements, certain governments remain skeptical about the functionality and value of drone detection radar systems. Consequently, they have initiated projects aimed at validating the effectiveness of such systems. In this context, we employ semi-simulated implementations to shed light on the significance of an ATR function in both C-UAS solutions and drone detection radar systems.
### Civil lessons
Fig. 10: The example of a drone detection radar, (a) a static detection screenshot containing plenty of clutter, (b) a static detection display after using the ATR function [35], (c) a dynamic situational-awareness "video" enhanced by the ATR function [36].
On the evening of December 19, 2018, drones were reported flying near the runway at Gatwick Airport in London, UK. As a precaution, airport authorities suspended all flight operations, resulting in significant disruptions for over 140,000 passengers, and more than 1,000 flights were canceled, causing 36 hours of disruption. This incident highlighted the need for security measures and strategies to protect against potential drone threats at airports worldwide. In response, the UK government implemented several measures, such as no-drone airspace within a five-kilometer radius (Fig. 11) and the deployment of C-UAS solutions, including the Drone Dome system. However, a subsequent drone disruption occurred at Gatwick Airport in 2019, indicating that the Drone Dome system was not a foolproof solution and that other factors contributed to the incident. The drone operator may have found ways to evade the system's detection and jamming capabilities, or multiple operators were involved, making it difficult to locate and neutralize all threats.
### Military lessons
After Russia's invasion of Ukraine, the use of drones in combat has become increasingly prevalent and significant. Recent drone attacks by Ukrainian forces in the ongoing conflict have provided valuable insights and lessons. Drones equipped with advanced sensors, cameras, and precision targeting capabilities have proven to be highly effective in engaging and neutralizing heavily armored vehicles and infantry, which has leveled the playing field against Russian forces. Ukraine's use of drones exemplifies the principles of asymmetric warfare, by exploiting innovative tactics to challenge a stronger opponent while minimizing direct exposure to combat.
Civilian drones have transformed into multi-purpose assets, combining various combat capabilities. Micro-drones available in the market for less than $10,000 offer a wide range of possibilities, including obtaining information, preparing ambushes, designating targets for artillery, and tracking troop movements and fighter take-offs. Drones have provided Ukraine with a cost-effective means to challenge Russia's military superiority, disrupting their forces and undermining their morale. The domestically developed R18 UAV (Fig. 12) used by Ukraine in the ongoing war with Russia has already caused approximately $130 million in losses of various types of enemy materiel, which translates to around $670 in military assets destroyed for every dollar spent on producing the drone. The R18 drone can drop grenades from a height of 100-300 meters, effectively hovering over the target. It utilizes Soviet
Figure 11: Drones are banned within 5 km of all UK airports. (a) no-drone rules, (b) Gatwick Airport on the map.
Figure 12: Ukraine's forces' drones eliminated three Russian tanks (T-72A, T-80B, T-72B) and an infantry vehicle in the south. (a) the R18 UAV deployed in the ongoing war with Russia, (b) tanks tracked from the drone's perspective, (c) destroyed tanks from the scouting perspective.
cumulative anti-tank grenades RKG-3 or RKG-1600 as bombs. Equipped with a thermal imager, the R18's attacks are unpredictable and highly effective, as demonstrated in video images in which a multi-rotor drone tracks and attacks Russian tanks. Ukraine's adoption of drone technology highlights its adaptability and innovation in response to the evolving nature of warfare. By embracing emerging technologies, Ukraine has demonstrated its ability to exploit new avenues for strategic advantage and may inspire other nations to follow suit. Consequently, there is a high demand for C-UAS solutions on the modern battlefield.
### _Analysis using technology points_
Why did the drone detection radar system at Gatwick Airport perform poorly, and why are such systems also ineffective in the context of the Ukraine war? One of the primary reasons is the range problem. According to its specifications, the Drone Dome system can be operated by a single operator to detect, track, identify, and neutralize hostile drones. The system has a detection range of 3.5 km for a target with an RCS of 0.002 m\({}^{2}\), while the identification distance extends beyond 3 km using CCD/IR automatic video motion detection (VMD) and Automatic Target Recognition (ATR) technologies. This indicates that the Drone Dome system uses radar to detect potential drone signals and employs EO/IR sensors for identification.
Firstly, the recognition range of 3 km represents the effective alert range of the C-UAS solution. However, it is insufficient to cover a significant airspace area. As depicted in Fig. 11, the 3 km recognition range may be longer than the runway but considerably shorter than both the Runway Protection Zone (RPZ) and the outer circle in Fig. 11.
Secondly, there is a range latency between the short detection range and the recognition range. This latency results in a time delay when transitioning from the radar sensor to the EO/IR sensor, which increases the overall response time of the C-UAS solution. For instance, the Switchblade 600, a commonly used one-way attack (OWA) drone, has a final-attack speed of 185 km/h. Extending the detection and identification range to 6 km would provide an advance alert time of about 120 seconds, whereas the current 3 km range offers only about 60 seconds. In summary, a longer detection and identification range enables earlier alerts, highlighting the need for a significantly greater range than the current 3 km to meet the requirements of C-UAS applications.
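The alert-time figures above follow directly from range and closing speed; a minimal sketch, using only the speed and ranges quoted in the text:

```python
def alert_time_s(range_m, speed_kmh):
    """Advance warning before a drone closing at constant speed
    covers the detection-and-identification range."""
    return range_m / (speed_kmh / 3.6)  # km/h -> m/s

# Switchblade 600 at 185 km/h (~51 m/s):
t_3km = alert_time_s(3000, 185)  # ~58 s with the current 3 km range
t_6km = alert_time_s(6000, 185)  # ~117 s with a 6 km range
```

Doubling the range doubles the warning time, which is the whole argument for pushing recognition well past 3 km.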
The core reason behind the time problem lies in the response latency between the detected echo and the final identification of the target. The latency of the detection and recognition system, denoted as \(t_{\textit{CUAS}}\), can be calculated as follows:
\[t_{\textit{CUAS}}=DRL_{\text{radar}}+SRL_{e_{0}}+RRL_{e_{0}}+t_{\text{com}} \tag{1}\]
where \(DRL_{\text{radar}}\) is the Detection Response Latency (DRL) between echo return, detection, and eventual display for the radar sensor; \(SRL_{e_{0}}\) is the Search Response Latency (SRL) between reception and detection of the target for the EO/IR sensor; \(RRL_{e_{0}}\) is the Recognition Response Latency (RRL) between detection and eventual identification for the EO/IR sensor; and \(t_{\text{com}}\) is the communication delay between the radar and the EO/IR sensor. The basic angular resolution of a sensor can be determined by the equation:
\[\rho_{\textit{azt}}=\frac{1.22\lambda}{D} \tag{2}\]
where \(\rho_{\textit{azt}}\) is the angular resolution, \(\lambda\) the wavelength, and \(D\) the antenna (aperture) size. Because the optical wavelength of an EO/IR sensor is on the order of 1/100000th of a radar wavelength, the EO/IR beam is orders of magnitude narrower than the radar beam, so the EO/IR sensor must step through far more beam positions to cover the same search volume. Its searching efficiency is therefore much poorer than that of the radar sensor, resulting in potentially seconds of \(SRL_{e_{0}}\).
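To make the search-efficiency argument concrete, Eq. (2) can be evaluated for assumed wavelengths and apertures; the specific values below are illustrative choices, not taken from the text.

```python
def angular_resolution_rad(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, Eq. (2): rho = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

rho_radar = angular_resolution_rad(0.03, 1.0)  # 3 cm radar wavelength, 1 m antenna
rho_eo = angular_resolution_rad(1.0e-6, 0.1)   # 1 um optics, 10 cm aperture
# The EO/IR beam here is thousands of times narrower, so scanning the same
# sector requires correspondingly more beam positions -> seconds of SRL.
beam_ratio = rho_radar / rho_eo  # 3000.0
```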
To minimize the system latency, it is preferable to reduce either \(DRL_{\text{radar}}\) or \(RRL_{e_{0}}\), or ideally both. As illustrated in Table III, compared to a second-level DRT, a millisecond-level radar DRT saves more time for the EO/IR sensor and the overall C-UAS solution. This reduction in latency is particularly beneficial when detecting drones with high flying speeds, such as one-way attack (OWA) drones. For example, the popular OWA drone Switchblade 600 has a final-attack speed of 185 km/h, meaning it travels approximately 51 meters in 1 second. If the radar's \(DRL_{\text{radar}}\) is at the second level, the EO/IR sensor may miss the location by the time the radar sensor sends out the alert, whereas with a millisecond-level radar DRT, the EO/IR sensor can capture the location in time. Furthermore, by leveraging the powerful ATR function of the radar sensor, the identification task can be partially performed by the radar sensor instead of relying solely on the EO/IR sensor. Consequently, the entire \(t_{\textit{CUAS}}\) can be reduced to the millisecond level, resulting in a real-time WYSIWYG (What You See Is What You Get) system or a sense-and-alert system. It should be noted that eliminating the EO/IR system may not be possible in certain applications for policy reasons, which extend beyond the scope of this paper. However, at the very least, achieving "real-time" capabilities is essential for an effective C-UAS solution.
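Equation (1) and the Switchblade 600 numbers above can be combined to show why a millisecond-level radar DRL matters. All component latencies below are hypothetical budgets for illustration, not measured values.

```python
def t_cuas_s(drl_radar, srl_eo, rrl_eo, t_com):
    """Total response latency of Eq. (1): radar detection latency, EO/IR
    search latency, EO/IR recognition latency, and communication delay."""
    return drl_radar + srl_eo + rrl_eo + t_com

def distance_covered_m(t_s, speed_kmh=185.0):
    """Distance a drone at `speed_kmh` travels during the latency."""
    return t_s * speed_kmh / 3.6

# Second-level vs millisecond-level radar detection latency:
slow = t_cuas_s(1.0, 2.0, 0.5, 0.1)   # 3.6 s total
fast = t_cuas_s(0.01, 2.0, 0.5, 0.1)  # ~2.6 s total
d_slow = distance_covered_m(slow)     # 185 m covered at 185 km/h
d_fast = distance_covered_m(fast)     # ~134 m covered
```

Even with the EO/IR budgets unchanged, shaving the radar term from seconds to milliseconds removes roughly 50 m of uncertainty in where the EO/IR sensor must look.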
## V Future Challenges
### _New drone targets_
The development of drone detection radar and C-UAS systems must strive towards achieving a real-time sense-and-alert capability, adapting to the evolving landscape of drone technology. In particular, drone detection radar systems need to address the challenges posed by the proliferation of one-way attack (OWA) drones (or suicide drones), as shown in Fig. 13. These drones have gained prominence since Russia's 2022 invasion of Ukraine, with both countries deploying at least eight types of OWA drones. Furthermore, numerous other nations have recently acquired or are in the process of acquiring OWA drones, leading to changes in military organization, training, and defense strategies. OWA drones, also referred to as kamikaze or suicide drones, have revolutionized modern warfare by combining the precision of drone technology with the destructive capabilities of guided munitions. These compact and highly maneuverable unmanned aerial vehicles are difficult to detect and intercept. Equipped with advanced guidance systems, high-resolution cameras, and sensors, OWA drones autonomously navigate to designated targets, ensuring accurate strikes with exceptional precision.
The market for OWA drones has experienced significant growth, with the number of new models unveiled in 2021 and 2022 matching the total of the previous five decades combined. More than 120 entities from over 30 countries are involved in
the development and production of OWA drones, with the United States and Israel leading the way. However, other nations are increasingly developing their own models through acquisition programs that prioritize domestically produced products. Notably, the prevalence of vertical take-off and landing (VTOL) OWA drones, which constitute over one-fourth of all models, reflects a broader trend towards lightweight, hand-carried aircraft [37]. The unique characteristics of OWA drones, including their high-speed capabilities and VTOL flying ability, present additional challenges for drone detection radar systems. It is therefore crucial to review the current available detection LSS (low, slow, and small) strategy of C-UAS solutions in order to effectively address these evolving threats.
The future of drone detection radar systems will also involve addressing the challenges posed by drone swarms. A drone swarm refers to a coordinated group of autonomous drones that work together in a synchronized manner, utilizing advanced communication and control systems [38][39]. These swarms are designed to execute complex tasks and missions that would be difficult or impossible for a single drone to accomplish alone, as shown in Fig. 13b. A key advantage of drone swarms lies in their collaborative nature and ability to share information. Each drone within the swarm acts as a node, engaging in real-time communication with other drones. This allows for the exchange of data, sharing of situational awareness, and effective coordination of actions. By employing swarm intelligence algorithms, these drones can exhibit self-organizing behaviors, collectively making decisions and adapting to changing circumstances or mission objectives.
The concept of drone swarms has garnered significant attention in both military and civilian contexts. In military applications, swarms have the potential for use in reconnaissance, surveillance, target identification, and even offensive operations. Their ability to overwhelm and confuse enemy defenses provides a substantial tactical advantage. Swarm technology also enables collaborative attacks, where multiple drones can synchronize their strikes, thereby increasing mission effectiveness and precision. Within civilian domains, drone swarms have found applications in areas such as search and rescue operations, disaster management, agriculture, and entertainment. In search and rescue scenarios, swarms can efficiently cover large areas, aiding in the detection of missing individuals or survivors. In agriculture, drone swarms can be deployed for crop monitoring, mapping, and precision spraying, optimizing farming practices. Swarms are also utilized to create captivating aerial light shows for entertainment purposes, coordinating the movement and lighting of multiple drones. Drone detection radar systems must handle both dense and sparse drone swarms. Here, "swarm" only means that the drones operate together as a network; the spacing between drones can be small or large. A dense swarm may therefore occupy one or two neighboring radar resolution cells, whereas a sparse swarm occupies scattered, non-adjacent cells.
The presence of micro-Doppler signals generated by the rotating blades of drones currently plays a crucial role in the classification of radar signals. However, it raises concerns about the future scenarios where drones might lack blades altogether, resulting in the absence of micro-Doppler signals. Alternatively, the micro-Doppler signals from a drone could be weakened due to stealth blades or increased distance. In such cases, it becomes imperative to explore alternative radar signatures for the identification of radar echoes from drones. The development of effective ATR methods for drones may require consideration of additional radar signatures beyond micro-Doppler. These alternative signatures could provide valuable insights and enable accurate identification of drones even in scenarios where the micro-Doppler signals generated by blades are absent or significantly weakened. Therefore, it is crucial to investigate and leverage other radar signatures to enhance the ATR capabilities for drone detection and classification.
### _New radar technologies_
1. Cognitive radar system
Cognitive radar is a novel technology designed specifically to detect small drones. It employs machine learning techniques and receiver feedback to improve detection
Fig. 13: The drone challenges for drone detection radar systems, (a) an example of an OWA drone, (b) an example of sparse drone swarms.
performance, as illustrated in Fig. 14a. Coined by Simon Haykin [40], "cognitive radar" refers to the implementation of four fundamental cognitive features: the perception-action cycle, memory, attention, and intelligence [41][42][43]. The radar field has shown considerable interest in cognitive radar, leading to numerous studies exploring its capabilities. Markus Steck et al. have reported successful utilization of cognitive capabilities in the latest Hensoldt radars, including naval and ground radars as well as sense & avoid radar prototypes, by using adaptive processing to mitigate interference between nearby transmitted frequencies [44]. K. Barth et al. showed that employing a cognitive waveform approach yielded a 15% improvement in classification accuracy compared to a static approach using a single waveform [45]. Furthermore, A. Huizing et al. explored the potential of incorporating deep learning techniques like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to classify mini-drones using micro-Doppler spectrograms in the context of cognitive radar [30]. Our earlier research has indicated the feasibility of designing a "cognitive micro-Doppler radar" capable of detecting and tracking micro-Doppler signals from drones by adapting the radar dwell time [25]. In conclusion, cognitive micro-Doppler radar represents a promising approach to drone detection due to its advanced performance in radar detection, classification, and tracking, especially for small drones.
2. Millimeter wave radar
Millimeter wave radar, with its low size, weight, and power (SWaP), may be ideal for mounting on military vehicles to provide drone detection. It offers a range of advantages for detecting and tracking drones, both in terms of its technological capabilities and its cost-effectiveness [46][47]. Technologically, millimeter wave radar stands out for operating at high electromagnetic frequencies, providing exceptional resolution and accuracy in drone detection. It can effectively identify small and fast-moving targets, making it suitable for monitoring drones in various environments, including urban areas and crowded spaces. From a cost perspective, millimeter wave radar has become increasingly affordable and accessible compared to earlier versions, thanks to advancements in technology and increased market availability. This accessibility has enabled its integration into a wider range of security systems, making it an attractive choice for drone detection and tracking. The lesson of Ukrainian forces attacking Russian tanks and infantry with drones indicates that modern tanks and other military vehicles urgently require protection from drones. These micro-drones, generally costing no more than $10,000, can reach ranges of several kilometres, or even tens of kilometres for the most enduring, and can fly at very high altitudes while remaining discreet. Furthermore, thanks to their electric motors, these drones emit little heat and have a very low acoustic signature. They are therefore hardly perceptible, drastically reducing the probability of interception. Note that a small consumer-grade drone such as the DJI Phantom 4, costing about $1,000, can destroy a T-72 tank, a remarkably favorable exchange ratio. Since millimeter wave radars can be small and cost-effective, they can be mounted on modern tanks and other military vehicles.
Overall, the combination of advanced technological capabilities and cost-effectiveness makes millimeter wave radar a highly desirable option for effective and efficient drone detection and tracking.
## VI Conclusion
The drone detection radar system requires ATR (Automatic Target Recognition) capabilities more than traditional air surveillance radar does, because traditional radar operators rely on the trace and RCS to classify targets. Drones, however, are LSS (Low-Slow-Small) targets: they can appear as flickering ghosts and are difficult to detect, track, and classify against clutter, including birds. This poses a challenging task for human operators. Therefore, ATR for drone detection radar systems is necessary and urgent.
In the field of radar automatic target recognition (ATR), the detection and classification of small drones, particularly those belonging to Group 1 and 2, present a significant challenge. These drones fall under the recognition problem category, and the current ATR methods for small drones typically involve the analysis of kinetic features and signal
Fig. 14: New radar technologies for drone detection, (a) a block diagram of a cognitive radar seen as a dynamic closed-loop feedback system with the perception-action cycle [48][42], (b) a millimetre wave radar attached to a tank to counter drone threats.
signatures, such as micro-Doppler. Designing a drone detection radar system with an ATR capability requires careful adjustment of radar parameters to enhance ATR signatures, guided by the scattering region theory. Creating an effective drone detection radar system necessitates a comprehensive ATR approach that addresses all relevant factors and balances the conflicting requirements of detection and tracking. This comprehensive approach maximizes the radar system's effectiveness and efficiency in accurately and reliably detecting drones. As an illustrative example, we presented a drone detection radar system that demonstrates the performance improvements achieved through the integration of an ATR unit. The advantages the ATR unit brings to the drone detection radar include: (1) enhanced beyond-range detection, by providing feedback to the detection unit; (2) an improved recognition tier, by recognizing sub-classes of targets; and (3) enhanced situational awareness, by forwarding information to the tracking unit.
The increasing challenges posed by one-way attack (OWA) drones highlight the need for effective drone detection radar systems. Cognitive radar is an emerging solution to tackle these problems. With cognitive capabilities, radar systems can adapt to changing sensing tasks, optimize target detection performance, and improve situational awareness. By incorporating ATR capabilities, a 3D radar system can be elevated to a 4D radar system, providing 3D location and 1D attribution, thereby enhancing the drone detection performance. These advancements have the potential to improve radar performance across military, civilian, and commercial applications. Staying ahead in research and development enables radar systems to effectively detect, track, and respond to emerging threats, resulting in valuable insights in diverse fields.
# Heating and cooling in self-consistent many-body simulations

Yang Yu, Sergei Iskakov, Emanuel Gull

arXiv:2305.01452v2 (2023-05-02), http://arxiv.org/abs/2305.01452v2
###### Abstract
We present a temperature extrapolation technique for self-consistent many-body methods, which provides a causal starting point for converging to a solution at a target temperature. The technique employs the Caratheodory formalism for interpolating causal matrix-valued functions and is applicable to various many-body methods, including dynamical mean field theory, its cluster extensions, and self-consistent perturbative methods such as the self-consistent GW approximation. We show results that demonstrate that this technique can efficiently simulate heating and cooling hysteresis at a first-order phase transition, as well as accelerate convergence.
## I Introduction
When conducting experiments in condensed matter physics, it is common practice to investigate the temperature dependence of observables while keeping other parameters constant. A particular example are specific heat or transport measurements, which are often used as a preliminary probe to identify intriguing temperature-dependent behavior. For example, in a first-order coexistence regime, heating and cooling curves may reveal history-dependent hysteresis.
In theoretical calculations using self-consistent finite-temperature field theories, changing the temperature of a system is generally not practical as it causes a shift of Matsubara frequencies [1; 2]. Extrapolating the lowest Matsubara frequency during cooling can be problematic, resulting in non-causal solutions. Therefore, implementing heating and cooling protocols in self-consistent finite-temperature field theories, in analogy to heating and cooling measurements in experiment, is an unresolved issue.
Self-consistent finite-temperature simulation methods include non-perturbative techniques, such as the dynamical mean field theory (DMFT) [3] and its cluster variants [4; 5; 6; 7]; self-consistent perturbative methods, such as GW [8; 9; 10; 11; 12; 13], second-order perturbation theory [14; 15; 16; 17], fluctuation-exchange or T-matrix approximations [18; 19], as well as bold-line diagrammatic Monte Carlo methods [20; 21; 22; 23]; and combinations of embedding theories with perturbation theory [24; 25; 26].
In these methods, solutions are obtained through an iterative process that involves starting with an initial guess and continuing the process until self-consistency is achieved. The number of iterations required to reach convergence is directly related to how close the starting point is to the iteration fixed point. A 'good' starting point can significantly reduce the computational effort required for the simulation, whereas a 'bad' starting point may result in iterations that diverge, iterate in limit cycles, or even converge to unphysical fixed points [27]. In parameter regimes where first-order coexistence occurs, multiple physical fixed points may exist [28; 29; 30; 31; 32].
This paper addresses the challenge of generating better starting points and implementing heating and cooling protocols in self-consistent many-body methods. It presents a solution that guarantees a causal starting point by using a converged solution at a different temperature. The proposed method relies on the Caratheodory formalism [33] for interpolating causal matrix-valued functions. Originally developed for analytic continuation of matrix-valued Matsubara functions to real frequencies, this formalism can be extended to temperature extrapolation, which involves evaluating an interpolant at different Matsubara frequencies. We demonstrate the effectiveness of this approach in obtaining improved starting points in the context of DMFT and real-materials perturbation theory. Additionally, we examine heating and cooling hysteresis in the context of a first-order phase transition.
The paper proceeds as follows. In Sec. II we introduce the formalism. Sec. III contains a brief description of a pedagogical implementation of the method which we provide as a supplement. Sec. IV contains results for temperature extrapolation, convergence acceleration, and first-order hysteresis. Finally, Sec. V contains our conclusions.
## II Method
The central object of this paper is a causal matrix-valued fermionic Matsubara function which may represent a Green's function, a self-energy, or a cumulant [34]. The Matsubara function is expressed as a three-dimensional tensor, \(G_{ij}(\mathrm{i}\omega_{n})\), which associates a matrix (identified by the indices \(i\) and \(j\)) with every Matsubara frequency \(\omega_{n}=(2n+1)\pi/\beta\) (\(n\) denotes an integer, \(\beta\) the inverse temperature). Its continuation to the complex plane [33] is denoted as \(G_{ij}(\tilde{z})\). \(\mathrm{i}G(\tilde{z})\) is a Caratheodory function (up to a convention-dependent minus sign), such that \([\mathrm{i}G(\tilde{z})+(\mathrm{i}G(\tilde{z}))^{\dagger}]/2\) is a positive semi-definite matrix for any \(\tilde{z}\) in the upper half of the complex plane.
The matrix structure of \(G\) depends on the application. Matrices may be scalar, diagonal, block-diagonal, or fully dense. The scalar case typically appears in single-site single-orbital cases, such as single-site DMFT. The diagonal case is often encountered in momentum-space simulations, such as cluster DMFT [4; 5; 6; 7]. The general multi-orbital case with dense matrices typically appears in real-materials ab-initio calculations [25; 26].
The frequency-dependence of \(G\) is known at discrete Matsubara frequencies \(\omega_{n}\). In the simplest case, the frequencies are uniformly spaced positive fermionic Matsubara frequencies \(\omega_{n}=(2n+1)\pi/\beta\), \(n=0,1,\cdots N-1\). More generally, data is provided on a non-uniform frequency grid using Chebyshev [35], intermediate representation [36], Legendre [37], spline [38], or discrete Lehmann representation [39] schemes with associated Fourier transforms [40; 41]. Bosonic Matsubara response functions \(\Pi(\tilde{z})\) are not directly related to Caratheodory functions. However, \(\Pi(\tilde{z})\tilde{z}\) and \(\Pi(\tilde{z})/\tilde{z}^{*}\) correspond to Caratheodory functions [42], and Nogaki and Shinaoka [43] recently investigated a related mapping of bosonic functions.
Extrapolating in temperature requires the transformation of one Matsubara grid to another one corresponding to a different temperature. We propose to construct a causal matrix-valued interpolant through \(G(\mathrm{i}\omega_{n})\) and to evaluate it at the Matsubara points of the new temperature grid, thereby realizing the temperature extrapolation. The method's algorithmic steps follow those presented for analytic continuation in Ref. [33], with the only difference being that the interpolant is evaluated on the imaginary axis, rather than just above the real axis. The major steps of the algorithm are described below.
For a Caratheodory function \(\tilde{F}(\tilde{z})=\mathrm{i}G(\tilde{z})\), \(\tilde{z}\) in the upper half of the complex plane, we assume the values of \(\tilde{F}\) are known at a set of points \(\{\tilde{z}_{n}|n=0,1,\cdots,N-1\}\). A Mobius transform
\[z=\frac{\tilde{z}-\mathrm{i}}{\tilde{z}+\mathrm{i}} \tag{1}\]
maps the upper half of the complex plane to the unit disk, the real axis to the unit circle, and the Matsubara points to points on the real axis inside the disk. Since a Caratheodory function \(F\) is defined on an open subset of the complex plane with the property that \((F+F^{\dagger})/2\) is positive semi-definite, \(C(z)=\tilde{F}(\tilde{z})\) then defines a Caratheodory function on the unit disk \(|z|<1\). The function \(C(z)\) is known at the set of points \(\{z_{n}|n=0,1,\cdots,N-1\}\) with the values of
\[C_{n}\equiv C(z_{n})=\tilde{F}(\tilde{z}_{n}). \tag{2}\]
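As a small numpy illustration (a sketch, not the supplemental code; the grid size and \(\beta\) are arbitrary choices), the transform (1) indeed sends the positive Matsubara points to real points strictly inside the unit disk:

```python
import numpy as np

def mobius(ztilde):
    """Eq. (1): map the upper half of the complex plane to the unit disk."""
    return (ztilde - 1j) / (ztilde + 1j)

beta = 10.0                              # inverse temperature (arbitrary)
n = np.arange(8)
w = 1j * (2 * n + 1) * np.pi / beta      # Matsubara points i*omega_n

z = mobius(w)
# the Matsubara points land on the real axis, strictly inside the unit disk
assert np.allclose(z.imag, 0.0)
assert np.all(np.abs(z) < 1.0)
```

A Matsubara grid at a different inverse temperature \(\beta'\) maps into the same disk, which is what makes evaluation of the interpolant at the new grid possible.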
The Cayley transform [44] and its inverse, defined as
\[S(z) =\left[I-C\left(z\right)\right]\left[I+C\left(z\right)\right]^{- 1}, \tag{3}\] \[C(z) =\left[I+S\left(z\right)\right]^{-1}\left[I-S\left(z\right) \right], \tag{4}\]
map the Caratheodory function \(C(z)\) to a matrix-valued Schur function \(S(z)\)[45] and vice versa. A Schur function is defined as a function \(S(z)\) on the unit disk with the property \(||S(z)||\leq 1\), where the matrix norm \(||\cdot||\) is defined as the largest singular value of \(S\), or equivalently the largest eigenvalue of \((SS^{\dagger})^{1/2}\).
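Both transforms are one-liners in numpy (a minimal sketch; the example matrix \(C\) is an arbitrary choice whose Hermitian part is positive definite):

```python
import numpy as np

def cayley(C):
    """Eq. (3): Schur-function value from a Caratheodory value."""
    I = np.eye(C.shape[0])
    return (I - C) @ np.linalg.inv(I + C)

def cayley_inv(S):
    """Eq. (4): Caratheodory value from a Schur value."""
    I = np.eye(S.shape[0])
    return np.linalg.inv(I + S) @ (I - S)

C = np.array([[1.0 + 0.3j, 0.2j],
              [0.1 + 0.0j, 2.0 - 0.5j]])   # (C + C^dagger)/2 is positive definite
S = cayley(C)

# the Cayley image has operator norm (largest singular value) below one ...
assert np.linalg.norm(S, 2) < 1.0
# ... and the inverse transform recovers C
assert np.allclose(cayley_inv(S), C)
```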
The problem of interpolating the Caratheodory function \(C(z)\) is thus converted into the problem of determining an interpolant for the Schur function \(S(z)\), where \(S(z)\) is known at \(N\) discrete points, given as \(S_{n}\equiv S(z_{n})\), with \(n=0,1,\cdots,N-1\).
To identify the interpolant of the Schur function \(S(z)\), we proceed to find a set of Schur functions \(S^{0}(z),S^{1}(z),\cdots,S^{m}(z),\cdots,S^{N}(z)\), where each Schur function \(S^{m}(z)\) is known at \(N-m\) points \(z=z_{m},z_{m+1},\cdots,z_{N-1}\) with the values of \(S_{n}^{m}\equiv S^{m}(z_{n})\). \(S^{N}(z)\) is an arbitrary Schur function without any constraints. We require \(S_{n}^{0}=S_{n}\) such that \(S^{0}(z)\) serves as the desired interpolant for the Schur function \(S(z)\).
To establish a connection between the two Schur functions \(S^{m}(z)\) and \(S^{m+1}(z)\), a function \(B^{m}(z)\) is introduced, enabling a progressive determination of \(S^{0}(z)\) starting from \(S^{N}(z)\)[33]. The relationship between \(B^{m}(z)\) and \(S^{m}(z)\) is given by:
\[\begin{split}\frac{|z_{m}|(z_{m}-z)}{z_{m}(1-z_{m}^{*}z)}B^{m}(z )&=L^{m}[S^{m}(z)-S_{m}^{m}]\\ &\times[I-S_{m}^{m\dagger}S^{m}(z)]^{-1}R^{m}\end{split} \tag{5}\]
Figure 1: Schematic representation of the interpolation procedure for a matrix-valued finite-temperature Matsubara function \(G(\mathrm{i}\omega_{n})\). See text for detailed description.
with
\[L^{m} =[I-S_{m}^{m}{S_{m}^{m}}^{\dagger}]^{-\frac{1}{2}}, \tag{6}\] \[R^{m} =[I-{S_{m}^{m}}^{\dagger}S_{m}^{m}]^{\frac{1}{2}}. \tag{7}\]
An equivalent form of this relation is
\[S^{m}(z)=[I+V^{m}(z){S_{m}^{m}}^{\dagger}]^{-1}[V^{m}(z)+S_{m}^{m}] \tag{8}\]
with
\[V^{m}(z)=\frac{|z_{m}|(z_{m}-z)}{z_{m}(1-z_{m}^{*}z)}(L^{m})^{-1}B^{m}(z)(R^{m} )^{-1}. \tag{9}\]
The relation between \(B^{m}(z)\) and \(S^{m+1}(z)\) reads
\[\begin{split} S^{m+1}(z)&=[I-K^{m}K^{m\dagger}]^{- \frac{1}{2}}[B^{m}(z)-K^{m}]\\ &\cdot[I-{K^{m}}^{\dagger}B^{m}(z)]^{-1}[I-{K^{m}}^{\dagger}K^{m }]^{\frac{1}{2}},\end{split} \tag{10}\]
where \(K^{m}\) is an arbitrary matrix with \(||K^{m}||<1\).
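Since \(I-S_{m}^{m}{S_{m}^{m}}^{\dagger}\) and \(I-{S_{m}^{m}}^{\dagger}S_{m}^{m}\) are Hermitian and positive definite for \(\|S_{m}^{m}\|<1\), the matrix powers in Eqs. (6) and (7) can be evaluated by diagonalization. A minimal numpy sketch (the example value of \(S_{m}^{m}\) is arbitrary):

```python
import numpy as np

def herm_power(M, p):
    """M**p for a Hermitian positive definite matrix, via eigendecomposition."""
    evals, evecs = np.linalg.eigh(M)
    return (evecs * evals**p) @ evecs.conj().T

def LR(Smm):
    """L^m and R^m of Eqs. (6) and (7) for one Schur parameter S_m^m."""
    I = np.eye(Smm.shape[0])
    L = herm_power(I - Smm @ Smm.conj().T, -0.5)
    R = herm_power(I - Smm.conj().T @ Smm, +0.5)
    return L, R

Smm = np.array([[0.3, 0.1j],
                [0.0, -0.4 + 0.2j]])        # any matrix with ||S_m^m|| < 1
L, R = LR(Smm)

I = np.eye(2)
assert np.allclose(L @ (I - Smm @ Smm.conj().T) @ L, I)   # L = [...]^(-1/2)
assert np.allclose(R @ R, I - Smm.conj().T @ Smm)         # R = [...]^(+1/2)
```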
The freedom in choosing arbitrary \(K^{0}\), \(K^{1}\), \(\cdots\), \(K^{N-1}\), along with \(S^{N}(z)\), allows for a full coverage of all possible interpolants for \(S(z)\). The seemingly undetermined quantities in the aforementioned equations are the values of \(\{\,S_{m}^{m}\mid m=0,1,\cdots,N-1\,\}\). Starting from the given \(\{\,S_{n}^{0}\mid n=0,1,\cdots,N-1\,\}\), and since \(S_{n}^{m+1}\) is entirely determined by \(S_{n}^{m}\) and \(S_{m}^{m}\) by proceeding from Eq. (5) to Eq. (10), all \(S_{m}^{m}\) can be acquired in the process of calculating \(\{\,S_{n}^{m}\mid m=0,1,\cdots,N-1;\,n=m,m+1,\cdots,N-1\,\}\) from \(m=0\) to \(m=N-1\). After obtaining all \(S_{m}^{m}\) and the corresponding \(L^{m}\) and \(R^{m}\), one can reverse the procedure, transitioning from Eq. (10) to Eq. (8), to compute \(S^{m}(z)\) from \(S^{m+1}(z)\) with a chosen \(S^{N}(z)\) as the starting point, until the desired interpolant \(S^{0}(z)\) is attained. Finally, an interpolant for \(C(z)\) can be derived using Eq. (4), and by multiplying by \(-\mathrm{i}\) and substituting \(z\) with \(\tilde{z}\) through the Mobius transform, the interpolant for \(G(\tilde{z})\) is obtained.
A sketch of the algorithm is shown in Fig. 1 (starting top left, following the arrows, terminating bottom left). In practice, we take \(\{\tilde{z}_{n}\}=\{\mathrm{i}\omega_{n}|n=0,1,\cdots,N-1\}\) along with the corresponding \(G(\mathrm{i}\omega_{n})\) as the input, where \(\omega_{n}=(2n+1)\pi/\beta\). Setting \(\tilde{F}=\mathrm{i}G\), and with Eqs. (1), (2), and (3), we obtain the constraints \(\{(z_{n},S_{n})\}\) for the interpolation problem of a Schur function \(S(z)\). By taking \(K^{m}=0\) for all \(m\), from Eq. (10), we obtain \(S^{m+1}(z)=B^{m}(z)\) which leads to
\[S_{n}^{m+1}=\frac{z_{m}(1-z_{m}^{*}z_{n})}{|z_{m}|(z_{m}-z_{n})}L^{m}[S_{n}^{m }-S_{m}^{m}][I-{S_{m}^{m}}^{\dagger}S_{n}^{m}]^{-1}R^{m} \tag{11}\]
according to Eq. (5). This equation offers a concrete way to calculate all \(S_{n}^{m}\), starting from \(S_{n}^{0}=S_{n}\). Subsequently, we can obtain \(S_{m}^{m}\) and the corresponding \(L^{m}\) and \(R^{m}\). Assume the desired new Matsubara points for a different temperature \(\beta^{\prime}\) are \(\mathrm{i}\omega_{n}^{\prime}=\mathrm{i}(2n+1)\pi/\beta^{\prime}\). Using the Mobius transform, the corresponding points in the domain of \(S(z)\) are obtained as \(z_{n}^{\prime}\). By setting \(S^{N}(z)=I\), and using the obtained \(\{S_{m}^{m},L^{m},R^{m}\}\), the values of \(S_{n^{\prime}}\equiv S^{0}(z_{n}^{\prime})\) can be determined by proceeding from Eq. (10) to Eq. (8). With the inverse Cayley transform in Eq. (4) and \(G=-\mathrm{i}\tilde{F}\), the desired values of the causal interpolant \(G(\mathrm{i}\omega_{n}^{\prime})\) at \(\mathrm{i}\omega_{n}^{\prime}\) are obtained as output.
## III Implementation
We provide a straightforward pedagogical implementation of the algorithm as a supplement to this paper [46]. The code is written in the Python programming language, using no dependencies other than numpy. Unlike related real-frequency analytic continuation codes [33; 47], we found that an implementation in standard double precision was sufficient to perform all calculations, and that the temperature extrapolation of noisy Monte Carlo data presented no difficulties.
The implementation requires a Green's function (or, equivalently, self-energy or cumulant) \(G[n,i,j]=G_{ij}(\mathrm{i}\omega_{n})\) as a three-dimensional tensor and \(w[n]=\mathrm{i}\omega_{n}\) as a one-dimensional vector. After reading the data, all preprocessing steps indicated by the red arrows in Fig. 1 are performed, and \(\{S_{m}^{m},L^{m},R^{m}\}\) are obtained and stored.
The code also requires a set of frequencies \(wp[n]=\mathrm{i}\omega_{n}^{\prime}\), provided as a one-dimensional vector. These frequencies should correspond to the Matsubara points at the new temperature \(\beta^{\prime}\). Using this data, the code evaluates the interpolated Green's function at the new Matsubara points, according to the blue arrows in Fig. 1 and returns \(G[n^{\prime},i,j]=G_{ij}(\mathrm{i}\omega_{n}^{\prime})\) as a three-dimensional tensor.
The supplemental materials contain detailed instructions and usage examples for the code.
## IV Results
We showcase results for the temperature extrapolation technique applied to a range of typical self-consistent finite-temperature methods. Sec. IV.1 investigates the accuracy and convergence properties of the extrapolation technique. Sec. IV.2 examines the convergence acceleration resulting from an enhanced starting point provided by the extrapolation technique. This study is conducted specifically within the context of single-site DMFT and cluster DMFT calculations for the Hubbard model, as well as the self-consistent GW approximation for nickel oxide (NiO) real material calculations. Sec. IV.3 demonstrates the occurrence of hysteresis during heating and cooling processes at a first-order phase transition, utilizing the extrapolation technique.
### Temperature extrapolation
In this section, we provide an illustration of temperature extrapolation in a cluster DMFT [7] calculation, as depicted in Figure 2. Specifically, we present the self-energy at the antinodal point, \((\pi,0)\), for a simulation of
the two-dimensional Hubbard model [48]. The simulation was performed on a 16-site (\(4\times 4\)) cluster, and the continuous-time auxiliary field impurity solver [49; 50] was employed. The system was solved at half-filling, in the paramagnetic state, with no next-nearest neighbor hopping, and at an interaction strength of \(U=6t\), which corresponds to a Mott insulating regime. The self-energy in this regime exhibits a strong temperature dependence, and various phases such as metallic, insulating, superconducting, and pseudo-gapped phases are in close proximity [51; 31; 52]. For a comprehensive discussion of the physics of this self-energy, we refer the reader to the extensive literature on this system, including the reviews [7; 48] and references therein.
Fig. 2 presents the fully converged, self-consistent imaginary part of the self-energy, \(\Sigma(\mathrm{i}\omega_{n})\), at \(\beta t=18\) (red crosses) and at \(\beta t=20\) (black dots). Red circles illustrate the extrapolation of the \(\beta t=18\) self-energy to \(\beta t=20\) using the Caratheodory formalism. The extrapolated data aligns with the converged points for all frequency points, except for the lowest one. The discrepancy between converged and extrapolated data at \(\beta t=20\) serves as an indicator of additional correlations emerging in the system as it cools down.
Fig. 3 demonstrates the deviation of the extrapolated self-energy at the three lowest Matsubara frequencies at \(\beta t=20\). The extrapolation is conducted using converged data at \(\beta t=10,11,...,19\). Notably, while the lowest frequency of the self-energy exhibits significant deviations when extrapolated by a factor of two from \(\beta t=10\) to \(\beta t=20\), these deviations rapidly diminish if the extrapolation interval (the difference between the initial temperature and targeted temperature) is reduced.
Figs. 2 and 3 demonstrate that the Caratheodory structure of many-body functions can be effectively utilized to extrapolate data in temperature.
### Starting point and Convergence
#### IV.2.1 Convergence of the extrapolated starting point away from phase transitions
Figure 3: Difference \(|\Delta\mathrm{Im}\Sigma(\mathrm{i}\omega_{n})/\mathrm{Im}\Sigma(\mathrm{i}\omega_{n})|\) between the converged self-energy at \(\beta t=20\) and the extrapolation from the converged self-energy at higher temperature, shown as a function of the extrapolation inverse temperature for the three lowest Matsubara frequencies \(\omega_{n}\), with \(n=0,1,2\) (see text for calculation setup).
Figure 2: Imaginary part of a cluster DMFT [7] self-energy, \(\mathrm{Im}\,\Sigma(\mathrm{i}\omega_{n})\), as a function of \(\omega_{n}\) for the 2D Hubbard model at the antinodal point \((\pi,0)\) with \(U=6t\) (see text for calculation setup). Converged data at \(\beta t=18\) (red crosses) and \(\beta t=20\) (black dots) along with temperature extrapolation (red circles) from \(\beta t=18\) to \(\beta t=20\). The labels of the extrapolation results indicate the initial inverse temperature used for extrapolation.
Figure 4: Cluster DMFT convergence of the imaginary part of the local Green’s function \(G_{\mathrm{loc}}\) as a function of iteration for the three lowest Matsubara frequencies at \(\beta t=20\) (see text for calculation setup). Convergence to the fixed point is shown for the non-interacting starting point (filled symbols) and the starting point extrapolated from the converged solution at \(\beta t=18\) (open symbols). The labels of the extrapolation results indicate the initial inverse temperature used for extrapolation.
A temperature extrapolation of this type can be used to improve the starting point for the convergence of self-consistent many-body methods, such as DMFT [53; 28; 54] and its cluster extensions [4; 5; 6; 7]. The DMFT equations are solved using a fixed-point iteration scheme, in which an initial estimate of the self-energy (commonly chosen as \(\Sigma(\mathrm{i}\omega_{n})\equiv 0\)) serves as an initial guess for the iteration. The equations typically converge rapidly if the initial starting point of the iteration is near the stationary point. Since the computational effort is proportional to the number of iterations required, a starting point closer to the stationary point reduces the calculation time.
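The role of the starting point can already be seen in a generic fixed-point iteration. The sketch below is a toy illustration only (the map \(g=\cos\) and the tolerance are arbitrary stand-ins, not the DMFT equations): both starting points reach the same fixed point, but the warm start needs fewer iterations.

```python
import numpy as np

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Plain fixed-point iteration x <- g(x); returns (solution, iterations used)."""
    x = x0
    for it in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, it
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

g = np.cos                                # toy self-consistency map, fixed point ~0.739
x_naive, n_naive = fixed_point(g, 0.0)    # analogue of a 'non-interacting' guess
x_warm,  n_warm  = fixed_point(g, 0.73)   # analogue of an extrapolated guess

assert abs(x_naive - x_warm) < 1e-8       # the same fixed point is reached ...
assert n_warm < n_naive                   # ... but in fewer iterations
```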
Fig. 4 displays the imaginary part of the lowest three frequencies of the local Green's function, \(G_{\rm loc}({\rm i}\omega_{n})\), for a typical cluster DMFT calculation away from phase transitions, using non-interacting starting points (filled symbols) and starting points derived from temperature extrapolation (open symbols), as a function of iteration. The calculation setup is identical to the one in Sec. IV.1 (a 16-site cluster in the paramagnetic state with \(\beta t=20\), \(U=6t\), and at half filling). A naive non-interacting starting point converges in approximately seven iterations. Moreover, a few additional iterations are necessary to confirm that convergence has been achieved. In contrast, a starting point generated by extrapolating from higher to lower temperatures almost immediately yields a converged result.
#### IV.2.2 DMFT convergence near a phase transition
Self-consistent many-body methods are notoriously slow to converge in the vicinity of phase transitions. This can be illustrated with the example of a single-site DMFT calculation on an infinite coordination number Bethe lattice with bandwidth \(W=4t\). The single-site DMFT calculation exhibits a first-order phase transition between a paramagnetic metal at weak interaction and low temperature and a paramagnetic Mott insulator at large interaction and higher temperature [54; 55; 56; 57; 58; 59; 60], as depicted in the inset of Fig. 5. At an interaction strength \(U=4.7t\), the system is insulating at high temperature, metallic at low temperature, and both metallic and insulating solutions coexist in an intermediate temperature regime. We show the convergence of the imaginary-time Green's function in the middle of the imaginary time interval, \(-G(\beta/2)\), in Fig. 5, which is related to the spectral density at the Fermi surface in the low-temperature limit as \(A(\omega=0)\sim-\beta G(\beta/2)\), at temperatures close to the two boundaries of the coexistence region.
At a temperature of \(\beta t=20.31\), the system exhibits Mott insulating behavior. The upper panel of Fig. 5 shows that hundreds of iterations are required to achieve convergence when initiating from the non-interacting limit (black dots). Similarly, starting with an extrapolation from a lower temperature metallic solution in the coexistence region (blue circles) results in slow convergence toward the insulating fixed point. Conversely, the extrapolation from a higher temperature insulating phase converges rapidly (red circles). A starting point derived from the atomic limit also leads to relatively fast convergence (gray dots).
Conversely, at a temperature of \(\beta t=21.56\) (Fig. 5, lower panel), the system is in a metallic state. Iterations from an 'insulating' starting point require hundreds of steps to converge (red circles for the starting point extrapolated from a higher temperature insulating solution, and gray dots for the atomic limit starting point). Convergence from a metallic starting point, such as the non-interacting limit solution (black dots), is substantially faster, and a starting point extrapolated from a converged metallic solution at lower temperature leads to even faster convergence (blue circles).
Figure 5: DMFT convergence of \(-G(\beta/2)\) on a half-filled infinite-coordination Bethe lattice with \(U=4.7t\) and bandwidth \(W=4t\) as a function of iteration. The top and lower panels display the insulating (\(\beta t=20.31\)) and metallic (\(\beta t=21.56\)) phases, respectively. Atomic starting point: gray dots. Non-interacting starting point: black dots. Extrapolated starting points: red circles and blue circles. Inset: phase diagram adapted from Ref. [55]. The labels of the extrapolation results indicate the initial inverse temperature used for extrapolation.
Fig. 5 therefore shows that even though all starting points, including atomic, non-interacting, and extrapolated ones, converge to the identical solution, extrapolation efficiently accelerates convergence, provided that the extrapolated solution is in the same phase as the solution at the target temperature. For extrapolation originating from a different phase (at the boundary of phase transitions, solutions with distinct properties become possible for two neighboring temperatures), the extrapolated starting points still converge, but no faster than simplistic starting points corresponding to the 'wrong' phase, specifically, the non-interacting solution or the atomic solution.
#### IV.2.3 Cluster DMFT convergence near a phase transition
At second order phase transitions, critical slowing down, rather than coexistence and hysteresis, is expected. This phenomenon leads to a slow convergence of the fixed-point iteration. We illustrate this with the example of a half-filled three-dimensional Hubbard model treated within cluster DMFT [62; 63; 64], calculated on a 34-site cluster with a doubled unit cell. This model exhibits a phase transition between a paramagnetic (PM) state at high temperature and an antiferromagnetic (AFM) insulator at low temperature, as depicted in the inset of Fig. 6. The model has been studied extensively within the context of ultracold atomic gases. In Fig. 6, we examine two points, \(\beta t=3.3\) and \(\beta t=3.5\), in the ordered phase at \(U=6t\), near the AFM phase transition (the critical point is at about \(\beta t=2.7\)). A starting point from temperature extrapolation cannot completely overcome the slowing down (since the self-energy itself exhibits strong temperature dependence) but does lead to an acceleration of the convergence by at least a factor of two, as illustrated by the convergence of the squared magnetic moment shown in Fig. 6.
#### IV.2.4 GW convergence of realistic many-body simulations
Finally, we turn to realistic simulations within a weak-coupling framework. We show examples for periodic solid NiO treated within the so-called GW approximation [8; 9; 10; 11; 12; 13]. The GW approximation takes into account screening processes via a renormalized, frequency-dependent interaction, but neglects the second-order exchange diagram. It is therefore mostly used for weakly correlated systems such as semiconductors. In the calculation for NiO, a \(4\times 4\times 4\) cluster with a doubled unit cell along the \([1,1,1]\) direction and a fixed lattice constant \(a=4.1705\) Å [65] is utilized. The _gth-dzvp-molopt-sr_ basis [66] along with the _gth-pbe_ pseudopotential [3] is employed. For the density fitting of Coulomb integrals, the _def2-svp-ri_ basis set is chosen as the auxiliary basis [67]. Finite-size errors in the GW exchange diagram are corrected using the Ewald probe-charge approach [68; 69]. The Coulomb integrals and non-interacting matrix elements are obtained from PYSCF [70]. In order to decrease the number of frequencies utilized in the computation, the intermediate representation grid [36] is employed. A comprehensive description of the methods and implementation, in conjunction with the computational setup for evaluating NiO, is extensively detailed in Refs. [71; 12].
When applied to the antiferromagnetic material NiO, the GW method shows a continuous transition to an ordered state with a non-zero staggered magnetization at low temperature. Within the GW framework, the transition temperature is situated near \(\beta=25\) Ha\({}^{-1}\). As illustrated in Fig. 7, the convergence to this ordered state, initiated from an unrestricted Hartree-Fock solution (black dots), is relatively slow. This is evident in the convergence of both the squared magnetic moment for Ni and the total energy. However, when starting from extrapolated starting points (blue or red circles), convergence occurs at a significantly faster pace, with the extrapolated starting point already comparable to iteration 30 of the Hartree-Fock-initialized convergence.
Figure 6: Cluster DMFT convergence of the squared magnetic moment on the half-filled 3D Hubbard model (using a 34-site cluster with a doubled unit cell) at \(U=6t\) and \(\beta t=3.3\) (upper panel) and 3.5 (lower panel) as a function of iteration. Dots: non-interacting starting point. Circles: temperature extrapolation. Inset: phase diagram adapted from Ref. [61]. \(M^{2}\) at the first and last iteration indicated by numbers. \(\overline{M^{2}}\) corresponds to the values at the last iteration of red or blue circles, which serves as a reference for converged values. The labels of the extrapolation results indicate the initial inverse temperature used for extrapolation.
At considerably lower temperatures, deep within the AFM phase, the extrapolation of the self-energy results in a starting point that is essentially exact because the self-energy is only weakly temperature-dependent, similar to the 2D Hubbard model case shown at the beginning of Sec. IV.2. Fig. 8 displays the temperature extrapolation from \(\beta=100\) Ha\({}^{-1}\) to \(\beta=200\) Ha\({}^{-1}\), and vice versa, in comparison with the Hartree-Fock starting points.
### Heating and Cooling
The possibility of obtaining a starting point for the fixed-point iteration by extrapolating from a nearby temperature point makes it possible to smoothly change the temperature in subsequent simulations, in analogy to 'heating' and 'cooling' measurements in experiments. This capability is especially important at first-order phase transitions, where multiple stable fixed points, corresponding to the different phases, exist.
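Such a sweep can be sketched with a toy bistable self-consistency problem. Everything below is an arbitrary stand-in, not the DMFT equations: a tilted double well whose two stable roots play the role of the two phases, with the tilt parameter `a` standing in for the temperature axis. The point is only the warm-start logic, where each step of the sweep starts from the previous converged solution and therefore stays on one branch until that branch disappears.

```python
import numpy as np

def solve(a, x0, eta=0.2, n_iter=2000):
    """Relax to a stable root of x**3 - x + a = 0, starting from the guess x0."""
    x = x0
    for _ in range(n_iter):
        x -= eta * (x**3 - x + a)
    return x

a_grid = np.arange(-0.6, 0.6001, 0.05)     # stand-in for the temperature axis

# forward sweep: warm-start every point from the previous converged solution
up, x = [], 1.0
for a in a_grid:
    x = solve(a, x)
    up.append(x)

# backward sweep: the same warm-start protocol in the opposite direction
down, x = [], -1.0
for a in a_grid[::-1]:
    x = solve(a, x)
    down.append(x)
down = down[::-1]

# inside the coexistence window |a| < 2/sqrt(27) the two sweeps disagree
i0 = len(a_grid) // 2                      # the point a = 0
assert up[i0] > 0.5 and down[i0] < -0.5
# outside the window both sweeps land on the same (unique) solution
assert abs(up[-1] - down[-1]) < 1e-6
```

The disagreement of the two sweeps inside the coexistence window is the hysteresis loop; its analogue for the DMFT double occupancy is shown in Fig. 9.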
Such a heating and cooling process is shown in Fig. 9, which displays the coexistence region and hysteresis curve for single-site DMFT on a Bethe lattice (the same calculation setup as in Sec. IV.2). We plot the double occupancy, which is directly related to the potential energy, as a function of inverse temperature \(\beta\). With starting points extrapolated from higher temperature converged results (red open circles), we find a transition from an insulating phase with small double-occupancy to a metallic phase with large double-occupancy around \(\beta t=21.56\). Conversely, with starting points extrapolated from lower temperature converged results (blue open circles), we find a transition from a metallic phase to an insulating phase around \(\beta t=20.31\). Between those temperatures, both metal and insulator are stable solutions of the fixed-point equations, indicating phase coexistence [54; 55; 56; 57; 58; 59; 60; 72].
Figure 7: GW convergence of the squared magnetic moment (left) and energy (right) near the AFM phase transition of NiO at \(\beta=30\) Ha\({}^{-1}\) (upper panel) and \(\beta=31\) Ha\({}^{-1}\) (lower panel) as a function of iteration. Calculations with unrestricted Hartree-Fock self-energy starting point are represented by black dots, while calculations with temperature-extrapolated self-energy are denoted by red circles (indicating extrapolation from higher temperature solutions) or blue circles (indicating extrapolation from lower temperature solutions). The values of \(M^{2}\) and \(E\) at the first and last iterations are indicated by the numbers. The values of \(\overline{M^{2}}\) and \(\overline{E}\) are determined as the values at the last iteration of red or blue circles. The Hartree atomic units are used. The labels of the extrapolation results indicate the initial inverse temperature used for extrapolation.
## V Conclusions
The Caratheodory temperature extrapolation technique, introduced in this paper, provides an effective way to accelerate the convergence of self-consistent many-body calculations for both model Hamiltonians and realistic systems. It allows for smooth temperature variation in simulations, making it suitable for studying heating and cooling processes in many-body systems. The starting points provided by this method are generally superior, especially in systems with phase transitions and convergence issues. The Caratheodory technique therefore offers a versatile and efficient approach for studying temperature-dependent properties in self-consistent many-body calculations that should be adopted in any self-consistent finite-temperature many-body simulation.
Figure 8: GW convergence of the squared magnetic moment (left) and energy (right) deep within the AFM phase of NiO at \(\beta=100\) Ha\({}^{-1}\) (upper panel) and \(\beta=200\) Ha\({}^{-1}\) (lower panel) as a function of iteration. Calculations with unrestricted Hartree-Fock self-energy starting point are represented by black dots, while calculations with temperature-extrapolated self-energy are denoted by red circles (indicating extrapolation from higher temperature solutions) or blue circles (indicating extrapolation from lower temperature solutions). The values of \(M^{2}\) and \(E\) at the first and last iterations are indicated by the numbers. The values of \(\overline{M^{2}}\) and \(\overline{E}\) are determined as the values at the last iteration of red or blue circles. Hartree atomic units are used. The labels of the extrapolation results indicate the initial inverse temperature used for extrapolation.
Figure 9: DMFT analysis of a half-filled Bethe lattice with \(U=4.7t\) and a bandwidth of \(4t\), displaying the hysteresis near \(\beta t=21\). Points on the red curve utilize extrapolation from the nearest higher temperature converged results as the starting points for the calculation, while points on the blue curve employ extrapolation from the nearest lower temperature converged results as the starting points for the calculation. Inset: phase diagram adapted from Ref. [55].
We note that 'convergence acceleration' techniques such as DIIS [73; 74; 75] and Anderson acceleration [76; 77; 78; 79] are complementary to the Caratheodory temperature extrapolation method employed in this study. Convergence acceleration techniques work by extrapolating from a set of initial iterations. The combination of such techniques with the better starting points provided by Caratheodory temperature extrapolation is therefore straightforward.
###### Acknowledgements.
We thank Andre Erpenbeck for helpful discussions. This material is based upon work supported by the National Science Foundation under Grant No. NSF DMR 2001465. Real material calculations used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0023196.
|
2303.09380 | A small remark on Bernstein's theorem | We investigate splitting-type variational problems with some linear growth
conditions. For balanced solutions of the associated Euler-Lagrange equation we
obtain a result analogous to Bernstein's theorem on non-parametric minimal
surfaces.
Without assumptions of this type, Bernstein's theorem cannot be carried over
to the splitting case, which follows from an elementary counterexample.
We also include some modifications of our main theorem. | Michael Bildhauer, Bernhard Farquhar, Martin Fuchs | 2023-03-16T15:09:10Z | http://arxiv.org/abs/2303.09380v1 | # A small remark on Bernstein's theorem
###### Abstract
We investigate splitting-type variational problems with some linear growth conditions. For balanced solutions of the associated Euler-Lagrange equation we obtain a result analogous to Bernstein's theorem on non-parametric minimal surfaces.
Without assumptions of this type, Bernstein's theorem cannot be carried over to the splitting case, which follows from an elementary counterexample.
We also include some modifications of our main theorem.
Footnote 1: AMS Classification 49Q20, 49Q05, 53A10, 35J20
Footnote 2: Keywords: Bernstein’s theorem, non-parametric minimal surfaces, two-dimensional variational problems, splitting-type functionals
## 1 Introduction
A famous theorem of Bernstein (see [1]) states that a smooth solution \(u=u(x),x=(x_{1},x_{2})\), of the non-parametric minimal surface equation
\[\operatorname{div}\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\right)=0 \tag{1}\]
defined on the whole plane must be an affine function. Letting \(f_{0}(P):=\sqrt{1+|P|^{2}},P\in\mathbb{R}^{2}\), the validity of (1) on some domain \(\Omega\subset\mathbb{R}^{2}\) just expresses the fact that \(u\) is a solution of the Euler-Lagrange equation associated to the area functional
\[J_{0}[u,\Omega]:=\int_{\Omega}f_{0}(\nabla u)\,\mathrm{d}x. \tag{2}\]
For a general overview on minimal surfaces, variational integrals with linear growth and for a careful analysis of Bernstein's theorem the reader is referred for instance to [2], [3], [4], [5], [6] and [7] and the references quoted therein.
We ask the following question: does Bernstein's theorem extend to the case when the area integrand \(f_{0}(\nabla u)\) is replaced by the energy density \(\sqrt{1+(\partial_{1}u)^{2}}+\sqrt{1+(\partial_{2}u)^{2}}\) being also of linear growth with respect to \(|\nabla u|\) but without any obvious geometric meaning?
More generally we let for \(P=(p_{1},p_{2})\in\mathbb{R}^{2}\)
\[f(P):=f_{1}(p_{1})+f_{2}(p_{2}) \tag{3}\]
with functions \(f_{i}\in\,C^{2}(\mathbb{R}),i=1,2\), satisfying
\[0<f_{i}^{\prime\prime}(t)\leq C_{i}(1+t^{2})^{-\frac{\mu_{i}}{2}},t\in\mathbb{ R}, \tag{4}\]
for numbers \(C_{i}>0\) and with exponents
\[\mu_{i}>1. \tag{5}\]
Note that (4) implies the strict convexity of \(f\) and on account of (5) the density \(f\) is of linear growth in the sense that
\[|f_{i}(t)|\leq a|t|+b,\quad t\in\mathbb{R},\]
for some constants \(a,b>0\). For a discussion of the properties of densities \(f\) satisfying (3)-(5) we refer to [8]. We then replace (1) by the equation
\[\operatorname{div}\left(Df(\nabla u)\right)=0 \tag{6}\]
and observe that the non-affine function \((\alpha,\beta,\gamma,\delta\in\mathbb{R})\)
\[w(x_{1},x_{2}):=\alpha x_{1}x_{2}+\beta x_{1}+\gamma x_{2}+\delta \tag{7}\]
is an entire solution of equation (6), in other words: the classical version of Bernstein's theorem does not extend to the splitting case. The behaviour of the function \(w\) defined in (7) is characterized in
**Definition 1.1**.: A function \(u\in\,C^{1}(\mathbb{R}^{2}),u=u(x_{1},x_{2})\), is called unbalanced, if and only if both of the following conditions hold
\[\limsup_{|x|\to\infty}\frac{|\partial_{1}u(x)|}{1+|\partial_{2}u (x)|} = \infty, \tag{8}\] \[\limsup_{|x|\to\infty}\frac{|\partial_{2}u(x)|}{1+|\partial_{1}u (x)|} = \infty. \tag{9}\]
Otherwise we say that \(u\) is of balanced form.
_Remark 1.2_.: Condition (8) for example means that there exists a sequence of points \(x_{n}\in\mathbb{R}^{2}\) such that \(|x_{n}|\to\infty\) and for which
\[\lim_{n\to\infty}\frac{|\partial_{1}u(x_{n})|}{1+|\partial_{2}u(x_{n})|}=\infty.\]
_Remark 1.3_.: If for instance (8) is violated, no such sequence exists. Thus we can find constants \(R,M>0\) such that \(|\partial_{1}u(x)|\leq M\left(1+|\partial_{2}u(x)|\right)\) for all \(|x|\geq R\). Since \(u\) is of class \(C^{1}(\mathbb{R}^{2})\), this just shows
\[|\partial_{1}u(x)|\leq m|\partial_{2}u(x)|+M,\quad x\in\mathbb{R}^{2}, \tag{10}\]
with suitable new constants \(m,M>0\).
Now we can state the appropriate version of Bernstein's theorem in the above setting:
**Theorem 1.4**.: _Let (3)-(5) hold and let \(u\in\)\(C^{2}(\mathbb{R}^{2})\) denote a solution of (6) on the entire plane. Then \(u\) is an affine function or of unbalanced type._
_Remark 1.5_.: We do not know if (7) is the only entire unbalanced solution of (6).
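As a quick numerical sanity check (not part of the paper's argument), one can verify that the function \(w\) from (7) solves (6) for the concrete splitting density \(f(P)=\sqrt{1+p_{1}^{2}}+\sqrt{1+p_{2}^{2}}\) from the introduction: both pure second derivatives of \(w\) vanish, so each term of the divergence is zero. A minimal sketch with arbitrarily chosen coefficients, using central finite differences:

```python
import math

# Numerical check that w(x) = alpha*x1*x2 + beta*x1 + gamma*x2 + delta
# solves div(Df(grad w)) = 0 for the splitting density
# f(p) = sqrt(1+p1^2) + sqrt(1+p2^2), i.e. f_i'(t) = t / sqrt(1+t^2).
# Both pure second derivatives of w vanish, so each term
# f_i''(partial_i w) * partial_ii w is zero.

ALPHA, BETA, GAMMA, DELTA = 2.0, -1.0, 3.0, 0.5   # arbitrary coefficients

def fprime(t):
    return t / math.sqrt(1.0 + t * t)

def w(x1, x2):
    return ALPHA * x1 * x2 + BETA * x1 + GAMMA * x2 + DELTA

def div_Df(x1, x2, h=1e-3):
    # central differences for d/dx1 f1'(w_x1) + d/dx2 f2'(w_x2)
    wx1 = lambda a, b: (w(a + h, b) - w(a - h, b)) / (2 * h)
    wx2 = lambda a, b: (w(a, b + h) - w(a, b - h)) / (2 * h)
    d1 = (fprime(wx1(x1 + h, x2)) - fprime(wx1(x1 - h, x2))) / (2 * h)
    d2 = (fprime(wx2(x1, x2 + h)) - fprime(wx2(x1, x2 - h))) / (2 * h)
    return d1 + d2

for p in [(0.0, 0.0), (1.5, -2.0), (-3.0, 4.0)]:
    print(abs(div_Df(*p)))   # ~0 up to finite-difference noise
```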
Before proving Theorem 1.4 we formulate some related results: in Theorem 1.7 below we can slightly improve the result of Theorem 1.4 by adjusting the notation introduced in Definition 1.1 and by taking care of the growth rates of the second derivatives \(f_{i}^{\prime\prime}\) (compare (4)).
**Definition 1.6**.: Let \(\mu:=(\mu_{1},\mu_{2})\) with numbers \(\mu_{i}>1\), \(i=1,2\). A function \(u\in\)\(C^{1}(\mathbb{R}^{2})\) is called \(\mu\)-balanced, if we can find a positive constant \(c\) and a number \(\rho>0\) such that at least one of the following inequalities holds:
\[|\partial_{1}u|\leq c(|\partial_{2}u|^{\rho\mu_{2}}+1), \tag{11}\]
\[|\partial_{2}u|\leq c(|\partial_{1}u|^{\rho\mu_{1}}+1), \tag{12}\]
where in case (11) we require \(\rho\in(1/\mu_{2},1)\), whereas in case (12) \(\rho\in(1/\mu_{1},1)\) must hold.
Note that for example (11) is a weaker condition in comparison to (10).
The extension of Theorem 1.4 reads as follows:
**Theorem 1.7**.: _Let (3)-(5) hold and let \(u\in\)\(C^{2}(\mathbb{R}^{2})\) denote an entire solution of (6). If the function \(u\) is \(\mu\)-balanced, then it must be affine._
In Theorem 1.8 we suppose that \(|\partial_{1}u|\) is controlled in \(x_{2}\)-direction and from this we derive a smallness condition in \(x_{1}\)-direction - at least for a suitable sequence satisfying \(|x_{1}|\to\infty\). The idea of proving Theorem 1.8 again is of Bernstein-type in the sense that the proof follows the ideas of Theorem 1.4 combined with a splitting structure of the test functions.
**Theorem 1.8**.: _Let (3)-(5) hold and let \(u\in C^{2}(\mathbb{R}^{2})\) denote a solution of (6) on the entire plane. Suppose that there exist real numbers \(\kappa_{1}>0\), \(0\leq\kappa_{2}<1\) such that_
\[\mu_{1}>1+\frac{1}{\kappa_{1}}\frac{\kappa_{2}}{1-\kappa_{2}} \tag{13}\]
_and such that with a constant \(k>0\)_
\[\limsup_{|x_{2}|\to\infty}\frac{|\partial_{1}u(x_{1},x_{2})|}{|x_{2}|^{\kappa_ {2}}}\leq k. \tag{14}\]
_Then we have_
\[\liminf_{|x_{1}|\to\infty}\frac{|\partial_{1}u(x_{1},x_{2})|}{|x_{1}|^{\kappa_ {1}}}=0. \tag{15}\]
_More precisely, by (13) we choose \(\rho\) such that_
\[\frac{\kappa_{2}}{1-\kappa_{2}}<\rho<\kappa_{1}(\mu_{1}-1). \tag{16}\]
_Then the \(\limsup\) in (14) is taken in the set_
\[M_{2,\rho}:=\left\{(x_{1},x_{2})\colon|x_{1}|\leq 2|x_{2}|^{1/(1+\rho)}\right\}\]
_and the \(\liminf\) in (15) is taken in the set_
\[M_{1,\rho}:=\left\{(x_{1},x_{2})\colon|x_{2}|\leq 2|x_{1}|^{1+\rho}\right\}.\]
Our final Bernstein-type result is given in Theorem 1.9. Here a formulation in terms of the densities \(f_{i}\) is presented without requiring an upper bound for the second derivatives \(f_{i}^{\prime\prime}\) in terms of some negative powers (see (4)).
**Theorem 1.9**.: _Suppose that \(f_{i}\in C^{2}(\mathbb{R})\), \(i=1,2\), satisfies \(f_{i}^{\prime\prime}(t)>0\) for all \(t\in\mathbb{R}\) and \(f_{i}^{\prime}\in L^{\infty}(\mathbb{R})\). Let \(u\in C^{2}(\mathbb{R}^{2})\) denote an entire solution of (6), i.e. it holds_
\[0=\operatorname{div}\left(Df(\nabla u)\right)=f_{1}^{\prime\prime}(\partial_ {1}u)\partial_{11}u+f_{2}^{\prime\prime}(\partial_{2}u)\partial_{22}u\quad \text{on $\mathbb{R}^{2}$.}\]
_If_
\[\Theta:=\frac{f_{2}^{\prime\prime}(\partial_{2}u)}{f_{1}^{\prime\prime}( \partial_{1}u)}\in L^{\infty}(\mathbb{R}^{2})\quad\text{or}\quad\frac{1}{ \Theta}\in L^{\infty}(\mathbb{R}^{2}),\]
_then \(u\) is an affine function._
In the next section we prove our main Theorem 1.4 while in Section 3 the variants mentioned above are established.
## 2 Proof of Theorem 1.4
Our arguments make essential use of a Caccioppoli-type inequality involving negative exponents. This result was first introduced in [8]. We refer to the presentation given in Section 6 of [9], where Proposition 6.1 applies to the situation at hand. Let us assume that the conditions (3)-(5) hold and that \(u\) is an entire solution of equation (6) being not necessarily of balanced type.
**Lemma 2.1** (see [9], Prop.6.1).: _Fix \(l\in\mathbb{N}\) and suppose that \(\eta\in\mathit{C}_{0}^{\infty}(\Omega)\), \(0\leq\eta\leq 1\), where \(\Omega\) is a domain in \(\mathbb{R}^{2}\). Then the inequality_
\[\int_{\Omega}D^{2}f(\nabla u)(\nabla\partial_{i}u,\nabla\partial _{i}u)\eta^{2l}\Gamma_{i}^{\alpha}\,\mathrm{d}x \tag{17}\] \[\leq \int_{\Omega}D^{2}f(\nabla u)(\nabla\eta,\nabla\eta)\eta^{2l-2} \Gamma_{i}^{\alpha+1}\,\mathrm{d}x,\quad\Gamma_{i}:=1+|\partial_{i}u|^{2},\]
_holds for any \(\alpha>-1/2\) and for any fixed \(i=1,2\)._
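(An elementary but frequently used observation: for the splitting density (3) the Hessian \(D^{2}f\) is diagonal, so the bilinear form in (17) decomposes into two scalar terms. Explicitly, for \(\xi=(\xi_{1},\xi_{2})\in\mathbb{R}^{2}\),

```latex
D^{2}f(\nabla u)(\xi,\xi)
  = f_{1}^{\prime\prime}(\partial_{1}u)\,\xi_{1}^{2}
  + f_{2}^{\prime\prime}(\partial_{2}u)\,\xi_{2}^{2},
```

which is the identity used whenever \(D^{2}f(\nabla u)(\nabla\eta,\nabla\eta)\) is expanded below.)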
Here and in what follows the letter \(c\) denotes finite positive constants whose value may vary from line to line but being independent of the radius.
Assume next that the solution \(u\) is balanced and w.l.o.g. let \(u\) satisfy (10). In order to show that \(u\) is affine we return to inequality (17), choose \(i=1\) and fix some function \(\eta\in C_{0}^{\infty}(B_{2R}(0))\) according to \(\eta\equiv 1\) on \(B_{R}(0)\), \(|\nabla\eta|\leq c/R\). Then (17) yields for any exponent \(\alpha\in(-1/2,\infty)\) and with the choice \(l=1\) (\(B_{r}:=B_{r}(0)\), \(r>0\))
\[\int_{B_{2R}}D^{2}f(\nabla u)(\nabla\partial_{1}u,\nabla\partial_{1}u)\eta^{2}\Gamma_{1}^{\alpha}\,\mathrm{d}x \tag{18}\] \[\leq c\int_{B_{2R}}D^{2}f(\nabla u)(\nabla\eta,\nabla\eta)\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\] \[\stackrel{{(3)}}{{=}} c\int_{B_{2R}-B_{R}}\left(f_{1}^{\prime\prime}(\partial_{1}u)|\partial_{1}\eta|^{2}+f_{2}^{\prime\prime}(\partial_{2}u)|\partial_{2}\eta|^{2}\right)\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\] \[\leq cR^{-2}\left(\int_{B_{2R}-B_{R}}f_{1}^{\prime\prime}(\partial_{1}u)\Gamma_{1}^{\alpha+1}\,\mathrm{d}x+\int_{B_{2R}-B_{R}}f_{2}^{\prime\prime}(\partial_{2}u)\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\right)\] \[\stackrel{{(4)}}{{\leq}} cR^{-2}\left(\int_{B_{2R}-B_{R}}\Gamma_{1}^{\alpha+1-\frac{\mu_{1}}{2}}\,\mathrm{d}x+\int_{B_{2R}-B_{R}}\Gamma_{2}^{-\frac{\mu_{2}}{2}}\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\right).\]
Recall (5) and choose \(\alpha\) according to
\[\alpha\in\left(-1/2,\min\left\{-1+\frac{\mu_{1}}{2},-1+\frac{\mu_{2}}{2} \right\}\right). \tag{19}\]
Here we note that - depending on the values of \(\mu_{1}\) and \(\mu_{2}\) - actually a negative exponent \(\alpha\) can occur. It follows from (10) that
\[cR^{-2}\left(\int_{B_{2R}-B_{R}}\Gamma_{1}^{\alpha+1-\frac{\mu_{1}}{2}}\, \mathrm{d}x+\int_{B_{2R}-B_{R}}\Gamma_{2}^{-\frac{\mu_{2}}{2}}\Gamma_{1}^{ \alpha+1}\,\mathrm{d}x\right)\leq cR^{-2}\int_{B_{2R}-B_{R}}c\,\mathrm{d}x \leq c<\infty, \tag{20}\]
recalling that \(c\) is independent of \(R\). Combining (20) and (18) it is obvious that (by passing to the limit \(R\to\infty\))
\[\int_{\mathbb{R}^{2}}D^{2}f(\nabla u)(\nabla\partial_{1}u,\nabla\partial_{1}u) \Gamma_{1}^{\alpha}\,\mathrm{d}x<\infty \tag{21}\]
for \(\alpha\) satisfying (19).
As in the proof of Proposition 6.1 from [9] (with \(l=1\)) and by applying the Cauchy-Schwarz inequality we get
\[\int_{B_{2R}}D^{2}f(\nabla u)(\nabla\partial_{1}u,\nabla\partial_{1 }u)\eta^{2}\Gamma_{1}^{\alpha}\,\mathrm{d}x \tag{22}\] \[\leq \left|\int_{B_{2R}}D^{2}f(\nabla u)(\nabla\partial_{1}u,\nabla \eta^{2})\partial_{1}u\Gamma_{1}^{\alpha}\,\mathrm{d}x\right|\] \[\leq c\left[\int_{\mathrm{spt}\nabla\eta}D^{2}f(\nabla u)(\nabla \partial_{1}u,\nabla\partial_{1}u)\eta^{2}\Gamma_{1}^{\alpha}\,\mathrm{d}x \right]^{\frac{1}{2}}\left[\int_{\mathrm{spt}\nabla\eta}D^{2}f(\nabla u)( \nabla\eta,\nabla\eta)\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\right]^{\frac{1}{2}}.\]
The second integral on the right-hand side is bounded on account of our previous calculations. Because of the validity of (21) the limit of the first integral for \(R\to\infty\) is \(0\). Thus (22) implies
\[\nabla\partial_{1}u\equiv 0. \tag{23}\]
In particular (23) guarantees the existence of a number \(a\in\mathbb{R}\) such that
\[\partial_{1}u\equiv a. \tag{24}\]
From (24) we obtain
\[u(x_{1},x_{2})-u(0,x_{2})=\int_{0}^{x_{1}}\frac{\mathrm{d}}{\mathrm{d}t}u(t,x _{2})\,\mathrm{d}t=\int_{0}^{x_{1}}a\,\mathrm{d}t=a\,x_{1},\]
implying
\[u(x_{1},x_{2})=u(0,x_{2})+a\,x_{1}. \tag{25}\]
Considering (24) again, equation (6) reduces to
\[\partial_{2}\left(f_{2}^{\prime}(\partial_{2}u)\right)=0. \tag{26}\]
We set \(\varphi(t):=u(0,t)\) and interpret the PDE (26) as the ODE
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(f_{2}^{\prime}(\varphi^{\prime}(t)) \right)=0 \tag{27}\]
implying
\[f_{2}^{\prime}(\varphi^{\prime}(t))=\mathrm{const}.\]
Since \(f_{2}^{\prime}\) is strictly monotonically increasing, this just means
\[\varphi^{\prime}(t)=b,\quad t\in\mathbb{R},\]
for some real number \(b\), which consequently gives
\[u(x_{1},x_{2})=a\,x_{1}+b\,x_{2}+c,\quad a,b,c\in\mathbb{R} \tag{28}\]
completing our proof.
## 3 Remaining proofs
**ad Theorem 1.7**.: Let the assumptions of Theorem 1.7 hold and assume w.l.o.g. that we have inequality (11) from Definition 1.6. Consider the mixed term in the last line of (18) and note that on account of (11) we may estimate
\[\Gamma_{2}^{-\frac{\mu_{2}}{2}}\Gamma_{1}^{\alpha+1}\leq c\Gamma_{2}^{-\frac{ \mu_{2}}{2}(1-2\rho(\alpha+1))}. \tag{29}\]
The validity of \(\rho<1\) allows us to choose \(\alpha\) sufficiently close to \(-1/2\) such that \(1-2\rho(\alpha+1)>0\) which again yields (21) and allows us to proceed as before giving our claim.
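(In detail, (11) gives \(\Gamma_{1}\leq c\,\Gamma_{2}^{\rho\mu_{2}}\) since \(\rho\mu_{2}>1\), and hence

```latex
\Gamma_{2}^{-\frac{\mu_{2}}{2}}\Gamma_{1}^{\alpha+1}
\leq c\,\Gamma_{2}^{-\frac{\mu_{2}}{2}+\rho\mu_{2}(\alpha+1)}
= c\,\Gamma_{2}^{-\frac{\mu_{2}}{2}\left(1-2\rho(\alpha+1)\right)},
```

which is exactly the estimate (29).)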
**ad Theorem 1.8**.: Suppose by contradiction that there exists a real number \(\hat{c}>0\) such that w.r.t. the set \(M_{1,\rho}\)
\[\hat{c}\leq\liminf_{|x_{1}|\to\infty}\frac{|\partial_{1}u(x_{1},x_{2})|}{|x_{1 }|^{\kappa_{1}}}. \tag{30}\]
For intervals \(I_{1},I_{2}\subset\mathbb{R}\) we let
\[S_{I_{1};I_{2}}:=\{x\in\mathbb{R}^{2}:|x_{1}|\in I_{1},|x_{2}|\in I_{2}\}.\]
We fix \(0<R_{1}<R_{2}\) and consider
\[\eta\in\,C_{0}^{\infty}(S_{[0,2R_{1});[0,2R_{2})}),\quad 0\leq\eta\leq 1,\quad \eta\equiv 1\quad\text{on}\quad S_{[0,R_{1}];[0,R_{2}]}\,,\]
\[\text{spt}\partial_{1}\eta\subset S_{(R_{1},2R_{1});[0,2R_{2})}\,,\quad\text{ spt}\partial_{2}\eta\subset S_{[0,2R_{1});(R_{2},2R_{2})}\,, \tag{31}\]
\[|\partial_{1}\eta|\leq c/R_{1}\,,\quad|\partial_{2}\eta|\leq c/R_{2}. \tag{32}\]
Exactly as in (18) one obtains using (31) and (32)
\[\int_{S_{[0,2R_{1});[0,2R_{2})}}D^{2}f(\nabla u)(\nabla\partial_{1}u,\nabla\partial_{1}u)\eta^{2}\Gamma_{1}^{\alpha}\,\mathrm{d}x \tag{33}\] \[\leq \int_{S_{[0,2R_{1});[0,2R_{2})}}D^{2}f(\nabla u)(\nabla\eta,\nabla\eta)\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\] \[\leq \frac{c}{R_{1}^{2}}\int_{S_{(R_{1},2R_{1});[0,2R_{2})}}\Gamma_{1}^{\alpha+1-\frac{\mu_{1}}{2}}\,\mathrm{d}x+\frac{c}{R_{2}^{2}}\int_{S_{[0,2R_{1});(R_{2},2R_{2})}}\Gamma_{2}^{-\frac{\mu_{2}}{2}}\Gamma_{1}^{\alpha+1}\,\mathrm{d}x.\]
By definition we have
\[|S_{[0,2R_{1});(R_{2},2R_{2})}|\leq cR_{1}R_{2}.\]
Moreover, our assumption \(\kappa_{2}<1\) implies that \(\alpha\) can be chosen such that in the case \(\kappa_{2}>0\)
\[-\frac{1}{2}<\alpha<\frac{1}{2\kappa_{2}}-1. \tag{34}\]
In the case \(\kappa_{2}=0\) we do not need an additional condition. We apply assumption (14), which leads to
\[\frac{1}{R_{2}^{2}}\int_{S_{[0,2R_{1});(R_{2},2R_{2})}}\Gamma_{2}^ {-\frac{\mu_{2}}{2}}\Gamma_{1}^{\alpha+1}\,\mathrm{d}x \leq\frac{c}{R_{2}^{2}}\int_{S_{[0,2R_{1});(R_{2},2R_{2})}}|x_{2} |^{2\kappa_{2}(\alpha+1)}\,\mathrm{d}x\] \[\leq cR_{1}R_{2}^{2\kappa_{2}(\alpha+1)-1}. \tag{35}\]
Let us consider the first integral on the right-hand side of (33) recalling that \(\alpha+1-\mu_{1}/2<0\). Assumption (30) implies
\[\frac{1}{R_{1}^{2}}\int_{S_{(R_{1},2R_{1});[0,2R_{2})}}\Gamma_{1}^{\alpha+1-\frac{\mu_{1}}{2}}\,\mathrm{d}x \leq\frac{c}{R_{1}^{2}}\int_{S_{(R_{1},2R_{1});[0,2R_{2})}}|x_{1}|^{2\kappa_{1}(\alpha+1-\frac{\mu_{1}}{2})}\,\mathrm{d}x\] \[\leq cR_{2}R_{1}^{-\kappa_{1}(\mu_{1}-2(\alpha+1))-1}, \tag{36}\]
where we suppose that \(\mu_{1}>2(\alpha+1)\) by choosing \(\alpha\) sufficiently close to \(-1/2\). If we further suppose that
\[R_{2}=R_{1}^{1+\rho}\quad\text{with a positive real number}\quad\rho<\kappa_{1}(\mu_{1}-1), \tag{37}\]
then by decreasing \(\alpha\), if necessary, still satisfying \(\alpha>-1/2\), we obtain from (36)
\[\frac{1}{R_{1}^{2}}\int_{S_{(R_{1},2R_{1});[0,2R_{2})}}\Gamma_{1}^{\alpha+1- \frac{\mu_{1}}{2}}\,\mathrm{d}x\to 0\quad\text{as}\quad R_{1}\to\infty. \tag{38}\]
Using \(R_{2}=R_{1}^{1+\rho}\) (recall (37)) we return to (35) recalling that by the choice (34) we have \(2\kappa_{2}(\alpha+1)-1<0\). We calculate
\[R_{1}R_{1}^{(1+\rho)(2\kappa_{2}(\alpha+1)-1)}=R_{1}^{2\kappa_{2}(\alpha+1)+ \rho(2\kappa_{2}(\alpha+1)-1)}. \tag{39}\]
If we suppose that
\[\kappa_{2}+\rho(\kappa_{2}-1)<0, \tag{40}\]
then we may choose \(\alpha>-1/2\) sufficiently small such that the exponent on the right-hand side of (39) is negative, hence together with (35)
\[\frac{1}{R_{2}^{2}}\int_{S_{[0,2R_{1});(R_{2},2R_{2})}}\Gamma_{2}^{-\frac{\mu _{2}}{2}}\Gamma_{1}^{\alpha+1}\,\mathrm{d}x\to 0\quad\text{as}\quad R_{1}\to\infty. \tag{41}\]
By (33), (38) and (41) it follows that
\[\int_{S_{[0,2R_{1});[0,2R_{2})}}D^{2}f(\nabla u)(\nabla\partial_{1}u,\nabla \partial_{1}u)\eta^{2}\Gamma_{1}^{\alpha}\,\mathrm{d}x\to 0\quad\text{as} \quad R_{1}\to\infty, \tag{42}\]
provided that (37) and (40) hold, i.e. provided that we have (16), which is a consequence of (13). Exactly as in the proof of Theorem 1.4, (42) shows that \(u\) has to be an affine function, which contradicts (30) and thereby proves Theorem 1.8.
**ad Theorem 1.9.** W.l.o.g. we suppose that \(\Theta\in L^{\infty}(\mathbb{R}^{2})\) and that \(u\in C^{3}(\mathbb{R}^{2})\), \(f_{1}\), \(f_{2}\in C^{3}(\mathbb{R})\). Otherwise we argue in a weak sense. Let
\[w_{i}:=f_{i}^{\prime}(\partial_{i}u),\quad i=1,2.\]
Then we have
\[\partial_{1}w_{1}+\partial_{2}w_{2}=0\quad\text{on }\mathbb{R}^{2}, \tag{43}\]
hence
\[\partial_{11}w_{1}+\partial_{1}\partial_{2}w_{2}=\partial_{11}w_{1}+\partial_ {2}\partial_{1}w_{2}=0\quad\text{on }\mathbb{R}^{2}. \tag{44}\]
A direct calculation shows
\[\partial_{1}w_{2}=\partial_{1}\!\left(f^{\prime}_{2}(\partial_{2}u)\right)=f^{ \prime\prime}_{2}(\partial_{2}u)\partial_{1}\partial_{2}u=\Theta f^{\prime \prime}_{1}(\partial_{1}u)\partial_{2}\partial_{1}u=\Theta\partial_{2}w_{1}\]
and the weak form of (44) reads as
\[\int_{\mathbb{R}^{2}}\left(\begin{array}{c}\partial_{1}w_{1}\\ \Theta\partial_{2}w_{1}\end{array}\right)\cdot\nabla\varphi\,\mathrm{d}x=0, \quad\varphi\in C^{1}_{0}(\mathbb{R}^{2}). \tag{45}\]
Inserting \(\varphi=\eta^{2}w_{1}\) with suitable \(\eta\in C^{1}_{0}(B_{2R})\) such that \(0\leq\eta\leq 1\), \(\eta\equiv 1\) on \(B_{R}\) and \(|\nabla\eta|\leq c/R\), we obtain
\[\int_{B_{2R}}|\partial_{1}w_{1}|^{2}\eta^{2}\,\mathrm{d}x+\int_{B _{2R}}\Theta|\partial_{2}w_{1}|^{2}\eta^{2}\,\mathrm{d}x\] \[= -2\int_{B_{2R}-B_{R}}\eta\partial_{1}w_{1}\,\partial_{1}\eta\,w _{1}\,\mathrm{d}x-2\int_{B_{2R}-B_{R}}\Theta\,\eta\,\partial_{2}w_{1}\, \partial_{2}\eta\,w_{1}\,\mathrm{d}x. \tag{46}\]
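(For the reader's convenience: the passage from (45) to (46) only uses the product rule on the test function. With \(\varphi=\eta^{2}w_{1}\),

```latex
\nabla\varphi \;=\; \eta^{2}\,\nabla w_{1} \;+\; 2\eta\,w_{1}\,\nabla\eta ,
```

and inserting this into (45) and moving the \(\nabla\eta\)-terms to the right-hand side gives exactly (46); the integrals on the right are over \(B_{2R}-B_{R}\) since \(\nabla\eta\) vanishes elsewhere.)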
Applying Young's inequality and using \(w_{1}\), \(\Theta\in L^{\infty}(\mathbb{R}^{2})\) we obtain that
\[\int_{\mathbb{R}^{2}}\left(|\partial_{1}w_{1}|^{2}+\Theta|\partial_{2}w_{1}|^ {2}\right)\mathrm{d}x<\infty. \tag{47}\]
We then return to (46) and apply the inequality of Cauchy-Schwarz to obtain
\[\int_{B_{2R}}|\partial_{1}w_{1}|^{2}\eta^{2}\,\mathrm{d}x+\int_{ B_{2R}}\Theta|\partial_{2}w_{1}|^{2}\eta^{2}\,\mathrm{d}x \tag{48}\] \[\leq 2\Bigg{[}\int_{B_{2R}-B_{R}}\eta^{2}|\partial_{1}w_{1}|^{2}\, \mathrm{d}x\Bigg{]}^{\frac{1}{2}}\Bigg{[}\int_{B_{2R}-B_{R}}|\partial_{1}\eta |^{2}w_{1}^{2}\,\mathrm{d}x\Bigg{]}^{\frac{1}{2}}\] \[+2\Bigg{[}\int_{B_{2R}-B_{R}}\Theta\eta^{2}|\partial_{2}w_{1}|^{ 2}\,\mathrm{d}x\Bigg{]}^{\frac{1}{2}}\Bigg{[}\int_{B_{2R}-B_{R}}\Theta| \partial_{2}\eta|^{2}w_{1}^{2}\,\mathrm{d}x\Bigg{]}^{\frac{1}{2}}.\]
On the right-hand side of (48) we observe that for both parts the first integral is vanishing when passing to the limit \(R\to\infty\) since we have (47), while the remaining integrals stay uniformly bounded.
This gives \(\partial_{1}w_{1}=0\) and \(\partial_{2}w_{1}=0\) since we have \(\Theta>0\). Hence we obtain \(w_{1}\equiv c_{1}\) for some constant \(c_{1}\). The strict monotonicity of \(f^{\prime}_{1}\) then implies \(\partial_{1}u\equiv\tilde{c}_{1}\) for some constant \(\tilde{c}_{1}\).
By (43) we then also have \(\partial_{2}w_{2}=0\). Since we have already observed above that \(\partial_{1}w_{2}=\Theta\partial_{2}w_{1}=0\), the function \(w_{2}\) is constant, and the strict monotonicity of \(f^{\prime}_{2}\) yields \(\partial_{2}u\equiv\tilde{c}_{2}\) for some real number \(\tilde{c}_{2}\). In conclusion, \(u\) must be an affine function, which completes the proof of Theorem 1.9.
|
2306.14269 | Weakly Supervised Scene Text Generation for Low-resource Languages | A large number of annotated training images is crucial for training
successful scene text recognition models. However, collecting sufficient
datasets can be a labor-intensive and costly process, particularly for
low-resource languages. To address this challenge, auto-generating text data
has shown promise in alleviating the problem. Unfortunately, existing scene
text generation methods typically rely on a large amount of paired data, which
is difficult to obtain for low-resource languages. In this paper, we propose a
novel weakly supervised scene text generation method that leverages a few
recognition-level labels as weak supervision. The proposed method is able to
generate a large amount of scene text images with diverse backgrounds and font
styles through cross-language generation. Our method disentangles the content
and style features of scene text images, with the former representing textual
information and the latter representing characteristics such as font,
alignment, and background. To preserve the complete content structure of
generated images, we introduce an integrated attention module. Furthermore, to
bridge the style gap in the style of different languages, we incorporate a
pre-trained font classifier. We evaluate our method using state-of-the-art
scene text recognition models. Experiments demonstrate that our generated scene
text significantly improves the scene text recognition accuracy and helps
achieve higher accuracy when complemented with other generative methods. | Yangchen Xie, Xinyuan Chen, Hongjian Zhan, Palaiahankote Shivakum, Bing Yin, Cong Liu, Yue Lu | 2023-06-25T15:26:06Z | http://arxiv.org/abs/2306.14269v2 | # **Highlights**
* We design generative frameworks which employ integrated attention to exploit the global and local relationships between content features and generated features.
* With the proposed methods, we generate a large-scale scene text dataset for low-resource languages, which is used to train a scene text recognizer. Our approach significantly improved the performance of the recognizer.
There are few computer fonts for low-resource languages. For instance, there are 1394 Latin fonts in Google Fonts, but only 197 Cyrillic and 31 Korean fonts. In addition, Peng et al. [47] argued that simply merging text and background in traditional image editing methods cannot simulate the distribution of real data. After that, [21, 24, 40] improved the performance of the synthesis engines and extended them to more optimized and more diverse scenarios, but they still rely heavily on computer fonts and cannot be applied to low-resource languages.
Some other researchers exploit scene text editing methods to generate scene text. [34, 37] introduce two conversion modules to process foreground text and background images separately. These methods apply a GAN [11] to fuse the processed text and text-erased images such that the distribution of generated images can converge to the true distribution. Due to the requirement for target-style images, these methods are limited to training on synthetic data. Rewrite [20] and TextStyleBrush [19] introduce a pre-trained OCR to ensure the correctness of the generated text and employ real-world images for training. With real-world images, models can imitate more text styles from scene images. However, these methods are restricted to high-resource languages such as English and Chinese. A robust and accurate OCR is hard to obtain for low-resource languages. Inspired by image-to-image translation methods, DG-Font [36] and DG-Font++ [7] introduce a novel deformable module that deforms and transforms the characters of one font to another in an unsupervised way. These methods are promising in addressing generation problems for low-resource languages. However, on the one hand, DG-Font is designed for a "character" in black and white, while scene text images often contain "text" with diverse backgrounds. On the other hand, DG-Font can only be applied to specific languages, which cannot fully leverage the existing scene text images from high-resource languages.
Compelled by the above observations, we propose a novel weakly supervised scene text generation method. The main idea of the proposed method is to train scene text generation models based on a few recognition-level annotations for low-resource languages. As illustrated in Figure 1, the recognition-level labels are weak supervision; the text of the images is known but the target edited images are unknown. With the recognition-level labels of the given content and scene text images as weak supervision, the proposed method learns to separate the content and style features and then mixes the two domain representations to generate target scene text images. The content is defined as the textual information of an image and the style is defined as scene text characteristics such as font, alignment, and background. We introduce an integrated attention module to process features from the content encoder at different levels. For low-level features, a sequence of densely connected deformable convolutions is applied to perform global attention. We first employ the 'query' features from the content encoder and 'key' features from the decoder to predict pairs of displacement maps. The predicted maps are then applied to these dense deformable convolutions to process the 'value' features from the content encoder. For high-level features, a deformable convolution followed by local attention is introduced to predict the local relationship between the encoder and mixer features. Moreover, to further leverage the existing datasets of high-resource languages, we add data from high-resource languages into training and perform inference in a cross-language way. Specifically, in the training process, images of high-resource languages help train the scene text generation model by enlarging the training set.
In the testing process, the model replaces text in scene text images from high-resource languages by leveraging the provided text from low-resource languages, which helps generate datasets of diverse scenes. Besides, a pre-trained font classifier is introduced to maintain the consistency of style in different languages. Extensive experiments demonstrate that our model outperforms
Figure 1: The pipeline of training scene text recognition models based on the proposed methods. (a) Images are annotated at the character level. (b) the generative model is trained using annotated datasets from previous steps. (c) The trained model replaces text in scene text images from high-resource languages by leveraging the provided text from low-resource languages. (d) The recognition model is trained using synthetic datasets.
the state-of-the-art scene text generation methods in improving scene text recognition accuracy. Moreover, results show that our method is complementary to commonly used generation methods.
In summary, our contributions can be summarized as follows:
1. A weakly supervised generative method for scene text generation is proposed that utilizes recognition-level annotations for low-resource languages. Also, a cross-language generative scheme is introduced to reduce reliance on labeled data in low-resource languages.
2. We design generative frameworks which employ integrated attention to exploit the global and local relationships between content features and generated features.
3. With the proposed methods, we generate a large-scale scene text dataset for low-resource languages, which is used to train a scene text recognizer. Our approach significantly improved the performance of the recognizer.
## 2 Related works
### Scene Text Recognition
Automatic scene text recognition of various texts in scene images has been a popular research topic and an important task in a wide range of industrial applications for many years [6, 39]. Early research focused on hand-crafted features [29, 38]. With the development of deep learning, scene text recognition models have been well explored and have achieved great success in the past few years [2, 27, 35, 46]. However, training scene text recognition models requires a large number of labeled images, which is extremely difficult to collect for low-resource languages. The accuracy of English scene text recognition is over 90 percent [2] on the regular datasets. In contrast, there are few reports on low-resource languages like Kazakh. On the one hand, existing low-resource-language datasets such as ICDAR2017 [10] and ICDAR2019 [28] contain only a few hundred or thousand training samples, which is far from sufficient to train scene text recognition models. On the other hand, collecting and annotating scene images of low-resource languages requires experts, which is expensive. In this work, the proposed universal scene text generation method tries to address this challenge by generating a large number of realistic scene text images. The proposed generation method is evaluated based on the state-of-the-art recognition model [2].
### Data augmentation
Data augmentation is an important technique to avoid overfitting when training deep neural networks, especially when training data is scarce. Typical image augmentations include two kinds of transforms: spatial-level transforms such as cropping, rotation, perspective, and resizing, and pixel-level transforms such as noise, blur, and color change. Existing scene text recognition methods often select a subset of these augmentations in the training process. Recently, some researchers have explored data augmentations that are suitable for text recognition models. [26] propose Random Blur Region and Random Blur Units to help train more robust deep models. [1] analyze the difference in data augmentation between object recognition and text recognition and formulate a library of data augmentation functions to improve the performance of existing recognition methods. However, data augmentation cannot solve the data shortage problem of low-resource languages. The aforementioned data augmentation strategies are often based on experiments on English datasets, and identifying optimal augmentations for low-resource languages is a challenge. Besides, robust scene text recognition methods rely on large numbers of scene text images with diverse backgrounds and fonts, which cannot be achieved by data augmentation alone.
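To make the two transform families above concrete, the following is a small illustrative sketch (our own, not taken from any cited paper) of one spatial-level and one pixel-level augmentation applied to a dummy grayscale word image; real pipelines chain several such transforms with parameters drawn at random per sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# One spatial-level and one pixel-level transform on a dummy grayscale
# "word image" with values in [0, 1]; function names are our own.
def random_crop(img, crop_h, crop_w):
    h, w = img.shape
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

def gaussian_noise(img, sigma=0.05):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = rng.random((32, 100))                       # H x W text-line image
aug = gaussian_noise(random_crop(img, 28, 90))    # chain the two transforms
print(aug.shape)                                  # (28, 90)
```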
### Scene Text Generation
Scene text generation aims to automatically generate scene text images based on given textual content. SynthText [12] is a notable scene text image generator and has been widely applied to train scene text recognition models. It first analyzes images with off-the-shelf models and searches for suitable, semantically consistent regions in which to place text rendered in a specific font. Later, SynthText3D [21] and its improved version UnrealText [24] synthesize realistic scene text images in a virtual world via a 3D graphics engine. However, these methods rely heavily on computer fonts designed by human experts, and few computer fonts exist for low-resource languages. Some image composition methods [43, 42] introduce generative adversarial networks to make synthesized images more realistic, but they still suffer from the reliance on computer fonts.
Recently, there have been some attempts to generate scene text with scene text editing methods. Initial works [34, 37] train the model to separate the text region and the background region using a text-erased image. These methods require target-style images for training and are therefore restricted to training on synthetic data. STEFANN [30] proposed a font-adaptive neural network that replaces individual characters in the source image using a target alphabet. This approach assumes per-character segmentation, which is impractical in many real-world images. Rewrite [20] and TextStyleBrush [19] introduce self-supervised training schemes and employ real-world images in the training process. However, these methods rely on pre-trained OCR models to constrain the content of the generated images, and low-resource languages often lack enough images to train robust OCR models.
### Unsupervised Image-to-image translation
The purpose of image-to-image translation is to learn the mapping from images in a source domain to a target domain, and it has been applied to many image generation fields, such as artistic style transfer [18, 45], video frame generation [5, 8], and font generation [36]. Pix2pix [16] is the first image-to-image translation model based on the conditional GAN. To solve unpaired image-to-image translation, FUNIT [23] and TUNIT [3] assume that animal images can be decomposed into content and style features, and that these features can be recombined through adaptive instance normalization [15]. DG-Font [36] further proposes an unsupervised font generation method that does not require human experts and can be applied to any writing system; it showed that using deformable convolution as the skip connection helps maintain the integrity of the content in generated images. Our work is related to image-to-image translation in that we assume the scene text image can be decomposed into content and style features. However, the above-mentioned image-to-image translation methods usually define a fixed number of styles and cannot be directly applied to scene text generation. Besides, a carefully designed spatial attention mechanism has been shown to improve the quality of generated images [41, 44]. Inspired by these works, we propose a weakly supervised scene text generation method.
## 3 Methods
### Overview
Given content images \(I_{c}\) and scene text images \(I_{s}\), the proposed model aims to replace the text in \(I_{s}\) with the given content string while maintaining the original style. We propose a novel scene text generation method that utilizes recognition-level labels as weak supervision. As shown in Figure 2, the generative model consists of a content encoder \(E_{c}\), a style encoder \(E_{s}\), a decoder, and an integrated attention module. The content representation is produced by \(F_{cnt}=E_{c}(I_{c})\) and the style representation by \(F_{sty}=E_{s}(I_{s})\). Given the style images \(I_{s}\) cropped from scene images, we employ their recognition-level labels as weak supervision and render each label in a standard font on a plain grey background, producing the image \(I_{c1}\in R^{H\times W}\). Besides, another image \(I_{c2}\) with random text is produced in the same way. The proposed method extracts
Figure 2: **Overview of the proposed method. The style encoder and content encoder encode the scene text images and content images, respectively. The integrated attention module takes the content features \(F_{cnt}\) and decoder features \(F_{dec}\) as input and outputs the transformed \(F_{cnt}\), which is then concatenated with \(F_{dec}\). The output images are fed to the typeface classifier and discriminator to distinguish the style and reality.**
latent style and content representations based on \(I_{s}\) and \(I_{c}\). The style vectors are mapped to AdaIN normalization coefficient [15] through several fully connected layers. The generator is designed to generate the edited scene text image features \(F_{dec}\) by mixing these two representations. Besides, to generate images with complete content structure, the integrated attention module aims to learn the global and local relations of features between content features and generated features from different layers. Details are described in Sec 3.2. When the images are generated from the generative networks, a typeface classifier pre-trained on a set of synthetic fonts and a discriminator are introduced to distinguish the style and reality between scene text images \(I_{s}\) and generated images \(O_{c1}\), \(O_{c2}\).
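The style-vector-to-AdaIN mapping described above can be illustrated with a minimal NumPy sketch. This is not the paper's code: the function name and shapes are illustrative, and we assume the fully connected layers have already produced per-channel scale (\(\gamma\)) and shift (\(\beta\)) coefficients from the style vector.

```python
import numpy as np

def adain(content_feat, gamma, beta, eps=1e-5):
    """Adaptive instance normalization [15]: normalize content features
    per channel, then re-scale and re-shift with style-derived
    coefficients. content_feat: (C, H, W); gamma, beta: (C,) vectors
    assumed to come from the style code via fully connected layers."""
    mu = content_feat.mean(axis=(1, 2), keepdims=True)
    sigma = content_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - mu) / (sigma + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]
```

After this step, each channel of the decoder feature carries the style statistics dictated by \(F_{sty}\) while keeping the spatial content layout.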
### Integrated Spatial Attention Module
The proposed weakly supervised approach reduces the annotation cost of scene text generation for low-resource languages, but the content of generated images is prone to missing some parts. Motivated by the observation that there exist deformations and stroke mappings between the content image and the generated image, we design an integrated attention module to ensure the completeness of generated content at both global and local levels. Specifically, the global attention, modeled by deformable convolution, learns point-wise deformations and deforms the source content features by learning a global sparse weight \(A_{\text{global}}\). The local attention learns the local stroke mapping between the source content feature and the target generator feature by learning a local dense attention weight \(A_{\text{local}}\). As shown in Figure 3, the content features \(F_{cnt}\) from different levels are processed in different ways. In Figure 3 (a), for the high-level features, global attention is first adopted to deform the content feature \(F_{cnt}^{high}\) to match \(F_{dec}^{high}\). After that, a local attention module is employed to learn the local spatial mappings between content images and generated images, such as strokes and radical decomposition, which exist in high-level features. Different from high-level features, low-level features often encode minor details of the images, like lines or dots, and have similar spatial properties to the original input images. For the low-level features in Figure 3 (b), we adopt a densely connected global attention module to deform the features \(F_{cnt}^{low}\).
Attention modules measure the similarity of query-key pairs by a compatibility function of the query with a set of key elements, and can be viewed from a unified perspective [48]. Specifically, \(q\) indicates the index of a query element with content \(z_{q}\), and \(k\) indicates the index of a key element with content \(x_{k}\). The output of the attention module is computed as:
\[y_{q}=W^{\prime}\sum_{k\in\Omega_{q}}A(q,k,z_{q},x_{k})\odot Wx_{k}, \tag{1}\]
Figure 3: Details of the integrated attention.
where \(\Omega_{q}\) indicates the supporting key region for the query, \(A(q,k,z_{q},x_{k})\) indicates the attention weight or similarity, and \(W^{{}^{\prime}}\) and \(W\) indicate learnable weights. The output \(y_{q}\) is calculated by multiplying \(W^{{}^{\prime}}\) by the element-wise product of the attention weights and \(Wx_{k}\). Besides, the similarity is usually normalized over \(\Omega_{q}\), i.e., \(\sum_{k\in\Omega_{q}}A(q,k,z_{q},x_{k})=1\). In our model, \(z_{q}\), \(x_{k}\), and \(Wx_{k}\) denote the sampled features from query \(Q\), key \(K\), and value \(V\) respectively, and \(W^{{}^{\prime}}\) denotes a convolution or the control weight of the output.
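The unified formulation of Eq. 1 can be sketched for a single query position. This is a hedged NumPy illustration, not the paper's implementation: we instantiate the similarity \(A\) as a softmax over scaled dot products (one common choice), whereas the paper's global and local modules use the specialized weights defined below.

```python
import numpy as np

def attention_output(z_q, x_keys, W, W_prime):
    """Unified attention (Eq. 1): y_q = W' * sum_k A(q, k) * (W x_k).
    z_q: (c,) query content; x_keys: (K, c) key contents over Omega_q;
    W, W_prime: (c, c) learnable projections. The similarity here is a
    softmax over dot products, an illustrative instantiation only."""
    values = x_keys @ W.T                      # W x_k for each key
    scores = values @ (z_q / np.sqrt(len(z_q)))
    A = np.exp(scores - scores.max())
    A = A / A.sum()                            # normalized over Omega_q
    aggregated = (A[:, None] * values).sum(axis=0)
    return W_prime @ aggregated, A
```

The returned weights sum to one, matching the normalization constraint \(\sum_{k\in\Omega_{q}}A(q,k,z_{q},x_{k})=1\).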
The global attention employs learnable offsets to adjust the sampling positions of key elements so as to capture global spatial relationships. These target key elements can be sampled from any position of the input feature map. The learnable offsets are predicted based on the query content \(z_{q}\) and key content \(x_{k}\) and are dynamic with respect to the input. Incorporating the global attention into Eq. 1, the attention weight becomes:
\[A_{\text{global}}(q,k,z_{q},x_{k})=interp(k,q+p+w^{\top}\cdot concat(z_{q},x_{k})), \tag{2}\]
where \(q\), \(k\) are indices into the feature map and \(p\) is the sampling offset of a regular convolution. \(w^{\top}\) is modeled by a regular convolution layer and projects the concatenation of the query content \(z_{q}\) and the key content \(x_{k}\) to a learned offset. Following [49], \(interp\) is the bilinear interpolation kernel. For the global attention, the weight \(W\) is fixed as identity and \(W^{{}^{\prime}}\) is a convolution layer that samples at the irregular, offset locations.
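Because the learned offsets are fractional, the global attention needs the bilinear kernel \(interp\) to read the feature map at non-integer positions. A minimal sketch of that kernel (single-channel map; edge handling by clamping is our simplifying assumption):

```python
import numpy as np

def bilinear_interp(feat, y, x):
    """Sample feature map feat (H, W) at a fractional location (y, x)
    with the bilinear kernel used by deformable sampling [49]."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0             # fractional parts
    top = (1 - wx) * feat[y0, x0] + wx * feat[y0, x1]
    bot = (1 - wx) * feat[y1, x0] + wx * feat[y1, x1]
    return (1 - wy) * top + wy * bot
```

At integer locations this reduces to a direct lookup, so the deformable sampling degenerates to regular convolution when all offsets are zero.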
For local attention, the module differs from the traditional transformer attention in [32]: it predicts the weight of a location relative to the features of its neighbors, rather than the entire input feature. Specifically, for each position \((i,j)\) in the spatial dimensions of the query element \(z_{q}\), denoted as \(z_{q}^{i,j}\in\mathbb{R}^{1\times 1\times c}\), we extract a patch of size \(s\) centered at \((i,j)\) from the corresponding key element \(x_{k}\), denoted as \(x_{k}^{i,j}\in\mathbb{R}^{s\times s\times c}\). Then the weight \(A_{\text{local}(q,k,z_{q},x_{k})}^{i,j}\in\mathbb{R}^{s\times s\times c}\) is obtained by reshaping the output of a fully connected network (FCN):
\[A_{\text{local}(q,k,z_{q},x_{k})}^{i,j}=reshape(FCN(concat(flatten(x_{k}^{i,j}), z_{q}^{i,j}))). \tag{3}\]
The flattened patch \(flatten(x_{k}^{i,j})\) and the corresponding query \(z_{q}^{i,j}\) are first concatenated and then passed through the FCN, which is composed of a fully connected layer followed by a leaky ReLU and a linear fully connected layer. We iterate over all positions \((i,j)\) to constitute the output of the local spatial attention module. For the local attention, \(W\) represents the linear transformation between \(Q\) and \(K\), and \(W^{{}^{\prime}}\) is the control weight of the output.
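The per-position loop described above can be sketched as follows. This is an illustrative NumPy version, not the paper's code: `weight_net` stands in for the FCN of Eq. 3 and is passed in as a callable, and we assume zero padding at the borders.

```python
import numpy as np

def local_attention(query, key, value, weight_net, s=3):
    """Per-position local attention: for each spatial location (i, j),
    a small network predicts an s x s x c weight over the key's
    neighborhood, which then gates the value patch (cf. Eq. 3).
    query/key/value: (H, W, c); weight_net: maps the concatenated
    flattened key patch and query vector to s*s*c weights."""
    h, w, c = query.shape
    pad = s // 2
    kp = np.pad(key, ((pad, pad), (pad, pad), (0, 0)))
    vp = np.pad(value, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(query)
    for i in range(h):
        for j in range(w):
            patch_k = kp[i:i + s, j:j + s].reshape(-1)
            A = weight_net(np.concatenate([patch_k, query[i, j]]))
            A = A.reshape(s, s, c)
            out[i, j] = (A * vp[i:i + s, j:j + s]).sum(axis=(0, 1))
    return out
```

Unlike the global attention, the predicted weights here are dense over a small neighborhood, which is what lets the module transfer stroke-level structure.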
### Typeface Classifier
We leverage existing datasets from high-resource languages in both the training and testing processes to help synthesize datasets with diverse backgrounds and font styles. To help the generator capture the style of input text regardless of its language, we introduce a pre-trained typeface classifier inspired by [19]. Specifically, the classifier is trained on synthetic datasets of high-resource languages, which are generated by [17]. The classifier is trained to identify the synthetic fonts and is only used to provide a perceptual signal for training the generative models.
The details of the synthetic data used to train the classifier are described in Sec 4.2. Given a word image, a VGG19 network is employed to classify its style. We first encode the font class into one-hot labels and train the network with a softmax loss. After the classifier is trained, we employ the network as supervision and compute the style alignment loss \(\mathcal{L}_{type}\). Specifically, given a word image, the loss is as follows:
\[\mathcal{L}_{type}=\lambda_{1}\ell_{per}+\lambda_{2}\ell_{tex}+ \lambda_{3}\ell_{emb}, \tag{4}\] \[\ell_{per}=\mathbb{E}[\sum_{i}\frac{1}{M_{i}}\parallel\phi_{i}(I_ {s})-\phi_{i}(O_{c})\parallel_{1}],\] (5) \[\ell_{tex}=\mathbb{E}_{i}[\parallel G_{i}^{\phi}(I_{s})-G_{i}^{ \phi}(O_{c})\parallel_{1}],\] (6) \[\ell_{emb}=\mathbb{E}[\parallel\psi(I_{s})-\psi(O_{c}) \parallel_{1}]. \tag{7}\]
The perceptual loss \(\ell_{per}\) is computed from the feature maps at layer \(i\), denoted \(\phi_{i}\). We normalize the loss by the number of elements in the particular feature map, denoted \(M_{i}\). We also compute a texture loss, \(\ell_{tex}\), from the Gram matrices \(G_{i}^{\phi}=\phi_{i}\phi_{i}^{T}\) of the feature maps. These losses capture the background style information and are computed on the output features of relu1_1, relu2_1, relu3_1, relu4_1, and relu5_1. Besides, the embedding loss \(\ell_{emb}\) is computed on the feature maps of the penultimate layer \(\psi\) and mainly learns the font style of the generated image.
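The three terms of Eqs. 4-7 can be sketched numerically. This is an assumption-laden NumPy illustration, not the paper's code: the classifier features are passed in as plain arrays, the expectation is replaced by a mean over elements, and the default `lambdas` follow the weights reported in Sec. 4.1.

```python
import numpy as np

def gram(feat):
    """Gram matrix G = phi phi^T of a (C, H, W) feature map,
    normalized by the number of spatial positions."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    return f @ f.T / f.shape[1]

def style_alignment_loss(feats_s, feats_o, emb_s, emb_o,
                         lambdas=(1.0, 250.0, 1.0)):
    """Sketch of Eqs. 4-7: perceptual term (L1 on feature maps,
    normalized per map), texture term (L1 on Gram matrices), and
    embedding term (L1 on penultimate-layer features).
    feats_s/feats_o: lists of (C, H, W) feature maps for the style
    image and the generated output; emb_s/emb_o: embedding vectors."""
    l_per = sum(np.abs(a - b).mean() for a, b in zip(feats_s, feats_o))
    l_tex = sum(np.abs(gram(a) - gram(b)).mean()
                for a, b in zip(feats_s, feats_o))
    l_emb = np.abs(emb_s - emb_o).mean()
    return lambdas[0] * l_per + lambdas[1] * l_tex + lambdas[2] * l_emb
```

Because the Gram matrix discards spatial arrangement, the texture term compares style statistics rather than pixel layout, which is why it can transfer background appearance across languages.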
### Loss Function
Our model aims to achieve automatic scene text generation via a weakly supervised method. We adopt four losses: 1) an adversarial loss to produce realistic images; 2) a content-consistency loss to encourage the content of the generated image to be consistent with the content image; 3) an image reconstruction loss to maintain domain-invariant features; and 4) a style alignment loss to bridge the style gap between generated images and scene text images. We describe the formulas of these losses and the full objective in this section.
**Adversarial loss:** our model aims to generate plausible images by solving a min-max optimization problem. The generative network \(G\) tries to fool the discriminator \(D\) by generating fake images, and the adversarial loss penalizes incorrect judgments when real or generated images are input to the discriminator. The hinge GAN loss [44] is used:
\[\mathcal{L}_{adv}^{D} =-\mathbb{E}_{I_{s},y\in P_{s},I_{c}\in P_{c}}\min(0,-1+D(O_{c},y ))-\mathbb{E}_{I_{s},y\in P_{s},I_{c}\in P_{c}}\min(0,-1-D(G(I_{s},I_{c}),y)), \tag{8}\] \[\mathcal{L}_{adv}^{G} =-\mathbb{E}_{I_{s},y\in P_{s},I_{c}\in P_{c}}D(G(I_{s},I_{c}),y). \tag{9}\]
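Rewriting Eqs. 8-9 with \(-\min(0, -1+t) = \max(0, 1-t)\) gives the familiar hinge form, which can be sketched directly (a minimal NumPy version over batches of discriminator scores; the class-conditioning argument \(y\) is elided):

```python
import numpy as np

def hinge_d_loss(d_real, d_fake):
    """Hinge discriminator loss (Eq. 8): penalize real scores below +1
    and fake scores above -1; zero loss once both margins are met."""
    return (np.maximum(0, 1 - d_real).mean()
            + np.maximum(0, 1 + d_fake).mean())

def hinge_g_loss(d_fake):
    """Hinge generator loss (Eq. 9): raise the discriminator score of
    generated images."""
    return -d_fake.mean()
```

The margins mean the discriminator stops receiving gradient from samples it already classifies confidently, which is one reason the hinge loss trains stably.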
**Content consistent loss:** the adversarial loss helps the model generate a realistic style but ignores the correctness of the content. To prevent mode collapse and to ensure that features extracted from the same content remain consistent after the content encoder \(f_{c}\), we impose a content consistent loss:
\[\mathcal{L}_{cnt}=\mathbb{E}_{I_{s}\in P_{s},I_{c}\in P_{c}}\left\|Z_{c}-f_{ c}(G(I_{s},I_{c}))\right\|_{1}. \tag{10}\]
\(\mathcal{L}_{cnt}\) ensures that, given a source content image \(I_{c}\) and the corresponding generated image, their feature maps after the content encoder \(f_{c}\) are consistent.
**Image reconstruction loss:** the adversarial loss helps the model generate realistic styles while focusing on high-frequency details. With recognition-level labels, the generator should be able to reconstruct the source image \(I_{c}\) when given its original style, so we impose a reconstruction loss:
\[\mathcal{L}_{rec}=\mathbb{E}_{I_{c}\in P_{c}}\left\|I_{c}-G(I_{s},O_{c}) \right\|_{1}. \tag{11}\]
The objective helps preserve domain-invariant characteristics (_e.g.,_ content) of the input image \(I_{c}\).
**Full objective loss:** combining all the loss, our full objective loss is given as follow:
\[\mathcal{L}=\max_{D}\min_{G}\mathcal{L}_{adv}+\lambda_{img}\mathcal{L}_{rec}+ \lambda_{cnt}\mathcal{L}_{cnt}+\lambda_{styl1}\mathcal{L}_{type}(O_{c1})+ \lambda_{styl2}\mathcal{L}_{type}(O_{c2}), \tag{12}\]
where \(\lambda_{img}\), \(\lambda_{cnt}\), \(\lambda_{styl1}\), and \(\lambda_{styl2}\) are parameters controlling the weight of each objective. The details of these hyper-parameters are reported in Sec 4.1.
## 4 Experiments
In this section, we evaluate our proposed model based on scene text recognition tasks for Korean and Kazakh. The implementation details are described first. We then introduce our dataset. After that, the results of our experiments are shown to verify the advantages of our model.
### Implementation Details
The weights of the convolution layers are initialized with He initialization [13], the biases are set to zero, and the weights of the linear layers are sampled from a Gaussian distribution with mean 0 and standard deviation 0.01. We use the Adam optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\) for the style encoder, while the content encoder and decoder are optimized using the RMSprop optimizer with \(\alpha=0.99\). The model is trained for 200 epochs with a learning rate of 0.0001 and a weight decay of 0.0001. The hinge adversarial loss [44] is used, with R1 regularization [25] using \(\gamma=10\). We empirically set the hyper-parameters as follows: \(\lambda_{1}=1\), \(\lambda_{2}=250\), \(\lambda_{3}=1\), \(\lambda_{cnt}=1\), \(\lambda_{img}=10\), \(\lambda_{styl1}=1\), and \(\lambda_{styl2}=0.1\).
With regard to the generation model, the batch size is set to 16 and we resize the text image height to 64 and keep the same aspect ratio. In the training process, we randomly select the batch data and resize these images to the average width, and during testing, we input images of variable width to get the desired results. In the integrated attention, \(F_{cnt}^{high}\) and \(F_{gen}^{high}\) are extracted from the second downsampling and penultimate upsampling layer, and \(F_{cnt}^{low}\) and \(F_{gen}^{low}\) are
extracted from the first downsampling and the last upsampling layer. Table 1 shows the details of our generative networks, including both the encoder and decoder components. BN, IN, and AdaIN denote batch normalization, instance normalization, and adaptive instance normalization, respectively. FC means fully connected layers. To evaluate our method, we employ three recognition methods; the batch size is 256, and none of these methods is trained with data augmentation unless otherwise specified.
### Dataset and Evaluation Metrics
To evaluate our model for scene text recognition in low-resource languages, we choose Kazakh (84 characters) and Korean (2180 characters) to represent low-resource alphabetic and logographic languages, respectively, and English and Chinese to represent the high-resource languages that help train our model. For Kazakh, we collect a dataset that contains 22,182 Kazakh images and 81,900 English images for training, and 4,571 Kazakh images for testing. For Korean, the training set consists of a total of 16,279 Korean images and 113,491 Chinese images, and the test set contains 4,644 Korean images cropped from ICDAR2019-MLT [28]. In our experiments, all the training and testing images are box images cropped from real scene images. Besides, the training sets for the typeface classifiers are generated by SynthText using 284 Chinese fonts and 800 English fonts.
In this paper, we evaluate the effectiveness of synthetic data in training scene recognition models. To this end, we generate datasets using different methods and evaluate the trained models on real scene text images. For each experiment, a total of 1M box images generated by one of the comparison methods is used to train the recognition models. Since this study investigates the contribution of generated images to training recognition models, we report the accuracy and normalized edit distance [31] on the testing set instead of the FID score [14] commonly used in image generation tasks. Specifically, the normalized edit distance is computed as:
\[Norm=1-D(s_{i},\hat{s_{i}}), \tag{13}\]
where \(D(\cdot)\) indicates the Levenshtein distance, and \(s_{i}\) and \(\hat{s_{i}}\) indicate the predicted text string and the corresponding ground truth, respectively.
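The metric of Eq. 13 can be computed as follows. One assumption in this sketch: for the similarity to lie in \([0,1]\), we take \(D\) to be the Levenshtein distance normalized by the length of the longer string, which is the usual convention for the normalized edit distance metric; the paper does not spell this out.

```python
def levenshtein(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def norm_edit_sim(pred, gt):
    """Eq. 13 with D as the length-normalized Levenshtein distance;
    1.0 means an exact match, 0.0 means nothing in common."""
    if not pred and not gt:
        return 1.0
    return 1.0 - levenshtein(pred, gt) / max(len(pred), len(gt))
```

Averaging this score over the test set gives the edit-distance (ED) columns reported in Tables 2-4.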
### Comparison with State-of-the-art Methods
In this subsection, we first present the quantitative results based on the TRBA [2] model to show that our method is effective in alleviating the shortage of data for low-resource languages. Then we demonstrate the robustness of our
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & Type & Kernel size & Resample & Padding & Feature maps & Normalization & Nonlinearity \\ \hline \multirow{8}{*}{Style encoder} & Convolution & 3 & MaxPool & 1 & 64 & BN & ReLU \\ & Convolution & 3 & MaxPool & 1 & 128 & BN & ReLU \\ & Convolution & 3 & - & 1 & 256 & BN & ReLU \\ & Convolution & 3 & MaxPool & 1 & 256 & BN & ReLU \\ & Convolution & 3 & - & 1 & 512 & BN & ReLU \\ & Convolution & 3 & MaxPool & 1 & 512 & BN & ReLU \\ & Convolution & 3 & - & 1 & 512 & BN & ReLU \\ & Convolution & 3 & MaxPool & 1 & 512 & BN & ReLU \\ & Average pooling & - & - & - & 128 & - & - \\ & FC & - & - & - & 128 & - & - \\ \hline \multirow{4}{*}{Content encoder} & Deform. conv. & 7 & - & 3 & 64 & IN & ReLU \\ & Deform. conv. & 4 & stride-2 & 1 & 128 & IN & ReLU \\ & Res block \(\times\) 2 & 3 & - & 1 & 256 & IN & ReLU \\ \hline \multirow{4}{*}{Decoder} & Res block \(\times\) 2 & 3 & - & 1 & 256 & AdaIN & ReLU \\ & Convolution & 5 & Upsample & 2 & 128 & AdaIN & ReLU \\ \cline{1-1} & Convolution & 5 & Upsample & 2 & 64 & AdaIN & ReLU \\ \cline{1-1} & Convolution & 7 & - & 3 & 3 & - & tanh \\ \hline \hline \end{tabular}
\end{table}
Table 1: Architecture of Generative networks.
proposed method by presenting results obtained from three different scene recognition models. After that, qualitative results are shown to describe the disadvantages of the comparison methods. Last but not least, we design an ablation study to investigate the effect of different components. We compare our model with the following methods.
* SRNet [34]: SRNet employs a text conversion module and a background inpainting module to generate rendered text and text-erased images, respectively. SRNet relies on pre-designed font libraries to generate training datasets.
* TUNIT [3]: TUNIT is an unsupervised image-to-image translation model which separates the content and style of natural animal images and combines them with an adaptive instance normalization layer.
* DGFont [36]: DGFont is an unsupervised font generation method that introduces the feature deformable skip connection to deform and transform the character of one font to another.
We also explore the data augmentation used in [33] and argue that data augmentation alone cannot solve the recognition problem of low-resource languages. To further evaluate the generalizability of our method across various scenes, we employ three state-of-the-art scene recognition models.
* TRBA[2]: TRBA is a previous time-tested state-of-the-art model and it is widely used in evaluating synthetic dataset [24, 40]. The method introduces a unified four-stage framework that most existing scene text recognition models fit into. We choose the TPS-ResNet-BiLSTM-Attn architecture in our experiments.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Training data & Kazakh(Acc) & Kazakh(ED) & Korean(Acc) & Korean(ED) \\ \hline Real dataset & 54.295 & 0.799 & 40.909 & 0.662 \\ Real dataset(aug)[33] & 59.899 & 0.830 & 48.971 & 0.714 \\ \hline SRNet[34] & 34.697 & 0.644 & - & - \\ TUNIT[3] & 55.874 & 0.800 & 17.539 & 0.444 \\ DGFont[36] & 60.053 & 0.820 & 39.237 & 0.625 \\ Ours & 71.319 & 0.873 & 64.837 & 0.806 \\ \hline SynthText[12] & 62.248 & 0.799 & 68.418 & 0.820 \\ SynthText+SRNet & 43.251 & 0.711 & - & - \\ SynthText+TUNIT & 68.366 & 0.853 & 66.467 & 0.818 \\ SynthText+DGFont & 70.750 & 0.859 & 69.554 & 0.834 \\ SynthText+Ours & 73.047 & 0.877 & 72.706 & 0.849 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Compared with the state-of-the-art Methods based on TRBA[2]
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & Training data & Kazakh(Acc) & Kazakh(ED) & Korean(Acc) & Korean(ED) \\ \hline \multirow{4}{*}{TRBA [2]} & Real dataset & 54.295 & 0.799 & 40.909 & 0.662 \\ & Ours & 71.319 & 0.873 & 64.837 & 0.806 \\ & SynthText & 62.248 & 0.799 & 68.418 & 0.820 \\ & SynthText+Ours & 73.047 & 0.877 & 72.706 & 0.849 \\ \hline \multirow{4}{*}{PARSeq [4]} & Real dataset* & 56.836 & 0.808 & 37.263 & 0.627 \\ & Ours & 71.188 & 0.870 & 64.415 & 0.804 \\ & SynthText & 67.928 & 0.849 & 69.754 & 0.832 \\ & SynthText+Ours & 74.491 & 0.883 & 74.874 & 0.871 \\ \hline \multirow{4}{*}{SCATTER [22]} & Real dataset & 63.728 & 0.875 & 48.395 & 0.715 \\ & Ours & 77.182 & 0.932 & 64.301 & 0.797 \\ \cline{1-1} & SynthText & 72.982 & 0.923 & 66.981 & 0.823 \\ \cline{1-1} & SynthText+Ours & 80.398 & 0.949 & 73.768 & 0.855 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluate the generalizability of the proposed methods
* PARSeq [4]: PARSeq combines the features from a Permuted Language Modeling (PLM) multi-head attention model with the encoded features from a Vision Transformer backbone [9]. Note that this method requires a large number of training samples due to its transformer architecture.
* SCATTER [22]: SCATTER trains deep BiLSTM encoders by stacking BiLSTM blocks with intermediate supervision, which improves the encoding of contextual dependencies. Moreover, SCATTER utilizes selective decoders to process the encoded features more efficiently.
#### 4.3.1 Quantitative comparison
The quantitative results are shown in Table 2. The quantitative experiments are divided into three parts by horizontal lines. In the first part, we compare the TRBA model with and without the augmentation used in [33]. For this experiment, the recognition models are trained only on real datasets from Kazakh and Korean, and _aug_ denotes the data with the augmentation used in [33]. The model with data augmentation achieves higher accuracy and edit distance, but the improvement is limited. In the second part, each method generates 1M images, which we use to train recognition models respectively. It can be seen that the proposed method outperforms all comparison methods. Specifically, SRNet can hardly generate useful scene text images for recognition datasets. TUNIT and DGFont show promising results on Kazakh but are still impractical for Korean, whose text is more complex than Kazakh. In the third part, we replace half of the original 1M images generated by each method with scene text images generated by SynthText. DGFont and our proposed method achieve better results when combined with SynthText. Our method improves the results of SynthText by more than 10 points in recognition accuracy.
Figure 4: Qualitative evaluation of Kazakh image generation.
Figure 5: Qualitative evaluation of Korean image generation.
To assess the generalizability of our method, we utilized three scene text recognition methods and reported results for each method based on four distinct training sets: a real dataset, a dataset generated by our method, a dataset generated by SynthText, and a combination of the SynthText dataset and our generated dataset. As shown in Table 3, the recognition accuracy for Korean was poor when the recognition model was trained only on real datasets, while our generated datasets effectively improved the performance of all three recognition models. Since the real datasets contain only a few samples, we employed data augmentation when training PARSeq with the real dataset to enhance its performance, denoted by \(*\).
#### 4.3.2 Qualitative comparison
To verify the capability of the proposed method to generate realistic scene text images, visual comparisons are shown in Figure 4 and Figure 5. For each column of images, the first row shows the text content of the generated images. The second row to the penultimate row show the results generated by different methods. The last row shows the scene text image that provides the background and font styles. SRNet does not perform well on real-world images because it is trained on synthetic datasets. Besides, the images generated by SRNet are prone to blur due to the overlap between the original content and the new content. The images generated by TUNIT are often unreadable and cannot be used to train scene text recognition models. DGFont generates readable text images in easy cases, but it fails in complex cases containing long text or diverse text styles. In contrast, the proposed method is able to generate realistic scene text images with clear content and similar font styles.
### Ablation study
DGFont [36] and DGFont++ [7] have shown that inserting two deformable convolutions as skip connections helps maintain the complete content of generated images. In this part, we successively remove different components from the model, including global attention3, local attention1, and the typeface classifier, and explore their influence. For each language, we conduct the ablation study on a dataset of 1M images synthesized by our method. Quantitative and qualitative comparisons are shown in Table 4 and Figure 6.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Methods & Kazakh(Acc) & Kazakh(ED) & Korean(Acc) & Korean(ED) \\ \hline Full model & 71.319 & 0.873 & 64.837 & 0.806 \\ w/o global attention3 & 69.044 & 0.861 & 63.100 & 0.792 \\ w/o local attention1 & 63.509 & 0.837 & 55.596 & 0.739 \\ w/o typeface classifier & 59.440 & 0.815 & 54.202 & 0.736 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study.
Figure 6: Ablation study.
**(1) Effectiveness of global attention3.** We first remove global attention3 from the full model. The quantitative results decrease, especially for Kazakh. Figure 6 shows that without global attention3, the generated images are prone to losing details, which is most evident in alphabetic languages like Kazakh whose scripts are simple. For Korean, whose text is more complex than Kazakh, the decreases in accuracy and edit distance are smaller.
**(2) Effectiveness of local attention1.** Local attention helps learn local relationships between content images and generated images. When we further remove the local attention, the generated images suffer from blurring and artifacts around text strokes, and for the recognition tasks, accuracy and edit distance drop significantly for both Kazakh and Korean.
**(3) Effectiveness of the typeface classifier.** The typeface classifier is introduced to bridge the script style gap between different languages. Without it, the model cannot generate reliable images.
## 5 Conclusion
In this paper, we propose a weakly supervised scene text generation method. By incorporating high-resource languages into the training process, the method enables cross-language generation and can produce a large number of scene text images with different backgrounds and font styles for low-resource languages. Our method disentangles the content and style features of scene text images and preserves the complete content structure of generated images using an integrated attention module. Moreover, we incorporate a pre-trained font classifier to bridge the style gap between different languages. We evaluate our method using state-of-the-art scene text recognition models and show that our generated scene text significantly improves scene text recognition accuracy and can complement other methods to achieve even higher accuracy. Our method shows promise in alleviating the problem of collecting sufficient datasets for low-resource languages, as it requires only a limited number of recognition-level labels.
## 6 Acknowledgements
This work was supported by the National Key Research and Development Program of China under Grant No. 2020AAA0107903.
## CRediT authorship contribution statement
**Yangchen Xie:** Conceptualization, Software, Writing - original draft, Writing - review & editing. **Xinyuan Chen:** Supervision, Validation, Writing - review & editing. **Hongjian Zhan:** Supervision, Validation, Writing - review & editing. **Palaiahankote Shivakumara:** Supervision, Validation, Writing - review & editing. **Bing Yin:** Supervision, Validation, Writing - review & editing. **Cong Liu:** Supervision, Validation, Writing - review & editing. **Yue Lu:** Supervision, Validation, Writing - review & editing.
2301.06812 | An uncountable number of proofs of Pythagoras Theorem | We give an infinite number of proofs of Pythagoras theorem. Some can be
classified as `fractal proofs'. | Gaurav Bhatnagar, Sagar Shrivastava | 2023-01-17T11:14:04Z | http://arxiv.org/abs/2301.06812v2 | # An uncountable number of proofs of Pythagoras Theorem
###### Abstract.
We give an infinite number of proofs of Pythagoras theorem. Some can be classified as 'fractal proofs'.
Key words and phrases: Pythagoras theorem, fractal. 2020 Mathematics Subject Classification: Primary 00A05; Secondary 51-01.
## 1. Introduction
Virtually anyone who is someone has given a proof of the Pythagoras theorem; we recommend two recent histories given by Maor [4] and Veljan [6], and an interesting set of proofs and commentary by Alsina and Nelsen [2]. The classic compendium and a classification of proofs is by Loomis [3], who says:
(there are) many purely geometric demonstrations of this famous theorem accessible to the teacher, as well as an unlimited number of proofs based on the algebraic method of geometric investigation.
Loomis said this in 1927, so we can say that it has long been suspected that there are an infinite number of proofs of the Pythagoras theorem by the algebraic method. However, the proofs of Pythagoras theorem presented by Loomis and others are still finite in number. The objective of this paper is to give an infinite set of proofs of this famous theorem by what Loomis calls the 'algebraic method'.
We begin by presenting two proofs. One of them is perhaps one of the oldest ever; the other we believe is new. Infinite sequences of proofs are obtained by iteration. Taking the limit, we find proofs that may be called 'fractal proofs' of Pythagoras theorem. We could not find this type of proof in the literature.
The reader may enjoy working out the details of the proofs given in Figure 1. The picture on the left is a proof due to Bhaskara (XIIth century) [5, p. 4], and the one on the right the corresponding 'fractal proof'.
In this paper, we motivate and provide two other proofs of this nature. Next, we show how to modify our approach to obtain an uncountable number of algebraic proofs. Previously, Alsina and Nelsen [1, p. 47] (see also [2, p. 257]) have also indicated how to obtain an uncountable number of proofs of Pythagoras theorem via tiling.
Finally, we note that we can reverse-engineer one of the proofs to obtain the formula for the geometric sum assuming the Pythagoras theorem which leads to a nice surprise for a Calculus student.
## 2. Two proofs of the Pythagorean theorem
We begin with two related proofs of the Pythagorean theorem. The first proof is based on a figure (see Figure 2) from the earliest known proof of the theorem [5, p. 3]. This has been credited to Chou-pei Suan-ching (circa 250 B.C.) by [6]. The second appears to be new.
**Proof # 1**.: We begin with the picture in Figure 2. Consider any right-angled triangle with perpendicular \(a_{1}\), base \(b_{1}\), and hypotenuse of length \(c_{1}\). Draw a square \(ABCD\) of side length \(a_{1}+b_{1}\). Let \(E\) be the point on the edge \(AB\) such that \(|AE|=a_{1}\) and \(|EB|=b_{1}\). Similarly, let \(F,G,H\) be points on \(BC,CD,DA\) respectively, with lengths as shown.
It is easy to verify (using the fact that the sum of angles in a triangle is \(180^{\circ}\)) that the quadrilateral \(EFGH\) is a square. Now computing the area of the square of side \(a_{1}+b_{1}\) in two different ways, we have
\[(a_{1}+b_{1})^{2}=4\times\frac{1}{2}a_{1}b_{1}+c_{1}^{2},\]
from where we obtain
\[a_{1}^{2}+b_{1}^{2}=c_{1}^{2}.\]
This completes a proof of the Pythagoras theorem.
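The two area computations in Proof # 1 are easy to check numerically. The following sketch is ours, not part of the paper: it takes the Pythagorean value of \(c_{1}\) as given and confirms that the square of side \(a_{1}+b_{1}\) is exactly exhausted by the four triangles and the inner square \(EFGH\).

```python
import math

def proof1_identity(a, b):
    """Taking c = hypot(a, b) as given, check the two area computations agree:
    (a+b)^2 = 4*(1/2)ab + c^2, i.e. four triangles plus the inner square EFGH."""
    c = math.hypot(a, b)
    outer_area = (a + b) ** 2
    pieces = 4 * 0.5 * a * b + c ** 2
    return math.isclose(outer_area, pieces)

assert proof1_identity(3, 4)
assert proof1_identity(1, math.sqrt(2))
```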
**Proof # 2**.: Next we modify the picture of the first proof as shown in Figure 3 to obtain another proof of the Pythagorean theorem. This proof appears to be new.
Let \(I\) be the point on the edge \(EF\) such that \(\frac{|EI|}{|IF|}=\frac{b_{1}}{a_{1}}\). We denote by \(a_{2}\) and \(b_{2}\) the lengths \(|IF|\) and \(|EI|\).
Similarly we have points \(J,K,L\) on \(FG,GH,HE\), such that
\[\frac{|FJ|}{|JG|}=\frac{|GK|}{|KH|}=\frac{|HL|}{|LE|}=\frac{b_{1}}{a_{1}}.\]
Figure 1. Bhāskara’s proof of the Pythagoras theorem and the corresponding fractal proof
It is easy to see that the triangles \(\triangle ELI,\triangle FIJ,\triangle GJK,\triangle HKL\) are all right-angled triangles, and congruent to each other. Let \(c_{2}\) denote the length of the hypotenuse of these triangles.
The following are easy to prove.
Figure 3. Proof # 2.
Figure 2. Proof # 1: Chou-pei Suan-ching’s proof.
* The triangle \(\triangle ELI\) is similar to the bigger triangles from the first proof, namely \(\triangle AEH\), \(\triangle BFE,\triangle CGF,\triangle DHG\). So too are \(\triangle FIJ\), \(\triangle GJK\) and \(\triangle HKL\).
* The quadrilateral \(IJKL\) is a square.
* The edges \(AB\) and \(LI\) are parallel.
* The lengths \(a_{2}\), \(b_{2}\) and \(c_{2}\) are given by \[a_{2}=\frac{a_{1}c_{1}}{a_{1}+b_{1}};\;\;b_{2}=\frac{b_{1}c_{1}}{a_{1}+b_{1}}; \;\;c_{2}=\frac{c_{1}^{2}}{a_{1}+b_{1}}.\]
Let \(EM\) be the perpendicular dropped from \(E\) on \(LI\); similarly \(GN\) is the perpendicular dropped from \(G\) on \(JK\). As these are both perpendiculars dropped in the same orientation of congruent triangles, they have the same length, say \(d_{1}=|EM|=|GN|\). Thus
\[a_{1}+b_{1}=|BC|=|EM|+|IJ|+|GN|=2d_{1}+c_{2} \tag{1}\]
To find the value of \(d_{1}\), compute the area of \(\triangle ELI\) in two ways:
\[\text{area}(\triangle ELI)=\frac{1}{2}|EL||EI|=\frac{c_{1}^{2}a_{1}b_{1}}{2(a_ {1}+b_{1})^{2}};\]
and,
\[\text{area}(\triangle ELI)=\frac{1}{2}|LI||EM|=\frac{c_{1}^{2}d_{1}}{2(a_{1}+b _{1})}.\]
Equating the two gives us
\[d_{1}=\frac{a_{1}b_{1}}{a_{1}+b_{1}}.\]
Putting these values in (1), we get:
\[a_{1}+b_{1}=\frac{c_{1}^{2}}{a_{1}+b_{1}}+\frac{2a_{1}b_{1}}{a_{1}+b_{1}}.\]
Multiplying the equation by \(a_{1}+b_{1}\) gives us \(a_{1}^{2}+b_{1}^{2}=c_{1}^{2}\). This completes the second proof of Pythagoras theorem.
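The quantities in Proof # 2 can likewise be verified numerically. This sketch (ours, not part of the paper) checks equation (1), \(a_{1}+b_{1}=2d_{1}+c_{2}\), together with the stated formulas for \(a_{2},b_{2},c_{2}\) and \(d_{1}\):

```python
import math

def proof2_identities(a1, b1):
    """Check d1 = a1*b1/(a1+b1), the formulas for a2, b2, c2, and equation (1)."""
    c1 = math.hypot(a1, b1)
    s = a1 + b1
    a2, b2, c2 = a1 * c1 / s, b1 * c1 / s, c1 ** 2 / s
    d1 = a1 * b1 / s
    return (math.isclose(s, 2 * d1 + c2)               # equation (1)
            and math.isclose(c2, math.hypot(a2, b2)))  # inner triangle is right-angled

assert proof2_identities(3, 4)
```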
## Fractal proofs of the Pythagoras theorem
An iterate of the first proof is hidden in Figure 3. This leads to an infinite sequence of proofs.
Consider Figure 4. On the one hand, the area of the larger square is \((a_{1}+b_{1})^{2}\). On the other hand, from Figure 4, we see that it equals
\[4\times \frac{1}{2}a_{1}b_{1}+4\times\frac{1}{2}a_{2}b_{2}+c_{2}^{2}\] \[=2a_{1}b_{1}+c_{1}^{2}.\]
The last step follows by noting that \(c_{2}^{2}+2a_{2}b_{2}=c_{1}^{2}\), a fact that is obvious from the figure. Now this reduces to the earlier proof indicated by Figure 2.
**Proof # n+1.** Now we iterate. Suppose that we make \(n\) squares (inside the square of size \(a_{1}+b_{1}\)) by repeating the above steps. Let \(a_{n}\), \(b_{n}\) and \(c_{n}\) be the relevant lengths. We find that
\[(a_{1}+b_{1})^{2} =2a_{1}b_{1}+2a_{2}b_{2}+\cdots+2a_{n-1}b_{n-1}+2a_{n}b_{n}+c_{n}^ {2} \tag{2}\] \[=2a_{1}b_{1}+2a_{2}b_{2}+\cdots+2a_{n-1}b_{n-1}+c_{n-1}^{2}\] \[\cdots\] \[=2a_{1}b_{1}+c_{1}^{2}.\]
In fact, the above gives an infinite (but still countable) set of proofs of Pythagoras theorem.
However, if we iterate endlessly, we get another proof--this time obtaining the fractal image given in Figure 5.
### Fractal proof #1
We now begin the proof from Figure 5. We note that the lengths \(a_{n}\), \(b_{n}\) and \(c_{n}\) (for \(n>1\)) are given by
\[a_{n+1}=\frac{a_{n}c_{n}}{a_{n}+b_{n}};\;\;b_{n+1}=\frac{b_{n}c_{n}}{a_{n}+b_{ n}};\;\;c_{n+1}=\frac{c_{n}^{2}}{a_{n}+b_{n}}.\]
In addition, recall that \(c_{n}=a_{n+1}+b_{n+1}\). From these we obtain:
\[a_{n}=a_{1}\Big{(}\frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{n-1};b_{n}=b_{1}\Big{(} \frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{n-1};c_{n}=c_{1}\Big{(}\frac{c_{1}}{a_{1}+b _{1}}\Big{)}^{n-1}. \tag{3}\]
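The closed form (3) can be confirmed against the recursion it was derived from. The following check (ours) iterates the recursion and compares each length with \(a_{1}r^{n-1}\), \(b_{1}r^{n-1}\), \(c_{1}r^{n-1}\), where \(r=c_{1}/(a_{1}+b_{1})\):

```python
import math

def iterate_lengths(a1, b1, n):
    """Yield (a_k, b_k, c_k), k = 1..n, via the recursion in the text."""
    a, b = a1, b1
    c = math.hypot(a, b)
    for _ in range(n):
        yield a, b, c
        a, b, c = a * c / (a + b), b * c / (a + b), c ** 2 / (a + b)

a1, b1 = 3.0, 4.0
c1 = math.hypot(a1, b1)
r = c1 / (a1 + b1)
for k, (a, b, c) in enumerate(iterate_lengths(a1, b1, 6), start=1):
    assert math.isclose(a, a1 * r ** (k - 1))   # closed form (3)
    assert math.isclose(b, b1 * r ** (k - 1))
    assert math.isclose(c, c1 * r ** (k - 1))
```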
Figure 4. The third proof

We know the area of this square is \((a_{1}+b_{1})^{2}\). The other way is to consider the sum of the areas of the individual triangles making up the square. The area of the outermost four triangles is \(4\times a_{1}b_{1}/2=2a_{1}b_{1}\); the area of the next four is \(2a_{2}b_{2}\), and so on. The total area is evidently
\[\text{Area of square} =2a_{1}b_{1}+2a_{2}b_{2}+2a_{3}b_{3}+\cdots\] \[=2a_{1}b_{1}+2a_{1}b_{1}\left(\frac{c_{1}}{a_{1}+b_{1}}\right)^{2 }+2a_{1}b_{1}\left(\frac{c_{1}}{a_{1}+b_{1}}\right)^{4}+\cdots\] \[=\frac{2a_{1}b_{1}}{1-(\frac{c_{1}}{a_{1}+b_{1}})^{2}},\]
where we have used (3) and summed the geometric series. Equating the two areas, we get:
\[\left(a_{1}+b_{1}\right)^{2}=\frac{2a_{1}b_{1}}{1-(\frac{c_{1}}{a_{1}+b_{1}})^ {2}}\implies\left(a_{1}+b_{1}\right)^{2}-c_{1}^{2}=2a_{1}b_{1}\implies a_{1}^{ 2}+b_{1}^{2}=c_{1}^{2}.\]
This completes the proof. Interestingly, to use the sum of the geometric series we need to know that the sum of the lengths of any two sides of a triangle is greater than the third, so that \(c_{1}/(a_{1}+b_{1})<1\).
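The geometric series computation above is easy to test numerically (our sketch, not from the paper): the partial sums of \(2a_{n}b_{n}\) converge to the area \((a_{1}+b_{1})^{2}\), and the closed form of the series agrees.

```python
import math

a1, b1 = 3.0, 4.0
c1 = math.hypot(a1, b1)
r = c1 / (a1 + b1)          # r < 1 by the triangle inequality
total, a, b = 0.0, a1, b1
for _ in range(200):        # partial sum of 2*a_n*b_n; the ratio of the series is r**2
    total += 2 * a * b
    a, b = a * r, b * r
assert math.isclose(total, (a1 + b1) ** 2)
assert math.isclose(2 * a1 * b1 / (1 - r ** 2), (a1 + b1) ** 2)  # closed form
```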
### Fractal proof #2
If we iterate Figure 3, we will get another countable sequence of proofs, and in the limit, a fractal proof of Pythagoras theorem (see Figure 6). It is easy to see that the iterates of \(d_{1}\) satisfy
\[d_{n}=\frac{a_{2n-1}b_{2n-1}}{a_{2n-1}+b_{2n-1}}.\]
Figure 5. Fractal proof #1 of the Pythagoras theorem
The proof is obtained from
\[a_{1}+b_{1}=2d_{1}+2d_{2}+2d_{3}+\cdots.\]
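A numerical check (ours) of the identity \(a_{1}+b_{1}=2d_{1}+2d_{2}+2d_{3}+\cdots\), using the formula for \(d_{n}\) together with the closed form (3):

```python
import math

a1, b1 = 3.0, 4.0
r = math.hypot(a1, b1) / (a1 + b1)
total = 0.0
for n in range(1, 200):
    ak = a1 * r ** (2 * n - 2)         # a_{2n-1} from the closed form (3)
    bk = b1 * r ** (2 * n - 2)         # b_{2n-1}
    total += 2 * ak * bk / (ak + bk)   # 2 * d_n
assert math.isclose(total, a1 + b1)
```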
### Fractal proof #3
Another classical proof of Pythagoras theorem uses the figure on the left in Figure 1 in the introduction. We leave it to the reader to complete the details of the fractal proof (given on the right) arising from this proof. The details are similar to those in the proofs above.
## 3. Uncountably many proofs of the Pythagoras theorem
There are uncountably many pictures that can be obtained from variations of Figure 5, all of which lead to a proof of the Pythagoras theorem. At each step, one can place the inner square in two ways. We associate a binary string with each figure. The digits \(0\) and \(1\) are attached to the figures shown in Figure 7.
Thus Figure 2 is associated to the binary string \(1\), Figure 4 to \(10\) and Figure 5 to \(000000\dots\). Figure 8 shows the figure corresponding to the binary string \(010101\).
We remark that there can be further uncountable families of such proofs, two being provided by analogous variations in Figures 1 and 6.
Figure 6. Fractal proof #2 of the Pythagoras theorem
## 4. Pythagoras theorem implies the geometric sum
This paper has many proofs of the Pythagoras theorem. If we assume the Pythagoras theorem, we can prove the formula for the sum of the geometric series by reverse-engineering the \((n+1)\)th proof given in §2.
From equations (2) and (3) we have
\[(a_{1}+b_{1})^{2}\,=\,2a_{1}b_{1}+2a_{1}b_{1}\Big{(}\frac{c_{1}}{a_{1}+b_{1}} \Big{)}^{2}+\cdots+2a_{1}b_{1}\Big{(}\frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{2n-2}+c_ {1}^{2}\Big{(}\frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{2n-2}\]
or
\[\frac{(a_{1}+b_{1})^{2}-c_{1}^{2n}/(a_{1}+b_{1})^{2n-2}}{2a_{1}b_{1}}=1+\Big{(} \frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{2}+\cdots+\Big{(}\frac{c_{1}}{a_{1}+b_{1}} \Big{)}^{2n-2}.\]
Figure 8. The figure corresponding to 010101
Figure 7. The correspondence of binary digit with the orientation of square
By the Pythagoras theorem, \(2a_{1}b_{1}=(a_{1}+b_{1})^{2}-c_{1}^{2}\), and this expression can be written as
\[\frac{1-\big{(}\frac{c_{1}}{a_{1}+b_{1}}\big{)}^{2n}}{1-\big{(}\frac{c_{1}}{a_{1 }+b_{1}}\big{)}^{2}}=1+\Big{(}\frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{2}+\cdots+ \Big{(}\frac{c_{1}}{a_{1}+b_{1}}\Big{)}^{2n-2}.\]
We have obtained the formula for the geometric sum, but with the restriction that \(a_{1}\), \(b_{1}\) and \(c_{1}\) form the sides of a right-angled triangle. That restriction can be removed by considering it once again as an algebraic identity. Replace \(c_{1}\) by \(\sqrt{q}(a_{1}+b_{1})\), to obtain the geometric sum:
\[1+q+q^{2}+\cdots+q^{n-1}=\frac{1-q^{n}}{1-q}.\]
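The derived geometric-sum formula can be sanity-checked directly (our sketch), both for \(q=(c_{1}/(a_{1}+b_{1}))^{2}\) arising from a right triangle and for an arbitrary \(q\):

```python
import math

def geometric_sum_check(q, n):
    """1 + q + ... + q^(n-1) equals (1 - q^n) / (1 - q)."""
    lhs = sum(q ** k for k in range(n))
    rhs = (1 - q ** n) / (1 - q)
    return math.isclose(lhs, rhs)

assert geometric_sum_check((5 / 7) ** 2, 10)   # q from the 3-4-5 triangle
assert geometric_sum_check(0.3, 25)            # arbitrary q
```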
Calculus students will recognize in this a way to find the derivative of \(f(q)=q^{n}\) at \(q=1\), by taking the limit as \(q\to 1\); or, in other words from Pythagoras theorem by taking \(b_{1}\to 0\), so that \(a_{1}\to c_{1}\) and thus \(q\to 1\). Geometrically, this corresponds to squashing the right-angled triangle until it becomes a straight line of length \(c_{1}\).
2307.04353 | On Sufficient Graphical Models | We introduce a sufficient graphical model by applying the recently developed
nonlinear sufficient dimension reduction techniques to the evaluation of
conditional independence. The graphical model is nonparametric in nature, as it
does not make distributional assumptions such as the Gaussian or copula
Gaussian assumptions. However, unlike a fully nonparametric graphical model,
which relies on the high-dimensional kernel to characterize conditional
independence, our graphical model is based on conditional independence given a
set of sufficient predictors with a substantially reduced dimension. In this
way we avoid the curse of dimensionality that comes with a high-dimensional
kernel. We develop the population-level properties, convergence rate, and
variable selection consistency of our estimate. By simulation comparisons and
an analysis of the DREAM 4 Challenge data set, we demonstrate that our method
outperforms the existing methods when the Gaussian or copula Gaussian
assumptions are violated, and its performance remains excellent in the
high-dimensional setting. | Bing Li, Kyongwon Kim | 2023-07-10T05:30:14Z | http://arxiv.org/abs/2307.04353v1 | # On Sufficient Graphical Models
###### Abstract
We introduce a sufficient graphical model by applying the recently developed nonlinear sufficient dimension reduction techniques to the evaluation of conditional independence. The graphical model is nonparametric in nature, as it does not make distributional assumptions such as the Gaussian or copula Gaussian assumptions. However, unlike a fully nonparametric graphical model, which relies on the high-dimensional kernel to characterize conditional independence, our graphical model is based on conditional independence given a set of sufficient predictors with a substantially reduced dimension. In this way we avoid the curse of dimensionality that comes with a high-dimensional kernel. We develop the population-level properties, convergence rate, and variable selection consistency of our estimate. By simulation comparisons and an analysis of the DREAM 4 Challenge data set, we demonstrate that our method outperforms the existing methods when the Gaussian or copula Gaussian assumptions are violated, and its performance remains excellent in the high-dimensional setting.
**Keywords:** conjoined conditional covariance operator, generalized sliced inverse regression, nonlinear sufficient dimension reduction, reproducing kernel Hilbert space
## 1 Introduction
In this paper we propose a new nonparametric statistical graphical model, which we call the sufficient graphical model, by incorporating the recently developed nonlinear sufficient dimension reduction techniques to the construction of the distribution-free graphical models.
Let \(\mathcal{G}=(\Gamma,\mathcal{E})\) be an undirected graph consisting of a finite set of nodes \(\Gamma=\{1,\ldots,p\}\) and set of edges \(\mathcal{E}\subseteq\{(i,j)\in\Gamma\times\Gamma:i\neq j\}.\) Since \((i,j)\) and \((j,i)\) represent the same edge in an undirected graph, we can assume without loss of generality that \(i>j\). A statistical graphical model links \(\mathcal{G}\) with a random vector \(X=(X^{1},\ldots,X^{p})\) by the conditional independence:
\[(i,j)\notin\mathcal{E}\Leftrightarrow X^{i}\,\hbox to 0.0pt{1}\mskip 2.5mu \hbox{$ \perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt 
\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt \hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.49986pt 
\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$}\kern-7.499886pt\hbox{$\perp$} \kern-7.
encoded in the precision matrix \(\Theta=\Sigma^{-1}\) in the following sense
\[X^{i}\perp\!\!\!\perp X^{j}|X^{-(i,j)}\Leftrightarrow\theta_{ij}=0, \tag{2}\]

where \(\theta_{ij}\) denotes the \((i,j)\)th entry of \(\Theta\); estimating the graph then amounts to estimating the zero pattern of the precision matrix.
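The fact that missing edges correspond to zeros of the precision matrix \(\Theta=\Sigma^{-1}\) can be illustrated with a toy example (ours, not from the paper): for the Gaussian Markov chain \(X^{1}\to X^{2}\to X^{3}\), the missing edge \((1,3)\) appears as a zero entry of \(\Theta\). Exact arithmetic with `fractions` avoids rounding questions.

```python
from fractions import Fraction as F

def inv3(S):
    """Invert a 3x3 matrix via the adjugate; exact with Fraction entries."""
    def cof(i, j):
        r = [k for k in range(3) if k != i]
        c = [k for k in range(3) if k != j]
        minor = S[r[0]][c[0]] * S[r[1]][c[1]] - S[r[0]][c[1]] * S[r[1]][c[0]]
        return (-1) ** (i + j) * minor
    det = sum(S[0][j] * cof(0, j) for j in range(3))
    return [[cof(j, i) / det for j in range(3)] for i in range(3)]

# Markov chain X1 -> X2 -> X3: X1 = Z1, X2 = X1 + Z2, X3 = X2 + Z3, Z_i iid N(0,1).
# Then X1 and X3 are conditionally independent given X2, so theta_13 should be 0.
Sigma = [[F(1), F(1), F(1)],
         [F(1), F(2), F(2)],
         [F(1), F(2), F(3)]]
Theta = inv3(Sigma)
assert Theta[0][2] == 0                       # missing edge (1,3)
assert Theta[0][1] != 0 and Theta[1][2] != 0  # present edges
```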
## 2 Sufficient graphical model
In classical sufficient dimension reduction, we seek the lowest dimensional subspace \(\mathcal{S}\) of \(\mathbb{R}^{p}\) such that, after projecting \(X\in\mathbb{R}^{p}\) onto \(\mathcal{S}\), the information about the response \(Y\) is preserved; that is, \(Y\perp\!\!\!\perp X|P_{\mathcal{S}}X\), where \(P_{\mathcal{S}}\) is the projection onto \(\mathcal{S}\). This subspace is called the central subspace, written as \(\mathcal{S}_{Y|X}\). See, for example, Li (1991), Cook (1994), and Li (2018b). Li et al. (2011) and Lee et al. (2013) extended this framework to the nonlinear setting by considering the more general problem: \(Y\perp\!\!\!\perp X|\mathcal{G}\), where \(\mathcal{G}\) is a sub-\(\sigma\)-field of the \(\sigma\)-field generated by \(X\). The class of functions in a Hilbert space that are measurable with respect to \(\mathcal{G}\) is called the central class, written as \(\mathfrak{S}_{Y|X}\). Li et al. (2011) introduced the Principal Support Vector Machine, and Lee et al. (2013) generalized the Sliced Inverse Regression (Li, 1991) and the Sliced Average Variance Estimate (Cook and Weisberg, 1991) to estimate the central class. Precursors of this theory include Bach and Jordan (2002), Wu (2008), and Wang (2008).
To link this up with the statistical graphical model, let \((\Omega,\mathcal{F},P)\) be a probability space, \((\Omega_{X},\mathcal{F}_{X})\) a Borel measurable space with \(\Omega_{X}\subseteq\mathbb{R}^{p}\), and \(X:\Omega\to\Omega_{X}\) a random vector with distribution \(P_{X}\). The \(i\)th component of \(X\) is denoted by \(X^{i}\) and its range denoted by \(\Omega_{X^{i}}\). We assume \(\Omega_{X}=\Omega_{X^{1}}\times\cdots\times\Omega_{X^{p}}\). Let \(X^{(i,j)}=(X^{i},X^{j})\) and \(X^{-(i,j)}\) be as defined in the Introduction. Let \(\sigma(X^{-(i,j)})\) be the \(\sigma\)-field generated by \(X^{-(i,j)}\). We assume, for each \((i,j)\in\Gamma\times\Gamma\), there is a proper sub \(\sigma\)-field \(\mathcal{G}^{-(i,j)}\) of \(\sigma(X^{-(i,j)})\) such that
\[X^{(i,j)}\perp\!\!\!\perp X^{-(i,j)}|\mathcal{G}^{-(i,j)}. \tag{3}\]
Without loss of generality, we assume \(\mathcal{G}^{-(i,j)}\) is the smallest sub \(\sigma\)-field of \(\sigma(X^{-(i,j)})\) that satisfies the above relation; that is, \(\mathcal{G}^{-(i,j)}\) is the central \(\sigma\)-field for \(X^{(i,j)}\) versus \(X^{-(i,j)}\). There are plenty examples of joint distributions of \(X\) for which the condition (3) holds for every pair \((i,j)\): see Section S10 of the Supplementary Material. Using the properties of conditional independence developed in Dawid (1979) (with a detailed proof given in Li (2018b)), we can show that (3) implies the following equivalence.
**Theorem 1**: _If \(X^{(i,j)}\perp\!\!\!\perp X^{-(i,j)}|\mathcal{G}^{-(i,j)}\), then_
\[X^{i}\perp\!\!\!\perp X^{j}|X^{-(i,j)}\;\Leftrightarrow\;X^{i}\perp\!\!\!\perp X^{j}|\mathcal{G}^{-(i,j)}.\]
This equivalence motivates us to use \(X^{i}\perp\!\!\!\perp X^{j}|\mathcal{G}^{-(i,j)}\) as the criterion to construct the graph \(\mathcal{G}\) after performing nonlinear sufficient dimension reduction of \(X^{(i,j)}\) versus \(X^{-(i,j)}\) for each \((i,j)\in\Gamma\times\Gamma\), \(i>j\).
**Definition 2**: _Under condition (3), the graph defined by_
\[(i,j)\notin\mathcal{E}\Leftrightarrow X^{i}\perp\!\!\!\perp X^{j}|\mathcal{G}^{-(i,j)}\]
_is called the sufficient graphical model._
## 3 Estimation: population-level development
The estimation of the sufficient graphical model involves two steps: the first step is to use nonlinear sufficient dimension reduction to estimate \(\mathcal{G}^{-(i,j)}\); the second is to construct a graph \(\mathcal{G}\) based on reduced data
\[\{(X^{(i,j)},\mathcal{G}^{-(i,j)}):(i,j)\in\Gamma\times\Gamma,i>j\}.\]
In this section we describe the two steps at the population level. To do so, we need some preliminary concepts, such as the covariance operator between two reproducing kernel Hilbert spaces, the mean element in a reproducing kernel Hilbert space, the inverse of an operator, and the centered reproducing kernel Hilbert space. These concepts are defined in the Supplementary Material, Section S1.2. A fuller development of the related theory can be found in Li (2018b). The symbols \(\operatorname{ran}(\cdot)\) and \(\operatorname{\overline{ran}}(\cdot)\) will be used to denote the range and the closure of the range of a linear operator.
### Step 1: Nonlinear dimension reduction
We use the generalized sliced inverse regression (Lee et al., 2013; Li, 2018b) to perform the nonlinear dimension reduction. For each pair \((i,j)\in\Gamma\times\Gamma\), \(i>j\), let \(\Omega_{X^{-(i,j)}}\) be the range of \(X^{-(i,j)}\), which is the Cartesian product of \(\Omega_{X^{1}},\ldots,\Omega_{X^{p}}\) with \(\Omega_{X^{i}}\) and \(\Omega_{X^{j}}\) removed. Let
\[\kappa_{X}^{-(i,j)}:\,\Omega_{X^{-(i,j)}}\times\Omega_{X^{-(i,j)}}\to\mathbb{R}\]
be a positive semidefinite kernel. Let \(\mathscr{H}_{X}^{-(i,j)}\) be the centered reproducing kernel Hilbert space generated by \(\kappa_{X}^{-(i,j)}\). Let \(\Omega_{X^{(i,j)}}\), \(\kappa_{X}^{(i,j)}\), and \(\mathscr{H}_{X}^{(i,j)}\) be the similar objects defined for \(X^{(i,j)}\).
**Assumption 1**: \[E[\kappa_{X}^{-(i,j)}(X^{-(i,j)},X^{-(i,j)})]<\infty,\quad E[\kappa_{X}^{(i,j )}(X^{(i,j)},X^{(i,j)})]<\infty.\]
This is a very mild assumption that is satisfied by most kernels. Under this assumption, the following covariance operators are well defined:
\[\Sigma_{X^{-(i,j)}X^{(i,j)}}:\mathscr{H}_{X}^{(i,j)}\to\mathscr{H}_{X}^{-(i,j )},\quad\Sigma_{X^{-(i,j)}X^{-(i,j)}}:\mathscr{H}_{X}^{-(i,j)}\to\mathscr{H}_{ X}^{-(i,j)}.\]
For the formal definition of the covariance operator, see S1.2. Next, we introduce the regression operator from \(\mathscr{H}_{X}^{(i,j)}\) to \(\mathscr{H}_{X}^{-(i,j)}\). For this purpose we need to make the following assumption.
**Assumption 2**: \(\operatorname{ran}(\Sigma_{X^{-(i,j)}X^{(i,j)}})\subseteq\operatorname{ran} (\Sigma_{X^{-(i,j)}X^{-(i,j)}})\)_._
As argued in Li (2018b), this assumption can be interpreted as a type of collective smoothness in the relation between \(X^{(i,j)}\) and \(X^{-(i,j)}\): intuitively, it requires the operator \(\Sigma_{X^{-(i,j)}X^{(i,j)}}\) sends all the input functions to the low-frequency domain of the operator \(\Sigma_{X^{-(i,j)}X^{-(i,j)}}\). Under Assumption 2, the linear operator
\[R_{X^{-(i,j)}X^{(i,j)}}=\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{-1}\Sigma_{X^{-(i,j)}X^ {(i,j)}}\]
is defined, and we call it the regression operator from \(\mathscr{H}_{X}^{(i,j)}\) to \(\mathscr{H}_{X}^{-(i,j)}\). The meaning of the inverse \(\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{-1}\) is defined in Section S1.2 in the Supplementary Material. The regression operator in this form was formally defined in Lee et al. (2016a), but earlier forms existed in Fukumizu et al. (2004); see also Li (2018a).
**Assumption 3**: \(R_{X^{-(i,j)}X^{(i,j)}}\) _is a finite-rank operator, with rank \(d_{ij}\)._
Intuitively, this assumption means that \(R_{X^{-(i,j)}X^{(i,j)}}\) filters out the high frequency functions of \(X^{(i,j)}\), so that, for any \(f\in\mathscr{H}^{(i,j)}\), \(R_{X^{-(i,j)}X^{(i,j)}}f\) is relatively smooth. It will be violated, for example, if one can find an \(f\in\mathscr{H}^{(i,j)}\) that makes \(R_{X^{-(i,j)}X^{(i,j)}}f\) arbitrarily choppy. The regression
operator plays a crucial role in nonlinear sufficient dimension reduction. Let \(L_{2}(P_{X^{-(i,j)}})\) be the \(L_{2}\)-space with respect to the distribution \(P_{X^{-(i,j)}}\) of \(X^{-(i,j)}\). As shown in Lee et al. (2013), the closure of the range of the regression operator is equal to the central class; that is,
\[\overline{\text{ran}}(R_{X^{-(i,j)}X^{(i,j)}})=\mathfrak{S}_{X^{(i,j)}|X^{-(i,j)}} \tag{4}\]
under the following assumption.
**Assumption 4**:
1. \(\mathscr{H}_{X}^{-(i,j)}\) _is dense in_ \(L_{2}(P_{X^{-(i,j)}})\) _modulo constants; that is, for any_ \(f\in L_{2}(P_{X^{-(i,j)}})\) _and any_ \(\epsilon>0\)_, there is a_ \(g\in\mathscr{H}_{X}^{-(i,j)}\) _such that_ \(\operatorname{var}[f(X^{-(i,j)})-g(X^{-(i,j)})]<\epsilon\)_;_
2. \(\mathfrak{S}_{X^{(i,j)}|X^{-(i,j)}}\) _is sufficient and complete._
The first condition essentially requires the kernel \(\kappa_{X}^{-(i,j)}\) to be a universal kernel with respect to the \(L_{2}(P_{X^{-(i,j)}})\)-norm. It means \(\mathscr{H}^{-(i,j)}\) is rich enough to approximate any \(L_{2}(P_{X^{-(i,j)}})\)-function arbitrarily closely. For example, it is satisfied by the Gaussian radial basis function kernel, but not by the polynomial kernel. For more information on universal kernels, see Sriperumbudur, Fukumizu, and Lanckriet (2011). The completeness in the second condition means
\[E[g(X^{-(i,j)})|X^{(i,j)}]=0\text{ almost surely }\Rightarrow\text{$g(X^{-(i,j)})=0 $ almost surely}.\]
This concept is defined in Lee, Li, and Chiaromonte (2013), and is similar to the classical definition of completeness treating \(X^{-(i,j)}\) as the parameter. Lee, Li, and Chiaromonte (2013) showed that completeness is a mild condition, and is satisfied by most nonparametric models.
A basis of the central class \(\mathfrak{S}_{X^{(i,j)}|X^{-(i,j)}}\) can be found by solving the generalized eigenvalue problem: for \(k=1,\ldots,d_{ij}\),
\[\begin{array}{ll}\text{maximize}&\langle f_{k},\Sigma_{X^{-(i,j)}X^{(i,j)}}A\Sigma_{X^{(i,j)}X^{-(i,j)}}f_{k}\rangle_{-(i,j)}\\ \text{subject to}&\begin{cases}\langle f_{k},\Sigma_{X^{-(i,j)}X^{-(i,j)}}f_{k}\rangle_{-(i,j)}=1,\\ \langle f_{k},\Sigma_{X^{-(i,j)}X^{-(i,j)}}f_{\ell}\rangle_{-(i,j)}=0,\quad\text{for }\ell=1,\ldots,k-1,\end{cases}\end{array} \tag{5}\]
where \(A:\mathscr{H}_{X}^{(i,j)}\rightarrow\mathscr{H}_{X}^{(i,j)}\) is any nonsingular and self-adjoint operator, and \(\langle\cdot,\cdot\rangle_{-(i,j)}\) is the inner product in \(\mathscr{H}_{X}^{-(i,j)}\). That is, if \(f_{1}^{ij},\ldots,f_{d_{ij}}^{ij}\) are the first \(d_{ij}\) eigenfunctions of this eigenvalue problem, then they span the central class. This type of estimate of the central class is called generalized sliced inverse regression. Convenient choices of \(A\) are the identity mapping \(I\) or the operator \(\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\). If we use the latter, then we need the following assumption.
**Assumption 5**: \(\operatorname{ran}(\Sigma_{X^{(i,j)}X^{-(i,j)}})\subseteq\operatorname{ran}( \Sigma_{X^{(i,j)}X^{(i,j)}})\)_._
This assumption has a similar interpretation to Assumption 2; see Section S11 in the Supplementary Material. At the population level, choosing \(A\) to be \(\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\) achieves better scaling because it down-weights those components of the output of \(\Sigma_{X^{(i,j)}X^{-(i,j)}}\) with larger variances. However, if the sample size is not sufficiently large, involving an estimate of \(\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\) in the procedure could incur extra variation that overwhelms the benefit brought by \(\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\). In that case, a nonrandom operator such as \(A=I\) is preferable. In this paper we use \(A=\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\). Let \(U^{ij}\) denote the random vector \((f_{1}^{ij}(X^{-(i,j)}),\ldots,f_{d_{ij}}^{ij}(X^{-(i,j)}))\). The set of random vectors \(\{U^{ij}:(i,j)\in\Gamma\times\Gamma,\ i>j\}\) is the output of the nonlinear sufficient dimension reduction step.
### Step 2: Estimation of the sufficient graphical model
To estimate the edge set of the sufficient graphical model we need a way to determine whether \(X^{i}\perp\!\!\!\perp X^{j}\,|\,U^{ij}\) holds. We use a linear operator introduced by Fukumizu et al. (2008) to perform this task, which is briefly described as follows. Let \(U\), \(V\), \(W\) be random vectors taking values in measurable spaces \((\Omega_{U},\mathcal{F}_{U})\), \((\Omega_{V},\mathcal{F}_{V})\), and \((\Omega_{W},\mathcal{F}_{W})\). Let \(\Omega_{UW}=\Omega_{U}\times\Omega_{W}\), \(\Omega_{VW}=\Omega_{V}\times\Omega_{W}\), \(\mathcal{F}_{UW}=\mathcal{F}_{U}\times\mathcal{F}_{W}\), and \(\mathcal{F}_{VW}=\mathcal{F}_{V}\times\mathcal{F}_{W}\). Let
\[\kappa_{UW}:\Omega_{UW}\times\Omega_{UW}\to\mathbb{R},\quad\kappa_{VW}:\Omega_ {VW}\times\Omega_{VW}\to\mathbb{R},\quad\kappa_{W}:\Omega_{W}\times\Omega_{W} \to\mathbb{R}\]
be positive kernels. For example, for \((u_{1},w_{1}),(u_{2},w_{2})\in\Omega_{UW}\times\Omega_{UW}\), \(\kappa_{UW}\) returns a real number denoted by \(\kappa_{UW}[(u_{1},w_{1}),(u_{2},w_{2})]\). Let \(\mathcal{H}_{UW}\), \(\mathcal{H}_{VW}\), and \(\mathcal{H}_{W}\) be the centered reproducing kernel Hilbert spaces generated by the kernels \(\kappa_{UW}\), \(\kappa_{VW}\), and \(\kappa_{W}\). Define the covariance operators
\[\begin{split}&\Sigma_{(UW)(VW)}:\mathcal{H}_{VW}\to\mathcal{H}_{ UW},\quad\Sigma_{(UW)W}:\mathcal{H}_{W}\to\mathcal{H}_{UW},\\ &\Sigma_{(VW)W}:\mathcal{H}_{W}\to\mathcal{H}_{VW},\quad\Sigma_{ WW}:\mathcal{H}_{W}\to\mathcal{H}_{W}\end{split} \tag{6}\]
as before. The following definition is due to Fukumizu et al. (2008). Since it plays a special role in this paper, we give it a name, "conjoined conditional covariance operator," which figuratively depicts its form.
**Definition 3**: _Suppose_
1. _If_ \(S\) _is_ \(W\)_, or_ \((U,W)\)_, or_ \((V,W)\)_, then_ \(E[\kappa_{S}(S,S)]<\infty\)_;_
2. \(\mathrm{ran}(\Sigma_{W(VW)})\subseteq\mathrm{ran}(\Sigma_{WW})\)_,_ \(\mathrm{ran}(\Sigma_{W(UW)})\subseteq\mathrm{ran}(\Sigma_{WW})\)_._
_Then the operator \(\Sigma_{\hat{U}\hat{V}|W}=\Sigma_{(UW)(VW)}-\Sigma_{(UW)W}\Sigma_{WW}^{-1} \Sigma_{W(VW)}\) is called the conjoined conditional covariance operator between \(U\) and \(V\) given \(W\)._
The word "conjoined" describes the peculiar way in which \(W\) appears in \(\Sigma_{(UW)W}\) and \(\Sigma_{W(VW)}\), which differs from an ordinary conditional covariance operator, where these operators are replaced by \(\Sigma_{UW}\) and \(\Sigma_{WV}\). The following proposition is due to Fukumizu et al. (2008), a proof of a special case of which is given in Fukumizu et al. (2004).
**Proposition 4**: _Suppose_
1. \(\mathcal{H}_{UW}\otimes\mathcal{H}_{VW}\) _is probability determining;_
2. _for each_ \(f\in\mathcal{H}_{UW}\)_, the function_ \(E[f(U,W)|W=\cdot]\) _belongs to_ \(\mathcal{H}_{W}\)_;_
3. _for each_ \(g\in\mathcal{H}_{VW}\)_, the function_ \(E[g(V,W)|W=\cdot]\) _belongs to_ \(\mathcal{H}_{W}\)_;_
_Then \(\Sigma_{\hat{U}\hat{V}|W}=0\) if and only if \(U\perp\!\!\!\perp V\,|\,W\)._
The notion of probability determining in the context of reproducing kernel Hilbert spaces was defined in Fukumizu et al. (2004). For a generic random vector \(X\), a reproducing kernel Hilbert space \(\mathcal{H}_{X}\) based on a kernel \(\kappa_{X}\) is probability determining if and only if the mapping \(P\mapsto E_{P}[\kappa_{X}(\cdot,X)]\) is injective. Intuitively, this requires the family of expectations \(\{E_{P}f(X):f\in\mathcal{H}_{X}\}\) to be rich enough to identify \(P\). For example, the Gaussian radial basis function kernel is probability determining, but
a polynomial kernel is not. We apply the above proposition to \(X^{i},X^{j},U^{ij}\) for each \((i,j)\in\Gamma\times\Gamma\), \(i>j\). Let
\[\kappa_{XU}^{i,ij}:(\Omega_{X^{i}}\times\Omega_{U^{ij}})\times(\Omega_{X^{i}} \times\Omega_{U^{ij}})\rightarrow\mathbb{R}\]
be a positive definite kernel, and \(\mathscr{H}_{XU}^{i,ij}\) the centered reproducing kernel Hilbert space generated by \(\kappa_{XU}^{i,ij}\). Similarly, let \(\kappa_{U}^{ij}:\Omega_{U^{ij}}\times\Omega_{U^{ij}}\rightarrow\mathbb{R}\) be a positive kernel, and \(\mathscr{H}_{U}^{ij}\) the centered reproducing kernel Hilbert space generated by \(\kappa_{U}^{ij}\).
**Assumption 6**: _Conditions (1) and (2) of Definition 3 and conditions (1), (2), and (3) of Proposition 4 are satisfied with \(U\), \(V\), and \(W\) therein replaced by \(X^{i}\), \(X^{j}\), and \(U^{ij}\), respectively, for each \((i,j)\in\Gamma\times\Gamma\) and \(i>j\)._
Under this assumption, the conjoined conditional covariance operator \(\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\) is well defined and has the following property.
**Corollary 5**: _Under Assumption 6, we have \((i,j)\notin\mathcal{E}\Leftrightarrow\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}=0\)._
This corollary motivates us to estimate the graph by thresholding the norm of the estimated conjoined conditional covariance operator.
## 4 Estimation: sample-level implementation
### Implementation of step 1
Let \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\) be an i.i.d. sample of \((X,Y)\). At the sample level, the centered reproducing kernel Hilbert space \(\mathscr{H}_{X}^{-(i,j)}\) is spanned by the functions
\[\{\kappa_{X}^{-(i,j)}(\cdot,X_{a}^{-(i,j)})-E_{n}[\kappa_{X}^{-(i,j)}(\cdot,X^ {-(i,j)})]:a=1,\ldots,n\}, \tag{7}\]
where \(\kappa_{X}^{-(i,j)}(\cdot,X^{-(i,j)})\) stands for the function \(u\mapsto\kappa_{X}^{-(i,j)}(u,X^{-(i,j)})\), and \(E_{n}[\kappa_{X}^{-(i,j)}(\cdot,X^{-(i,j)})]\) the function \(u\mapsto E_{n}[\kappa_{X}^{-(i,j)}(u,X^{-(i,j)})]\).
We estimate the covariance operators \(\Sigma_{X^{-(i,j)}X^{(i,j)}}\) and \(\Sigma_{X^{-(i,j)}X^{-(i,j)}}\) by
\[\hat{\Sigma}_{X^{-(i,j)}X^{(i,j)}} =E_{n}\{[\kappa_{X}^{-(i,j)}(\cdot,X^{-(i,j)})-E_{n}\kappa_{X}^{- (i,j)}(\cdot,X^{-(i,j)})]\] \[\otimes[\kappa_{X}^{(i,j)}(\cdot,X^{(i,j)})-E_{n}\kappa_{X}^{(i,j )}(\cdot,X^{(i,j)})]\}\] \[\hat{\Sigma}_{X^{-(i,j)}X^{-(i,j)}} =E_{n}\{[\kappa_{X}^{-(i,j)}(\cdot,X^{-(i,j)})-E_{n}\kappa_{X}^{- (i,j)}(\cdot,X^{-(i,j)})]\] \[\otimes[\kappa_{X}^{-(i,j)}(\cdot,X^{-(i,j)})-E_{n}\kappa_{X}^{- (i,j)}(\cdot,X^{-(i,j)})]\},\]
respectively. We estimate \(\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\) by the Tychonoff-regularized inverse \((\hat{\Sigma}_{X^{(i,j)}X^{(i,j)}}+\epsilon_{X}^{(i,j)}I)^{-1}\), where \(I:\mathscr{H}_{X}^{(i,j)}\rightarrow\mathscr{H}_{X}^{(i,j)}\) is the identity operator. The regularized inverse is used to avoid overfitting. It plays the same role as ridge regression (Hoerl and Kennard, 1970), which alleviates overfitting by adding a multiple of the identity matrix to the sample covariance matrix before inverting it.
At the sample level, the generalized eigenvalue problem (5) takes the following form: at the \(k\)th iteration,
\[\begin{array}{ll}\text{maximize}&\langle f,\hat{\Sigma}_{{}_{X^{-(i,j)}}{}_{X^ {(i,j)}}}(\hat{\Sigma}_{{}_{X^{(i,j)}}{}_{X^{(i,j)}}}+\epsilon_{X}^{(i,j)}I)^{ -1}\hat{\Sigma}_{{}_{X^{(i,j)}}{}_{X^{-(i,j)}}}f\rangle_{{}_{-(i,j)}}\\ \text{subject to}&\begin{cases}\langle f,\hat{\Sigma}_{{}_{X^{-(i,j)}}{}_{X^{-(i, j)}}}f\rangle_{{}_{-(i,j)}}=1,\\ \langle f,\hat{\Sigma}_{{}_{X^{-(i,j)}}{}_{X^{-(i,j)}}}f_{\ell}\rangle_{{}_{-(i,j)}}= 0,\quad\ell=1,\ldots,k-1,\end{cases}\end{array} \tag{8}\]
where \(f_{1},\ldots,f_{k-1}\) are the maximizers in the previous steps. The first \(d_{ij}\) eigenfunctions are an estimate of a basis in the central class \(\mathfrak{S}_{{}_{X^{(i,j)}}{}_{|X^{-(i,j)}}}\).
Let \(K_{{}_{X^{-(i,j)}}}\) be the \(n\times n\) matrix whose \((a,b)\)th entry is \(\kappa_{X}^{-(i,j)}(X_{a}^{-(i,j)},X_{b}^{-(i,j)})\), \(Q=I_{n}-1_{n}1_{n}^{\mathsf{T}}/n\), and \(G_{{}_{X^{-(i,j)}}}=QK_{{}_{X^{-(i,j)}}}Q\). Let \(a^{1},\ldots,a^{d_{ij}}\) be the first \(d_{ij}\) eigenvectors of the matrix
\[(G_{{}_{X^{-(i,j)}}}+\epsilon_{X}^{-(i,j)}I_{n})^{-1}G_{{}_{X^{-(i,j)}}}G_{{}_ {X^{(i,j)}}}(G_{{}_{X^{(i,j)}}}+\epsilon_{X}^{(i,j)}I_{n})^{-1}G_{{}_{X^{-(i,j) }}}(G_{{}_{X^{-(i,j)}}}+\epsilon_{X}^{-(i,j)}I_{n})^{-1}.\]
Let \(b^{r}=(G_{{}_{X^{-(i,j)}}}+\epsilon_{X}^{-(i,j)}I_{n})^{-1}a^{r}\) for \(r=1,\ldots,d_{ij}\). As shown in Section S12.2, the eigenfunctions \(f_{1}^{ij},\ldots,f_{d_{ij}}^{ij}\) are calculated by
\[f_{r}^{ij}=\sum_{a=1}^{n}b_{a}^{r}\{\kappa_{X}^{-(i,j)}(\cdot,X_{a}^{-(i,j)}) -E_{n}[\kappa_{X}^{-(i,j)}(\cdot,X^{-(i,j)})]\}.\]
The statistics \(\hat{U}_{a}^{ij}=(f_{1}^{ij}(X_{a}^{-(i,j)}),\ldots,f_{d_{ij}}^{ij}(X_{a}^{-(i,j)}))\), \(a=1,\ldots,n\), will be used as the input for the second step.
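The Gram-matrix computation above can be sketched numerically as follows. This is our own illustrative implementation on synthetic data, not the authors' software: the Gaussian kernel, the fixed bandwidth `gamma`, the regularization constants, and the toy data are all assumptions made for the sketch. `X_in` plays the role of \(X^{(i,j)}\) and `X_out` the role of \(X^{-(i,j)}\).

```python
import numpy as np

def gram(Z, gamma):
    # Gaussian RBF Gram matrix: K[a, b] = exp(-gamma * ||Z_a - Z_b||^2)
    sq = np.sum(Z**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-gamma * np.maximum(D2, 0.0))

def step1_sufficient_predictor(X_in, X_out, d, gamma=1.0, eps_in=1e-2, eps_out=1e-2):
    """Sample version of generalized sliced inverse regression.
    Returns the n x d matrix of predictor values U_hat and the coefficients b^r."""
    n = X_out.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n            # centering projection
    K_out = gram(X_out, gamma)
    G_out = Q @ K_out @ Q                          # G_{X^{-(i,j)}}
    G_in = Q @ gram(X_in, gamma) @ Q               # G_{X^{(i,j)}}
    R_out = np.linalg.inv(G_out + eps_out * np.eye(n))
    R_in = np.linalg.inv(G_in + eps_in * np.eye(n))
    # candidate matrix whose leading eigenvectors give a^1, ..., a^{d_ij}
    M = R_out @ G_out @ G_in @ R_in @ G_out @ R_out
    M = (M + M.T) / 2                              # symmetrize against round-off
    vals, vecs = np.linalg.eigh(M)
    A = vecs[:, np.argsort(vals)[::-1][:d]]        # leading d eigenvectors a^r
    B = R_out @ A                                  # b^r = (G_out + eps I)^{-1} a^r
    # evaluate f_r at the sample points: f_r(X_a^{-(i,j)}) = (K_out Q b^r)_a
    U_hat = K_out @ Q @ B
    return U_hat, B

rng = np.random.default_rng(0)
n = 60
X_out = rng.normal(size=(n, 3))                    # stands in for X^{-(i,j)}
X_in = np.tanh(X_out[:, :1]) + 0.1 * rng.normal(size=(n, 1))  # stands in for X^{(i,j)}
U_hat, B = step1_sufficient_predictor(X_in, X_out, d=2)
print(U_hat.shape)  # (60, 2)
```

The rows of `U_hat` are the statistics \(\hat{U}_{a}^{ij}\) that feed into the second step.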
### Implementation of step 2
This step consists of estimating the conjoined conditional covariance operators for each \((i,j)\) and thresholding their norms. At the sample level, the centered reproducing kernel Hilbert spaces generated by the kernels \(\kappa_{XU}^{i,ij}\), \(\kappa_{XU}^{j,ij}\), and \(\kappa_{U}^{ij}\) are
\[\begin{array}{ll}\mathscr{H}_{XU}^{i,ij}=&\mathrm{span}\{\kappa_{XU}^{i,ij}(\cdot,(X_{a}^{i},U_{a}^{ij}))-E_{n}[\kappa_{XU}^{i,ij}(\cdot,(X^{i},U^{ij}))]:a=1,\ldots,n\},\\ \mathscr{H}_{XU}^{j,ij}=&\mathrm{span}\{\kappa_{XU}^{j,ij}(\cdot,(X_{a}^{j},U_{a}^{ij}))-E_{n}[\kappa_{XU}^{j,ij}(\cdot,(X^{j},U^{ij}))]:a=1,\ldots,n\},\\ \mathscr{H}_{U}^{ij}=&\mathrm{span}\{\kappa_{U}^{ij}(\cdot,U_{a}^{ij})-E_{n}[\kappa_{U}^{ij}(\cdot,U^{ij})]:a=1,\ldots,n\},\end{array}\]
where, for example, \(\kappa_{X\!U}^{i,ij}(\cdot,(X_{a}^{i},U_{a}^{ij}))\) denotes the function
\[\Omega_{X^{i}}\times\Omega_{U^{ij}}\to\mathbb{R},\quad(x^{i},u^{ij})\mapsto \kappa_{X\!U}^{i,ij}((x^{i},u^{ij}),(X_{a}^{i},U_{a}^{ij}))\]
and \(E_{n}[\kappa_{X\!U}^{i,ij}(\cdot,(X^{i},U^{ij}))]\) denotes the function
\[\Omega_{X^{i}}\times\Omega_{U^{ij}}\to\mathbb{R},\quad(x^{i},u^{ij})\mapsto E _{n}[\kappa_{X\!U}^{i,ij}((x^{i},u^{ij}),(X^{i},U^{ij}))].\]
We estimate the covariance operators \(\Sigma_{(X^{i}U^{ij})(X^{j}U^{ij})}\), \(\Sigma_{(X^{i}U^{ij})U^{ij}}\), \(\Sigma_{U^{ij}(X^{j}U^{ij})}\), and \(\Sigma_{U^{ij}U^{ij}}\) by
\[\begin{split}\hat{\Sigma}_{(X^{i}U^{ij})(X^{j}U^{ij})}=&\,E_{n}\{[\kappa^{i,ij}_{XU}(\cdot,(X^{i},U^{ij}))-E_{n}\kappa^{i,ij}_{XU}(\cdot,(X^{i},U^{ij}))]\\ &\otimes[\kappa^{j,ij}_{XU}(\cdot,(X^{j},U^{ij}))-E_{n}\kappa^{j,ij}_{XU}(\cdot,(X^{j},U^{ij}))]\},\end{split} \tag{9}\]

with \(\hat{\Sigma}_{(X^{i}U^{ij})U^{ij}}\), \(\hat{\Sigma}_{U^{ij}(X^{j}U^{ij})}\), and \(\hat{\Sigma}_{U^{ij}U^{ij}}\) constructed in the same way from the corresponding centered kernel functions. The conjoined conditional covariance operator is then estimated by

\[\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}=\hat{\Sigma}_{(X^{i}U^{ij})(X^{j}U^{ij})}-\hat{\Sigma}_{(X^{i}U^{ij})U^{ij}}(\hat{\Sigma}_{U^{ij}U^{ij}}+\epsilon_{U}^{(i,j)}I)^{-1}\hat{\Sigma}_{U^{ij}(X^{j}U^{ij})}, \tag{10}\]

where the Tychonoff-regularized inverse again replaces \(\hat{\Sigma}_{U^{ij}U^{ij}}^{-1}\).
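The squared Hilbert-Schmidt norm of the estimated conjoined conditional covariance operator can be computed entirely from centered Gram matrices. Writing the centered features of \((X^{i},U^{ij})\), \((X^{j},U^{ij})\), and \(U^{ij}\) as the columns of operators \(A\), \(B\), \(C\), a routine push-through manipulation (our own derivation, not taken from the paper) gives \(\|\hat{\Sigma}\|_{\mathrm{HS}}^{2}=n^{-2}\operatorname{tr}(RG_{1}RG_{2})\) with \(R=\delta(n^{-1}G_{W}+\delta I_{n})^{-1}\). The sketch below uses a plain regularizer \(\delta\) rather than the paper's tuned \(\epsilon_{U}^{(i,j)}\), and fixes the kernel bandwidth; both are simplifying assumptions.

```python
import numpy as np

def gram(Z, gamma):
    # Gaussian RBF Gram matrix
    sq = np.sum(Z**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-gamma * np.maximum(D2, 0.0))

def ccco_hs_norm_sq(Xi, Xj, U, gamma=1.0, delta=1e-2):
    """Squared HS norm of the estimated conjoined conditional covariance
    operator, computed from centered Gram matrices."""
    n = Xi.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    G1 = Q @ gram(np.hstack([Xi, U]), gamma) @ Q   # Gram of (X^i, U^{ij})
    G2 = Q @ gram(np.hstack([Xj, U]), gamma) @ Q   # Gram of (X^j, U^{ij})
    GW = Q @ gram(U, gamma) @ Q                    # Gram of U^{ij}
    # R = I - (1/n) G_W ((1/n) G_W + delta I)^{-1} = delta ((1/n) G_W + delta I)^{-1}
    R = delta * np.linalg.inv(GW / n + delta * np.eye(n))
    return np.trace(R @ G1 @ R @ G2) / n**2

rng = np.random.default_rng(1)
n = 80
U = rng.normal(size=(n, 1))                        # the sufficient predictor
V = rng.normal(size=(n, 1))                        # shared component beyond U
Xj = U + V
Xi_dep = U + V + 0.1 * rng.normal(size=(n, 1))     # dependent on Xj given U
Xi_ind = U + rng.normal(size=(n, 1))               # independent of Xj given U
dep = ccco_hs_norm_sq(Xi_dep, Xj, U)
ind = ccco_hs_norm_sq(Xi_ind, Xj, U)
print(dep > ind)                                   # the dependent pair scores higher
```

Consistent with Corollary 5, the conditionally dependent pair produces the larger norm, which is what the thresholding rule exploits.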
For each \((i,j)\), we have five \(\gamma\)'s to determine: \(\gamma_{X}^{(i,j)}\) for the kernel \(\kappa_{X}^{(i,j)}\), \(\gamma_{X}^{-(i,j)}\) for \(\kappa_{X}^{-(i,j)}\), \(\gamma_{XU}^{i,ij}\) for \(\kappa_{XU}^{i,ij}\), \(\gamma_{XU}^{j,ij}\) for \(\kappa_{XU}^{j,ij}\), and \(\gamma_{U}^{ij}\) for \(\kappa_{U}^{ij}\). They are chosen by the following formula (see, for example, Li (2018b)):
\[1/\sqrt{\gamma}=\binom{n}{2}^{-1}\sum_{a<b}\|s_{a}-s_{b}\|, \tag{11}\]
where \(s_{1},\ldots,s_{n}\) are the sample values of the random vector corresponding to the kernel in question. For example, for the kernel \(\kappa_{XU}^{j,ij}\), \(s_{a}=(X_{a}^{j},U_{a}^{ij})\). For the tuning parameters in the Tychonoff regularization, we use the following generalized cross validation scheme (GCV; see Golub et al. (1979)):
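Formula (11) simply sets \(1/\sqrt{\gamma}\) to the average pairwise distance among the sample points. A minimal sketch (the function name and toy data are ours):

```python
import numpy as np

def tune_gamma(S):
    """Kernel width by formula (11): 1/sqrt(gamma) equals the average
    pairwise Euclidean distance over all pairs a < b."""
    sq = np.sum(S**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * S @ S.T
    D = np.sqrt(np.maximum(D2, 0.0))
    n = S.shape[0]
    mean_dist = D[np.triu_indices(n, k=1)].mean()   # average over pairs a < b
    return 1.0 / mean_dist**2

S = np.array([[0.0], [1.0], [3.0]])
# pairwise distances 1, 3, 2 have mean 2, so gamma = 1/2^2
print(tune_gamma(S))  # 0.25
```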
\[\text{GCV}(\epsilon)=\sum_{i<j}\frac{\|G_{1}- G_{2}^{\intercal}[G_{2}+\epsilon\,\lambda_{\max}(G_{2})I_{n}]^{-1}G_{1}\|_{ \text{F}}}{\frac{1}{n}\text{tr}\{I_{n}-G_{2}^{\intercal}[G_{2}+\epsilon\, \lambda_{\max}(G_{2})I_{n}]^{-1}\}}, \tag{12}\]
where \(G_{1},G_{2}\in\mathbb{R}^{n\times n}\) are positive semidefinite matrices, and \(\lambda_{\max}(G_{2})\) is the largest eigenvalue of \(G_{2}\). The matrices \(G_{1}\) and \(G_{2}\) are the following matrices for the three tuning parameters:
1. \(G_{1}=G_{X^{-(i,j)}}\), \(G_{2}=G_{X^{(i,j)}}\) for \(\epsilon_{X}^{(i,j)}\),
2. \(G_{1}=G_{X^{(i,j)}}\), \(G_{2}=G_{X^{-(i,j)}}\) for \(\epsilon_{X}^{-(i,j)}\),
3. \(G_{1}=G_{X^{(i,j)}}\), \(G_{2}=G_{U^{ij}}\) for \(\epsilon_{U}^{(i,j)}\).
We minimize (12) over a grid to choose \(\epsilon\), as detailed in Section 6.
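The grid search over (12) can be sketched as follows for a single \((i,j)\) pair. This is an illustrative helper of our own; the stand-in Gram matrix and the grid are arbitrary assumptions, and a real run would sum the criterion over all pairs as in (12).

```python
import numpy as np

def gcv_epsilon(G1, G2, eps_grid):
    """Evaluate the GCV criterion of (12) for one (i, j) pair on a grid of
    epsilon values and return the minimizer."""
    n = G1.shape[0]
    lam = np.linalg.eigvalsh(G2).max()              # lambda_max(G2)
    best, best_eps = np.inf, None
    for eps in eps_grid:
        H = G2.T @ np.linalg.inv(G2 + eps * lam * np.eye(n))
        num = np.linalg.norm(G1 - H @ G1, 'fro')    # numerator of (12)
        den = np.trace(np.eye(n) - H) / n           # denominator of (12)
        crit = num / den
        if crit < best:
            best, best_eps = crit, eps
    return best_eps

rng = np.random.default_rng(2)
Z = rng.normal(size=(12, 2))
G = Z @ Z.T                        # a positive semidefinite stand-in Gram matrix
grid = [1e-4, 1e-3, 1e-2, 1e-1]
eps = gcv_epsilon(G, G, grid)
print(eps)
```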
We also use generalized cross validation to determine the thresholding parameter \(\rho_{n}\). Let \(\hat{\mathcal{E}}(\rho)\) be the estimated edge set using a threshold \(\rho\) and, for each \(i\in\Gamma\), let \(C^{i}(\rho)=\{X^{j}:\,j\in\Gamma,\,(i,j)\in\hat{\mathcal{E}}(\rho)\}\) be the set of components of \(X\) in the neighborhood of node \(i\) in the graph \((\Gamma,\hat{\mathcal{E}}(\rho))\). The basic idea is to apply generalized cross validation to the regression of the feature of \(X^{i}\) on the feature of \(C^{i}(\rho)\). The generalized cross validation criterion for this regression takes the form
\[\text{GCV}(\rho)=\sum_{i=1}^{p}\frac{\|G_{X^{i}}-G_{C^{i}(\rho)}^{\intercal}[G _{C^{i}(\rho)}+\epsilon\,\lambda_{\max}(G_{C^{i}(\rho)})I_{n}]^{-1}G_{X^{i}} \|_{\text{F}}}{\frac{1}{n}\text{tr}\{I_{n}-G_{C^{i}(\rho)}^{\intercal}[G_{C^{ i}(\rho)}+\epsilon\,\lambda_{\max}(G_{C^{i}(\rho)})I_{n}]^{-1}\}}, \tag{13}\]
where \(G_{C^{i}(\rho)}=QK_{C^{i}(\rho)}Q\), and \(K_{C^{i}(\rho)}\) is the \(n\times n\) kernel matrix for the sample of \(C^{i}(\rho)\). We minimize \(\text{GCV}(\rho)\) over the grid \(\rho\in\{\ell\times 10^{-2}:\ell=2,\ldots,7\}\) to determine the optimal threshold \(\rho_{n}\).
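Given the matrix of estimated Hilbert-Schmidt norms, forming \(\hat{\mathcal{E}}(\rho)\) and the neighborhoods \(C^{i}(\rho)\) is simple bookkeeping. A sketch with a hypothetical norm matrix (we declare an edge when the norm exceeds the threshold, consistent with Corollary 5, under which a zero operator norm indicates no edge):

```python
import numpy as np

def edges_and_neighborhoods(norms, rho):
    """Estimated edge set E(rho) = {(i, j): i > j, norm > rho} and, for each
    node i, the sorted index set of its neighbors C^i(rho)."""
    p = norms.shape[0]
    E = {(i, j) for i in range(p) for j in range(i) if norms[i, j] > rho}
    C = {i: sorted({j for (a, b) in E for j in (a, b)
                    if i in (a, b) and j != i})
         for i in range(p)}
    return E, C

# hypothetical HS norms stored in the lower triangle (i > j)
norms = np.array([[0.0, 0.0, 0.0],
                  [0.9, 0.0, 0.0],
                  [0.1, 0.8, 0.0]])
E, C = edges_and_neighborhoods(norms, rho=0.5)
print(sorted(E))   # [(1, 0), (2, 1)]
print(C[1])        # [0, 2]
```

In the full procedure, one would recompute \(C^{i}(\rho)\) for each \(\rho\) on the grid and keep the \(\rho\) minimizing (13).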
Regarding the selection of the dimension of \(U^{ij}\), to our knowledge there is no systematic procedure available to determine the dimension of the central class for nonlinear sufficient dimension reduction. While some recently developed methods for order determination in linear sufficient dimension reduction, such as the ladle estimate and the predictor augmentation estimator (Luo and Li, 2016, 2020), may be generalizable to the nonlinear setting, we leave this topic to future research. Our experience and intuition indicate that a small dimension, such as 1 or 2, for the central class is sufficient in most cases. For example, in the classical nonparametric regression problem \(Y=f(X)+\epsilon\) with \(X\perp\!\!\!\perp\epsilon\), the dimension of the central class is 1.
## 5 Asymptotic theory
In this section we develop the consistency and convergence rates of our estimate and related operators. The challenge of this analysis is that our procedure involves two steps: we first extract the sufficient predictor using one set of kernels, and then substitute it into another set of kernels to get the final result. Thus we need to understand how the error propagates from the first step to the second. We also develop the asymptotic theory allowing \(p\) to go to infinity with \(n\), which is presented in the Supplementary Material.
### Overview
Our goal is to derive the convergence rate of
\[\left|\,\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}\|_{\mathrm{HS}}-\|\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}}\right|,\]
as \(\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}\|_{\mathrm{HS}}\) is the quantity we threshold to determine the edge set. By the triangle inequality,
\[\begin{split}\left|\,\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}\|_{\mathrm{HS}}-\|\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}}\right|&\leq\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}-\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}}\\ &\leq\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}-\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}}+\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}-\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}}.\end{split}\]
So we need to derive the convergence rates of the following quantities:
\[\begin{split}\text{(i)}&\quad\|\hat{U}^{ij}-U^{ij}\|_{[\mathscr{H}^{-(i,j)}(X)]^{d_{ij}}},\\ \text{(ii)}&\quad\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}-\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}},\\ \text{(iii)}&\quad\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}-\Sigma_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\|_{\mathrm{HS}},\end{split} \tag{14}\]
where, to avoid overly crowded subscripts, we have used \(\mathscr{H}^{-(i,j)}(X)\) to denote \(\mathscr{H}^{-(i,j)}_{X}\) when it occurs as a subscript. The first and third convergence rates can be derived using the asymptotic tools for linear operators developed in Fukumizu et al. (2007), Li and Song (2017), Lee et al. (2016a), and Solea and Li (2020). The second convergence rate is, however, a new problem, and it will also be useful in similar settings that require constructing estimators based on predictors extracted by sufficient dimension reduction. In some sense, this is akin to the post dimension reduction problem considered in Kim et al. (2020).
In the following, if \(\{a_{n}\}\) and \(\{b_{n}\}\) are sequences of positive numbers, then we write \(a_{n}\prec b_{n}\) if \(a_{n}/b_{n}\to 0\). We write \(a_{n}\asymp b_{n}\) if \(0<\liminf_{n}(b_{n}/a_{n})\leq\limsup_{n}(b_{n}/a_{n})<\infty\). We write \(b_{n}\preceq a_{n}\) if either \(b_{n}\prec a_{n}\) or \(b_{n}\asymp a_{n}\). Because \((i,j)\) is fixed in the asymptotic development, and also to emphasize the dependence on \(n\), in the rest of this section we denote \(\epsilon_{X}^{(i,j)}\), \(\epsilon_{X}^{-(i,j)}\), and \(\epsilon_{U}^{(i,j)}\) by \(\epsilon_{n}\), \(\eta_{n}\), and \(\delta_{n}\), respectively.
### Transparent kernel
We first develop what we call the "transparent kernel" that passes information from step 1 to step 2 efficiently. Let \(\Omega\) be a nonempty set, and \(\kappa:\Omega\times\Omega\to\mathbb{R}\) a positive kernel.
**Definition 6**: _We say that \(\kappa\) is a transparent kernel if, for each \(t\in\Omega\), the function \(s\mapsto\kappa(s,t)\) is twice differentiable and_
1. \(\partial\kappa(s,t)/\partial s|_{s=t}=0\)_;_
2. _the matrix_ \(H(s,t)=\partial^{2}\kappa(s,t)/\partial s\partial s^{\intercal}\) _has a bounded operator norm; that is, there exist_ \(-\infty<C_{1}\leq C_{2}<\infty\) _such that_ \[C_{1}\leq\lambda_{\min}(H(s,t))\leq\lambda_{\max}(H(s,t))<C_{2}\] _for all_ \((s,t)\in\Omega\times\Omega\)_, where_ \(\lambda_{\min}(\cdot)\) _and_ \(\lambda_{\max}(\cdot)\) _indicate the smallest and largest eigenvalues._
For example, the Gaussian radial basis function kernel is transparent, but the exponential kernel \(\kappa(u,v)=\tau^{2}\exp(-\gamma\|u-v\|)\) is not. This condition implies a type of Lipschitz continuity in a setting that involves two reproducing kernels \(\kappa_{0}\) and \(\kappa_{1}\), where the argument of \(\kappa_{1}\) is the evaluation of a member of the reproducing kernel Hilbert space generated by \(\kappa_{0}\).
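Condition 1 of Definition 6 can be probed numerically with a one-sided finite difference: the derivative of \(s\mapsto\kappa(s,t)\) at \(s=t\) vanishes for the Gaussian kernel but not for the exponential kernel, which has a kink at \(s=t\). The helper below is our own check, not part of the paper's methodology.

```python
import numpy as np

def dirderiv_at_t(kappa, t, h=1e-6):
    """One-sided directional derivative of s -> kappa(s, t) at s = t,
    along the first coordinate."""
    t = np.asarray(t, float)
    e = np.zeros_like(t)
    e[0] = h
    return (kappa(t + e, t) - kappa(t, t)) / h

gauss = lambda s, t: np.exp(-np.sum((s - t)**2))     # transparent kernel
expon = lambda s, t: np.exp(-np.linalg.norm(s - t))  # not transparent

t = np.array([0.3, -1.2])
print(dirderiv_at_t(gauss, t))   # ~0: condition 1 of Definition 6 holds
print(dirderiv_at_t(expon, t))   # ~-1: nonzero slope at s = t
```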
**Theorem 7**: _Suppose \(\mathscr{H}_{0}\) is the reproducing kernel Hilbert space generated by \(\kappa_{0}\), \(\mathscr{H}_{0}^{d}\) is the \(d\)-fold Cartesian product of \(\mathscr{H}_{0}\) with inner product defined by_
\[\langle U,V\rangle_{\mathscr{H}_{0}^{d}}=\langle u_{1},v_{1}\rangle_{\mathscr{ H}_{0}}+\cdots+\langle u_{d},v_{d}\rangle_{\mathscr{H}_{0}}\]
_where \(U=(u_{1},\ldots,u_{d})\) and \(V=(v_{1},\ldots,v_{d})\) are members of \(\mathscr{H}_{0}^{d}\), and \(\mathscr{H}_{1}\) is the reproducing kernel Hilbert space generated by \(\kappa_{1}\). Then:_
1. _for any_ \(U,V\in\mathscr{H}_{0}^{d},\ a\in\Omega\)_, we have_ \[\|U(a)-V(a)\|_{\mathbb{R}^{d}}\leq[\kappa_{0}(a,a)]^{1/2}\,\|U-V\|_{\mathscr{H }_{0}^{d}};\]
2. _if_ \(\kappa_{1}(s,t)\) _is a transparent kernel, then there exists a_ \(C>0\) _such that, for each_ \(U,V\in\mathscr{H}_{0}^{d}\) _and_ \(a\in\Omega\)_,_ \[\|\kappa_{1}(\cdot,U(a))-\kappa_{1}(\cdot,V(a))\|_{\mathscr{H}_{1}}\leq C\,[ \kappa_{0}(a,a)]^{1/2}\,\|U-V\|_{\mathscr{H}_{0}^{d}}.\]
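Part 1 of the theorem is a pointwise bound that follows from the reproducing property and the Cauchy-Schwarz inequality, and it can be verified numerically for simple members of \(\mathscr{H}_{0}\). In the sketch below (our own illustration, with \(d=1\)) we take \(U=\kappa_{0}(\cdot,x_{1})\) and \(V=\kappa_{0}(\cdot,x_{2})\), for which \(\|U-V\|_{\mathscr{H}_{0}}^{2}=\kappa_{0}(x_{1},x_{1})+\kappa_{0}(x_{2},x_{2})-2\kappa_{0}(x_{1},x_{2})\) in closed form.

```python
import numpy as np

k0 = lambda s, t: np.exp(-(s - t)**2)      # Gaussian kernel kappa_0, gamma = 1

x1, x2 = 0.4, 1.7
U = lambda a: k0(a, x1)                    # U = kappa_0(., x1), a member of H_0
V = lambda a: k0(a, x2)
# ||U - V||_{H_0} via the reproducing property
norm_UV = np.sqrt(k0(x1, x1) + k0(x2, x2) - 2 * k0(x1, x2))

a_grid = np.linspace(-3, 3, 201)
lhs = np.abs(U(a_grid) - V(a_grid))                 # |U(a) - V(a)|
rhs = np.sqrt(k0(a_grid, a_grid)) * norm_UV         # kappa_0(a,a)^{1/2} ||U - V||
print(np.all(lhs <= rhs + 1e-12))  # True: the pointwise bound of Theorem 7(1)
```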
A direct consequence of this theorem is that, if \(\hat{U}\) is an estimate of some \(U\), a member of \(\mathscr{H}_{0}^{d}\), with \(\|\hat{U}-U\|_{\mathscr{H}_{0}^{d}}=O_{P}(b_{n})\) for some \(0<b_{n}\to 0\), \(\hat{\Sigma}(\hat{U})\) is a linear operator estimated from the sample \(\hat{U}_{1},\ldots,\hat{U}_{n}\) (and perhaps some other random vectors), and \(\hat{\Sigma}(U)\) is a linear operator estimated from the sample \(U_{1},\ldots,U_{n}\), then,
\[\|\hat{\Sigma}(\hat{U})-\hat{\Sigma}(U)\|_{\mathrm{HS}}=O_{P}(b_{n}). \tag{15}\]
This result is somewhat surprising, because sample estimates such as \(\hat{\Sigma}(\hat{U})\) can be viewed as \(E_{n}\mathbb{G}(X,\hat{U})\), where \(\hat{U}\) is an estimate of a function \(U\) in a functional space with norm \(\|\cdot\|\) and \(\mathbb{G}\) is an operator-valued function. If \(\|\hat{U}-U\|=O_{P}(b_{n})\) for some \(b_{n}\to 0\), then it is not necessarily true that
\[\|E_{n}\mathbb{G}(X,\hat{U})-E_{n}\mathbb{G}(X,U)\|=O_{P}(b_{n}),\]
particularly when \(U\) is an infinite dimensional object. Yet relation (15) states exactly this. The reason behind this is that the reproducing kernel property separates the function \(\hat{U}\) and its argument \(X_{a}\) (i.e. \(\hat{U}(x)=\langle\hat{U},\kappa(\cdot,x)\rangle\)), which implies a type of uniformity among \(\hat{U}(X_{1}),\ldots,\hat{U}(X_{n})\). This point will be made clear in the proof in the Supplementary Material. Statement (15) is made precise by the next theorem.
**Theorem 8**: _Suppose conditions (1) and (2) of Definition 3 are satisfied with \(U\), \(V\), \(W\) therein replaced by \(X^{i}\), \(X^{j}\), and \(U^{ij}\). Suppose, furthermore:_
1. \(\kappa_{U}^{ij}\)_,_ \(\kappa_{XU}^{i,ij}\)_, and_ \(\kappa_{XU}^{j,ij}\) _are transparent kernels;_
2. \(\|\hat{U}^{ij}-U^{ij}\|_{[\mathscr{H}^{-(i,j)}(X)]^{d_{ij}}}=O_{P}(b_{n})\) _for some_ \(0<b_{n}\to 0\)_._
_Then_
1. \(\|\hat{\Sigma}_{\hat{U}^{ij}\hat{U}^{ij}}-\hat{\Sigma}_{U^{ij}U^{ij}}\|_{\rm HS }=O_{P}(b_{n})\)_;_
2. \(\|\hat{\Sigma}_{(X^{i}\hat{U}^{ij})\hat{U}^{ij}}-\hat{\Sigma}_{(X^{i}U^{ij})U ^{ij}}\|_{\rm HS}=O_{P}(b_{n})\)_;_
3. \(\|\hat{\Sigma}_{(X^{i}\hat{U}^{ij})(X^{j}\hat{U}^{ij})}-\hat{\Sigma}_{(X^{i}U ^{ij})(X^{j}U^{ij})}\|_{\rm HS}=O_{P}(b_{n})\)_._
Using Theorem 8 we can derive the convergence rate of \(\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}-\hat{\Sigma}_{\hat{X}^{i} \hat{X}^{j}|U^{ij}}\|_{\rm HS}\).
**Theorem 9**: _Suppose conditions in Theorem 8 are satisfied and, furthermore,_
1. \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{i}U^{ij})}\) _and_ \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{j}U^{ij})}\) _are bounded linear operators;_
2. \(b_{n}\preceq\delta_{n}\prec 1\)_._
_Then \(\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}-\hat{\Sigma}_{\hat{X}^{i} \hat{X}^{j}|U^{ij}}\|_{\rm HS}=O_{P}(b_{n})\)._
Note that, unlike in Theorem 8, where our assumptions imply
\[\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{-1}\Sigma_{X^{-(i,j)}X^{(i,j)}}\]
is a finite-rank operator, here, we do not assume \(\Sigma_{U^{ij}(U^{ij})}^{-1}\Sigma_{U^{ij}(X^{j}U^{ij})}\) to be a finite-rank (or even Hilbert-Schmidt) operator; instead, we assume it to be a bounded operator. This is because \((X^{j},U^{ij})\) contains \(U^{ij}\), which makes it unreasonable to assume \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{j}U^{ij})}\) to be finite-rank or Hilbert Schmidt. For example, when \(X^{j}\) is a constant, \(\Sigma_{U^{ij}(X^{j}U^{ij})}\) is the same as \(\Sigma_{U^{ij}U^{ij}}\) and \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}U^{ij}}\) is not a Hilbert Schmidt operator, though it is bounded. Theorem 9 shows that convergence rate of (ii) in (14) is the same as the convergence rate of (i) in (14); it now remains to derive the convergence rate of (i) and (iii).
### Convergence rates of (i) and (iii) in (14)
We first present the convergence rate of \(\hat{U}^{ij}\) to \(U^{ij}\). The proof is similar to that of Theorem 5 of Li and Song (2017), but with two differences. First, Li and Song (2017) took \(A\) in (5) to be \(I\), whereas we take it to be \(\Sigma_{X^{(i,j)}X^{(i,j)}}^{-1}\). In particular, the generalized sliced inverse regression in Li and Song (2017) has only one tuning parameter \(\eta_{n}\), whereas we have two tuning parameters \(\eta_{n}\) and \(\epsilon_{n}\). Second, Li and Song (2017) defined (in the current notation) \(f_{r}^{ij}\) to be the eigenfunctions of
\[\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{-1}\Sigma_{X^{-(i,j)}X^{(i,j)}}\Sigma_{X^{(i,j) }X^{(i,j)}}^{-1}\Sigma_{X^{(i,j)}X^{-(i,j)}}\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{-1},\]
which is different from the generalized eigenvalue problem (5). For these reasons we need to re-derive the convergence rate of \(\hat{U}^{ij}\).
**Theorem 10**: _Suppose_
1. _Assumption 1 is satisfied;_
2. \(\Sigma_{X^{-(i,j)}X^{(i,j)}}\) _is a finite-rank operator with_ \[\operatorname{ran}(\Sigma_{X^{-(i,j)}X^{(i,j)}})\subseteq \operatorname{ran}(\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{2}),\] \[\operatorname{ran}(\Sigma_{X^{(i,j)}X^{-(i,j)}})\subseteq \operatorname{ran}(\Sigma_{X^{(i,j)}X^{(i,j)}});\]
3. \(n^{-1/2}\prec\eta_{n}\prec 1\)_,_ \(n^{-1/2}\prec\epsilon_{n}\prec 1\)_;_
4. _the leading eigenvalues are distinct:_ \(\lambda_{1}^{ij}>\cdots>\lambda_{d_{ij}}^{ij}\)_._
_Then, \(\|\hat{U}^{ij}-U^{ij}\|_{[\mathscr{H}^{-(i,j)}(X)]^{d_{ij}}}=O_{P}(\eta_{n}^{ -3/2}\epsilon_{n}^{-1}n^{-1}+\eta_{n}^{-1}n^{-1/2}+\eta_{n}+\epsilon_{n})\)._
An immediate consequence is that, under the transparent kernel assumption, the \(b_{n}\) in Theorem 9 is the same as this rate. We next derive the convergence rate in (iii) of (14). This rate depends on the tuning parameter \(\delta_{n}\) in the estimate of conjoined conditional covariance operator, and it reaches \(b_{n}\) for the optimal choice of \(\delta_{n}\).
**Theorem 11**: _Suppose conditions (1) and (2) of Definition 3 are satisfied with \(U\), \(V\), \(W\) therein replaced by \(X^{i}\), \(X^{j}\), and \(U^{ij}\). Suppose, furthermore,_
1. \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{i}U^{ij})}\) _and_ \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{j}U^{ij})}\) _are bounded linear operators;_
2. \(b_{n}\preceq\delta_{n}\prec 1\)_._
_Then \(\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}-\Sigma_{\hat{X}^{i}\hat{X}^{j}| U^{ij}}\|_{\rm HS}=O_{P}(\delta_{n})\). Consequently, if \(\delta_{n}\asymp b_{n}\), then_
\[\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}-\Sigma_{\hat{X}^{i}\hat{X}^{j}| U^{ij}}\|_{\rm HS}=O_{P}(b_{n}).\]
Finally, we combine Theorem 9 through Theorem 11 to come up with the convergence rate of \(\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}\). Since there are numerous cross references among the conditions in these theorems, to make a clear presentation we list all the original conditions in the next theorem, even if they already appeared. These conditions are of two categories: those for the step 1 that involves sufficient dimension reduction of \(X^{(i,j)}\) versus \(X^{-(i,j)}\), and those for the step 2 that involves the estimation of the conjoined conditional covariance operator. We refer to them as the first-level and second-level conditions, respectively.
**Theorem 12**: _Suppose the following conditions hold:_
1. _(First-level kernel)_ \(E[\kappa(S,S)]<\infty\) _for_ \(\kappa=\kappa_{X}^{(i,j)}\) _and_ \(\kappa=\kappa_{X}^{-(i,j)}\)_;_
2. _(First-level operator)_ \(\Sigma_{X^{-(i,j)}X^{(i,j)}}\) _is a finite-rank operator with rank_ \(d_{ij}\) _and_ \[\operatorname{ran}(\Sigma_{X^{-(i,j)}X^{(i,j)}})\subseteq \operatorname{ran}(\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{2}),\] \[\operatorname{ran}(\Sigma_{X^{(i,j)}X^{-(i,j)}})\subseteq \operatorname{ran}(\Sigma_{X^{(i,j)}X^{(i,j)}});\] _all the nonzero eigenvalues of_ \(\Sigma_{X^{(i,j)}X^{-(i,j)}}\Sigma_{X^{-(i,j)}X^{-(i,j)}}^{-1}\Sigma_{X^{-(i, j)}X^{(i,j)}}\) _are distinct;_
_._
3. _(First-level tuning parameters)_ \(n^{-1/2}\prec\eta_{n}\prec 1\)_,_ \(n^{-1/2}\prec\epsilon_{n}\prec 1\)_,_ \(\eta_{n}^{-3/2}\epsilon_{n}^{-1}n^{-1}+\eta_{n}^{-1}n^{-1/2}+\eta_{n}^{1/2}+ \epsilon_{n}\prec 1\)_;_
4. _(Second-level kernel)_ \(E[\kappa(S,S)]<\infty\) _is satisfied for_ \(\kappa=\kappa_{U}^{ij}\)_,_ \(\kappa_{XU}^{i,ij}\)_, and_ \(\kappa_{XU}^{j,ij}\)_; furthermore, they are transparent kernels;_
5. _(Second-level operators)_ \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{i}U^{ij})}\) _and_ \(\Sigma_{U^{ij}U^{ij}}^{-1}\Sigma_{U^{ij}(X^{j}U^{ij})}\) _are bounded linear operators;_
6. _(Second-level tuning parameter)_ \(\delta_{n}\asymp\eta_{n}^{-3/2}\epsilon_{n}^{-1}n^{-1}+\eta_{n}^{-1}n^{-1/2}+ \eta_{n}+\epsilon_{n}\)_._
_Then_
\[\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|U^{ij}}-\Sigma_{\hat{X}^{i}\hat{X}^{j}| U^{ij}}\|_{\rm HS}=O_{P}(\eta_{n}^{-3/2}\epsilon_{n}^{-1}n^{-1}+\eta_{n}^{-1}n^{- 1/2}+\eta_{n}+\epsilon_{n}). \tag{16}\]
Using this result we immediately arrive at the variable selection consistency of the Sufficient Graphical Model.
**Corollary 13**: _Under the conditions in Theorem 12, if_
\[\eta_{n}^{-3/2}\epsilon_{n}^{-1}n^{-1}+\eta_{n}^{-1}n^{-1/2}+\eta_{n}+\epsilon_{n}\prec\rho_{n}\prec 1,\ \ \mbox{and}\] \[\hat{\mathcal{E}}=\{(i,j)\in\Gamma\times\Gamma:\ i>j,\ \|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}\|_{\rm HS}>\rho_{n}\}\]
_then \(P(\hat{\mathcal{E}}=\mathcal{E})\to 1\) as \(n\to\infty\)._
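In implementation terms, once the Hilbert-Schmidt norms of the estimated operators are computed, the edge selection in Corollary 13 is a simple thresholding step; pairs with large norms are declared edges, since a nonzero conjoined conditional covariance operator indicates conditional dependence. A toy sketch with hypothetical norms (the function name and example values are ours):

```python
def estimate_edges(hs_norms, rho):
    """Keep vertex pairs (i, j), i > j, whose estimated conjoined conditional
    covariance operator has Hilbert-Schmidt norm exceeding the threshold rho."""
    return {(i, j) for (i, j), nrm in hs_norms.items() if i > j and nrm > rho}

# hypothetical norms for a 3-vertex graph; rho = 0.1 recovers edges (2,1), (3,2)
norms = {(2, 1): 0.62, (3, 1): 0.04, (3, 2): 0.55}
print(sorted(estimate_edges(norms, rho=0.1)))  # [(2, 1), (3, 2)]
```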
### Optimal rates of tuning parameters
The convergence rate in Theorem 12 depends on \(\epsilon_{n}\) and \(\eta_{n}\) explicitly, and \(\delta_{n}\) implicitly (in the sense that \(\delta_{n}\asymp\eta_{n}^{-3/2}\epsilon_{n}^{-1}n^{-1}+\eta_{n}^{-1}n^{-1/2}+ \eta_{n}+\epsilon_{n}\) is optimal for fixed \(\epsilon_{n}\) and \(\eta_{n}\)). Intuitively, when \(\epsilon_{n}\), \(\eta_{n}\), and \(\delta_{n}\) increase, the biases increase and variances decrease; when they decrease, the biases decrease and the variances increase. Thus there should be critical rates for them that balance the bias and variance, which are the optimal rates.
**Theorem 14**: _Under the conditions in Theorem 12, if \(\epsilon_{n}\), \(\eta_{n}\), and \(\delta_{n}\) are of the form \(n^{-a}\), \(n^{-b}\), and \(n^{-c}\) for some \(a>0\), \(b>0\), and \(c>0\), then_
1. _the optimal rates of the tuning parameters are_ \[n^{-3/8}\preceq\epsilon_{n}\preceq n^{-1/4},\quad\eta_{n}\asymp n^{-1/4}, \quad\delta_{n}\asymp n^{-1/4};\]
2. _the optimal convergence rate of the estimated conjoined conditional covariance operator is_ \[\|\hat{\Sigma}_{\hat{X}^{i}\hat{X}^{j}|\hat{U}^{ij}}-\Sigma_{\hat{X}^{i}\hat{ X}^{j}|U^{ij}}\|_{\rm HS}=O_{P}(n^{-1/4}).\]
Note that a range of \(\epsilon_{n}\) is optimal; this is because the convergence rate does not have a unique minimizer. It also means that the result is not very sensitive to this tuning parameter.
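As a quick numerical sanity check (not part of the paper), one can write \(\epsilon_{n}=n^{-a}\), \(\eta_{n}=n^{-b}\) and minimize the exponent of \(n\) in the rate of Theorem 12 by grid search; the minimum \(-1/4\) is attained at \(b=1/4\) with any \(a\in[1/4,3/8]\), matching Theorem 14:

```python
import itertools

def rate_exponent(a, b):
    """Exponent of n in eta^{-3/2} eps^{-1} n^{-1} + eta^{-1} n^{-1/2} + eta + eps,
    with eps = n^{-a} and eta = n^{-b}."""
    return max(1.5 * b + a - 1.0,   # eta^{-3/2} eps^{-1} n^{-1}
               b - 0.5,             # eta^{-1} n^{-1/2}
               -b,                  # eta
               -a)                  # eps

grid = [i / 1000 for i in range(1, 500)]          # a, b in (0, 0.5)
best = min(rate_exponent(a, b) for a, b in itertools.product(grid, grid))
print(best)  # -0.25, i.e. the n^{-1/4} rate of Theorem 14
```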
In the above asymptotic analysis, we have treated \(p\) as fixed when \(n\to\infty\). We have also developed the consistency and convergence rate in the scenario where the dimension \(p_{n}\) of \(X\) goes to infinity with \(n\), which is placed in the Supplementary Material (Section S9) due to limited space.
## 6 Simulation
In this section we compare the performance of our sufficient graphical model with previous methods such as Yuan and Lin (2007), Liu et al. (2009), Voorman et al. (2013), Fellinghauer et al. (2013), Lee et al. (2016b), and a Naive method, which is based on the conjoined conditional covariance operator without the dimension reduction step.
By design, the sufficient graphical model has advantages over these existing methods under the following circumstances. First, since the sufficient graphical model does not make any distributional assumption, it should outperform Yuan and Lin (2007) and Liu et al. (2009) when the Gaussian or copula Gaussian assumptions are violated; second, due to the sufficient dimension reduction in the sufficient graphical model, it avoids the curse of dimensionality and should outperform Voorman et al. (2013), Fellinghauer et al. (2013), and the Naive method in the high-dimensional setting; third, since the sufficient graphical model does not require an additive structure, it should outperform Lee et al. (2016b) when there is severe nonadditivity in the model. Our simulation comparisons will reflect these aspects.
For the sufficient graphical model, Lee et al. (2016b), and the Naive method, we use the Gaussian radial basis function as the kernel. The regularization constants \(\epsilon_{X}^{(i,j)}\), \(\epsilon_{X}^{-(i,j)}\), and \(\epsilon_{U}^{(i,j)}\) are chosen by the generalized cross validation criterion described in Section 4.3 with the grid \(\{10^{-\ell}:\ell=-1,0,1,2,3,4\}\). The kernel parameters \(\gamma_{X}^{(i,j)}\), \(\gamma_{X}^{-(i,j)}\), \(\gamma_{XU}^{i,ij}\), \(\gamma_{XU}^{j,ij}\), and \(\gamma_{U}^{ij}\) are chosen according to (11). Because the outcomes of tuning parameters are stable, for each model, we compute the generalized cross validation for the first five samples and use their average value for the rest of the simulation. The performance of each estimate is assessed using the averaged receiver operating characteristic curve as a function of the threshold \(\rho\). The accuracy of a method across all \(\rho\) is measured by the area under the receiver operating characteristic curve.
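The threshold sweep behind these curves can be sketched as follows (generic code; the function name and data layout are ours): each vertex pair carries a Hilbert-Schmidt-norm score, pairs scoring above \(\rho\) are declared edges, and varying \(\rho\) traces out the receiver operating characteristic curve, whose area is then computed by the trapezoid rule.

```python
import numpy as np

def roc_auc(scores, true_edges):
    """ROC curve for edge recovery: sweep the threshold rho over the
    Hilbert-Schmidt-norm scores of all vertex pairs; a pair is declared
    an edge when its score exceeds rho. Returns FPR, TPR, and the AUC."""
    pairs = sorted(scores)
    y = np.array([p in true_edges for p in pairs])        # true edge indicators
    s = np.array([scores[p] for p in pairs])
    thresholds = np.concatenate([[np.inf], np.sort(s)[::-1], [-np.inf]])
    tpr = np.array([(s[y] > t).mean() for t in thresholds])
    fpr = np.array([(s[~y] > t).mean() for t in thresholds])
    auc = float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2))
    return fpr, tpr, auc
```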
To isolate the factors that affect accuracy, we first consider two models with relatively small dimensions and large sample sizes, which are
Model I : \[X^{1} =\epsilon_{1},\;X^{2}=\epsilon_{2},\;X^{3}=\text{sin}(2X^{1})+ \epsilon_{3}\] \[X^{4} =(X^{1})^{2}+(X^{2})^{2}+\epsilon_{4},\;X^{5}=\epsilon_{5},\] \[\text{Model II}: \quad X^{1} =\epsilon_{1},\;X^{2}=X^{1}+\epsilon_{2},\;X^{3}=\epsilon_{3},\;X ^{4}=(X^{1}+X^{3})^{2}+\epsilon_{4},\] \[X^{5} =\text{cos}(2X^{2}X^{3})+\epsilon_{5},\;X^{6}=X^{4}+\epsilon_{6},\]
where \(\epsilon_{i}\), \(i=1,\ldots,p\) are from independent and identically distributed standard normal distribution. The edge sets of the two models are
Model I : \[\mathcal{E}=\{(1,3),(1,4),(2,4),(1,2)\}\] Model II : \[\mathcal{E}=\{(1,2),(1,4),(3,4),(1,3),(2,5),(3,5),(2,3),(4,6)\}.\]
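Models I and II can be simulated directly from their defining equations, e.g.:

```python
import numpy as np

def gen_model_I(n, rng):
    """Draw n observations from Model I."""
    e = rng.standard_normal((n, 5))
    X1, X2, X5 = e[:, 0], e[:, 1], e[:, 4]
    X3 = np.sin(2 * X1) + e[:, 2]
    X4 = X1 ** 2 + X2 ** 2 + e[:, 3]
    return np.column_stack([X1, X2, X3, X4, X5])

def gen_model_II(n, rng):
    """Draw n observations from Model II."""
    e = rng.standard_normal((n, 6))
    X1, X3 = e[:, 0], e[:, 2]
    X2 = X1 + e[:, 1]
    X4 = (X1 + X3) ** 2 + e[:, 3]
    X5 = np.cos(2 * X2 * X3) + e[:, 4]
    X6 = X4 + e[:, 5]
    return np.column_stack([X1, X2, X3, X4, X5, X6])
```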
We use \(n=100,1000\) for each model, and for each \(n\), we generate 50 samples to compute the averaged receiver operating characteristic curves. The dimension \(d_{ij}\) for sufficient graphical model is taken to be 2 for all cases (we have also used \(d_{ij}=1\) and the results are very similar to those presented here). The plots in the first row of Figure 1 show the averaged receiver operating characteristic curves for the seven methods, with a distinct plotting symbol assigned to each method.
From these figures we see that the two top performers are clearly sufficient graphical model and Lee et al. (2016b), and their performances are very similar. Note that neither of the two models satisfies the Gaussian or copula Gaussian assumption, which explains why sufficient graphical model and Lee et al. (2016b) outperform Yuan and Lin (2007) and Liu et al. (2009). Sufficient graphical model and Lee et al. (2016b) also outperform Voorman et al. (2013), Fellinghauer et al. (2013), and the Naive method, indicating that the curse of dimensionality already takes effect on the fully nonparametric methods. The three nonparametric estimators have similar performances. Also note that Model I has an additive structure, which explains the slight advantage of Lee et al. (2016b) over sufficient graphical model in subfigure (a) of Figure 1; Model II is not additive, and the advantage of Lee et al. (2016b) disappears in subfigure (b) of Figure 1.
Figure 1: Averaged receiver operating characteristic curves for four models. For Models I and II: left panel, \(n=100\); right panel, \(n=1000\). For Models III and IV: left panel, \(n=50\); right panel, \(n=100\).

We next consider two models with relatively high dimensions and small sample sizes. A convenient systematic way to generate larger networks is via the hub structure. We choose \(p=200\), and randomly generate ten hubs \(h_{1},\ldots,h_{10}\) from the 200 vertices. For each \(h_{k}\), we randomly select a set \(H_{k}\) of 19 vertices to form the neighborhood of \(h_{k}\). With the network structures thus specified, our two probabilistic models are
\[\text{Model III}:\quad X^{i}=1+|X^{h_{k}}|^{2}+\epsilon_{i},\quad \text{where}\quad i\in H_{k}\setminus h_{k},\] \[\text{Model IV}:\quad X^{i}=\sin((X^{h_{k}})^{3})\epsilon_{i}, \quad\text{where}\quad i\in H_{k}\setminus h_{k},\]
and the \(\epsilon_{i}\)'s are the same as in Models I and II. Note that, in Model III, the dependence of \(X^{i}\) on \(X^{h_{k}}\) is through the conditional mean \(E(X^{i}|X^{h_{k}})\), whereas in Model IV, the dependence is through the conditional variance \(\text{var}(X^{i}|X^{h_{k}})\). For each model, we choose two sample sizes, \(n=50\) and \(n=100\). The averaged receiver operating characteristic curves (again averaged over 50 samples) are presented in the second row of Figure 1. From the figures we see that, in the high-dimensional setting with \(p>n\), sufficient graphical model substantially outperforms all the other methods, which clearly indicates the benefit of dimension reduction in constructing graphical models.
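The hub construction can be sketched as follows; how overlapping neighborhoods combine is not specified in the paper, so we process hubs sequentially (an implementation choice of ours):

```python
import numpy as np

def gen_hub_model(n, p=200, n_hubs=10, nbr_size=19, model="III", rng=None):
    """Draw n observations from the hub construction of Models III/IV.
    Overlapping neighborhoods are handled sequentially."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.standard_normal((n, p))                 # start from the noise eps_i
    hubs = rng.choice(p, size=n_hubs, replace=False)
    edges = set()
    for h in hubs:
        candidates = np.setdiff1d(np.arange(p), [h])
        for i in rng.choice(candidates, size=nbr_size, replace=False):
            if model == "III":
                X[:, i] = 1 + np.abs(X[:, h]) ** 2 + X[:, i]
            else:                                    # Model IV
                X[:, i] = np.sin(X[:, h] ** 3) * X[:, i]
            edges.add((min(h, i), max(h, i)))
    return X, edges
```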
We now consider a Gaussian graphical model to investigate any efficiency loss incurred by sufficient graphical model. Following a structure similar to that used in Li et al. (2014), we choose \(p=20\), \(n=100,200\), and the model
\[\text{Model V}:X\sim N(0,\Theta^{-1}),\]
where \(\Theta\) is a \(20\times 20\) precision matrix with diagonal entries 1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3, 1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1, and nonzero off-diagonal entries \(\theta_{3,5}=1.418\), \(\theta_{4,10}=-0.744\), \(\theta_{5,9}=0.519\), \(\theta_{5,10}=-0.577\), \(\theta_{13,17}=0.287\), \(\theta_{17,20}=0.542\), \(\theta_{14,15}=0.998\). As expected, Figure 2 shows that Yuan and Lin (2007), Liu et al. (2009), and Lee et al. (2016b) perform better than sufficient graphical model in this case. However, sufficient graphical model still performs reasonably well and significantly outperforms the fully nonparametric methods.
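Model V can be reproduced directly from the entries listed above; the assertion confirms that \(\Theta\) is positive definite, so that \(\Theta^{-1}\) is a valid covariance matrix:

```python
import numpy as np

# diagonal and nonzero off-diagonal entries of the 20 x 20 precision matrix
diag = [1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3,
        1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1]
off = {(3, 5): 1.418, (4, 10): -0.744, (5, 9): 0.519, (5, 10): -0.577,
       (13, 17): 0.287, (17, 20): 0.542, (14, 15): 0.998}

Theta = np.diag(diag)
for (i, j), v in off.items():          # indices are 1-based as in the text
    Theta[i - 1, j - 1] = Theta[j - 1, i - 1] = v

assert np.all(np.linalg.eigvalsh(Theta) > 0)   # Theta is positive definite

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(20), np.linalg.inv(Theta), size=100)
```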
Finally, we conducted some simulations on the generalized cross validation criterion (13) for determining the threshold \(\rho_{n}\). We generated samples from Models I through V as described above, produced the receiver operating characteristic curves using sufficient graphical model, and determined the threshold \(\rho_{n}\) by (13). The results are presented in Figure S1 in the Supplementary Material. In each panel, the generalized cross validation-determined threshold \(\rho_{n}\) is represented by a black dot on the red receiver operating characteristic curve.
Figure 2: Averaged receiver operating characteristic curves for Model V. Left panel: \(n=100\); right panel: \(n=200\).
## 7 Application
We now apply sufficient graphical model to a data set from the DREAM 4 Challenge project and compare it with other methods. The goal of this Challenge is to recover gene regulation networks from simulated steady-state data. A description of this data set can be found in Marbach et al. (2010). Since Lee et al. (2016b) already compared their method with Yuan and Lin (2007), Liu et al. (2009), Voorman et al. (2013), Fellinghauer et al. (2013), and Naive method for this dataset and demonstrated the superiority of Lee et al. (2016b) among these estimators, here we will focus on the comparison of the sufficient graphical model with Lee et al. (2016b) and the champion method for the DREAM 4 Challenge.
The data set contains data from five networks, each of dimension 100 and sample size 201. We use the Gaussian radial basis function kernel for sufficient graphical model and Lee et al. (2016b), and the tuning methods described in Section 4.3. For sufficient graphical model, the dimensions \(d_{ij}\) are taken to be 1. We have also experimented with \(d_{ij}=2\), but the results (not presented here) show no significant difference. Because the true networks are available, we can compare the receiver operating characteristic curves and the areas under them, which are shown in Table 1.
As we can see from Table 1, sufficient graphical model has the same area under the receiver operating characteristic curve as Lee et al. (2016b) for Networks 2, 3, and 4, performs better for Network 5, but trails slightly behind for Network 1; compared with the champion method, sufficient graphical model has the same areas under the curve for Networks 2, 3, and 4, performs better for Network 5, and worse for Network 1. Overall, sufficient graphical model and Lee et al. (2016b) perform similarly on this dataset, and they are on a par with the champion method. We should point out that sufficient graphical model and Lee et al. (2016b) are purely empirical; they employ no knowledge about the underlying physical mechanism generating the gene expression data. However, according to Pinna et al. (2010), the champion method did use a differential equation that reflects the underlying physical mechanism. The results for threshold determination are presented in Figure S2 in the Supplementary Material.
## 8 Discussion
This paper is a first attempt to take advantage of the recently developed nonlinear sufficient dimension reduction method to nonparametrically estimate the statistical graphical model while avoiding the curse of dimensionality. Nonlinear sufficient dimension reduction is used as a module and applied repeatedly to evaluate conditional independence, which leads to a substantial gain in accuracy in the high-dimensional setting. Compared with the Gaussian and copula Gaussian methods, our
\begin{table}
\begin{tabular}{c c c c c c} & Network 1 & Network 2 & Network 3 & Network 4 & Network 5 \\ Sufficient graphical model & 0.85 & 0.81 & 0.83 & 0.83 & 0.79 \\ Lee et al. (2016b) & 0.86 & 0.81 & 0.83 & 0.83 & 0.77 \\ Champion & 0.91 & 0.81 & 0.83 & 0.83 & 0.75 \\ Naive & 0.78 & 0.76 & 0.78 & 0.76 & 0.71 \\ \end{tabular}
\end{table}
Table 1: Comparison of sufficient graphical model, Lee et al. (2016b), Naive and the champion methods in DREAM 4 Challenge
method is not affected by the violation of the Gaussian and copula Gaussian assumptions. Compared with the additive method (Lee et al., 2016b), our method does not require an additive structure and retains conditional independence as the criterion to determine the edges, which is a commonly accepted criterion. Compared with fully nonparametric methods, sufficient graphical model avoids the curse of dimensionality and significantly enhances the performance.
The present framework opens up several directions for further research. First, the current model assumes that the central class \(\mathfrak{S}_{X^{(i,j)}|X^{-(i,j)}}\) is complete, so that generalized sliced inverse regression is the exhaustive nonlinear sufficient dimension reduction estimate. When this condition is violated, generalized sliced inverse regression is no longer exhaustive and we can employ other nonlinear sufficient dimension reduction methods, such as the generalized sliced averaged variance estimation (Lee et al., 2013; Li, 2018), to recover the part of the central class that generalized sliced inverse regression misses. Second, though we have assumed that there is a proper sufficient sub-\(\sigma\)-field \(\mathcal{G}^{-(i,j)}\) for each \((i,j)\), the proposed estimation procedure is still justifiable when no such sub-\(\sigma\)-field exists. In this case, \(U^{ij}\) is still the most important set of functions that characterize the statistical dependence of \(X^{(i,j)}\) on \(X^{-(i,j)}\) - even though it is not sufficient. Without sufficiency, our method may be more appropriately called the Principal Graphical Model than the sufficient graphical model. Third, the current method can be extended to functional graphical models, which are common in medical applications such as EEG and fMRI. Several functional graphical models have been proposed recently, by Zhu et al. (2016), Qiao et al. (2019), Li and Solea (2018), and Solea and Li (2020). The idea of a sufficient graph can be applied to this setting to improve efficiency.
This paper also contains some theoretical advances that are novel to nonlinear sufficient dimension reduction. For example, it introduces a general framework to characterize how the error of nonlinear sufficient dimension reduction propagates to the downstream analysis in terms of convergence rates. Furthermore, the results for convergence rates of various linear operators allowing the dimension of the predictor to go to infinity are the first of their kind in nonlinear sufficient dimension reduction. These advances will benefit the future development of sufficient dimension reduction in general, beyond the current context of estimating graphical models.
## Acknowledgments
Bing Li's research on this work was supported in part by the NSF Grant DMS-1713078. Kyongwon Kim's work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1046976, RS-2023-00219212), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2021R1A6A1A10039823).
## Supplementary Material
Supplementary material includes proofs of all theorems, lemmas, corollaries, and propositions in the paper, the asymptotic development for the high-dimensional setting, and some additional simulation plots for threshold determination.
# Social Context-aware GCN for Video Character Search via Scene-prior Enhancement

Wenjun Peng, Weidong He, Derong Xu, Tong Xu, Chen Zhu, Enhong Chen

arXiv:2305.12348v1 | 2023-05-21 | http://arxiv.org/abs/2305.12348v1
###### Abstract
With the increasing demand for intelligent services of online video platforms, video character search task has attracted wide attention to support downstream applications like fine-grained retrieval and summarization. However, traditional solutions only focus on visual or coarse-grained social information and thus cannot perform well when facing complex scenes, such as changing camera view or character posture. Along this line, we leverage social information and scene context as prior knowledge to solve the problem of character search in complex scenes. Specifically, we propose a scene-prior-enhanced framework, named SoCoSearch. We first integrate multimodal clues for scene context to estimate the prior probability of social relationships, and then capture characters' co-occurrence to generate an enhanced social context graph. Afterwards, we design a social context-aware GCN framework to achieve feature passing between characters to obtain robust representation for the character search task. Extensive experiments have validated the effectiveness of SoCoSearch in various metrics.
Person Search, Person Re-identification, Multimodal Learning
## I Introduction
As an indispensable technology for supporting various video applications, video character search plays an important role in downstream tasks such as clip retrieval and video summarization. For example, many video platforms, such as Tencent and YouKu Video, provide an "only look at him" function by searching the clips of specific characters in videos to meet fans' needs, which enhances user experience and brings significant business benefits. However, the manual collection of clips for specific characters requires a lot of human resources.
To this end, automatic character search has attracted much attention from academia and industry. Nevertheless, most mainstream video character search methods focus mainly on visual information or coarse-grained social information to identify characters. For example, classic video character search methods, such as [1, 2], only focus on extracting visual clues and ignore other meta information. [3] tries to leverage social information, but it only uses heuristic rules based on co-occurrence and fails to fully use the scene's context to extract the prior probability of relationships, thus limiting its effectiveness. Note that, as the plot progresses, the characters may develop many social relationships. Estimating the impact of these relationships on co-occurrence behavior without discriminating among them will seriously weaken their distinguishing power. Let us take Figure 1 as an example. There are two characters in this film scene, where the female is easy to identify because of her clear visual clues. But as for the male, on the one hand, his back cannot provide enough visual clues to recognize him. On the other hand, the numerous relationships of the female also make it difficult to identify her co-occurring character. However, by carefully checking the fine-grained information, such as the subtitles and the kinds of social relationships between the characters, we can reasonably infer the identity of this male. Obviously, the subtitles and the visual clues of the background imply that the two characters are a "couple", and the most likely person dating the female is her husband.
Fig. 1: A film scene and its social relation graph.

Based on the above observations, in this paper we propose a scene-prior-enhanced framework for **So**cial **Co**ntext-aware video character **Search**, named **SoCoSearch**, which can integrate fine-grained social information and scene context to search video characters effectively. Specifically, SoCoSearch first generates a prior probability for different relationship types to obtain a weighted social graph through integrated multimodal semantic clues. At the same time, we use the characters' visual features to select an anchor node and mine characters' co-occurrence. The co-occurrence serves as an extra clue to enhance the weighted social graph into a social context graph, which helps to learn the candidate set of potential matching characters. Furthermore, we employ a social context-aware GCN to achieve feature passing between graph nodes and obtain characters' social features. Finally, we use anchor node similarity to adaptively adjust the weights of the visual and social features. The technical contributions of this article can be summarized as follows:
* We further discuss how social relationships assist the video character search task, and are the first to use scene context as prior knowledge to enhance the model's performance on this task.
* We propose a novel neural-based framework, which effectively integrates multiple clues and automatically adjusts the weights of various features.
* Experiments on real video datasets show that our method outperforms several state-of-the-art baselines in terms of mAP, mINP, and R1.
## II Related Works
### _Person Search._
The main challenge for traditional methods is extracting robust and discriminative visual features. [1, 4, 5] extract multi-level features from earlier layers and fuse them as the final representation to enhance the performance of models. To guarantee that the model has better generalization ability across domains, [6] introduces normalization modules that learn domain styles to improve the model's performance across different cameras. [7] designs two novel modules, named the jigsaw patch module and side information embeddings, which improve the robustness and discrimination ability of the model by rearranging patches and adding domain information.
Nevertheless, traditional datasets are often gathered from a few nearby cameras over a short time, so the appearances of the same identity are often similar, with only slight variation in posture, clothing, or occlusion. To solve this problem, [8] utilizes social context to make models adaptable to the video character search task in unconstrained environments. [3] takes a step further and uses high-level social relations to identify characters. Unfortunately, this method relies on heuristic modeling and fails to reveal the prior probability of the relationships in different scenes, thus limiting its ability to exploit characters' co-occurrence.
### _Multimodal Learning._
Recently, some studies have applied multimodal learning techniques to video applications. [9] proposes a novel framework for character-oriented video summarization by combining visual and textual information for the first time. [10] introduces a multi-stream architecture to recognize relations in videos. [11] proposes a model that considers visual and dialogue clues to recognize interactions and social relations jointly. Inspired by the above studies, we design a dynamic relation weight module to predict the prior probability of relationships in different scenes via multimodal clues.
## III Technical Framework
### _Framework Overview_
The video character search task aims to retrieve all gallery RoIs belonging to a specific query character in a social context-aware condition. In order to better address this task, as illustrated in Figure 2, we propose a framework that includes a dynamic relation weight module (DRWM), a co-occurrence miner (COM), and a Social Context-aware Graph Convolutional Network (GCN). Firstly, DRWM generates the prior probability for different social relationships to get a weighted social relation graph. At the same time, COM uses visual features to strengthen the social relation graph. Finally, the social features of gallery characters are learned by social context-aware GCN, and integrated with visual features to get the final representation.
### _Dynamic Relation Weight Module_
Given the visual and textual context as scene prior knowledge, the Dynamic Relation Weight Module (DRWM) aims to predict the probabilities of different relationships between the characters in the scene. These probabilities are assigned to the corresponding edges of the predefined social relation graph to construct a weighted social relation graph.
Specifically, we first leverage TSM [12] and Bert [13] to extract visual and textual features, respectively. \(h_{v}\) is denoted as the visual features, and \(h_{d}\) is denoted as the textual features. We concatenate \(h_{v}\) with \(h_{d}\), and feed them into a linear layer to obtain the relation weight \(\hat{y}_{r}\). We assign the relation weight to the corresponding edge in the predefined social graph to get the weighted social graph. For the example shown in Figure 2 (DRWM Part), the relationship between \(q_{1}\) and \(q_{2}\) is "\(couple\)" and the corresponding relation weight output from DRWM is 0.6, so the edge weight of \(e_{ij}(``couple\)") is assigned to 0.6. We formulate the relation recognition as a multi-label classification task, and the binary cross-entropy loss \(\mathcal{L}_{\mathcal{R}}\) is expressed as:
\[\mathcal{L}_{\mathcal{R}}=-y_{r}\cdot log(\hat{y_{r}})-(1-y_{r})\cdot log(1- \hat{y_{r}}), \tag{1}\]
where \(y_{r}\) is the true relationships between each pair of gallery characters in the scene, and \(\hat{y_{r}}\) is the prediction of the DRWM.
### _Co-occurrence Miner_
Given the RoIs of the query and gallery, Co-occurrence Miner (COM) aims to reveal characters' co-occurrence. In this module, we first calculate the similarity between the RoIs of the query and gallery by euclidean distance. Then, we convert the similarity to the possibility of characters' co-occurrence.
Specifically, we employ TransReID [7] as our visual backbone to learn the visual features of characters. Afterwards, we calculate the euclidean distance of each query-gallery features pair and then convert it to a co-occurrence probability. The process can be expressed as follows:
\[d(g_{i},q_{j})=||\mathbf{h_{g_{i}}}-\mathbf{h_{q_{j}}}||_{2}^{2}, \tag{2}\]
\[s(g_{i},q_{j})=\frac{\bar{d}-d(g_{i},q_{j})}{\delta(d)}, \tag{3}\]
\[p(g_{i},q_{j})=\frac{exp(s(g_{i},q_{j}))}{\sum_{j}exp(s(g_{i},q_{j}))}, \tag{4}\]
where \(\mathbf{h_{g_{i}}}\) and \(\mathbf{h_{q_{j}}}\) are the visual features of gallery \(g_{i}\) and query \(q_{j}\), respectively, \(\bar{d}\) is the average distance, and \(\delta(d)\) is the standard deviation of all distances. \(p(g_{i},q_{j})\) is the identity probability, representing the probability of \(g_{i}\) and \(q_{j}\) belonging to the same character. Then, COM calculates the co-occurrence probability of each pair. For example, in Figure 2 (COM Part), given five query characters \(\{q_{j}|j=1,2,3,4,5\}\), two gallery characters \(\{g_{i}|i=1,2\}\) and their respective probability distributions of similarity, \(g_{1}\) is regarded as the anchor character since it has the highest confidence value, 0.6. The distribution of \(p(g_{1},q_{j})\) serves as the co-occurrence probability \(p(g_{2},q_{j})\), which constitutes the edge weights of the social context graph. Specifically, \(g_{1}\) and \(g_{2}\) appear simultaneously, and \(g_{1}\) is probably \(q_{j}\); therefore, it can be considered that \(q_{j}\) is likely to co-occur with \(g_{2}\), and the similarity between \(g_{1}\) and \(q_{j}\) is the co-occurrence probability of \(q_{j}\) and \(g_{2}\). Thus, we connect \(g_{2}\) with all \(q_{j}\) and assign the co-occurrence probability to the corresponding edge's weight to represent the characters' co-occurrence. For scenes with more than two characters, we choose the character with the highest similarity as the anchor.
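Eqs. (2)-(4) can be sketched in a few lines of numpy (the max-shift inside the softmax is a standard numerical-stability device and does not change the value of Eq. (4)):

```python
import numpy as np

def co_occurrence(h_g, h_q):
    """Eqs. (2)-(4): identity probabilities p(g_i, q_j) from visual features.
    Rows index gallery characters, columns index query characters."""
    d = ((h_g[:, None, :] - h_q[None, :, :]) ** 2).sum(axis=-1)   # eq. (2)
    s = (d.mean() - d) / d.std()                                   # eq. (3)
    e = np.exp(s - s.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)                        # eq. (4)
```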
### _Social Context-aware GCN_
In order to make full use of characters' relationships and co-occurrence simultaneously, this module combines the weighted social relation graph obtained from DRWM and the characters' co-occurrence obtained from COM to generate the social context graph. Then it performs message passing on the graph to extract social features of the gallery, which are weighted and combined with visual features to learn the final representation.
It is worth noting that, in this graph, the second-order neighbors of each gallery node constitute a candidate character set, which makes the model pay more attention to the characters who are likely to appear in the scene. As mentioned before, DRWM assigns weights to different relationship edges to determine the most likely social pairs in the scene, and COM mines the characters' co-occurrence based on visual similarity to determine the most likely co-occurrence pairs in the scene. Finally, as shown in Figure 2 (Social Context-aware GCN part), we can obtain meta paths according to the social and co-occurrence pairs and thus obtain the gallery character candidates.
Therefore, in the social co-occurrence graph, we need to aggregate the features of the second-order neighbors into the corresponding gallery node to reduce the difference between them. In addition, we remove the feature transformation weight \(W\) and the activation function \(f\) to obtain the pure features of the second-order neighbors. The specific operations are as follows:
\[\mathbf{h^{(l+1)}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{- \frac{1}{2}}\mathbf{h^{(l)}}, \tag{5}\]
where \(\tilde{A}\in\mathbb{R}^{N\times N}\) is the adjacency matrix of the graph, \(\tilde{D}\in\mathbb{R}^{N\times N}\) is the diagonal degree matrix of the nodes, and \(h^{(l)}\in\mathbb{R}^{N\times d}\) is the output of the \(l\)-th layer. Let \(h^{(2)}\) be the social feature output by the 2-layer social context-aware GCN and \(h^{(0)}\) the visual feature output by the visual backbone; the final representation of the gallery node is:
\[w_{s}=\frac{p_{a}}{\sum\hat{y_{r}}}, \tag{6}\]
\[\mathbf{h_{f}}=w_{s}\mathbf{h^{(2)}}+(1-w_{s})\mathbf{h^{(0)}}, \tag{7}\]
where \(p_{a}\) denotes the highest anchor identity probability and \(\hat{y_{r}}\) denotes the relationship probabilities predicted by DRWM, which together control the weight of the social features. When the identity probability of the anchor is very low, or there are many relationships in a scene, the possibility that the second-order neighbor of a gallery node is the corresponding query character also decreases, so we reduce the weight of the social feature \(h^{(2)}\).

Fig. 2: The overall structure of our framework.
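The propagation of Eq. (5) and the adaptive fusion of Eqs. (6)-(7) can be sketched as follows (a minimal NumPy sketch; in practice \(h^{(0)}\) comes from the visual backbone and the graph mixes gallery and query nodes):

```python
import numpy as np

def propagate(A, h, num_layers=2):
    """Parameter-free propagation of Eq. (5):
    h <- D^{-1/2} A D^{-1/2} h, applied num_layers times."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    for _ in range(num_layers):
        h = A_norm @ h
    return h

def fuse(h2, h0, p_a, y_r):
    """Adaptive fusion of Eqs. (6)-(7): the social feature h2 is mixed
    with the visual feature h0 using w_s = p_a / sum(y_r)."""
    w_s = p_a / np.sum(y_r)
    return w_s * h2 + (1.0 - w_s) * h0
```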
We formulate a cross-entropy loss \(\mathcal{L_{ID}}\) and a triplet loss \(\mathcal{L_{T}}\) to optimize COM and the social context-aware GCN, which are expressed as:
\[\hat{y}_{id}=\mathbf{W_{c}}*\mathbf{h_{f}}+\mathbf{b_{c}}, \tag{8}\]
\[\mathcal{L_{ID}}=\sum-y_{id}\cdot log(\hat{y}_{id}), \tag{9}\]
\[\mathcal{L_{T}}=log[1+exp(||\mathbf{h_{f}}-\mathbf{h_{p}}||_{2}^{2}-||\mathbf{h_{f}}-\mathbf{ h_{n}}||_{2}^{2})], \tag{10}\]
where \(W_{c}\) and \(b_{c}\) are the weight and bias of the classification layer, and \(\hat{y}_{id}\) and \(y_{id}\) are the prediction and ground truth of the character's ID, respectively. \(\mathbf{h_{f}}\) is the representation of a gallery character \(g_{i}\), \(\mathbf{h_{p}}\) is the representation of a positive sample with the same character ID as \(g_{i}\), and \(\mathbf{h_{n}}\) is the representation of a negative sample with a different character ID from \(g_{i}\). Finally, we combine the three losses to obtain the overall loss function:
\[\mathcal{L}=\mathcal{L_{R}}+\mathcal{L_{ID}}+\mathcal{L_{T}}. \tag{11}\]
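For concreteness, Eqs. (8)-(10) can be sketched per sample as follows (a minimal NumPy sketch; batching, the classifier parameters, and the relation loss \(\mathcal{L_{R}}\) are omitted):

```python
import numpy as np

def id_loss(logits, y):
    """Eqs. (8)-(9) for one sample: softmax over the class logits
    (W_c h_f + b_c), then negative log-likelihood of the true ID y."""
    z = logits - logits.max()           # stabilized log-softmax
    log_p = z - np.log(np.exp(z).sum())
    return -log_p[y]

def triplet_loss(h_f, h_p, h_n):
    """Soft-margin triplet loss of Eq. (10):
    log(1 + exp(||h_f - h_p||^2 - ||h_f - h_n||^2))."""
    d_p = np.sum((h_f - h_p) ** 2)
    d_n = np.sum((h_f - h_n) ** 2)
    return np.log1p(np.exp(d_p - d_n))
```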
## IV Experiments
### _Datasets and Pre-processing_
To the best of our knowledge, there is no off-the-shelf dataset available for the video character search task that contains RoIs of characters and their corresponding video frames, textual information, and relation graph. Therefore, following [3], we constructed two datasets named Social-Bilibili and Social-MovieNet based on the Bilibili [9] and MovieNet [14] datasets, respectively.
**Social-Bilibili Dataset**. The dataset contains 70 films with an average length of 2 hours, 375 main characters, and the related subtitles and screen-bullet comments. All character relationships are grouped into five categories: working, kinship, hostile, friend, and couple. From this dataset, we sampled 325 characters (11478 images in total) for training and used the remaining 50 characters (1989 images) for testing. Following [3], we selected a clear image of each character as the query image, which constitutes our query set. In addition, we collected the video frames and textual information within the time window \(T_{t}=[t-12,t+12]\) as the context.
**Social-MovieNet Dataset**. We collected social relation graphs for 16 films from MovieNet, with an average length of 1.7 hours, 79 main characters, and related subtitles. To be consistent with the Social-Bilibili dataset, we group all relationships into the same five categories: working, kinship, hostile, friend, and couple. We sampled 59 characters (14462 images in total) for training and used the remaining 20 characters (5991 images) for testing. Similarly, we selected a clear image of each character as the query image, which constitutes our query set. Since MovieNet splits movies into shots, we set the time window to \(T_{t}=[t-2,t+2]\), where \(t\) is the id of the shot. All frames and subtitles in the time window \(T_{t}\) are collected as the context.
### _Experiment Settings_
Following the setting of TransReID [7], all training images are resized to 256\(\times\)128 and augmented with random horizontal flipping, padding, random cropping, and random erasing [15]. The batch size is set to 64 with 16 images per character ID for the triplet loss. Our model is optimized by SGD with momentum 0.9. During the first 120 epochs, we train only the visual backbone, with an initial learning rate of 0.08 and weight decay of 1e-4. During the last 20 epochs, DRWM and the social context-aware GCN are trained with the visual search backbone's parameters frozen; the initial learning rate is set to 1e-4 and the weight decay to 1e-5. All experiments are conducted on two GeForce RTX 3090 GPUs with the PyTorch and Torchvision toolboxes.
### _Baselines_
We compare our model with the following state-of-the-art methods to evaluate its performance:
* **HACNN**[16], which contains two branches: one learns local features and the other learns global features. An attention selection module is adopted to learn a set of complementary attention maps to optimize the representation.
* **MLFN**[5], which contains multiple factor modules and factor selection modules to capture semantic information at different levels, and dynamically selects the information extracted from the factor modules to interpret the content of each image.
* **ResNet**[17], which is widely used in computer vision tasks [12, 18, 19] and can solve the learning degradation problem in traditional CNN-based networks.
* **ResNet-mid**[4], which extracts mid-level features and fuses them with the output of the last layer as the final representation, making it capable of learning discriminative features at different semantic levels.
* **OSNet**[1], which contains a unified aggregation gate that fuses multi-scale features captured by omni-scale residual blocks, learning homogeneous and heterogeneous information to improve performance.
* **OSNet-AIN**[20], which introduces instance normalization layers to adapt the styles across domains on the basis of OSNet. In addition, an architecture search algorithm is performed to find the best locations for instance normalization layers.
* **NFormer**[21], which introduces a neighbor transformer network to eliminate outlier features and generate more reliable representations by modeling the relations between all the input images.
* **ViT**[22], which splits the image into N patches and uses a linear projection to map them into 1-D tokens. In this way, the image is converted into a sequence of tokens that can be processed by a transformer.
* **TransReID**[7], which introduces jigsaw patch module and side information embeddings to strengthen the connection between different patches and learn domain style, enhancing the robustness of the model.
* **TP**[3], the baseline proposed in [3]. Following their ideas, we use an LSTM to extract text features and compute the similarity, then combine the textual and visual similarities to obtain the final measurement. TP-ResNet and TP-ViT denote the variants with ResNet and ViT visual backbones, respectively.
* **SCPS**[3], which introduces a relation-aware framework to identify characters by using visual information and the social relations revealed by multi-modal context. SCPS-L and SCPS-P denote the linear and prudent label-update strategies, respectively.
### _Overall Performance_
In Table I, we show the overall performance of our model and the baselines. Following the re-ID community, we select mAP and CMC-Rank@1 (R1) as our evaluation metrics. In addition, because video character search serves as an upstream task for idol summarization, and enthusiastic fans usually do not want to miss any shot of their idols, we also need to evaluate the cost of retrieving the last correct image. Therefore, we introduce mINP [23] to measure a model's ability to retrieve the hardest match.
Overall, our social context-aware method performs better than traditional methods. Taking TransReID as an example, on the Social-Bilibili dataset our method reaches 82.8%, 56.3% and 94.0%, surpassing TransReID by 2.9%, 5.1% and 4.0% in terms of mAP, mINP and R1, respectively. This is because TransReID uses only visual features and treats each character as an independent instance, whereas our method integrates character relationships and co-occurrence to assist the character search. On the Social-MovieNet dataset, however, the mINP decreases by 0.1% compared with TransReID: as the mAP of the visual search backbone drops (the visual search task on Social-MovieNet is more challenging), anchor selection becomes less accurate, which makes it difficult to propagate the features of the corresponding query nodes to the gallery nodes.
In addition, we compared different methods of using social clues. Taking SCPS-P as an example, on the Social-Bilibili dataset our model outperforms it by 1.8%, 3.2% and 4.0% on mAP, mINP and R1, respectively. Unlike our method, SCPS does not consider the prior probability of the relationships between characters in different scenes, which limits its performance. We observe similar results on the Social-MovieNet dataset.
### _Ablation Study_
**Different Backbones:** In this section, we discuss the performance of our model with different visual backbones. The experimental results are shown in Table II. We observe that both CNN-based and transformer-based methods improve after being equipped with the social context-aware GCN on the Social-Bilibili dataset. In addition, the improvement of transformer-based methods is more significant than that of CNN-based methods. This is because SoCoSearch needs to select anchors before building the social context graph, and the mAP of transformer-based methods is significantly higher than that of CNN-based methods, allowing anchors to be selected more accurately. Correct anchors make the social context-aware GCN more likely to aggregate the features of the corresponding query characters into the gallery characters, thereby improving performance.
**Different Modalities:** We also compare the performance of our model with different modal information. The experimental results are shown in Table III. A model that uses neither visual nor textual context reduces to TransReID, which depends only on image features to match characters and therefore does not transfer well to the video character search task. Our model, in contrast, uses visual and textual information as clues to construct the social co-occurrence graph, which improves the similarity between corresponding gallery and query nodes. When using only visual or only textual information, the model is still effective, but its performance drops slightly compared with the model using both modalities: a single modality may introduce bias that prevents DRWM from allocating weights correctly. It is therefore necessary to integrate multimodal clues for better weight allocation.
**Different Balance Strategies:** Finally, to verify our adaptive strategy's effectiveness, we compared different methods of balancing social features. Table IV shows three strategies to control social weight: mean, random, and adaptive weight. According to the mean and random strategy results, we can observe that inappropriate social weight will damage the original performance. However, our adaptive strategy is well-adapted to different scenes. It can adjust the social weight based on multimodal clues, making the model perform better.
## V Conclusion
In this paper, we proposed a social context-aware framework named SoCoSearch for the video character search task. SoCoSearch integrates prior knowledge and character co-occurrence to construct a social context graph, from which a set of potential matching characters is obtained. It then employs a social context-aware GCN to pass the candidate nodes' features to the corresponding gallery characters, improving the similarity between query-gallery pairs and completing the matching. Extensive experiments on two real-world datasets demonstrate improvements over several baselines, confirming the superiority of our method on the video character search task.
|
2307.12531 | A Few-Degree Calorimeter for the future Electron-Ion Collider | Measuring the region $0.1 < Q^{2} < 1.0$ GeV$^{2}$ is essential to support
searches for gluon saturation at the future Electron-Ion Collider. Recent
studies have revealed that covering this region at the highest beam energies is
not feasible with current detector designs, resulting in the so-called $Q^{2}$
gap. In this work, we present a design for the Few-Degree Calorimeter (FDC),
which addresses this issue. The FDC uses SiPM-on-tile technology with tungsten
absorber and covers the range of $-4.6 < \eta < -3.6$. It offers fine
transverse and longitudinal granularity, along with excellent time resolution,
enabling standalone electron tagging. Our design represents the first concrete
solution to bridge the $Q^{2}$ gap at the EIC. | Miguel Arratia, Ryan Milton, Sebouh J. Paul, Barak Schmookler, Weibin Zhang | 2023-07-24T05:35:02Z | http://arxiv.org/abs/2307.12531v1 | # A Few-Degree Calorimeter for the future Electron-Ion Collider
###### Abstract
Measuring the region \(0.1<Q^{2}<1.0\) GeV\({}^{2}\) is essential to support searches for gluon saturation at the future Electron-Ion Collider. Recent studies have revealed that covering this region at the highest beam energies is not feasible with current detector designs, resulting in the so-called \(Q^{2}\) gap. In this work, we present a design for the Few-Degree Calorimeter (FDC), which addresses this issue. The FDC uses SiPM-on-tile technology with tungsten absorber and covers the range of \(-4.6<\eta<-3.6\). It offers fine transverse and longitudinal granularity, along with excellent time resolution, enabling standalone electron tagging. Our design represents the first concrete solution to bridge the \(Q^{2}\) gap at the EIC.
+
Footnote †: journal: NIMA
In this work, we present a design for the Few-Degree Calorimeter (FDC), which will cover the range of \(-4.6<\eta<-3.6\).
Figure 1 illustrates the coverage in the \(x\) vs. \(Q^{2}\) phase-space within the nominal range of the FDC for the top-energy settings for \(eA\) collisions. The FDC has the potential to significantly extend acceptance into a key kinematic phase-space and enable the study of the predicted gluon-saturation transition in inclusive DIS [7], as well as many other observables such as exclusive vector-meson production [8; 9].
## 2 Design Constraints and Requirements
### Location and Acceptance
The primary challenge in measuring electron scattering at small angles lies in effectively instrumenting the region near the beampipe while minimizing the material in front of the detector.
One possibility is to position the FDC behind the crystal ECAL and in front of the backward HCAL, as illustrated in Fig. 2. In the current ePIC design [2], a potential location is at \(z=-307\) cm, which would leave space for a compact calorimeter and about 10 cm gap before the HCAL.
Figure 3 shows that at \(z=-307\) cm, the electron beampipe for IP6 has a radius of 4.5 cm, while the hadron beampipe has a radius of 1.8 cm and is shifted 8.3 cm in the \(x\) direction. Assuming a 5 mm clearance to the beampipe, similar to the ZEUS BPC [5], a calorimeter with an outer perimeter of \(30\times 40\) cm\({}^{2}\) could cover the region \(-4.6<\eta<-3.6\) with non-uniform azimuthal coverage.
The shaded region on the FDC in Fig. 3 represents the area where electrons would encounter part of the crystal ECAL or its support structure before reaching the FDC1. This region, which is a few cm wide, would serve as a veto area.
Footnote 1: We approximate the path of the electron as a straight line and assume that the flat cables servicing the ECAL SiPMs can be arranged to avoid the hole area.
Figure 4 shows projections of the possible detector layout, including both the \(yz\) and \(xz\) views. The ECAL hole is currently assumed to have a height of 14.7 cm and a width of 20.5 cm,
Figure 1: Coverage in the \(x\) vs. \(Q^{2}\) phase–space for EIC top-energy \(eA\) collisions with the nominal range of the FDC (blue) spanning \(-4.6<\eta<-3.6\), and the crystal ECAL (orange) spanning \(\eta>-3.6\). The colored curves indicate the expected saturation scales, \(Q_{s}(x)\), for various nuclei [1; 10].
Figure 3: Transverse view of the FDC, assuming a location at \(z=-307\) cm, with rings of constant \(\eta\) superimposed. The electron and hadron beampipes are represented by their transverse cut, along with a \(10\times 10\) mm\({}^{2}\) grid for reference. The non-shaded area is a projection from the ECAL hole.
Figure 2: A potential detector layout showcasing the FDC, crystal ECAL, and endcap HCAL. The upper panel provides an upstream view, with the electron beam moving from left to right, while the bottom panel presents a downstream view. The sketches of the ECAL and HCAL can be found in Ref. [2]. The support structures to the floor are not shown for clarity.
taking into account the current version of the "micro-flange" (with a cam shape that is 15.2 cm wide and 11.1 cm tall) and a required clearance of 1.8 cm, 3.6 cm, 1.8 cm, and 1.7 cm between the flange and the ECAL's inner support structure on the top, right, bottom, and left sides when looking downstream [3].
### Dead Material in Front of FDC
The main challenge faced by a detector located in a high pseudorapidity range is that particles can encounter a significant amount of material as they graze the beampipe walls. While converted electrons and photons can be identified through shower-shape information, reducing the amount of material helps by improving efficiency and minimizing background.
Figure 5 illustrates the number of radiation lengths of beampipe material encountered by electrons before reaching the FDC, as a function of \(\eta\). Within most of the FDC acceptance, the total number of radiation lengths ranges from 0.5 to 1.2. Approximately half a radiation length is contributed by the flange at \(z=-120\) cm within the range of \(-4.2<\eta<-3.5\). The significant increase in the total number of \(X_{0}\) traversed at around \(\eta=-4.0\) is attributed to the use of aluminum instead of beryllium in the beampipe for \(z<-80\) cm.
The HERA experiments successfully addressed this challenge by implementing a beampipe "exit window" made of thin aluminum, which reduced the total dead material to less than 1 \(X_{0}\)[6]. Alternatively, one could use a beryllium section.
Figure 5 illustrates that incorporating a beryllium section, within the range of \(-205<z<-80\) cm, would effectively decrease the overall material budget in the \(-4.7<\eta<-4.1\) region to below 0.5 \(X_{0}\). As an alternative, implementing a 1.5 mm aluminum layer would yield a reduction to less than 1 \(X_{0}\) for \(\eta>-4.5\).
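As a rough cross-check of these numbers (not a substitute for the full Geant-based material model), the material seen by a grazing electron can be estimated by treating the pipe wall locally as a plane parallel to the beam, so that the path through a wall of thickness \(t\) is approximately \(t/\sin\theta\):

```python
import math

X0_AL, X0_BE = 8.897, 35.28  # radiation lengths of Al and Be, cm

def polar_angle(eta):
    """Polar angle (rad) from the electron-beam axis for pseudorapidity eta."""
    return 2.0 * math.atan(math.exp(-abs(eta)))

def wall_x0(thickness_cm, X0_cm, eta):
    """Radiation lengths seen crossing a wall of the given thickness at
    grazing angle theta: path length ~ t / sin(theta)."""
    return thickness_cm / (X0_cm * math.sin(polar_angle(eta)))

# A 1.5 mm Al wall crossed at eta = -4.5 amounts to ~0.76 X0, consistent
# with the "< 1 X0 for eta > -4.5" statement; the same thickness of Be
# would contribute roughly 4x less.
```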
Another effective method for mitigating the impact of dead material is to use shower shapes to tag converted electrons or photons that originated further upstream in the beampipe or flanges [5].
### Acceptance Limit
To enable accurate FDC measurements at small angles, it is crucial to minimize energy leakage into the beampipe. Figure 6 shows the distance to the beampipe surface as a function of \(\eta\). The target value of \(\eta=-4.6\) corresponds to about 18 mm from the electron beampipe at \(z=-307\) cm. Assuming a 5 mm clearance, this leaves 13 mm between the edge of the detector and \(\eta=-4.6\). For reference, the ZEUS BPC measured 95% of the energy from 5 GeV electrons at 8 mm from the detector's edge in test beams [5].
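These distances follow from straight-line geometry; a minimal sketch (assuming a beampipe coaxial with the track origin and neglecting the pipe offsets):

```python
import math

def radius_at(eta, z_cm):
    """Transverse distance from the beam axis of a straight track at
    pseudorapidity eta when it reaches longitudinal position z."""
    theta = 2.0 * math.atan(math.exp(-abs(eta)))
    return abs(z_cm) * math.tan(theta)

# At z = -307 cm, eta = -4.6 corresponds to ~6.2 cm from the axis,
# i.e. roughly 17 mm outside a 4.5 cm-radius beampipe, in line with
# the ~18 mm quoted above (the exact value depends on the offsets).
clearance_mm = 10.0 * (radius_at(-4.6, -307.0) - 4.5)
```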
### Energy Range
The main objective of the FDC is to identify and measure the energy and angle of electrons in the \(0.1<Q^{2}<1.0\) GeV\({}^{2}\)
Figure 4: A potential location for the FDC is behind the backward ECAL in the current ePIC design. This diagram illustrates the dimensions and locations of ECAL and HCAL as per Ref. [2]; the flanges and clearance dimensions as per Ref. [3], and ECAL support structure as per Ref. [11]. The blue and red lines indicate the electron and hadron beampipes, respectively.
Figure 5: Radiation lengths traversed in the electron-beampipe material at \(\phi=180^{\circ}\) as a function of \(\eta\). It includes the IP6 beampipe model (solid blue), a modified version with thinner aluminum (orange dot-dashed), and another scenario with beryllium (green dashed). The contribution from the flanges is represented by the red dotted curve.
range. Figure 7 illustrates the minimum electron energy as a function of \(\eta\) for various \(Q^{2}\) values. The minimum energy required for \(-4.6<\eta<-3.6\) falls within the range of 2-13 GeV, whereas the maximum is the beam energy, 18 GeV. Thus, the target energy range is 2-18 GeV.
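These limits follow from the inclusive-DIS relation \(Q^{2}=4E_{\mathrm{beam}}E^{\prime}\sin^{2}(\theta/2)\); a minimal sketch that reproduces the quoted 2-13 GeV range:

```python
import math

def min_electron_energy(Q2, eta, E_beam=18.0):
    """Scattered-electron energy (GeV) for a given Q^2 (GeV^2) and
    pseudorapidity, from Q^2 = 4 * E_beam * E' * sin^2(theta/2), where
    theta is the scattering angle relative to the electron beam."""
    theta = 2.0 * math.atan(math.exp(-abs(eta)))
    return Q2 / (4.0 * E_beam * math.sin(theta / 2.0) ** 2)

# For Q^2 = 0.1 GeV^2 this gives ~1.9 GeV at eta = -3.6 and ~13.7 GeV
# at eta = -4.6, matching the 2-13 GeV range quoted above.
```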
### Background Rejection
The main background for inclusive DIS measurements originates from events with small \(Q^{2}\), where the scattered electron is not detected, but an electron candidate is observed in the FDC. Figure 8 shows the expected particle spectra obtained using Pythia6 to simulate \(ep\) scattering without a \(Q^{2}\) cut and without considering detector effects. In the absence of charge tagging, both electrons and positrons from semi-leptonic decays contribute to the background. Similarly, the charged-pion background includes both charges. The photon background primarily arises from neutral-pion decays.
We estimate the background rejection power of approaches employed at HERA [12; 5] and the EIC YR. Specifically, we use the far-backward detectors as a veto for photoproduction and select events based on their energy-momentum imbalance2 with a loose selection of \(E-p_{z}>18\) GeV [1].
Footnote 2: The \(E-p_{z}\) distribution peaks at twice the electron-beam energy when the scattered electron is correctly identified.
Figure 9 illustrates the impact of these cuts on the \(e/\pi\) ratio. The far-backward veto has a modest effect due to its small acceptance [1]. Similarly, the \(E-p_{z}\) cut has a modest impact, since for background events the scattered electron has low energy. Overall, the resulting \(e/\pi\) ratio is about \(e/\pi\approx 0.4\) between 1-6 GeV, \(e/\pi\approx 1\) around 10 GeV, and increases rapidly at higher energies. Similarly, the resulting \(e/\gamma\) ratio ranges from 0.1 to 1 for \(E<6\) GeV and increases at higher energies. The background of positrons and electrons from semi-leptonic decays reaches 10% at 1 GeV, decreasing to less than 1% at 10 GeV. Since there is no magnetic field near the FDC, this background cannot be subtracted using reverse-field runs. Instead, Monte-Carlo studies would be employed to estimate and subtract it. Alternatively, electron-isolation criteria might reduce this background further.
The FDC measurements should aim for a purity greater than 90% to achieve background uncertainties at the few percent level. A stretch goal of 99% purity would result in a negligible uncertainty compared to the expected luminosity uncertainty of 1% [1]. Hence, the necessary rejection power falls within the range of 10-25 for \(\pi^{+}\) and 10-100 for \(\gamma\), with a factor of 10 higher for the stretch goal.
Figure 8: Particle spectra per fb\({}^{-1}\) of integrated luminosity, estimated using Pythia6 with no \(Q^{2}\) cut.
Figure 6: Distance from the electron beampipe as a function of \(\eta\) at various \(z\) locations, including the proposed FDC position (blue) and the location of the ECAL endcap (red).
Figure 7: Minimum energy of electrons for a given \(Q^{2}\) as a function of \(\eta\), for an electron-beam energy of 18 GeV (independent of hadron or ion-beam energy).
The standalone FDC's \(\pi^{+}\) rejection power will rely on its shower-shape capabilities. Longitudinal segmentation can play a crucial role in discriminating hadrons, as they are more likely to interact deeper within the detector, while electrons tend to exhibit showers starting primarily in the initial layers. Moreover, transverse segmentation also helps, as electrons typically produce narrower and more regular showers compared to hadronic ones. This approach is expected to work well above a few GeV.
Additional background rejection can be achieved through auxiliary systems, such as a scintillator layer for tagging MIPs to reject unconverted photons, or a timing layer to reject low-energy hadrons. Figure 10 demonstrates the potential of TOF and illustrates that a time resolution of about 50 ps would be necessary to achieve a \(2\sigma\)\(e/\pi\) separation below 1 GeV.
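As a cross-check of the 50 ps figure, the required resolution can be estimated from the \(e\)-\(\pi\) flight-time difference. The sketch below is a back-of-the-envelope calculation (assuming a straight 3.07 m flight path to \(z=-307\) cm and purely relativistic kinematics), not part of the simulation chain:

```python
import math

M_E, M_PI = 0.000511, 0.13957   # electron and pion masses in GeV
C = 299792458.0                 # speed of light in m/s

def tof_sigma_for_separation(p_gev, path_m=3.07, n_sigma=2.0):
    """Timing resolution needed for an n-sigma e/pi TOF separation.

    Uses 1/beta = E/p = sqrt(1 + (m/p)^2); the required resolution is
    the flight-time difference divided by the number of sigmas.
    """
    inv_beta = lambda m: math.sqrt(1.0 + (m / p_gev) ** 2)
    dt = (path_m / C) * (inv_beta(M_PI) - inv_beta(M_E))  # seconds
    return dt / n_sigma

print(tof_sigma_for_separation(1.0) * 1e12)  # ~50 ps at 1 GeV
```

At 1 GeV this yields roughly 50 ps, consistent with Figure 10; below 1 GeV the requirement relaxes quickly as the flight-time difference grows.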
## 3 Design
The FDC design draws inspiration from the ZEUS BPC [5] and H1 VLQ [6] calorimeters, incorporating modern enhancements in optical readout and photosensors. It leverages recent advancements in high-granularity calorimetry [13], led by the CALICE collaboration. These developments offer the potential for substantial improvements in granularity at a reasonable cost for a small detector such as the FDC.
The CALICE collaboration has tested a scintillator-tungsten ECAL with wavelength-shifting fibers coupled to SiPMs [14]. The tested strips had widths of 10 mm. By applying a "split-strip algorithm" [15], this prototype can achieve an effective granularity close to 10\(\times\)10 mm\({}^{2}\). The test-beam data resulted in an energy resolution of \(12\%/\sqrt{E}\oplus 1.2\%\). More recently, this design has been superseded by a "SiPM-on-tile" approach, where the SiPM is air-coupled to the scintillator strip [16; 17]3.
Footnote 3: This approach is also used in the ePIC calorimeter insert [18; 19].
Table 1 summarizes our target design in comparison with the ZEUS BPC and H1 VLQ designs.
Figure 11 shows the FDC design which includes alternating layers of vertical and horizontal scintillators that are wrapped in reflective foil and read out using SiPMs (HPK 14160-1315PS). The scintillator strips measure \(50\times 10\times 2\) mm (length, width, thickness) and feature a dimple at the center for air-coupling with the SiPM.
Each tungsten layer is 3.5 mm thick (1 \(X_{0}\) and \(0.035\lambda\)). The FDC consists of 20 such layers, for a total of 20 \(X_{0}\). Up to about \(\eta=-4.1\), the detector has full coverage in \(\phi\), whereas at more negative \(\eta\), the hole for the hadronic beampipe removes up to \(60^{\circ}\) of acceptance.
The FDC is divided into two parts, allowing one half to be retracted to the left and the other half to the right for maintenance purposes of the SiPM boards, particularly in the case of unexpected radiation loads. However, it is worth noting that SiPM annealing should not be necessary given that at the FDC location, the neutron flux is moderate and reaches approximately \(10^{9}\) 1-MeV equivalent neutrons per cm\({}^{2}\) for a year of running at top luminosity [20].
\begin{table}
\begin{tabular}{l l l l} \hline & EIC FDC & ZEUS BPC & H1 VLQ \\ \hline Depth & 20 X\({}_{0}\) & 24 X\({}_{0}\) & 16.7 X\({}_{0}\) \\ W/Sc thickness & 3.5/2 mm & 3.5/2.6 mm & 2.5/3 mm \\ Moliere Radius & 15 mm & 13 mm & 15 mm \\ Optical readout & SiPM-on-tile & WLS bar & WLS bar \\ & & + PMT & +PIN \\ Trans. granularity & 10\(\times\)50 mm\({}^{2}\) & 7.9\(\times\)150 mm\({}^{2}\) & 5\(\times\)120 mm\({}^{2}\) \\ Long. granularity & every strip & none & none \\ Channels & 4500 & 31 & 336 \\ Readout & HGROC & FADC/TDC & ASIC \\ Position res. & 3.6 mm\(/\sqrt{E}\) & 2.2 mm\(/\sqrt{E}\) & 2 mm\(/\sqrt{E}\) \\ Energy res. & \(\frac{17\%}{\sqrt{E}}\oplus 2\%\) & \(\frac{17\%}{\sqrt{E}}\oplus 2\%\) & \(\frac{13\%}{\sqrt{E}}\oplus 3\%\) \\ Time resolution & \(<\)50 ps & 400 ps & \\ \hline \end{tabular}
\end{table}
Table 1: Summary description of our proposed EIC FDC, the ZEUS BPC [5], and the H1 VLQ [6].
Figure 10: Required time resolution to discriminate between electrons and charged pions using TOF as a function of the particle’s momentum. These requirements are based on a location at \(z=-307\) cm.
Figure 9: Ratio of scattered electrons to charged pions as a function of energy.
## 4 Simulation
We used the DD4HEP framework [21] to run Geant4 [22] simulations of electrons generated with a uniform azimuthal angle at various \(\eta\) points. The simulation does not include any dead material, the effects of which we leave for future work.
### Energy Resolution
Figure 12 shows the energy resolution, which can be parameterized as \(17\%/\sqrt{E}\oplus 2\%\), and is consistent with the ZEUS BPC data [5]. We also compare it to CALICE data, which exhibits improved performance at the expense of a larger Moliere radius.
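For quick reference, the parameterization can be evaluated directly; the sketch below just combines the stochastic and constant terms in quadrature, using the fitted values quoted above:

```python
import math

def fdc_energy_resolution(E_gev, stochastic=0.17, constant=0.02):
    """Relative resolution sigma_E/E = 17%/sqrt(E) (+) 2%, summed in quadrature."""
    return math.hypot(stochastic / math.sqrt(E_gev), constant)

for E in (1.0, 4.0, 16.0):
    print(f"{E:5.1f} GeV: sigma_E/E = {fdc_energy_resolution(E):.1%}")
```

The constant term dominates only well above 10 GeV, so the stochastic term controls the resolution over most of the FDC's energy range.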
Figure 13 shows the energy resolution and scale as a function of \(\eta\). The performance remains relatively stable for \(\eta>-4.6\). For particles that hit near the edge of the detector's upstream face (\(\eta\approx-4.8\)), the energy-scale offset is \(-20\%\) and the resolution is about \(12\%\). One would naively expect half the shower to fall outside the calorimeter (a loss of \(50\%\)), but because the hole is a cylinder, the center of the shower moves further away from the hole as it develops through the calorimeter.
### Position Resolution
The polar-angle resolution is determined by considering the position resolution of the FDC and the resolution of the vertex position. The electron vertex position can be precisely determined by tracking other particles in the event using the main detectors, as was done at HERA [5; 6]. Thus, only the FDC position resolution is relevant.
We reconstructed the \(x\) and \(y\) values following the method described in Ref. [23]:
\[x=\frac{\sum\limits_{i\in\text{v layers}}w_{X,i}\,x_{i}}{\sum\limits_{i\in\text{v layers}}w_{X,i}},\qquad y=\frac{\sum\limits_{i\in\text{h layers}}w_{Y,i}\,y_{i}}{\sum\limits_{i\in\text{h layers}}w_{Y,i}} \tag{1}\]
where the weights \(w_{X,i}\) and \(w_{Y,i}\) are determined by
\[w_{X,i}=\max\left(0,\,w_{0}+\log\frac{E_{i}}{\sum\limits_{j\in\text{v layers}}E_{j}}\right) \tag{2}\]
\[w_{Y,i}=\max\left(0,\,w_{0}+\log\frac{E_{i}}{\sum\limits_{j\in\text{h layers}}E_{j}}\right) \tag{3}\]
Figure 11: Top: exploded view of a double layer of the FDC. Middle: dimensions of a single scintillator strip. Right: Dimensions of the FDC.
Figure 12: FDC energy resolution at \(\eta=-4.0\) compared to ZEUS BPC [5] (green triangles) and CALICE [15] (red squares) data.
Figure 13: Dependence of the energy resolution on \(\eta\). Vertical lines indicate different distances from the edge of the hole.
The "h layers" sums are over layers with horizontally aligned strips and the "v layers" sums are over layers with vertically aligned strips. The cutoff parameter \(w_{0}\) is set to 4.0.
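A minimal implementation of this log-weighted centroid (Eqs. 1-3) for one strip orientation might look as follows; the strip positions and energies below are toy values, not simulated FDC data:

```python
import math

def log_weighted_position(positions, energies, w0=4.0):
    """Centroid with logarithmic energy weights (Eqs. 1-3).

    positions, energies: strip-center coordinates and deposited energies
    for one orientation (x from vertical strips, y from horizontal ones).
    """
    total = sum(energies)
    weights = [max(0.0, w0 + math.log(e / total)) for e in energies]
    wsum = sum(weights)
    if wsum == 0.0:
        raise ValueError("w0 too small: every weight was clipped to zero")
    return sum(w * p for w, p in zip(weights, positions)) / wsum

# A dominant first strip pulls the centroid toward it, while the log
# damps the influence of small tail deposits:
print(log_weighted_position([0.0, 10.0, 20.0], [8.0, 1.0, 1.0]))
```

The cutoff \(w_{0}\) sets how small a fractional deposit may be before it is dropped entirely; larger \(w_{0}\) includes more of the shower tail.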
Figure 14 shows the position resolution as a function of energy. For energies greater than 1 GeV, the position resolution is better than the strip width divided by \(\sqrt{12}\). The resolution we obtained is poorer than the ZEUS BPC resolution. This difference can be partially explained by the smaller strip width (7.9 mm vs 10 mm), but it may also include components from algorithm tuning.
### Kinematic Variable Reconstruction
The resolution in Bjorken \(x\) and \(Q^{2}\) can be derived from those of the electron energy \(\delta E_{e}^{\prime}\) and of the electron polar angle \(\delta\theta_{e}\) (using the small-angle approximation):
\[\frac{\delta Q^{2}}{Q^{2}}\approx\frac{\delta E_{e}^{\prime}}{E_{e}^{\prime}}\oplus\frac{2}{\pi-\theta_{e}}\,\delta\theta_{e}, \tag{4}\]
\[\frac{\delta x}{x}\approx\frac{1}{y}\frac{\delta E_{e}^{\prime}}{E_{e}^{\prime}}\oplus\left(\frac{x}{E_{e}/E_{p}}-1\right)\frac{2}{\pi-\theta_{e}}\,\delta\theta_{e} \tag{5}\]
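These propagation formulas can be evaluated numerically. In the sketch below every input (energy and angular resolutions, scattering angle, kinematics) is an illustrative placeholder rather than a simulated detector value; the beam energies correspond to the 18\(\times\)275 GeV configuration:

```python
import math

def q2_and_x_resolution(dE_rel, dtheta, theta, x, y, Ee=18.0, Ep=275.0):
    """Relative Q^2 and x resolutions from Eqs. (4)-(5), small-angle form."""
    ang = 2.0 / (math.pi - theta) * dtheta          # common angular term
    dq2_rel = math.hypot(dE_rel, ang)               # "(+)" = quadrature sum
    dx_rel = math.hypot(dE_rel / y, (x / (Ee / Ep) - 1.0) * ang)
    return dq2_rel, dx_rel

# Example: 2% energy resolution, 1 mrad angular resolution, scattering
# 30 mrad away from the electron-beam axis, x = 1e-4, y = 0.3.
dq2, dx = q2_and_x_resolution(0.02, 1e-3, math.pi - 0.03, 1e-4, 0.3)
print(f"dQ2/Q2 = {dq2:.1%}, dx/x = {dx:.1%}")
```

Note the \(1/y\) factor in the \(x\) resolution, which drives the strong \(y\) dependence discussed below.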
In the non-divergent region (_i.e._\(y>0.1\)), the \(Q^{2}\) resolution ranges from 4% to 14% depending on kinematics, whereas the \(x\) resolution ranges from 10% to 50% with a strong \(y\) dependence. To quantify these resolutions in the context of inclusive DIS measurements, we followed the EIC YR approach and calculated the corresponding purity and stability values. In this context, purity is determined by calculating the fraction of events reconstructed in a specific bin that were also generated in that bin (i.e., \(P=N_{\rm(rec,gen)}/N_{\rm rec}\)). Stability is calculated as the fraction of events generated in a specific bin that were also reconstructed in that bin (i.e., \(S=N_{\rm(rec,gen)}/N_{\rm gen}\)). Here, \(N_{\rm(rec,gen)}\) represents the number of events where the electron is both generated and reconstructed in the same bin. For the generated events, we used the same Pythia6 simulation as described in Section 2.5.
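The purity and stability definitions translate directly into code; the sketch below computes per-bin \(P\) and \(S\) from paired generator-level and reconstructed bin indices (a toy event sample, not the Pythia6 one):

```python
from collections import Counter

def purity_stability(gen_bins, rec_bins):
    """Per-bin purity P = N(rec,gen)/N_rec and stability S = N(rec,gen)/N_gen.

    gen_bins, rec_bins: bin index of each event at generator and
    reconstruction level, in the same event order.
    """
    n_gen, n_rec = Counter(gen_bins), Counter(rec_bins)
    n_match = Counter(g for g, r in zip(gen_bins, rec_bins) if g == r)
    bins = set(n_gen) | set(n_rec)
    purity = {b: n_match[b] / n_rec[b] for b in bins if n_rec[b]}
    stability = {b: n_match[b] / n_gen[b] for b in bins if n_gen[b]}
    return purity, stability

P, S = purity_stability([0, 0, 1, 1], [0, 1, 1, 1])
print(P, S)  # bin 1: P = 2/3 (one event migrated in); bin 0: S = 1/2
```

In practice the bin index would come from digitizing \((x, Q^{2})\) on the 5-bins-per-decade grid.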
Figure 15 shows the resulting purity and stability plot for events with \(-4.6<\eta<-3.6\). The plot has 5 bins per decade in both \(x\) and \(Q^{2}\), similar to the EIC Yellow Report [1]. Purity and stability values above 50% are observed for the phase-space covered with \(y>0.1\), with some degradation at lower values, as expected when using the electron reconstruction method exclusively. It is anticipated that the performance for \(0.01<y<0.1\) will improve by combining the electron and hadronic methods, potentially using machine-learning techniques [24].
A potential issue that can impact purity and stability is the electron beam's angular divergence, which can reach up to 200 \(\mu\)rad [25]. Although this limits the kinematic reconstruction for electrons scattered at angles less than 10 mrad [25], the reconstruction of inclusive kinematic variables will not be compromised since the FDC acceptance begins at 20 mrad.
Overall, these studies demonstrate that the FDC design can provide sufficient resolution for measuring kinematic variables in the low \(x\), low \(Q^{2}\) region, thereby bridging the \(Q^{2}\) gap.
Figure 14: Position resolution of the FDC detector, compared to the resolution determined via simulations for the ZEUS BPC [5] (green triangles).
Figure 15: Purity (left) and Stability (right) for reconstruction of \(x\) and \(Q^{2}\) using the FDC and the electron method, for \(ep\) collisions with 18\(\times\) 275 GeV configuration.
### Shower-shape Examples
Figure 16 shows 3D and projection views of example showers. The three scenarios depicted are: an electron reaching the FDC with no material in front of it (left), a photon that initiated showering in the beampipe (middle), and a \(\pi^{-}\) (right). Among the three cases, the electron without pre-showering produces the narrowest shower, while the pre-showering photon and the \(\pi^{-}\) generate more irregular showers.
In terms of converted-electron tagging, the showers are expected to exhibit broader transverse profiles and anomalous longitudinal development. For hadron tagging, the fine segmentation will be primarily used to identify the starting point of the shower, which is more likely to be located at a deeper position compared to electrons. Moreover, the hadron showers will also have a different time development, with hits at later times compared to electron showers.
This highlights the potential of 5D shower-shape analysis for background rejection. In our future work, we intend to explore this potential by applying machine-learning techniques to tag electrons/photons, hadron showers, and beam-gas background. A promising emerging approach that can handle the complex geometry of the FDC is point-cloud networks [26].
## 5 Summary and Conclusions
We have outlined the design of a small electromagnetic calorimeter, the Few-Degree Calorimeter (FDC), which is designed to cover the range of \(-4.6<\eta<-3.6\). The primary objective of this detector is to tag electrons within the \(Q^{2}\) range of 0.1 to 1.0 GeV\({}^{2}\), thus enabling future research on the transition to perturbative QCD and the gluon-saturation regime.
The FDC design we present here incorporates the latest advancements in SiPM-on-tile calorimetry to create a modern and improved version of the ZEUS Beam Pipe Calorimeter and H1 Very Low \(Q^{2}\) calorimeter. The incorporation of high-granularity 5D shower measurements (position, time, and energy) offered by this technology holds great potential for background tagging.
In conclusion, this document presents the first design that has the potential to close the EIC \(Q^{2}\) gap while maintaining a compact and cost-effective solution. Considering the larger crossing-angle envisioned for the second-interaction region at the EIC, which results in a larger hole in the crystal ECAL acceptance, this design may offer further opportunities for optimization for the EIC Detector 2.
## Acknowledgements
We would like to extend our sincere appreciation to the members of the California EIC consortium for their valuable feedback on the design and physics that motivated the FDC, with special recognition to Oleg Tsai, Farid Salazar, and Zhongbo Kang. Additionally, we are grateful to Elke-Caroline Aschenauer for her guidance regarding the current acceptance of the crystal ECAL in ePIC.
This work was supported by the MRPI program of the University of California Office of the President, award number 00010100. This work was also supported by DOE grant award number DE-SC0022324. S.P. also acknowledges support from the Jefferson Lab EIC Center Fellowship. M.A. acknowledges support through DOE Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates the Thomas Jefferson National Accelerator Facility.
---

# Mixed-model Sequencing with Stochastic Failures: A Case Study for Automobile Industry

I. Ozan Yilmazlar, Mary E. Kurz, Hamed Rahimian

arXiv:2306.12618v1 (2023-06-22), http://arxiv.org/abs/2306.12618v1
###### Abstract
In the automotive industry, the sequence of vehicles to be produced is determined ahead of the production day. However, some vehicles, referred to as failed vehicles, cannot be produced due to reasons such as material shortage or paint failure. These vehicles are pulled out of the sequence, and the vehicles in the succeeding positions are moved forward, potentially resulting in challenges for logistics or other scheduling concerns. This paper proposes a two-stage stochastic program for the mixed-model sequencing (MMS) problem with stochastic product failures, and provides improvements to the second-stage problem. To tackle the exponential number of scenarios, we employ the sample average approximation approach and two solution methodologies. First, we develop an L-shaped decomposition-based algorithm, where the computational experiments show its superiority over solving the deterministic equivalent formulation with an off-the-shelf solver. Second, we provide a tabu search algorithm in addition to a greedy heuristic to tackle case study instances inspired by our car manufacturer partner. Numerical experiments show that the proposed solution methodologies generate high-quality solutions by utilizing a sample of scenarios. Particularly, a robust sequence that is generated by considering car failures can decrease the expected work overload by more than 20% for both small- and large-sized instances.
keywords: Stochastic programming, Mixed-model sequencing, Branch-and-Benders-cut, Heuristics, Tabu Search
## 1 Introduction
Mixed-model assembly lines (MMAL) are capable of producing several configurations (models) of a product. The number of models increases drastically as the complexity and customizability of the product expand. The number of theoretical configurations of vehicles from a German car manufacturer is up to \(10^{24}\) (Pil and Holweg, 2004). Different configurations require distinct tasks at each station, which induces high variation in the processing times, though each station has a fixed maximum time available. In fact, station workload is distributed through line balancing such that each station's _average_ workload conforms to this maximum time. When a station has more work allocated to it for a particular model (_work overload_), interventions are
needed to maintain the flow of products in the assembly line, thereby avoiding line stoppages. Interventions can be considered in advance, through sequencing decisions, or at the time of disruption, through utility workers. When these interventions fail, the line may stop production until the situation is resolved. Thus, it is essential to distribute the high work-load models along the planning horizon to avoid line stoppages.
The mixed-model sequencing (MMS) problem sequences products in an MMAL to minimize work overload at the stations. Data from our car manufacturing partner shows that the variation in processing times is high when customization appears on a main part, e.g., engine type: electric, diesel, gasoline, or hybrid. Car manufacturers have adapted their assembly lines for the mixed-model production of vehicles with diesel and gasoline engines. However, the assembly of electric vehicles (EV) in the same line has brought new challenges, while not eliminating the production of vehicles with diesel or gasoline engines. Unlike other vehicles, electric and hybrid vehicles have large batteries which causes a huge difference in tasks, e.g., at the station where the battery is loaded. As the proportion of electric and hybrid vehicles grows in a manufacturer's mix, the impact of supply problems increases. Sometimes, a part is delayed from a supplier, so a designed sequence of vehicles will have a missing vehicle. Even if this vehicle has a gasoline or diesel engine, its absence may impact the battery-intensive stations. As a manufacturer's mix of vehicles grows more specialized with more time-consuming content for a large subset, without alternative tasks for the vehicles without the specialized content, the impact of missing vehicles on a carefully designed sequence grows.
Some vehicles in a production sequence may not be ready for assembly on the production day for various reasons, such as the body not being ready, paint quality issues, or material shortage. Such vehicles, referred to as _failed vehicles_, need to be pulled out of the sequence. The resulting gap is closed by moving the succeeding vehicles forward. This process and the resulting additional work overload are illustrated in Figure 1 for a battery loading station. The processing time at this station is longer than the cycle time for EVs and shorter than the cycle time for non-EVs; assume that back-to-back EVs cause work overload. We schedule five vehicles, two electric and three non-electric. One of the non-EVs (third in both scheduled sequences) has a high failure probability. With no failures, the two initial sequences, while different, both lead to no work overload. If the third vehicle fails, the consequences for the resulting sequences differ. In the non-robust sequence, removing the failed non-EV results in two EVs in a row, which causes a work overload. However, the robust sequence, which is composed of the same vehicles in a different order, can withstand the failure of the third vehicle without causing a work overload. We refer to this sequence as "robust" because no work overload occurs when the vehicle with high failure probability is pulled out of the sequence.
In this study, we generate robust sequences that consider the vehicles' potential failures to reduce additional work overloads. We focus on the final assembly line, assuming that vehicles follow the same sequence as they arrive from the paint shop and resequencing is not an option; when a vehicle is removed from the sequence, the following vehicles close the gap. The contributions of this study are as follows:
* We provide a two-stage stochastic program for an MMS problem with stochastic product
failures, and we provide improvements to the second-stage problem. To the best of our knowledge, this is the first study that considers stochastic failures of products in MMS.
* We adopt the sample average approximation (SAA) approach to tackle the exponential number of scenarios. The numerical experiments show that we can generate robust solutions with an optimality gap of less than 1% and 5% by utilizing a sample of scenarios, for the small-sized and industry-sized instances, respectively.
* We develop an L-shaped decomposition-based algorithm to solve small-sized instances. The numerical results show that the L-shaped algorithm outperforms an off-the-shelf solver, solving the deterministic equivalent formulation (DEF), in terms of both quality and computational time.
* To solve industry-sized instances, we propose a greedy heuristic and a tabu search (TS) algorithm that is accelerated in convergence with problem-specific tabu rules, and in objective reevaluation each time a new solution is visited.
* We conduct a case study with the data inspired by our car manufacturer industrial partner. The numerical experiments show that we can reduce the work overload by more than 20% by considering stochastic car failures and solving the corresponding problem with the proposed solution methodologies.
The remainder of this paper is structured as follows. MMS related literature is reviewed in Section 2. The tackled problem is defined, and the mathematical formulation of the proposed problem is presented in Section 3. Exact and heuristic solution approaches in addition to the SAA approach are presented in Section 4. In Section 5, we execute numerical experiments to analyze the performance of proposed solution methodologies and present the results. Finally, a summary of our findings and a discussion about future work are given in Section 6.
## 2 Related Work
Manufacturers use various design configurations of MMALs to maximize their revenue. The optimization process of assembly line sequencing takes these design configurations into account. The first paper that articulates the MMS was presented by Kilbridge (1963). Researchers have since tackled the MMS with varied characteristics, which required a systematic categorization of the components and operating systems of MMS problems. Dar-El (1978) categorizes MMALs into four categories based on the main characteristics of assembly lines: product transfer system, product mobility on the conveyor, accessibility among adjacent stations, and the attribute of the launching period. An analytic framework for the categorization of Dar-El (1978) is given by Bard et al. (1992). Later, a survey is presented by Boysen et al. (2009), where they define a tuple notation for sequencing problems based on more detailed characteristics of assembly lines, including work overload management, processing time, concurrent work, line layout, and objective, in addition to the main characteristics.
Figure 1: Illustration of a non-robust and robust sequence to stochastic failures
Several objectives are employed to evaluate the performance of the assembly line sequence. The most common objective in the literature, also adopted in this study, is minimizing the total work overload duration, proposed by Yano and Rachamadugu (1991). Tsai (1995) describes hiring utility workers to execute tasks so that production delays are avoided, which leads to the objective of minimizing the total utility work duration. Fattahi and Salehi (2009) minimize the total idle time in addition to utility work. Boysen et al. (2011) propose minimizing the number of utility workers instead of the total utility work duration in order to improve utility worker management.
A few exact solution methods have been proposed in the literature to solve the deterministic MMS problem. Scholl et al. (1998) propose a decomposition approach that uses patterns of different sequences, called pattern-based vocabulary building. They use a column generation method to solve the linear relaxation of the formulation, and an informed tabu search is adapted to determine the pattern sequence. Bolat (2003) proposes a job selection problem that is solved prior to the sequencing problem. They employ a due date-oriented cost function as an objective, and the work overload is restricted as a hard constraint. They develop a branch-and-bound (B&B) algorithm, improved with dominance criteria (procedures for comparing sequences based on quality), which can select 50 jobs out of 100 in seconds. Kim and Jeong (2007) present a B&B algorithm to solve the MMS problem with sequence-dependent setup times. They calculate a lower bound on the work overload of the current sequence and the minimum possible work overload of the unconsidered configurations. The model can solve instances with up to five stations and 30 configurations. Boysen et al. (2011) integrate a skip policy for utility work into the MMS and formulate the new problem as a mixed-integer linear program (MILP). They propose a B&B algorithm with improved lower bound calculations and dominance rules.
There are several heuristic and meta-heuristic approaches related to the MMS. The most popular is the genetic algorithm (GA), which has been adapted in several ways to solve MMS problems (Hyun et al., 1998; Kim et al., 1996; Ponnambalam et al., 2003; Leu et al., 1996; Celano et al., 1999; Akgunduz and Tunali, 2010; Zhang et al., 2020). Akgunduz and Tunali (2011) review GA-based MMS solution approaches. Other popular evolutionary algorithms that have been used to solve the MMS include ant colony optimization (Akpunar et al., 2013; Zhu and Zhang, 2011; Kucukkoc and Zhang, 2014), particle swarm optimization (Rahimi-Vahed et al., 2007a; Wei-qi et al., 2011), the scatter search algorithm (Rahimi-Vahed et al., 2007b; Cano-Belman et al., 2010; Liu et al., 2014), and the imperialist competitive algorithm (Zandieh and Moradi, 2019).
While the majority of the MMS literature focuses on models with deterministic parameters, there are a few studies that consider stochastic parameters on either processing times or demand. The seminal study with stochastic processing times is proposed by Zhao et al. (2007). They provide a Markov chain based approach that minimizes the expected work overload duration.
This approximation generates sub-intervals of the possible positions of workers within the stations; the expected work overload is then calculated based on the lower and upper bounds of the intervals. Mosadegh et al. (2017) propose a heuristic approach, inspired by Dijkstra's algorithm, to tackle a single-station MMS with stochastic processing times. They formulate the problem as a shortest path problem. Mosadegh et al. (2020) formulate a multiple-station MMS with stochastic processing times as an MILP. They provide a Q-learning-based simulated annealing (SA) heuristic to solve industry-sized problems and show that the expected work overload is decreased compared to the deterministic problem. Brammer et al. (2022) propose a reinforcement learning approach to solve the MMS by negatively rewarding work overload occurrences. They show that the proposed approach provides at least 7% better solutions than SA and GA. Moreover, stochastic parameters have also been considered in integrated mixed-model balancing and sequencing problems: in processing times (Agrawal and Tiwari, 2008; Ozcan et al., 2011; Dong et al., 2014) and in demand (Sikora, 2021).
Although numerous studies have been conducted on sequencing problems, only Hottenrott et al. (2021) consider product failures in sequence planning, albeit within the car sequencing framework. To the best of our knowledge, there is no research available that establishes sequences in the MMS structure that are robust to the work overloads caused by product failures.
## 3 Problem Statement and Mathematical Formulation
In Section 3.1, we define the MMS with stochastic failures and illustrate the problem with an example. Then, in Section 3.2, we provide a two-stage stochastic program for our problem.
### Problem Statement
In an MMAL, a set of workstations is connected by a conveyor belt. Products, launched at a fixed rate, move along the belt at a constant speed; time is measured in time units (TU). The duration between two consecutive launches is called the cycle time \(c\), and we define the station length \(l_{k}\geq c\), in TU, as the total time the workpiece requires to cross station \(k\in K\). Operators work on the assigned tasks and must finish their job within the station length; otherwise, the line is stopped or a so-called utility worker takes over the remaining job. The excess work is called _work overload_. The sequence of products therefore has a great impact on the efficiency of the assembly line. MMS determines the sequence of a given set of products \(V\) by assigning each product \(v\in V\) to one of the positions \(t\in T\).
Formulating the MMS problem based on vehicle configurations instead of individual vehicles is common (Bolat and Yano, 1992; Bard et al., 1992; Scholl et al., 1998); however, automobile manufacturers offer billions of combinations of options (Pil and Holweg, 2004). When this high level of customization is combined with the short lead time promised to customers, each vehicle produced in a planning horizon becomes unique. In this study, the vehicles are sequenced instead of configurations, since the case study focuses on an automobile manufacturing facility with a high level of customization. To do so, we define a binary decision variable \(x_{vt}\), which takes the value 1 if vehicle \(v\in V\) is assigned to position \(t\in T\). The processing time of vehicle \(v\in V\) at station \(k\in K\) is denoted by \(p_{kv}\). The starting position and work overload of the vehicle at position \(t\in T\) for station \(k\in K\) are represented by \(z_{kt}\) and \(w_{kt}\), respectively. Table 1 lists all
the parameters and decision variables used in the proposed model. While second-stage decision variables are scenario-dependent, we drop such dependency for notation simplicity throughout the paper unless it is needed explicitly for clarity.
In this paper, we adopt the side-by-side policy as a work overload handling procedure. A utility worker is assumed to work with the regular worker side-by-side, enabling work to be completed within the station borders. The objective of MMS with the side-by-side policy is to minimize the total duration of work overloads, i.e., the total duration of the remaining tasks that cannot be completed within the station borders. The regular operator stops working on the piece at the station border so they can start working on the next workpiece at position \(l_{k}-c\) in the same station.
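This policy can be written as a short recursion over sequence positions. The sketch below is a single-station toy model (cycle time 7 TU, station length 10 TU, and processing times of 9 and 5 TU, matching the illustrative station of Figure 2); it also shows how pulling a failed vehicle out of a sequence can create work overload, as in Figure 1:

```python
def total_work_overload(seq, proc, c=7.0, l=10.0):
    """Single-station side-by-side policy.

    The operator starts vehicle t at offset z, a utility worker absorbs
    any work beyond the station border l, and the next vehicle arrives
    one cycle time c after the current one.
    """
    z, overload = 0.0, 0.0
    for v in seq:
        finish = z + proc[v]
        overload += max(0.0, finish - l)   # excess handed to the utility worker
        z = max(0.0, min(finish, l) - c)   # start offset for the next vehicle
    return overload

proc = {"A": 9.0, "B": 5.0}  # A requires the long-task option, B does not
print(total_work_overload(["A", "B", "B", "A", "A"], proc))  # 1.0 TU, as in Figure 2

# Robustness to a failure (cf. Figure 1): both sequences are
# overload-free, but only the second survives losing its third vehicle.
for seq in (["B", "A", "B", "A", "B"], ["A", "B", "B", "A", "B"]):
    failed = seq[:2] + seq[3:]  # third vehicle pulled out, gap closed
    print(total_work_overload(seq, proc), total_work_overload(failed, proc))
```

In the full two-stage model this evaluation is repeated for every station and every failure scenario.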
We illustrate an MMAL with the side-by-side policy in Figure 2, which represents a station that processes five vehicles. The left and right vertical bold lines represent the left and right borders of the station. Assume that the cycle time \(c\) is \(7\) and the station length \(l_{k}\) is \(10\) TU, i.e., it takes \(10\) TU for the conveyor to flow through the station. This specific station processes two different vehicle configurations: A and B. While configuration A requires option \(1\), configuration B does not, so the processing times of configurations A and B are \(9\) and \(5\) TU, respectively. Figure 2 illustrates the first five vehicles in the sequence, which is [A, B, B, A, A]. The diagonal straight lines represent the position of the vehicle in the station. The worker always starts working on the first vehicle at position zero, the left border of the station. The second vehicle is already at position \(2=9-c\) when the worker completes working on the first vehicle. Note that the next vehicle enters the station borders a cycle time after the current vehicle enters the station borders. The tasks of the second vehicle are completed when the third vehicle has just entered the station. The worker has \(2\) TU of idle time when the tasks of the third vehicle are completed. The worker starts working on the fifth vehicle at position \(2\), and the processing time of the fifth vehicle is \(9\) TU, which causes a work overload of \(1\) TU, \(2+9-l_{k}=1\). The job of processing these five vehicles could not be completed within the station borders, but with the help of a utility worker, we assume that the job is completed at the station border at the cost of \(1\) TU of work overload. The worker will continue working on the sixth vehicle at position
\begin{table}
\begin{tabular}{l l} \hline
**Sets and Index** & \\ \hline \(V,v\) & Vehicles \\ \(K,k\) & Stations \\ \(T,t\) & Positions \\ \(\Omega,\omega\) & Scenarios \\
**Parameters** & \\ \(p_{kv}\) & The processing time of vehicle \(v\in V\) at station \(k\in K\) \\ \(l_{k}\) & The length of station \(k\in K\) \\ \(c\) & The cycle time \\ \(f_{v}\) & The failure probability of vehicle \(v\in V\) \\ \(e_{v\omega}\) & \(1\) if vehicle \(v\in V\) exists at scenario \(\omega\in\Omega\), \(0\) otherwise \\
**First-Stage Decision Variables** & \\ \(x_{vt}\) & \(1\) if vehicle \(v\in V\) is assigned to position \(t\in T\), \(0\) otherwise \\
**Second-Stage Decision Variables** & \\ \hline \(w_{kt}\) & The work overload at station \(k\in K\) at position \(t\in T\) \\ \(z_{kt}\) & Starting position of the operator at station \(k\in K\) at the beginning of position \(t\in T\) \\ \(b_{kt}\) & The processing time at station \(k\in K\) at position \(t\in T\) \\ \end{tabular}
\end{table}
Table 1: List of parameters and decision variables used in the model
\(3=l_{k}-c\), and this process continues.
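The station dynamics of this example can be reproduced with a short simulation. The sketch below is an illustration only (the function name is ours, not part of the model); it applies the side-by-side rule at each position: work past the right border becomes work overload, and a negative start for the next workpiece becomes idle time. The regenerative end-of-horizon rule is omitted here, since in the example the horizon continues with a sixth vehicle.

```python
def simulate_station(proc, l, c):
    """Simulate one station under the side-by-side policy.

    proc: processing times in sequence order; l: station length; c: cycle time.
    Returns the work overload and idle time incurred at each position.
    """
    z, overloads, idles = 0.0, [], []
    for b in proc:
        w = max(0.0, z + b - l)        # work past the right border
        overloads.append(w)
        nxt = z + b - c - w            # start of the next workpiece
        idles.append(max(0.0, -nxt))   # operator waits if it is negative
        z = max(0.0, nxt)
    return overloads, idles

# Figure 2: sequence [A, B, B, A, A], p(A) = 9, p(B) = 5, l = 10, c = 7
overloads, idles = simulate_station([9, 5, 5, 9, 9], l=10, c=7)
```

For the sequence of Figure 2 this yields a single \(1\) TU overload at the fifth position, \(2\) TU of idle time at the third, and a next start of \(3=l_{k}-c\), matching the walkthrough above.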
We note that the vehicles usually go through the body shop and paint shop in the scheduled sequence before the assembly process. Hence, failed vehicles must be pulled out of the sequence, and their positions cannot be compensated, i.e., resequencing is not an option. It is assumed that each vehicle \(v\in V\) (with its unique mix of configurations) has a failure probability \(f_{v}\), and that failures are independent of each other. The failures are related to the specifications of the vehicle, e.g., the increased production rate of EVs may induce higher failure rates, or painting a vehicle in a specific color may be more problematic. In our numerical experiments in Section 5, we estimate the failure probabilities from historical data through feature analysis and logistic regression.
### Mathematical Model Under Uncertainty
In this section, first, we provide a two-stage stochastic program for our problem. Next, we discuss improvements of the proposed formulation.
Motivated by the dynamics of an MMAL, we formulate our problem as a two-stage stochastic program. The sequence of vehicles is decided in the first stage (here-and-now), before the car failures are realized. Once the car failures are realized, the work overload is minimized by determining the second-stage decisions (wait-and-see) given the sequence. First-stage decisions are determined by assigning each vehicle to a position such that the expected work overload in the second stage is minimized.
To formulate the problem, suppose that various realizations of the car failures are represented by a collection of finite scenarios \(\Omega\). As each vehicle either exists or fails at a scenario, we have a total of \(2^{|V|}\) scenarios. We let \(\Omega=\{\omega_{1},\ldots,\omega_{2^{|V|}}\}\), with \(\omega\) indicating a generic scenario. To denote a scenario \(\omega\), let \(e_{v\omega}=1\) if vehicle \(v\) exists and \(e_{v\omega}=0\) if vehicle \(v\) fails at scenario \(\omega\in\Omega\). We can then calculate the probability of scenario \(\omega\) as \(\rho_{\omega}=\prod_{v=1}^{|V|}f_{v}^{1-e_{v\omega}}(1-f_{v})^{e_{v\omega}}\) such that \(\sum_{\omega\in\Omega}\rho_{\omega}=1\), where \(f_{v}\) denotes the failure probability of vehicle \(v\in V\).
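With independent failures, the scenario probabilities \(\rho_{\omega}\) can be enumerated directly. The sketch below is illustrative only (the function name is ours); it builds the probability of every vector \(e\in\{0,1\}^{|V|}\).

```python
from itertools import product

def scenario_probabilities(f):
    # A scenario is a tuple e with e[v] = 1 if vehicle v exists and 0 if it
    # fails; rho(e) = prod_v f_v^(1 - e_v) * (1 - f_v)^(e_v).
    probs = {}
    for e in product((0, 1), repeat=len(f)):
        rho = 1.0
        for f_v, e_v in zip(f, e):
            rho *= (1.0 - f_v) if e_v else f_v
        probs[e] = rho
    return probs

# three vehicles with (hypothetical) failure probabilities -> 2^3 scenarios
probs = scenario_probabilities([0.1, 0.2, 0.05])
```

The probabilities of all \(2^{|V|}\) scenarios sum to one, as required.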
A two-stage stochastic program for the _full-information problem_, where all possible realiza
Figure 2: Illustration of mixed-model assembly line with five vehicles, cycle time \(c=7\), and station length \(l_{k}=10\). From bottom to top, the diagonal lines correspond to vehicle configurations A, B, B, A, A
tions are considered, is as follows:
\[\min_{x} \sum_{\omega\in\Omega}\rho_{\omega}Q(x,\omega)\] (1a) s.t. \[\sum_{v\in V}x_{vt}=1,\qquad t\in T \tag{1b}\] \[\sum_{t\in T}x_{vt}=1,\qquad v\in V\] (1c) \[x_{vt}\in\{0,1\},\qquad t\in T,\;\;v\in V \tag{1d}\]
where
\[Q(x,\omega)=\min_{z,w,b} \sum_{k\in K}\sum_{t\in T_{\omega}}w_{kt}\] (2a) s.t. \[b_{kt}=\sum_{v\in V}p_{kv}x_{vt},\qquad k\in K,\;\;t\in T_{\omega} \tag{2b}\] \[z_{kt}-z_{k(t+1)}-w_{kt}\leq c-b_{kt},\qquad k\in K,\;\;t\in T_{\omega},\] (2c) \[z_{kt}-w_{kt}\leq l_{k}-b_{kt},\qquad k\in K,\;\;t\in T_{\omega},\] (2d) \[z_{k0}=0,\qquad k\in K,\] (2e) \[z_{k(|T_{\omega}|+1)}=0,\qquad k\in K,\] (2f) \[z_{kt},w_{kt}\geq 0,\qquad k\in K,\;\;t\in T_{\omega}. \tag{2g}\]
In the first-stage problem (1), the objective function represents the expected work overload, i.e., the cost associated with the second-stage problem. Constraint sets (1b) and (1c) ensure that exactly one vehicle is assigned to each position and that each position holds exactly one vehicle, respectively. Constraint set (1d) presents the domain of the binary first-stage variable. The second-stage problem (2) minimizes the total work overload throughout the planning horizon, given the sequence and scenario \(\omega\in\Omega\). Note that \(T_{\omega}\) denotes the set of positions of non-failed vehicles at scenario \(\omega\in\Omega\), which is obtained by removing the failed vehicles. Constraint set (2b) determines the processing time \(b_{kt}\) at station \(k\) at position \(t\). The starting position and work overload of the vehicles at each station are determined by constraint sets (2c) and (2d), respectively. Constraint set (2e) ensures that the first position starts at the left border of the station. Constraint set (2f) enforces regenerative production planning; in other words, the first position of the next planning horizon can start at the left border of the station. Constraint set (2g) defines the second-stage variables as continuous and nonnegative.
Several remarks are in order regarding the proposed two-stage stochastic program (1) and (2). First, the number of decision variables and the set of constraints in (2) are scenario-dependent, as the valid positions \(T_{\omega}\) are obtained based on the failure scenario \(\omega\in\Omega\). Second, the proposed two-stage stochastic program (1) and (2) has a simple recourse. That is, once the sequence is determined and the failures are realized, the work overloads are calculated from the sequence of the existing vehicles, without resequencing.
In the remainder of this section, we first provide two modified models for the second-stage problem so that the number of decision variables and the set of constraints are no longer scenario
dependent. Then, we provide two monolithic MILP formulations for the deterministic equivalent formulation (DEF) of the two-stage stochastic program of MMS with stochastic failures.
For each scenario, we modify the second-stage problem by updating the processing times of failed vehicles instead of removing them. In Figure 3, we demonstrate how the original model (2) and the modified models represent car failures. To this end, we consider the example given in Figure 2 and assume that the vehicle at the second position fails. In the original model, the failed vehicles are removed from the sequence and the succeeding vehicles move forward (Figure 3a). The proposed modified models, referred to as the _standard model_ and the _improved model_, are explained below. In Section 5.2, we discuss the impact of these modified models on the computational time and solution quality.
**Standard Model:** In a preprocessing step, the processing time of vehicle \(v\) is set to zero for all stations if the vehicle fails in scenario \(\omega\). Since this modification makes the processing times scenario-dependent, the scenario index \(\omega\) is added to the processing time. That is, \(e_{v\omega}=0\Rightarrow p_{kv\omega}=0\) for all \(k\in K\). Based on this modification, the second-stage problem for a given scenario \(\omega\) can be presented as
\[Q(x,\omega)=\min_{w,z,b} \sum_{k\in K}\sum_{t\in T}w_{kt}\] (3a) s.t. \[b_{kt}=\sum_{v\in V}p_{kv}x_{vt},\qquad k\in K,\;\;t\in T, \tag{3b}\] \[z_{kt}+b_{kt}-w_{kt}-z_{k(t+1)}\leq c,\qquad k\in K,\;\;t\in T,\] (3c) \[z_{kt}+b_{kt}-w_{kt}\leq l_{k},\qquad k\in K,\;\;t\in T,\] (3d) \[z_{k0}=0,\qquad k\in K,\] (3e) \[z_{k(T+1)}=0,\qquad k\in K,\] (3f) \[z_{kt}-\beta_{k}b_{kt}-z_{k(t+1)}\leq 0,\qquad k\in K,\;\;t\in\{1,\ldots,T-1\}\] (3g) \[z_{kT}-w_{kT}-\beta_{k}b_{kT}\leq 0,\qquad k\in K,\] (3h) \[z_{kt},w_{kt},b_{kt}\geq 0,\qquad k\in K,\;\;t\in T. \tag{3i}\]
The objective function (3a) and the constraints (3b)-(3f) are the same as in formulation (2),
Figure 3: Assembly line illustration of proposed models
except that the set of positions and the length of the sequence are no longer scenario-dependent. Constraint set (3g) guarantees that the starting position at station \(k\) at position \(t+1\) equals the starting position at position \(t\) when the vehicle assigned to position \(t\) fails. Constraint set (3h) ensures that regenerative production planning is preserved in case the vehicle at the end of the sequence fails. The parameter \(\beta_{k}\) is chosen so that \(\beta_{k}b_{kt}>z_{kt}\), which makes constraint sets (3g) and (3h) non-binding for the positions that hold existing vehicles. Hence, \(\beta_{k}\) equals the maximum possible starting position divided by the minimum processing time at station \(k\), \(\beta_{k}=\frac{l_{k}-c}{\min_{v\in V}\{p_{kv}\}}>0\). Note that the processing times in this calculation are the actual processing times before the preprocessing step. Also, \(\beta_{k}\) is well-defined as the minimum processing time is strictly greater than zero. Figure 3b demonstrates that in the standard model, the processing time of the second vehicle is set to zero, so the operator starts working on the third vehicle at position two, where the operator would have started working on the second vehicle had it not failed.
**Improved Model:** In order to reduce the size of the standard model, we modify this model as follows. During the preprocessing step, the processing time of vehicle \(v\) is set to the cycle time for all stations if a vehicle fails at scenario \(\omega\). Let us refer to the vehicles with processing time equal to cycle time for all stations as "neutral" because these vehicles do not have any impact on the schedule in terms of work overload (see Proposition 1 and its proof). In other words, we transform failed vehicles into _neutral_ vehicles, i.e., \(e_{v\omega}=0\Rightarrow p_{kv\omega}=c\;\;k\in K\).
**Proposition 1**.: _A neutral vehicle has the same starting position as its succeeding vehicle at all stations. That is, \(b_{kt}=c\;\;\Rightarrow\;\;z_{k(t+1)}=z_{kt}\)._
Proof.: The operator's starting position of the vehicle at \(t+1\) is \(z_{k(t+1)}=z_{kt}+b_{kt}-c-w_{kt}\). Assume that the vehicle at position \(t\) is a neutral vehicle. We have \(z_{k(t+1)}=z_{kt}-w_{kt}\). Hence, showing that the neutral vehicles never cause a work overload, \(w_{kt}=0\), completes the proof. We know that the maximum starting position at a station is \(\max_{t\in T}\{z_{kt}\}=l_{k}-c\), which is a result of two extreme cases: an operator finishes working on a workpiece at the right border of a station or the operator cannot finish the work so we have a work overload. The starting position is less than \(l_{k}-c\) for other cases. Therefore, a vehicle with a processing time less than or equal to \(c\) at a station cannot cause any work overload. This completes the proof.
As a result of Proposition 1, constraints (3g) and (3h) can be removed from the standard model. Hence, the problem size is reduced. Figure 3c contains an illustration for Proposition 1. The second vehicle becomes neutral when its processing time is set to cycle time so that the third vehicle starts at the same position as the second vehicle.
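Proposition 1 can also be checked numerically on the running example of Figure 2, with the second vehicle failing: removing the vehicle and replacing it with a neutral one (\(b=c\)) produce the same downstream schedule. The helper below is an illustration only, not part of the formulation.

```python
def start_positions(proc, l, c):
    # operator start position before each vehicle under the side-by-side policy
    z, zs = 0.0, [0.0]
    for b in proc:
        w = max(0.0, z + b - l)            # work overload at this position
        z = max(0.0, z + b - c - w)
        zs.append(z)
    return zs

# sequence [9, 5, 5, 9, 9] (l = 10, c = 7) with the second vehicle failing:
removed = start_positions([9, 5, 9, 9], l=10, c=7)      # vehicle deleted
neutral = start_positions([9, 7, 5, 9, 9], l=10, c=7)   # vehicle made neutral
# the neutral vehicle starts where its successor starts, and the schedule
# from the third vehicle onward is identical in both representations
```

Here `neutral[1] == neutral[2]`, i.e., the neutral vehicle and its successor share the same starting position, as stated in Proposition 1.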
Using the standard or improved model, the DEF for MMS with stochastic failures can be obtained by adding the first-stage constraints (1b)-(1d) to the corresponding second-stage formulation, and by adding copies of all second-stage variables and constraints. We skip the details for brevity.
## 4 Solution Approaches
In Section 4.1, we propose an L-shaped decomposition-based algorithm, and in Section 4.2, a tabu search algorithm together with a greedy heuristic, to solve the models presented
in Section 3.2. Then, in Section 4.3, the SAA approach is motivated and a solution quality assessment scheme is presented.
### Exact Solution Approach
For the ease of exposition, we consider an abstract formulation of the two-stage stochastic program presented in Section 3.2 as follows:
\[z^{*}=\min_{x\in X}\mathbb{E}[Q(x,\xi_{\omega})], \tag{4}\]
where \(x\) denotes the first-stage decision variables and \(X:=\{x\in\{0,1\}^{|V|\times|T|}\colon Ax=b\}\) is the feasible region of the decision variables \(x\), i.e., the set of points satisfying constraints (1b) - (1d). Moreover, we represent the second-stage problem for the standard or improved model, presented in Section 3.2, as
\[Q(x,\xi_{\omega})=\min_{y}\{q^{\top}y|Dy\geq h_{\omega}-T_{\omega}x,\;y\geq 0\}, \tag{5}\]
where \(y\) represents the second-stage decision variables and \(\xi_{\omega}=(h_{\omega},T_{\omega})\). The expectation of the recourse problem becomes \(\mathbb{E}[Q(x,\xi_{\omega})]=\sum_{\omega\in\Omega}\rho_{\omega}Q(x,\xi_{ \omega})\).
The L-shaped method is a procedure that has been successfully used to solve large-scale two-stage stochastic programs. Note that for any \(\omega\in\Omega\), the function \(Q(x,\xi_{\omega})\), defined in (5), is convex in \(x\) because \(x\) appears only on the right-hand side of the constraints. Hence, we propose to iteratively construct an underestimator of it. To this end, for each \(\omega\in\Omega\) and a given first-stage decision \(x\in X\), we consider a _subproblem_ that takes the form of (5). Moreover, we create a _relaxed master problem_, which contains a partial, but increasingly improving, representation of \(Q(x,\xi_{\omega})\), for each \(\omega\in\Omega\), through the so-called _Benders' cuts_. Recall that our proposed two-stage stochastic programs have relatively complete recourse, that is, for any first-stage decision \(x\), there is a feasible second-stage solution \(y\). Thus, an underestimator of \(Q(x,\xi_{\omega})\) can be constructed using only the so-called _Benders' optimality cuts_.
We now describe more details on our proposed L-shaped algorithm. We form the relaxed master problem for formulation (4) and (5) as follows:
\[\min_{x,\theta} \sum_{\omega\in\Omega}\rho_{\omega}\theta_{\omega} \tag{6a}\] \[\mathrm{s.t.}\quad x\in X\] (6b) \[\theta_{\omega}\geq G_{\omega}^{\iota}x+g_{\omega}^{\iota},\quad\iota\in\{1,\ldots,l\},\;\;\omega\in\Omega, \tag{6c}\]
where the auxiliary variable \(\theta_{\omega}\) approximates the optimal value of the second-stage problem under scenario \(\omega\in\Omega\), i.e., \(Q(x,\xi_{\omega})\), through the cuts \(\theta_{\omega}\geq G_{\omega}^{\iota}x+g_{\omega}^{\iota}\) formed up to iteration \(l\).
Let \((\hat{x}^{t},\hat{\theta}^{t})\) be an optimal solution to the relaxed master problem (6). For each scenario \(\omega\in\Omega\), we form a subproblem (5) at \(\hat{x}^{t}\). Suppose that given \(\hat{x}^{t}\), \(\hat{\pi}_{\omega}^{t}\) denotes an optimal dual vector associated with the constraints in (5). That is, \(\hat{\pi}_{\omega}^{t}\) is an optimal extreme point of the dual subproblem (DSP)
\[\max_{\pi}\{\pi_{\omega}^{\top}(h_{\omega}-T_{\omega}\hat{x}^{t})|\pi_{\omega} ^{\top}D\leq q^{\top},\;\pi_{\omega}\geq 0\}, \tag{7}\]
where \(\pi_{\omega}\) is the associated dual vector. Then, using linear programming duality, we generate an optimality cut as
\[\theta_{\omega}\geq G_{\omega}^{t}x+g_{\omega}^{t}, \tag{8}\]
where \(G_{\omega}^{t}=-(\hat{\pi}_{\omega}^{t})^{\top}T_{\omega}\) and \(g_{\omega}^{t}=(\hat{\pi}_{\omega}^{t})^{\top}h_{\omega}\).
Our proposed L-shaped algorithm iterates between solving the relaxed master problem (6) and subproblems (5) (one for each \(\omega\in\Omega\)) until a convergence criterion on the upper and lower bounds is satisfied. This algorithm results in an L-shaped method with _multiple_ cuts.
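The loop structure of the multi-cut method can be sketched as follows. This is a skeleton only: `solve_master` and `solve_dual_sub` are hypothetical placeholder callbacks standing in for an LP/MILP solver of the relaxed master (6) and the dual subproblems (7); only the iteration scheme follows the method described above.

```python
def l_shaped(solve_master, solve_dual_sub, scenarios, rho,
             tol=1e-6, max_iter=100):
    """Multi-cut L-shaped loop for min_x sum_w rho_w * Q(x, xi_w).

    solve_master(cuts) -> (x_hat, theta_hat): solves the relaxed master (6),
        where cuts[w] lists the (G, g) pairs accumulated for scenario w.
    solve_dual_sub(x_hat, w) -> (obj, G, g): solves the dual subproblem (7)
        and returns the optimality-cut coefficients (8), G = -pi^T T_w and
        g = pi^T h_w, at an optimal dual extreme point pi.
    """
    cuts = {w: [] for w in scenarios}
    lb, ub, x_hat = float("-inf"), float("inf"), None
    for _ in range(max_iter):
        x_hat, theta_hat = solve_master(cuts)          # gives a lower bound
        lb = sum(rho[w] * theta_hat[w] for w in scenarios)
        expected = 0.0
        for w in scenarios:                            # one cut per scenario
            obj, G, g = solve_dual_sub(x_hat, w)
            expected += rho[w] * obj
            cuts[w].append((G, g))
        ub = min(ub, expected)                         # best upper bound
        if ub - lb <= tol:
            break
    return x_hat, ub
```

In a full implementation, `G` is a vector and the master applies the cut as \(\theta_{\omega}\geq G_{\omega}x+g_{\omega}\); the scalar toy usage below illustrates the convergence of the bounds.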
In order to exploit the specific structure of the MMS problem and to provide improvements on the dual problem, let us define variables \(\pi^{sp}\), \(\pi^{wo}\), \(\pi^{fs}\), \(\pi^{ch}\), \(\pi^{sf}\), and \(\pi^{cf}\) corresponding to the starting position constraints (3c), the work overload constraints (3d), the first-position starting constraints (3e), the regenerative production planning constraints (3f), the starting position constraints for vehicles following a failed vehicle (3g), and the regenerative production planning constraints with failed vehicles (3h), respectively. The DSP for scenario \(\omega\in\Omega\) at a candidate solution \(\hat{x}^{t}\), obtained by solving a relaxed master problem, can be formulated as follows:
\[\max_{\pi} \sum_{k\in K}\sum_{t\in T}\pi^{sp}_{kt}(\sum_{v\in V}p_{kv}\hat{x} _{vt}-c)+\pi^{wo}_{kt}(\sum_{v\in V}p_{kv}\hat{x}_{vt}-l_{k})\] (9a) s.t. \[\pi^{sp}_{k0}+\pi^{wo}_{k0}+\pi^{fs}_{k}+\pi^{sf}_{k0}\leq 0, \qquad k\in K \tag{9b}\] \[\pi^{sp}_{kt}-\pi^{sp}_{k(t+1)}-\pi^{wo}_{k(t+1)}+\pi^{sf}_{kt}- \pi^{sf}_{k(t+1)}\leq 0,\qquad k\in K,\,\,\,t\in\{1,..,T-1\}\] (9c) \[\pi^{sp}_{kT}-\pi^{ch}_{k}\leq 0,\qquad k\in K\] (9d) \[\pi^{sp}_{kt}+\pi^{wo}_{kt}\leq 1,\qquad k\in K,\,\,\,t\in\{1,.., T-1\}\] (9e) \[\pi^{sp}_{kT}+\pi^{wo}_{kT}+\pi^{cf}_{k}\leq 1,\qquad k\in K\] (9f) \[\pi^{sp}_{kt},\pi^{wo}_{kt},\pi^{sf}_{kt},\pi^{cf}_{k}\geq 0, \qquad k\in K,\,\,\,t\in T\] (9g) \[\pi^{fs}_{k},\pi^{ch}_{k}\text{unrestricted},\qquad k\in K \tag{9h}\]
We improve the dual problem in several ways. The dual variables \(\pi^{sf}\) and \(\pi^{cf}\) are removed since the corresponding subproblem constraints (3g) and (3h) are eliminated in the improved model. The dual variables \(\pi^{fs}\) and \(\pi^{ch}\) do not appear in the objective function and are unrestricted, which means that we can remove these variables, and the constraints containing them, from the formulation without altering the optimal value of the problem. In our preliminary computational studies, we improved the dual subproblem by removing these variables. However, we observed that most of the DSPs have multiple optimal solutions, and as the number of vehicles and stations increases, multiple optimal solutions become more likely. This naturally raises the question of which optimal dual vector provides the strongest cut, if we add only one cut per iteration per scenario. One could potentially add the cuts corresponding to all optimal dual extreme points; however, this results in an explosion in the size of the relaxed master problem after just a couple of iterations. While there is no reliable way to identify the weak cuts (Rahmaniani et al., 2017), we conducted experiments in order to find a pattern for strong cuts. Our findings showed that adding the cut corresponding to the optimal dual extreme point with the most non-zero variables results in the fastest convergence. Thus, we added an \(\ell_{1}\) regularization term to the objective function of the DSP, hence, the new objective
is encouraged to choose an optimal solution with the most non-zero variables. Accordingly, we propose an improved DSP formulation as follows:
\[\max_{\pi} \sum_{k\in K}\sum_{t\in T}\pi_{kt}^{sp}(\sum_{v\in V}p_{kv}\hat{x}_{vt}-c+\epsilon)+\pi_{kt}^{wo}(\sum_{v\in V}p_{kv}\hat{x}_{vt}-l_{k}+\epsilon)\] (10a) s.t. \[\pi_{kt}^{sp}-\pi_{k(t+1)}^{sp}-\pi_{k(t+1)}^{wo}\leq 0,\qquad k\in K,\;\;t\in\{1,..,T-1\} \tag{10b}\] \[\pi_{kt}^{sp}+\pi_{kt}^{wo}\leq 1,\qquad k\in K,\;\;t\in T\] (10c) \[\pi_{kt}^{sp},\pi_{kt}^{wo}\geq 0,\qquad k\in K,\;\;t\in T \tag{10d}\]
### Heuristic Solution Approach
MMS is an NP-hard problem, and stochastic failures of products (cars) drastically increase the computational burden of solving it. Hence, it is essential to create efficient heuristic procedures in order to solve industry-sized problems. In this section, we provide a fast and easy-to-implement greedy heuristic to find a good initial feasible first-stage decision (i.e., a sequence of vehicles) and an efficient tabu search (TS) algorithm to improve the solution quality. Although all \(|V|!\) vehicle permutations are feasible, their work overloads differ widely, so the proposed greedy heuristic aims to find a good initial feasible solution. To achieve this, a solution is generated for the deterministic counterpart of the proposed MMS problem, which excludes vehicle failures. We refer to this problem as the _one-scenario problem_, since the corresponding problem has a single scenario with no failed vehicles. Assuming that the failure probability of each vehicle is less than or equal to 0.5, the scenario with no failed vehicles has the highest probability. Once such a feasible sequence of vehicles is generated, the TS algorithm improves this solution in two parts: first, over the one-scenario problem, and then, over the full-information problem.
#### 4.2.1 Greedy Heuristic
It is important for a local search heuristic to start from a good quality solution. A naive approach to generating an initial solution (sequence) is to always select the vehicle that causes the minimum new work overload for the next position. However, this approach is myopic since it only considers the current position. We remedy this issue by also aiming to decrease future work overloads, which involves considering idle times and dynamic utilization rates. Accordingly, in order to generate a good initial solution, we propose an iterative greedy heuristic that follows a priority rule based on the work overload, the idle time, and the weighted sum of processing times, to be defined shortly.
Before explaining our proposed greedy heuristic, let us define some technical terms. The _idle time_ refers to the duration that an operator waits for the next vehicle to enter the station borders. The weights of the processing times are determined using station _utilization rates_, inspired by the utilization rates of the car sequencing problem (Solnon, 2000; Gottlieb et al., 2003). We define the utilization rate of a station as the ratio between the average processing time at the station and the cycle time, so the utilization rate of station \(k\) is \(\sum_{v\in V}p_{kv}/(|V|*c)\). At each iteration, after a new assignment of a vehicle, the dynamic utilization rates are calculated by considering only the unassigned vehicles. Accordingly, the weighted sum of the processing time
of a vehicle \(v\) is calculated using (11):
\[\frac{\sum_{k\in K}p_{kv}\sum_{i\in\hat{V}}p_{ki}}{|K|*|\hat{V}|*c}, \tag{11}\]
where \(\hat{V}\) denotes the set of unassigned vehicles. If the utilization rate of a station is greater than 1, then the average processing time is more than the cycle time, which induces an unavoidable work overload. On the other hand, a utilization rate close to 0 indicates that the average processing time is minimal compared to the station's allocated time.
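Equation (11) can be evaluated directly. The sketch below is illustrative only; the data layout `p[k][v]` holding the processing time of vehicle \(v\) at station \(k\) is our assumption, and the data are those of Table 2.

```python
def weighted_processing_time(v, unassigned, p, c):
    # Eq. (11): processing times of vehicle v, weighted by the dynamic
    # utilization rate of each station over the unassigned vehicles V_hat
    total = sum(p[k][v] * sum(p[k][i] for i in unassigned) for k in p)
    return total / (len(p) * len(unassigned) * c)

# vehicles of Table 2, p[station][vehicle], cycle time c = 7
p = {1: {"A": 15, "B": 16, "C": 2, "D": 3, "E": 2, "F": 4},
     2: {"A": 4, "B": 3, "C": 10, "D": 8, "E": 9, "F": 7}}
score = weighted_processing_time("A", {"A", "B", "C", "D", "E", "F"}, p, 7)
```

With all six vehicles unassigned, vehicle A scores \((15\cdot 42+4\cdot 41)/(2\cdot 6\cdot 7)=794/84\approx 9.45\).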
Our proposed greedy heuristic builds a sequence iteratively, one position at a time, starting from the first position and iterating over positions. We use \(t\) to denote an iteration. At each iteration \(t=1,\ldots,T\), the set of unassigned vehicles that cause the minimum new work overload, denoted by \(V_{t,wo}\), is determined. Ties are broken by selecting, from \(V_{t,wo}\), the vehicles that cause the minimum new idle time; the resulting set is denoted by \(V_{t,idle}\). In the case of further ties, the vehicle with the highest weighted sum of processing times in \(V_{t,idle}\) is assigned to position \(t\) of the sequence. Note that the first vehicle of the sequence is the vehicle with the highest weighted sum of processing times among the set \(V_{0,idle}\), since there is no work overload initially.
Finally, we enhance the proposed greedy heuristic by considering the category of the vehicles. Motivated by our case study, we categorize the vehicles based on the engine type, electric or non-electric, because the engine type is the most restrictive feature due to the high EV ratio (the number of EVs divided by the number of all vehicles). Moreover, the engine type leads to different processing times at a specific station. Hence, we modify our greedy heuristic to first decide whether an EV or a non-EV should be assigned to the next position at each iteration. Accordingly, first, the EV ratio is calculated, and an EV is assigned to the first position. The procedure always follows the EV ratio. For example, if the EV ratio is \(1/3\), an EV will be assigned to positions \(1+3t\) where \(t=\{1,\ldots,\frac{|T|}{3}-1\}\). In the case of a non-integer EV ratio, the position difference between any two consecutive EVs is the integer part of the ratio plus zero or one, decided randomly based on the decimal part of the ratio. Once the vehicle category is decided throughout the entire sequence, the specific vehicle to be assigned is selected based on the procedure described above. We note that this enhancement of the greedy heuristic may be applied to any restrictive feature that causes large variations in processing times.
To describe the greedy heuristic, consider an example with six vehicles and two stations. The processing times and engine types of the vehicles are given in Table 2. The cycle time is 7 TU, and the lengths of the stations are 20 TU and 10 TU, respectively. The EV ratio is \(1/3\). We consider only EVs for the first position; vehicle A is designated to the first position since it causes less
\begin{table}
\begin{tabular}{c c c c} Vehicle & Engine & \(p_{1}\) & \(p_{2}\) \\ \hline A & Electric & 15 & 4 \\ B & Electric & 16 & 3 \\ C & Gasoline & 2 & 10 \\ D & Gasoline & 3 & 8 \\ E & Gasoline & 2 & 9 \\ F & Gasoline & 4 & 7 \\ \end{tabular}
\end{table}
Table 2: Illustration of greedy heuristic
idle time than vehicle B. Next, none of the non-EVs causes work overload or idle time, so we assign the vehicle with the highest weighted sum of processing times to the second position, vehicle C. The procedure continues with another non-EV, and vehicle F is assigned to the third position because it is the only vehicle that does not cause any work overload. Consistent with the 1/3 EV ratio, an EV must be assigned to the fourth position, and vehicle B is assigned to this position as it is the only EV left. Vehicle E is assigned to the fifth position due to its higher weighted sum of processing times. Finally, vehicle D is assigned to the last position. The resulting sequence is _A-C-F-B-E-D_ with a work overload of 3 TU, only at position 6 at station 2.
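The resulting work overload can be verified by simulating both stations, including the regenerative end-of-horizon rule (2f)/(3f), under which the last position is charged against the cycle time so that the next horizon can start at the left border. The sketch below is an illustration of this calculation (the function name is ours), not part of the model.

```python
def total_work_overload(proc, l, c):
    # proc: processing times at one station in sequence order; the last
    # position is charged against the cycle time c instead of the station
    # length l (regenerative production planning)
    z, total = 0.0, 0.0
    for t, b in enumerate(proc):
        border = c if t == len(proc) - 1 else l
        w = max(0.0, z + b - border)       # work overload at position t
        total += w
        z = max(0.0, z + b - c - w)        # start of the next workpiece
    return total

# greedy sequence A-C-F-B-E-D from Table 2 (c = 7, l_1 = 20, l_2 = 10)
station1 = total_work_overload([15, 2, 4, 16, 2, 3], l=20, c=7)
station2 = total_work_overload([4, 10, 7, 3, 9, 8], l=10, c=7)
```

Station 1 incurs no overload, while station 2 incurs 3 TU at the last position, matching the example above.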
#### 4.2.2 Tabu Search Algorithm
This section proposes a simulation-based local search algorithm over a very large neighborhood with tabu rules. The TS algorithm starts from the initial feasible solution (sequence) generated by the iterative greedy heuristic and improves it via iterative moves within the designed neighborhood. At each iteration of the TS, a transformation operator is randomly selected based on operator weights and applied to the incumbent solution to visit a random neighbor, respecting the tabu rules. The candidate solution is accepted if the objective function value does not deteriorate, i.e., the candidate solution is rejected only if it has more total work overload. Then, another random operator is applied to the incumbent solution. This process repeats until the stopping criterion is met.
As mentioned above, the TS has two parts. The first part acts as the second step of the initial solution generation procedure, since it improves the solution provided by the greedy heuristic for the one-scenario problem. In our preliminary numerical experiments, we observed that this step can drastically improve the initial solution quality. Hence, we conduct this step for a duration \(\tau_{one}\). Next, the algorithm transitions to the full-information problem and reevaluates the objective function value of the incumbent solution--the sequence generated by the first part of the TS. In the second part of the TS algorithm, the objective function value corresponding to the sequence is evaluated for the full-information problem. To do this, we calculate the total work overload for all realizations \(\omega\in\Omega\), given the first-stage decision (sequence). That is, we calculate the objective function of (3) for each realization \(\omega\in\Omega\) and take the weighted sum, with each term multiplied by the probability of its scenario. Observe from (3) that once the first-stage decision is fixed, the problem decomposes by scenarios and stations. Accordingly, the solution evaluation process is parallelized over scenarios and stations.
The TS algorithm continues evaluating solutions for the full-information problem for a duration \(\tau_{full}\). The time allocated to the second part, \(\tau_{full}\), is much larger than that of the first part, \(\tau_{one}\), since iterating over the one-scenario problem is much faster than iterating over a set of realizations. In the remainder of this section, we explain the various components of the TS algorithm.
**Objective Evaluation:** The objective function of the problem for a given scenario is the same as the objective given in (3a), the total work overload over all stations and positions. Evaluation of the objective after each move is the bottleneck of our algorithm, since the new total work overload needs to be determined. Note that the objective evaluation starts at the first position and proceeds iteratively, since there is a sequence dependency. Accordingly, we propose to
reduce the computational burden in two ways.
First, reevaluating the whole sequence is unnecessary since the transformation operators make local changes in the sequence, i.e., some parts of the sequence remain unaltered and do not require reevaluation. Hence, we apply partial reevaluation after each move. To explain partial reevaluation, assume that the vehicles at positions \(t_{1}\) and \(t_{2}\) are swapped. We know for certain that the subsequence corresponding to positions \([1,t_{1}-1]\) is not impacted by the swap operation; hence, we do not reevaluate these positions. Additionally, we may not have to reevaluate all the positions in \([t_{1},t_{2}-1]\) and in \([t_{2},|T|]\). In each of these subsequences, there may be a _reset position_, which guarantees that the objective does not change from that position until the end of the subsequence. Since the rest of the subsequence after the reset position is unchanged, we can jump to the end of the subsequence. To highlight how partial reevaluation speeds up the objective evaluation, suppose that the vehicles at positions 350 and 380 are swapped. The subsequence corresponding to positions [1, 349] is certainly not impacted by the swap. Additionally, if there is a reset point before position 380 (and before \(|T|\)), we do not have to reevaluate all the positions between 350 and 380, or those between 380 and \(|T|\).
Second, we calculate the objective function in an accelerated way. Traditionally, the work overload and the starting position for position \(t\) at station \(k\), respectively \(w_{kt}\) and \(z_{k(t+1)}\), are calculated as \(w_{kt}=z_{kt}+b_{kt}-l_{k}\) and \(z_{k(t+1)}=z_{kt}+b_{kt}-w_{kt}-c\), where \(w_{kt},z_{kt}\geq 0\). Instead of calculating the work overload and starting position vectors separately, we propose using a single vector to extract this information, which is in fact a different representation of the starting position vector \(z\). If there is a work overload at position \(t\), then \(z_{k(t+1)}=l_{k}-c\). Otherwise, if there is no work overload at position \(t\), then \(z_{k(t+1)}=z_{kt}+b_{kt}-c\), or equivalently \(z_{k(t+1)}=z_{k(t-1)}+b_{k(t-1)}-c+b_{kt}-c-w_{k(t-1)}\). Again, if there is a work overload at position \(t-1\), then \(z_{k(t+1)}=(l_{k}-c)+b_{kt}-c\); otherwise, if there is no work overload at \(t-1\), then \(z_{k(t+1)}=z_{k(t-1)}+(b_{k(t-1)}-c)+(b_{kt}-c)\). Since we know that \(z_{k0}=0\), we can generalize this as \(z_{k(t+1)}=\sum_{h=1}^{t}(b_{kh}-c)\), which is the cumulative sum of the vector \(\eta_{k}=b_{k}-c\) up to and including position \(t\). However, this generalization assumes that there is no work overload or idle time up to position \(t\). We note that there is idle time at position \(t+1\) when \(z_{kt}+b_{kt}-c<0\). Accordingly, we can write a general formula as \(z_{k(t+1)}=\max(0,\min(l_{k}-c,z_{kt}+\eta_{kt}))\), which we refer to as the conditional cumulative sum of \(\eta_{k}\) up to position \(t\). Intuitively, the conditional cumulative sum is computed as follows: starting from position 0, the cumulative sum is calculated iteratively within the closed range \([0,l_{k}-c]\). Whenever the cumulative sum falls below the lower bound zero or exceeds the upper bound \(l_{k}-c\), we set it to the corresponding bound's value.
If the cumulative sum is below the lower bound, the excess value is equal to the idle time. Otherwise, if the cumulative sum is above the upper bound, the excess value is equal to the work overload. For example, if the cumulative sum is -2 at a position, the cumulative sum is set to zero and there is a 2 TU of idle time at that position.
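The conditional cumulative sum can be sketched in a few lines of Python (a minimal illustration, not the authors' implementation; function and variable names are ours):

```python
def station_costs(b, c, l):
    """Conditional cumulative sum of eta = b - c, clamped to [0, l - c].

    b: processing times at one station, position by position
    c: cycle time, l: station length
    Returns (total work overload, total idle time) at the station.
    """
    z = 0.0                      # starting position z_{k1} = 0
    overload = idle = 0.0
    for bt in b:
        z += bt - c              # unconstrained cumulative sum step
        if z > l - c:            # excess above the upper bound is work overload
            overload += z - (l - c)
            z = l - c
        elif z < 0:              # excess below zero is idle time
            idle += -z
            z = 0.0
    return overload, idle
```

For instance, with \(c=97\) and \(l=120\) (the values used in Section 5.1), three consecutive positions with processing time 120 accumulate \(2\times 23=46\) TU of work overload, since the cumulative sum is clamped at \(l-c=23\) after the first position.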
In light of the proposed improvements, the partial reevaluation process is executed over two subsequences, \([t_{1},t_{2})\) and \([t_{2},|T|]\), where \(t_{1}\) and \(t_{2}\) are the two positions selected by a transformation operator and \(t_{1}<t_{2}\). The process starts at the first position of the corresponding subsequence. We set \(z_{k0}=\eta_{k1}\) and calculate the starting position,
work overload, and idle time for the positions in the subsequence iteratively as described above. The reevaluation of the subsequence is complete when either a reset position is found or the whole subsequence has been iterated. A reset position occurs at position \(t\) in one of two cases: (1) \(z_{k,t+1}=0\), when the processing time at the starting position \(t_{1}\) (or \(t_{2}\)) has decreased; (2) the sum of idle time and work overload up to position \(t\) in the current subsequence exceeds the total increase in processing time at the corresponding starting position \(t_{1}\) (or \(t_{2}\)), when the processing time at the starting position has increased.
**Transformation Operators.** We employ swap, forward and backward insertion, and inversion operators. The swap operator interchanges the positions of two randomly selected cars. Insertion removes a car from position \(i\) and inserts it at position \(j\); it is applied in two directions, backward and forward. When \(i>j\), the insertion is called a backward insertion, and all the vehicles between positions \(j\) and \(i\) move one position to the right, i.e., they are scheduled later. Conversely, a forward insertion occurs when \(i<j\), and all the vehicles between positions \(i\) and \(j\) move one position to the left, i.e., they are scheduled earlier. Inversion takes two randomly selected positions in the sequence and reverses the subsequence between them. Repeated application of these operators creates a very large neighborhood, which helps the improvement procedure escape local optima, especially when combined with a non-deteriorating solution acceptance procedure. The latter enables the algorithm to move along plateaus consisting of solutions with the same objective function value (see Section 5.3.2 for numerical experiments).
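The three operator families can be sketched as pure functions on a sequence (an illustrative sketch; the actual implementation would operate in place and trigger the partial reevaluation described above):

```python
def swap(seq, i, j):
    """Interchange the vehicles at positions i and j."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, j):
    """Remove the vehicle at position i and insert it at position j.
    i > j is a backward insertion (vehicles in between shift right);
    i < j is a forward insertion (vehicles in between shift left)."""
    s = list(seq)
    v = s.pop(i)
    s.insert(j, v)
    return s

def inversion(seq, i, j):
    """Reverse the subsequence between positions i and j (inclusive)."""
    s = list(seq)
    s[i:j + 1] = reversed(s[i:j + 1])
    return s
```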
**Tabu List.** We design the tabu list in a non-traditional manner. The list contains movements that induce undesired subsequences. Based on our observations, we define an undesired subsequence as back-to-back EVs, because consecutive EVs cause a tremendous amount of work overload at the battery loading station. Accordingly, any movement that results in back-to-back EVs is tabu. For clarity, we describe the tabu movements for each operator separately in Appendix A.
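In its simplest form, the tabu test amounts to checking a candidate sequence for consecutive EVs (a sketch with illustrative naming; the per-operator rules of Appendix A reduce this to inspecting only the positions touched by the movement):

```python
def creates_back_to_back_evs(is_ev):
    """Return True if the sequence contains two consecutive EVs.

    is_ev: list of booleans, one per position (True = electric vehicle).
    A movement is tabu if the sequence it would produce fails this test.
    """
    return any(a and b for a, b in zip(is_ev, is_ev[1:]))
```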
### SAA Approach and Solution Quality Assessment
In (4), it is assumed that the probability of each scenario is known a priori, which may not hold in practice. In addition, the exponential growth of the number of scenarios causes an explosion in the size of the stochastic program. Hence, we utilize the SAA approach to tackle these issues. Consider the abstract formulation (4). The SAA method approximates the expected value function using an independent and identically distributed (i.i.d.) random sample \(\Omega_{N}:=\{\omega_{1},\ldots,\omega_{N}\}\subset\Omega\) of \(N\) realizations of the random vector as follows:
\[z_{N}=\min_{x\in X}\frac{1}{N}\sum_{\omega\in\Omega_{N}}Q(x,\xi_{\omega}). \tag{12}\]
The optimal value of (12), \(z_{N}\), provides an estimate of the true optimal value (Kleywegt et al., 2002). Let \(\hat{x}_{N}\) and \(x^{*}\) denote an optimal solution to the SAA problem (12) and the true stochastic program (4), respectively. Note that \(\mathbb{E}[Q(\hat{x}_{N},\xi_{\omega})]-\mathbb{E}[Q(x^{*},\xi_{\omega})]\) is the optimality gap
of solution \(\hat{x}_{N}\), where \(\mathbb{E}[Q(\hat{x}_{N},\xi_{\omega})]\) is the (true) expected cost of solution \(\hat{x}_{N}\) and \(\mathbb{E}[Q(x^{*},\xi_{\omega})]\) is the optimal value of the true problem (4). A small optimality gap implies a high-quality solution. However, as \(x^{*}\) (and hence the optimal value of the true problem) may not be known, one may obtain a statistical estimate of the optimality gap to assess the quality of the candidate solution \(\hat{x}_{N}\) (Homem-de-Mello and Bayraksan, 2014). That is, given that \(\mathbb{E}[z_{N}]\leq\mathbb{E}[Q(x^{*},\xi_{\omega})]\), we can obtain an upper bound on the optimality gap as \(\mathbb{E}[Q(\hat{x}_{N},\xi_{\omega})]-\mathbb{E}[z_{N}]\). We employ the multiple replication procedure (MRP) of Mak et al. (1999) to assess the quality of a candidate solution by estimating an upper bound on its optimality gap. A pseudo-code for this procedure is given in Algorithm 1. We utilize the MRP in Section 5 to assess the quality of solutions generated by the different approaches.
```
Input: Candidate solution \(\hat{x}\), replication size \(M\), sample size \(N\), and \(\alpha\in(0,1)\).
Output: A normalized \(100(1-\alpha)\%\) upper bound on the optimality gap of \(\hat{x}\).
for \(m=1,2,\ldots,M\) do
    Draw an i.i.d. sample \(\Omega_{N}^{m}\) of realizations \(\xi_{\omega}^{m}\), \(\omega\in\Omega_{N}^{m}\).
    Obtain \(z_{N}^{m}:=\min\limits_{x\in X}\frac{1}{N}\sum\limits_{\omega\in\Omega_{N}^{m}}Q(x,\xi_{\omega}^{m})\).
    Estimate the out-of-sample cost of \(\hat{x}\) as \(\hat{z}_{N}^{m}:=\frac{1}{N}\sum\limits_{\omega\in\Omega_{N}^{m}}Q(\hat{x},\xi_{\omega}^{m})\).
    Estimate the optimality gap of \(\hat{x}\) as \(G_{N}^{m}:=\hat{z}_{N}^{m}-z_{N}^{m}\).
end for
Calculate the sample mean and sample variance of the gap as \(\tilde{G}_{N}=\frac{1}{M}\sum_{m=1}^{M}G_{N}^{m}\) and \(\hat{s}_{G}^{2}=\frac{1}{M-1}\sum_{m=1}^{M}(G_{N}^{m}-\tilde{G}_{N})^{2}\).
Calculate a normalized \(100(1-\alpha)\%\) upper bound on the optimality gap as \(\frac{1}{\tilde{z}_{N}}\left(\tilde{G}_{N}+t_{\alpha;M-1}\frac{\hat{s}_{G}}{\sqrt{M}}\right)\), where \(\tilde{z}_{N}=\frac{1}{M}\sum_{m=1}^{M}z_{N}^{m}\).
```
**Algorithm 1** Multiple Replication Procedure \(\text{MRP}_{\alpha}(\hat{x})\)
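Given the per-replication SAA optimal values and out-of-sample costs, the final bound computation of Algorithm 1 can be sketched as follows (a simplification for illustration: we substitute a normal quantile for \(t_{\alpha;M-1}\), which is close for \(M\geq 30\) replications):

```python
from statistics import NormalDist, mean, stdev

def mrp_upper_bound(z_opt, z_cand, alpha=0.05):
    """MRP bound (Mak et al., 1999), sketch.

    z_opt[m]:  optimal value z_N^m of the m-th SAA replication
    z_cand[m]: out-of-sample cost of the candidate solution on replication m
    Returns a normalized (1 - alpha) upper bound on the optimality gap.
    """
    M = len(z_opt)
    gaps = [zc - zo for zo, zc in zip(z_opt, z_cand)]   # G_N^m
    g_bar = mean(gaps)                                  # sample mean of the gap
    s_g = stdev(gaps)                                   # sample std. dev. of the gap
    q = NormalDist().inv_cdf(1 - alpha)                 # normal stand-in for t quantile
    bound = g_bar + q * s_g / M ** 0.5
    return bound / mean(z_opt)                          # normalize by mean z_N^m
```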
In addition, we propose an MRP integrated SAA approach for candidate solution generation and quality assessment, given in Algorithm 2. First, a candidate solution is generated by solving an SAA problem with a sample of \(N\) realizations. Then, we use the MRP to estimate an upper bound on the optimality gap of the candidate solution. If the solution is \(\epsilon\)-optimal, i.e., the estimated upper bound on its optimality gap is less than or equal to a threshold \(\epsilon\), the algorithm stops. Otherwise, the sample size is increased until a good-quality solution is found. The algorithm returns a candidate solution and its optimality gap.
```
Input: List of sample sizes \(N_{list}\) and \(\epsilon,\alpha\in(0,1)\).
Output: Solution \(\hat{x}\) and OptGap.
for \(N\) in \(N_{list}\) do
    Obtain a candidate solution \(\hat{x}_{N}\) by solving the SAA problem (12).
    Calculate a normalized \(100(1-\alpha)\%\) upper bound on the optimality gap as \(\text{MRP}_{\alpha}(\hat{x}_{N})\).
    if \(\text{MRP}_{\alpha}(\hat{x}_{N})\leq\epsilon\) then
        \(\hat{x}\leftarrow\hat{x}_{N}\) and \(\text{OptGap}\leftarrow\text{MRP}_{\alpha}(\hat{x}_{N})\).
        exit for loop
    end if
end for
```
**Algorithm 2** MRP integrated SAA
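The control flow of Algorithm 2 is a short loop; it can be sketched with placeholder solve/assessment callables (names are ours, not part of the original formulation):

```python
def mrp_integrated_saa(solve_saa, mrp_bound, sample_sizes, eps=0.01):
    """Sequential SAA: grow the sample until the MRP bound certifies
    an eps-optimal candidate.

    solve_saa(N):  returns a candidate solution of an SAA problem with N realizations
    mrp_bound(x):  returns a normalized upper bound on the optimality gap of x
    Returns (solution, gap), or (None, None) if no sample size suffices.
    """
    for n in sample_sizes:
        x = solve_saa(n)
        gap = mrp_bound(x)
        if gap <= eps:
            return x, gap
    return None, None
```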
We end this section by noting that each of the DEF, presented in Section 3.2, the L-shaped algorithm, presented in Section 4.1, and the heuristic algorithm, presented in Section 4.2, can be used to solve the SAA problem and obtain a candidate solution. However, the probabilities of scenarios \(\omega\in\Omega\), \(\rho_{\omega}\), must change in the formulations so that they reflect the scenarios in a sample \(\Omega_{N}\). Let \(\hat{N}\) denote the set of unique scenarios in \(\Omega_{N}\) and \(n_{\omega}\) the number of occurrences of scenario \(\omega\). Thus, in the described DEF, L-shaped algorithm, and TS algorithm, \(\sum_{\omega\in\Omega}\rho_{\omega}(\cdot)\) changes to \(\frac{1}{N}\sum_{\omega\in\Omega_{N}}(\cdot)\) or, equivalently, \(\frac{1}{N}\sum_{\omega\in\hat{N}}n_{\omega}(\cdot)\). Accordingly, in the L-shaped method, we generate one optimality cut for each unique scenario \(\omega\in\hat{N}\) by solving \(|\hat{N}|\) subproblems at each iteration.
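Collapsing a sample into unique scenarios with weights \(n_{\omega}/N\) is a small dictionary computation (illustrative sketch, assuming scenarios are hashable):

```python
from collections import Counter

def scenario_weights(sample):
    """Map each unique scenario in the sample to its weight n_w / N,
    so a sum over Omega_N becomes a weighted sum over unique scenarios."""
    N = len(sample)
    return {w: n / N for w, n in Counter(sample).items()}
```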
## 5 Numerical Experiments
In Section 5.1, we describe the experimental setup. Then, in Sections 5.2 and 5.3, we assess the solution quality and computational performance of the proposed L-shaped and heuristic algorithms, respectively, applied to an SAA problem.
### Experimental Setup
We generated real-world inspired instances from our automobile manufacturer partner's assembly line and planning information. As given in Table 3, we generated three types of instances: (1) small-sized instances with 7-10 vehicles to assess the performance of the L-shaped algorithm, (2) medium-sized instances with 40 vehicles to assess the performance of the TS algorithm for the one-scenario problem, and (3) large-sized instances with 200, 300, and 400 vehicles to evaluate the performance of the TS algorithm. All instances have five stations, of which the first is selected as the most restrictive station for EVs, the battery loading station.
The rest are selected among other critical stations that conflict with the battery loading station. The cycle time \(c\) is 97 TU, and the station length \(l\) is 120 TU for all but the battery loading station, which is two station lengths, 240 TU. The information about the distribution of the processing times is given in Table 4. It can be observed that the average and maximum processing times for each station are lower than the cycle time and the station length, respectively. Moreover, the ratio of the EVs is in the range of [0.25, 0.33] across all instances.
We derived the failure rates from six months of historical data by performing predictive feature analysis on vehicles. Based on the analysis, two groups of vehicles are formed according to their failure probabilities, low-risk and high-risk vehicles, whose failure probabilities are in the ranges [0.0, 0.01] and [0.2, 0.35], respectively. The failure probability is mostly higher for recently introduced features; e.g., the average failure probability of EVs is 50% higher than that of other vehicles. High-risk vehicles constitute [0.03, 0.05] of all vehicles. However, this percentage increases to [0.15, 0.25] for the small-sized instances in order to obtain a sufficient number
\begin{table}
\begin{tabular}{c c c c} & \multicolumn{3}{c}{Time (s)} \\ Station ID & Min & Mean & Max \\ \hline
1 & 42.6 & 94.1 & 117.2 \\
2 & 7.9 & 84.3 & 197.9 \\
3 & 57.8 & 96.2 & 113.3 \\
4 & 26.9 & 96.9 & 109.7 \\
5 & 57.8 & 96.2 & 114.3 \\ \end{tabular}
\end{table}
Table 4: Processing times distribution
\begin{table}
\begin{tabular}{l c c c} Instance Type & \(|V|\) & \(|K|\) & Number of Instances \\ \hline Small & 7, 8, 9, 10 & 5 & \(30\times 4\) \\ Medium & 40 & 5 & \(30\times 1\) \\ Large & 200, 300, 400 & 5 & \(30\times 3\) \\ \end{tabular}
\end{table}
Table 3: Data sets
of failed vehicles. We note that the failures are not considered for the medium-sized instances since these instances are used for only the one-scenario problem, which does not involve failures by definition.
The number of failure scenarios, \(2^{|V|}\), increases exponentially in the number of vehicles. Thus, we generated an i.i.d. random sample of \(N\) realizations of the failure scenarios, thereby forming an SAA problem. For each failure scenario and vehicle, we first chose whether the vehicle was high risk or low risk (based on their prevalence). Then, depending on whether it was a high-risk or low-risk vehicle, a failure probability was randomly selected from the respective range. Finally, it was determined whether the vehicle failed or not. In order to have a more representative sample of scenarios for the large-sized instances, no low-risk vehicle was allowed to fail in any scenario.
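The sampling procedure just described can be sketched as follows (probability ranges from the setup above; function and variable names are ours):

```python
import random

def sample_failure_scenario(high_risk, rng,
                            low_range=(0.0, 0.01),
                            high_range=(0.2, 0.35)):
    """Draw one failure scenario: each vehicle independently draws a
    failure probability from its risk group's range, then fails with
    that probability.

    high_risk: list of booleans, one per vehicle (True = high-risk)
    rng:       a random.Random instance
    """
    scenario = []
    for is_high in high_risk:
        lo, hi = high_range if is_high else low_range
        p = rng.uniform(lo, hi)          # per-vehicle failure probability
        scenario.append(rng.random() < p)  # did the vehicle fail?
    return tuple(scenario)
```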
For each parameter configuration, we generated 30 instances. The vehicles of each instance were randomly selected from a production day, respecting the ratios mentioned above. The algorithms were implemented in Python 3. For solving optimization problems we used Gurobi 9.0. The time limit is 600 seconds for all experiments unless otherwise stated. We ran our experiments on computing nodes of the Clemson University supercomputer. The experiments with the exact solution approach were run on nodes with a single core and 15 GB of memory, and the experiments with the heuristic solution approach were run on nodes with 16 cores and 125 GB of memory.
### Exact Solution Approach
In this section, we present results for the solution quality and computational performance of the L-shaped algorithm. We used the MRP scheme, explained in Section 4.3, to assess the solution quality. We also compared the computational performance of the L-shaped algorithm with that of solving the DEF. We present the results for 120 small-sized instances consisting of 7 to 10 vehicles. We do not present results for large-sized instances, as our preliminary experiments showed that the number of instances that could be solved to optimality decreases drastically.
We also note that instead of solving a relaxed master problem to optimality at each iteration of the L-shaped algorithm, one can aim for just obtaining a feasible solution \(\hat{x}\in X\). This may save a significant amount of computational time that would otherwise be spent on exploring solutions that were already eliminated in previous iterations. This kind of implementation, referred to as _branch-and-Benders-cut_ (B&BC), is studied in the literature; see, e.g., (Hooker, 2011; Thorsteinsson, 2001; Codato and Fischetti, 2006). In the implementation of our proposed L-shaped algorithm, we used Gurobi's lazy constraint callback to generate cuts at each feasible integer solution found in the course of the branch-and-bound algorithm.
#### 5.2.1 Solution Quality
Figure 4 shows the impact of the sample size on the solution quality of the SAA problem. Observe the progressive improvement in the upper bound on the optimality gap (the MRP output) as the sample size increases from 100 to 1000. We set the number of replications \(M\) to 30 and \(\alpha=0.05\) (95% confidence interval). While the mean of the optimality gap decreases gradually from 0.76% to 0.12%, a drastic improvement is observed in the variance. We have 36 out of 120 solutions with an optimality gap larger than 1% when the sample size is 100. However, all of the obtained solutions have less than a 1% optimality gap when the sample size is 1000. It can be seen in the figure that good solutions can be obtained with a sample size of 100, yet this is not assured due to the high variance of the approximation. Consequently, the results suggest that the sample size should be increased until the variance of the objective estimation is small enough.
Based on the results in Figure 4, we implemented the MRP integrated SAA scheme, presented in Section 4.3 and Algorithm 2, to balance the required computational effort and solution quality. We set \(\alpha=0.05\) (95% confidence interval) and \(\epsilon=0.01\) in the MRP. While it is ensured that we obtain solutions within a 1% optimality gap, most of the solutions are found with the least computational effort, e.g., at the first iteration with a sample size of 100. In Table 5, we provide key results on the performance of the MRP integrated SAA scheme, where the number of replications \(M\) is 30 and the MRP sample size \(N\) is 5000. The averages of the statistical lower bound, the accepted candidate solution's expected objective value, and the optimality gap are presented in Table 5. The number of accepted candidate solutions is 84, 20, 11, and 5 for the sample sizes \(N_{list}=\{100,200,500,1000\}\), respectively. The average optimality gap is 0.2%, which shows that SAA can produce high-quality solutions.
Additionally, we assess the solution quality of the one-scenario problem (i.e., the deterministic MMS problem without any car failure). Observe from Figure 5 that the average optimality gap is 23%, the maximum optimality gap is 274%, and the standard deviation is 39%. Comparing the performance of the SAA and the one-scenario problems shows that we can generate robust solutions by considering vehicle failures, which helps reduce work overloads by more than 20%.
#### 5.2.2 Computational Performance
In this section, we conduct different experiments to compare the DEF and L-shaped algorithm. On the one hand, we assess the impact of using the _improved_ model, described in Section 3.2, obtained by setting the processing time of failed vehicles to the cycle time, and compare the results with those obtained using the standard model. The DEF corresponding to the standard and improved models are denoted as \(D_{std}\) and \(D_{imp}\), respectively. Similarly, the L-shaped
Figure 4: Solution quality of the SAA problem based on sample sizes
\begin{table}
\begin{tabular}{l l l} Statistical Lower & Estimated Objective & Optimality \\ Bound \(\mathbb{E}[z_{N}]\) & Value \(\mathbb{E}[Q(\hat{x}_{N},\xi_{\omega})]\) & Gap \(\hat{G}_{N}\) \\ \hline
55.80 & 55.91 & 0.11 (0.2\%) \\ \end{tabular}
\end{table}
Table 5: Solution quality of the MRP integrated SAA
algorithm corresponding to the standard and improved models are denoted as \(L_{std}\) and \(L_{imp}\), respectively. On the other hand, we assess the impact of our proposed cut selection strategy, described in Section 4.1, which uses \(\ell_{1}\)-norm regularization to find a cut with the least number of nonzero coefficients. We used the cut selection strategy with the improved model, and denote the corresponding L-shaped algorithm as \(L_{imp-cs}\).
In Table 6, we present the results on the impact of the improved model and cut selection strategy to compare the DEF and L-shaped algorithm for solving the SAA problem. We report the average and standard deviation of the computational time (in seconds), labelled as \(\mu_{t}\) and \(\sigma_{t}\), respectively, and the optimality gap, labelled as _Gap_ (in percentage). Additionally, the number of instances that could not be solved optimally within the time limit is given in parenthesis under the _Gap_ columns. All time results are the average of instances (out of 30 instances) that could be solved optimally within the time limit, while the _Gap_ results are the average of the instances that could not be solved optimally. Based on the results in Section 5.2.1, we conducted the computational experiments on the SAA problem with sample sizes 100, 200, 500, and 1000.
Observe from Table 6 that using the improved model instead of the standard model decreased the computational time and optimality gap of both the DEF and L-shaped algorithm drastically. In particular, the solution time decreased for all instances. On average (over different \(|V|\) and \(N\)), we observe a 67% and 70% decrease for the DEF and L-shaped algorithm, respectively. Additionally, the decrease in the standard deviation is around 64% and 74% for the DEF and L-shaped algorithm, respectively, when non-optimal solutions are left out. Moreover, the number of instances that could not be solved optimally is reduced by using the improved model: on
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c} & & \multicolumn{3}{c}{\(D_{std}\)} & \multicolumn{3}{c}{\(D_{imp}\)} & \multicolumn{3}{c}{\(L_{std}\)} & \multicolumn{3}{c}{\(L_{imp}\)} & \multicolumn{3}{c}{\(L_{imp-cs}\)} \\ \(|V|\) & \(N\) & \(\mu_{t}\) & \(\sigma_{t}\) & Gap (\%) & \(\mu_{t}\) & \(\sigma_{t}\) & Gap (\%) & \(\mu_{t}\) & \(\sigma_{t}\) & Gap (\%) & \(\mu_{t}\) & \(\sigma_{t}\) & Gap (\%) & \(\mu_{t}\) & \(\sigma_{t}\) & Gap (\%) \\ \hline
7 & 100 & 6.3 & 3.8 & - & 1.4 & 1.1 & - & 4.9 & 3.2 & - & 1.2 & 0.5 & - & 1.1 & 0.5 & - \\
200 & 10.2 & 6.4 & - & 2.0 & 1.1 & - & 79.5 & - & 1.9 & 0.9 & - & 1.6 & 0.6 & - \\
500 & 15.0 & 8.8 & - & 3.2 & 1.8 & - & 10.2 & 6.5 & - & 2.4 & 1.0 & - & 2.1 & 0.8 & - \\
1000 & 19.5 & 11.2 & - & 4.4 & 2.5 & - & 11.9 & 8.0 & - & 2.8 & 1.1 & - & 2.4 & 0.9 & - \\
8 & 100 & 25.3 & 16.7 & - & 11.7 & 10.5 & - & 51.7 & 50.4 & - & 9.4 & 6.6 & - & 5.2 & 4.4 & - \\
200 & 50.5 & 31.5 & - & 19.1 & 16.5 & - & 83.2 & 83.4 & - & 16.7 & 22.9 & - & 8.1 & 6.9 & - \\
500 & 88.9 & 53.3 & - & 3.2 & 24.9 & - & 145.0 & 16.2 & - & 24.9 & 26.4 & - & 13.4 & 10.9 & - \\
1000 & 127.9 & 81.9 & - & 44.3 & 35.9 & - & 159.1 & 150.8 & (1) 0.31 & 30.7 & 20.5 & - & 15.6 & 14.2 & - \\
9 & 100 & 170.3 & 157.6 & (2)03.3 & 35.3 & 27.3 & - & 185.7 & 150.3 & (3) 0.54 & 43.1 & 39.0 & - & 25.0 & 19.1 & - \\
200 & 238.9 & 165.1 & (7) 03.4 & 68.4 & 53.1 & - & 315.6 & 170.2 & (1) 0.54 & 80.0 & 96.4 & - & 41.1 & 49.4 & - \\
500 & 263.6 & 140.3 & (2) 03.8 & 126.2 & 120.3 & (1) 0.20 & 357.0 & 170.0 & (1) 0.55 & 100.7 & 79.9 & (1) 0.16 & 58.6 & 49.1 & - \\
1000, 317.7 & 140.4 & (1) 06.4 & - & 20.12 & 155.5 & (1) 0.13 & 366.3 & 198.4 & (2) 0.55 & - & 155.6 & 100.9 & (1) 0.07 & 57.1 & 66.5 & - \\
10 & 100 & 279.1 & 150.7 & (1) 02.0 & 91.8 & 61.7 & - & 486.5 & 233.8 & (25) 0.61 & 196.3 & 137.1 & (0) 0.16 & 133.7 & 152.2 & - \\
200 & 258.5 & 161.2 & 203.3 & 160.9 & 117.9 & - & 344.2 & 361.7 & (2) 0.61 & 238.5 & 179.6 & (0) 1.85 & 183.8 & 133.2 & (0) 10.6 \\
500 & 565.2 & 58.4 & (26) 04.8 & 245.3 & 151.4 & (6) 0.13 & 283.9 & 0.0 & (2) 0.69 & 264.4 & 139.9 & (1) 0.18 & 223.5 & 170.9 & (7) 0.16 \\
100 & 479.0 & 89.4 & (2) 05.3 & 294.1 & 189.2 & (1) 0.18 & 191.0 & 0.0 & (2) 0.72 & 266.0 & 170.8 & (1) 0.26 & 273.6 & 189.6 & (8) 0.19 \\ \end{tabular}
\end{table}
Table 6: Computational performance of the DEF and L-shaped algorithms for the SAA problem of small-sized instances
Figure 5: Solution quality of the one-scenario problem
average, 83% and 78% of those instances are solved optimally with \(D_{imp}\) and \(L_{imp}\), respectively. Additionally, the remaining non-optimal solutions are enhanced as a reduction in optimality gaps is achieved.
Another drastic improvement is provided by our cut selection strategy. Comparing \(L_{imp}\) and \(L_{imp-cs}\) in Table 6 shows that the mean and standard deviation of the computational time, and optimality gap are reduced by 35%, 33%, and 18%, on average. Furthermore, an optimal solution is found by \(L_{imp-cs}\) for 56% of the instances that could not be solved optimally within the time limit by \(L_{imp}\).
Finally, we compare \(D_{imp}\) and \(L_{imp-cs}\). Observe from Table 6 that \(L_{imp-cs}\) resulted in lower mean computational times by 31%, 59%, 45%, and 1% for instances with 7, 8, 9, and 10 vehicles, respectively, and by 15%, 28%, 38%, and 45% for instances with 100, 200, 500, and 1000 scenarios, respectively. In the same order, the variance decreased by 56%, 58%, 38%, and -52% for instances with 7, 8, 9, and 10 vehicles, respectively, and by -1%, 18%, 41%, and 42% for instances with 100, 200, 500, and 1000 scenarios, respectively. We conclude that our L-shaped algorithm outperforms the DEF for instances with up to 9 vehicles, and the two provide comparable results for instances with 10 vehicles. Additionally, the superiority of the L-shaped algorithm over the DEF grows as the number of scenarios increases.
### Heuristic Solution Approach
In this section, we present results for the solution quality and computational performance of the TS algorithm. The solution quality is evaluated by employing the MRP scheme, explained in Section 4.3. We also assess the computational performance of the TS algorithm in various aspects.
We set the operator selection probabilities (weights) based on our preliminary experiments. The weights of swap, forward insertion, backward insertion, and inversion are 0.45, 0.10, 0.15, 0.30, respectively. We set the time limit for the one-scenario problem to 10 seconds, i.e., \(\tau_{one}=10\) seconds, which leaves \(\tau_{full}=590\) seconds. Finally, based on the results of the quality assessment, we set the sample size \(N\) to 1000 for the computational performance assessments.
#### 5.3.1 Solution Quality
We solved the SAA problem of large-sized instances using the TS algorithm. To assess the solution quality, we then used the proposed MRP integrated SAA scheme, given in Algorithm 2, with the number of replications \(M=30\), MRP sample size \(N=20000\), \(\alpha=0.05\) (95% confidence interval), \(\epsilon=0.05\), and the list of sample sizes \(N_{list}=\{1000,2500,5000\}\). Table 7 reports key results on the performance of the MRP integrated SAA scheme. While the maximum optimality gap is 3.7%, the average optimality gap is 0.28%, which indicates that solving the SAA problem with the proposed TS algorithm can produce high-quality solutions. Figure 6 further shows that the optimality gap for most of the solutions is less than 1%, with only five outliers out of 90 instances.
\begin{table}
\begin{tabular}{l l l} Statistical Lower & Estimated Objective & Optimality \\ Bound \(\mathbb{E}[z_{N}]\) & Value \(\mathbb{E}[Q(\hat{x}_{N},\xi_{\omega})]\) & Gap \(\hat{G}_{N}\) \\ \hline
239.59 & 240.26 & 0.67 (0.28\%) \\ \end{tabular}
\end{table}
Table 7: Solution quality of the MRP integrated SAA
Moreover, we assess the solution quality of the one-scenario problem over the large-sized instances using the same procedure in order to evaluate its robustness. Observe from Figure 7 that the average optimality gap is \(24.8\%\), the maximum optimality gap is \(76.5\%\), and the standard deviation is \(10.8\%\). Comparing the performance of the SAA and one-scenario problems demonstrates that we can generate robust solutions by considering vehicle failures. Accordingly, we confirm with the industry-sized instances that we can reduce work overloads by more than \(20\%\) by considering stochastic car failures and solving the corresponding problem efficiently.
#### 5.3.2 Computational Performance
To assess the computational performance of the TS algorithm, we conducted four tests: 1) we compared the solution found by the TS algorithm with the solution found by an off-the-shelf solver for the one-scenario problem of medium-sized instances, 2) we compared the solution found by the TS algorithm with the optimal solution found by the \(L_{imp-cs}\) approach for the SAA problem of small-sized instances, 3) we compared the solution found by the TS algorithm with that of a simulated annealing (SA) algorithm for the SAA problem of large-sized instances, and 4) we analyzed the convergence of the TS algorithm for the one-scenario and SAA problems of large-sized instances. We executed 30 runs for each instance and test.
Table 8 reports the results of the first set of computational experiments, for the one-scenario problem. We note that we generated 30 instances that could be solved within a three-hour time limit with Gurobi (solving the DEF). The minimum, average, maximum, and standard deviation of the computational time (in seconds) are shown for the TS algorithm and Gurobi. Additionally, the average number of movements is reported for the TS algorithm in order to provide some insight into the efficiency of the implementation. The TS algorithm found the optimal solutions, for all 30 instances and all 30 runs, in under 10 seconds. The average computational times are \(1140\) and \(0.33\) seconds for the Gurobi solver and the TS algorithm, respectively. These results show that the proposed TS algorithm can consistently provide optimal solutions to the one-scenario problem in a very short amount of time while avoiding local optima.
Recall that, as demonstrated in Section 5.2.2, \(L_{imp-cs}\) outperforms the DEF in solving the
Figure 6: Solution quality of the SAA problem
Figure 7: Solution quality of one-scenario problem
SAA problem for small-sized instances. In the second set of experiments, we compared the computational effectiveness of solving the SAA problem of small-sized instances with the TS algorithm and \(L_{imp-cs}\). To this end, we chose a sample size of 1000 and solved 30 small-sized instances optimally using \(L_{imp-cs}\). We then solved the same set of instances using the TS algorithm until either an optimal solution was obtained or the time limit was reached. Table 9 shows that the TS algorithm found optimal solutions in 83% of the experiments, 746 out of 900. The average optimality gap of the remaining non-optimal solutions is 0.14%, indicating that the TS algorithm is reliable in terms of optimality. For the TS algorithm, we recorded the computational time as the time until the last improvement, which shows that the TS algorithm is very efficient, with an average runtime of 0.08 seconds.
We observed that the TS algorithm often terminated at a local optimum when the number of vehicles was very small. Our hypothesis is that there are plateaus of solutions with the same objective value when the number of vehicles is large. However, this is not the case when there are only 10 vehicles, as there are a limited number of sequences and each sequence has a different objective function value. Hence, in the third set of experiments, we compared the TS algorithm with an SA algorithm on large-sized instances to analyze the impact of the tabu list and of accepting worse solutions on escaping local optima. We used the same local search algorithm for the SA with two differences: 1) we disabled the tabu rules, and 2) we enabled accepting worse solutions based on the acceptance criterion. For the SA algorithm, we set the starting temperature \(T_{init}=10\). We also adopted geometric cooling with cooling constant \(\alpha=0.999\). The acceptance probability is calculated using the Boltzmann distribution \(P(\Delta f,T)=e^{-\frac{f(x^{\prime})-f(x)}{T}}\), where \(x^{\prime}\) is the new solution, \(x\) is the incumbent solution, and \(T\) is the current temperature.
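The SA acceptance rule and geometric cooling above can be sketched as follows (a minimal illustration of the criterion; names are ours):

```python
import math
import random

def sa_accept(f_new, f_cur, T, rng):
    """Boltzmann acceptance: always accept non-worse moves; accept a
    worse move with probability exp(-(f(x') - f(x)) / T)."""
    if f_new <= f_cur:
        return True
    return rng.random() < math.exp(-(f_new - f_cur) / T)

def cool(T, alpha=0.999):
    """Geometric cooling schedule: T <- alpha * T after each move."""
    return alpha * T
```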
We note that the Kruskal-Wallis test shows no significant difference between the computational performance of the TS and SA algorithms at a 95% confidence level. However, as illustrated in Figure 8, the TS algorithm produces better results and converges faster than the SA algorithm (averaged over 900 runs). This result shows that the proposed TS algorithm is capable of exploiting the search space while generally avoiding premature convergence to local optima. Accordingly, we conclude that there is no need to accept worse solutions in the local search.
Finally, in the last set of experiments, we conducted an analysis to provide insight into the reliability of the proposed TS algorithm's convergence. In particular, in Figure 9, we present box plots of the standard deviation of the objective values for the one-scenario
\begin{table}
\begin{tabular}{l c c c c c} & \multicolumn{4}{c}{Time (s)} & \multicolumn{1}{c}{Move (\#)} \\ \cline{2-6} Method & Min & Mean & Max & Std. Dev. & Mean \\ \hline Gurobi & 2.37 & 1140.91 & 9373.08 & 2613.95 & - \\ TS & 6e-4 & 0.33 & 9.44 & 0.81 & 16940 \\ \end{tabular}
\end{table}
Table 8: Computational performance of Gurobi and TS for the one-scenario problem of medium-sized instances
\begin{table}
\begin{tabular}{l c c c c c c c c c} & \multicolumn{4}{c}{Time (s)} & \multicolumn{3}{c}{Optimality Gap (\%)} & \multicolumn{1}{c}{Optimal (\%)} & \multicolumn{1}{c}{Move (\#)} \\ \cline{2-10} Method & Min & Mean & Max & Std. Dev. & Mean & Max & Std. Dev. & & Mean \\ \hline Gurobi & 5.02 & 68.91 & 210.06 & 50.99 & 0 & 0 & 0 & 100 & - \\ TS & 1e-4 & 0.08 & 0.28 & 0.08 & 0.14 & 2.79 & 0.41 & 83 & 628 \\ \end{tabular}
\end{table}
Table 9: Computational performance of Gurobi and TS for the SAA problem of small-sized instances
and SAA problems of large-sized instances. Each data point represents the standard deviation of 30 runs (for each of the 90 large-sized instances). The average standard deviations for the one-scenario and SAA problems are 0.19 and 0.93, while the means of the objective values are 212.77 and 239.69, respectively. Accordingly, the average coefficients of variation, i.e., the ratios between the average standard deviation and the mean, are 9e-4 and 4e-3, which indicates that the proposed TS algorithm provides highly reliable results for both problems.
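As a quick sanity check, the reported coefficients of variation follow directly from the figures above:

```python
# Coefficient of variation = average standard deviation / mean objective value,
# computed from the figures reported for the large-sized instances.
cv_one_scenario = 0.19 / 212.77   # one-scenario problem
cv_saa = 0.93 / 239.69            # SAA problem

# These evaluate to roughly 8.9e-4 and 3.9e-3, matching the reported
# orders of magnitude (9e-4 and 4e-3).
```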
## 6 Conclusion
This paper studied the mixed-model sequencing (MMS) problem with stochastic failures. To the best of our knowledge, this is the first study that considers stochastic failures of products in MMS. The products (vehicles) fail according to various characteristics and are then removed from the sequence, moving succeeding vehicles forward to close the gap. Vehicle failures may cause extra work overloads that can be prevented by generating a robust sequence at the beginning. Accordingly, we formulated the problem as a two-stage stochastic program and presented improvements for the second-stage problem. We employed the sample average approximation approach to tackle the exponential number of scenarios. We developed L-shaped decomposition-based algorithms to solve small-sized instances. The numerical experiments showed that the L-shaped algorithm outperforms the deterministic equivalent formulation, solved with an off-the-shelf solver, in terms of both solution quality and computational time. To solve industry-sized instances efficiently, we developed a greedy heuristic and a tabu search algorithm that is accelerated with problem-specific tabu rules. Numerical results showed that we can provide good-quality solutions, with less than a 5% statistical optimality gap, to industry-sized instances in under ten minutes. The numerical experiments also indicated that we can generate good-quality robust solutions by utilizing a sample of scenarios. In particular, we can reduce the work overload by more than 20%, for both small- and large-sized instances, by considering possible car failures.

Figure 8: Convergence comparison of the TS and SA algorithms

Figure 9: Convergence of the objective value with TS algorithm for the one-scenario and SAA problems
### Managerial Insights
Car manufacturers are facing several challenges due to the increasing ratio of EVs in production. EVs differ significantly from non-EVs, which requires specific treatment when creating the sequence for the mixed-model assembly line. One of the main challenges is the battery installation process. Unlike traditional vehicles, EVs have large and heavy batteries that need to be installed in a specific order to avoid damaging the vehicle or the battery itself. This difference causes a large processing time variation at the battery loading station, which must be handled to ensure that the assembly line can continue to produce high-quality vehicles efficiently.
We have observed that consecutive EVs induce a significant amount of work overload, which generally requires line stoppage even with the help of utility workers. Planning the sequence by ruling out back-to-back EVs does not guarantee that there will not be any occurrence of consecutive EVs. The failure of vehicles disrupts the planned sequence, and the necessity of considering car failures during the planning process increases as the difference between product types expands.
In this study, we focused on generating robust schedules that take into account the possible deleterious effects resulting from the divergence between electric and non-electric vehicles. However, it is worth noting that our proposed solution methodologies are equally applicable to any feature that causes similar variations at specific work stations.
### Future Research
One direction for future research is the reinsertion of failed vehicles back into the sequence. Even though the reinsertion process is conducted via a real-time decision, a robust approach may increase the efficiency of production. Another direction for future research is to include stochastic processing times in addition to stochastic product failures. This may potentially generate more robust schedules, particularly if a connection between failures and processing times is observed. Finally, there are similarities between MMS and some variants of the traveling salesman problem (TSP). Since the TSP is one of the most studied combinatorial optimization problems, the state-of-the-art solution methodologies presented for the TSP may be adapted to MMS.
## Acknowledgements
The authors acknowledge Clemson University for the generous allotment of compute time on the Palmetto cluster.
## Appendix A Tabu List for Local Search Algorithm
Assume that two positions \(t_{1},t_{2}\) with \(t_{1}<t_{2}\) are selected for an operator to be applied. The tabu movements for each operator are described below under two cases: the vehicle at the position \(t_{1}\) is 1) an EV, 2) not an EV.
\(\underline{\textbf{Swap}}\)
1) The position \(t_{2}\) cannot be a neighbor of an EV, i.e., the vehicles at the positions \(t_{2}-1\) and \(t_{2}+1\) cannot be EVs.
2) The vehicle at the position \(t_{2}\) cannot be an EV if the position \(t_{1}\) is a neighbor of an EV.
\(\underline{\textbf{Forward Insertion}}\)
1) Neither of the vehicles at the positions \(t_{2}\) and \(t_{2}-1\) can be an EV.
\(\underline{\textbf{Backward Insertion}}\)
1) Neither of the vehicles at the positions \(t_{1}\) and \(t_{1}-1\) can be an EV.
2) At most one of the vehicles at the positions \(t_{2}-1\) and \(t_{2}+1\) can be an EV.
\(\underline{\textbf{Inversion}}\)
1) The position \(t_{2}\) cannot be a left neighbor of an EV, i.e., the vehicle at the position \(t_{2}+1\) cannot be an EV.
2) If the vehicle at the position \(t_{1}\) is a right neighbor of an EV, then the vehicle at the position \(t_{2}\) cannot be an EV.
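As an illustration, the two Swap cases above can be encoded as a single predicate. This is a sketch under our own assumptions: the sequence is represented as a list of booleans marking EV positions, and the helper name `is_tabu_swap` is hypothetical, not from the implementation used in the experiments.

```python
def is_tabu_swap(is_ev, t1, t2):
    """Return True if swapping positions t1 < t2 is forbidden by the
    EV-based tabu rules for the Swap operator."""
    def ev_at(t):
        # Out-of-range neighbors count as non-EV.
        return 0 <= t < len(is_ev) and is_ev[t]

    if is_ev[t1]:
        # Case 1: t1 holds an EV, so t2 may not be adjacent to an EV.
        return ev_at(t2 - 1) or ev_at(t2 + 1)
    # Case 2: t1 holds a non-EV, so the vehicle at t2 may not be an EV
    # when position t1 is adjacent to an EV.
    return is_ev[t2] and (ev_at(t1 - 1) or ev_at(t1 + 1))

# Example sequence with EVs at positions 0 and 4.
seq = [True, False, False, False, True, False]
```

The predicates for the insertion and inversion operators follow the same pattern.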
# Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models

Deqing Fu, Tian-Qi Chen, Robin Jia, Vatsal Sharan

2023-10-26. http://arxiv.org/abs/2310.17086v2
###### Abstract
Transformers are remarkably good at _in-context learning_ (ICL)--learning from demonstrations without parameter updates--but how they perform ICL remains a mystery. Recent work suggests that Transformers may learn in-context by internally running Gradient Descent, a first-order optimization method. In this paper, we instead demonstrate that Transformers learn to implement higher-order optimization methods to perform ICL. Focusing on in-context linear regression, we show that Transformers learn to implement an algorithm very similar to _Iterative Newton's Method_, a higher-order optimization method, rather than Gradient Descent. Empirically, we show that predictions from successive Transformer layers closely match different iterations of Newton's Method _linearly_, with each middle layer roughly computing 3 iterations. In contrast, _exponentially_ more Gradient Descent steps are needed to match an additional Transformer layer; this suggests that Transformers have a rate of convergence comparable to higher-order methods such as Iterative Newton, which are exponentially faster than Gradient Descent. We also show that Transformers can learn in-context on ill-conditioned data, a setting where Gradient Descent struggles but Iterative Newton succeeds. Finally, we show theoretical results which support our empirical findings and have a close correspondence with them: we prove that Transformers can implement \(k\) iterations of Newton's method with \(\mathcal{O}(k)\) layers.
## 1 Introduction
Transformer neural networks (Vaswani et al., 2017) have become the default architecture for natural language processing (Devlin et al., 2019, Brown et al., 2020, OpenAI, 2023), and have even been adopted by other areas like computer vision (Dosovitskiy et al., 2021). As first demonstrated by GPT-3 (Brown et al., 2020), Transformers excel at _in-context learning_ (ICL)--learning from input-output pairs provided as inputs to the model, without updating their model parameters. Through in-context learning, Transformer-based Large Language Models (LLMs) can achieve state-of-the-art few-shot performance across a wide variety of downstream tasks (Rae et al., 2022, Smith et al., 2022, Thoppilan et al., 2022, Chowdhery et al., 2022).
Given the importance of Transformers and ICL, many prior efforts have attempted to understand how Transformers perform in-context learning. Prior work suggests Transformers learn classification similar to support vector machines (Tarzanagh et al., 2023, 2022) and can approximate linear functions well in-context (Garg et al., 2022). Specifically to linear regression tasks, prior work has tried to understand the ICL mechanism and the dominant hypothesis is that Transformers learn in-context by running optimizations internally through gradient-based algorithms (von Oswald et al., 2022, 2023, Ahn et al., 2023, Dai et al., 2023).
This paper presents strong evidence for a competing hypothesis: Transformers trained to perform in-context linear regression learn to implement a higher-order optimization method rather than a first-order method like Gradient Descent. In particular, Transformers implement a method very similar to Newton-Schulz's Method, also known as the _Iterative Newton's Method_, which iteratively improves an estimate of the inverse of the design matrix to compute the optimal weight vector. Across many layers of the Transformer, subsequent layers approximately compute more and more iterations of Newton's Method, with increasingly better predictions; both eventually converge to the optimal minimum-norm solution found by ordinary least squares (OLS). Interestingly, this mechanism is specific to Transformers: LSTMs do not learn these same higher-order methods, as their predictions do not even improve across layers.
We present both empirical and theoretical evidence for our claims. Empirically, Transformer induced weights and residuals are similar to Iterative Newton and deeper layers match Newton with more iterations (see Figures 1
and 9). Transformers can also handle ill-conditioned problems without requiring significantly more layers, where GD would suffer from slow convergence but Iterative Newton would not. Crucially, Transformers share the same rate of convergence as Iterative Newton and are exponentially faster than Gradient Descent. Theoretically, we show that Transformer circuits can efficiently implement Iterative Newton, with the number of layers depending linearly on the number of iterations and the dimensionality of the hidden states depending linearly on the dimensionality of the data. Overall, our work provides a mechanistic account of how Transformers perform in-context learning that not only explains model behavior better than previous hypotheses, but also hints at what makes Transformers so well-suited for ICL compared with other neural architectures.
## 2 Related Work
In-context learning by large language models. GPT-3 (Brown et al., 2020) first showed that Transformer-based large language models can "learn" to perform new tasks from in-context demonstrations (i.e., input-output pairs). Since then, a large body of work in NLP has studied in-context learning, for instance by understanding how the choice and order of demonstrations affects results (Lu et al., 2022; Liu et al., 2022; Rubin et al., 2022; Su et al., 2023; Chang and Jia, 2023; Nguyen and Wong, 2023), studying the effect of label noise (Min et al., 2022; Yoo et al., 2022; Wei et al., 2023), and proposing methods to improve ICL accuracy (Zhao et al., 2021; Min et al., 2022; Wu et al., 2022; Wu et al., 2022).
In-context learning beyond natural language. Inspired by the phenomenon of ICL by large language models, subsequent work has studied how Transformers learn in-context beyond NLP tasks. Garg et al. (2022) first investigated Transformers' ICL abilities for various classical machine learning problems, including linear regression. We largely adopt their linear regression setup in this work. Li et al. (2023) formalize in-context learning as an algorithm learning problem where a Transformer model implicitly constructs a hypothesis function at inference time, and obtain generalization bounds for ICL. Han et al. (2023) suggest that Transformers learn in-context by performing Bayesian inference on prompts, which can be asymptotically interpreted as kernel regression. Tarzanagh et al. (2023) and Tarzanagh et al. (2023) show that Transformers can find max-margin solutions for classification tasks and act as support vector machines. Zhang et al. (2023) prove that a linear attention Transformer trained by gradient flow can indeed in-context learn a class of linear models. Raventos et al. (2023) explore how diverse pretraining data can enable models to perform ICL on new tasks.
Do Transformers implement Gradient Descent? A growing body of work has suggested that Transformers learn in-context by implementing gradient descent within their internal representations. Akyurek et al. (2022) summarize
Figure 1: **Progression of Algorithms. (a) Transformer’s performance improves over the layer index \(\ell\). (b) Iterative Newton’s performance improves over the number of iterations \(t\), in a way that closely resembles the Transformer. We plot the best-matching \(t\) to Transformer’s \(\ell\) following Definition 4. (c) In contrast, LSTM’s performance does not improve from layer to layer.**
operations that Transformers can implement, such as multiplication and affine transformations, and show that Transformers can implement gradient descent for linear regression using these operations. Concurrently, von Oswald et al. (2022) argue that Transformers learn in-context via gradient descent, where one layer performs one gradient update. In subsequent work, von Oswald et al. (2023) further argue that Transformers are strongly biased towards learning to implement gradient-based optimization routines. Ahn et al. (2023) extend the work of von Oswald et al. (2022) by showing Transformers can learn to implement preconditioned Gradient Descent, where the pre-conditioner can adapt to the data. Bai et al. (2023) provide detailed constructions for how Transformers can implement a range of learning algorithms via gradient descent. Finally, Dai et al. (2023) conduct experiments on NLP tasks and conclude that Transformer-based language models performing ICL behave similarly to models fine-tuned via gradient descent; however, concurrent work (Shen et al., 2023) argues that real-world LLMs do not perform ICL via gradient descent. In this paper, we argue that Transformers actually learn to perform in-context learning by implementing a higher-order optimization method, not gradient descent. Predictions made by different Transformer layers match iterations of higher-order optimization methods better than they match iterations of gradient descent; moreover, Transformers can handle ill-conditioned data, unlike Gradient Descent.
Mechanistic interpretability for Transformers. Our work attempts to understand the mechanism through which Transformers perform in-context learning. Prior work has studied other aspects of Transformers' internal mechanisms, including reverse-engineering language models (Wang et al., 2022), the grokking phenomenon (Power et al., 2022; Nanda et al., 2023), manipulating attention maps (Hassid et al., 2022), and automated circuit finding (Conmy et al., 2023).
## 3 Problem Setup
In this paper, we focus on the following linear regression task. The task involves \(n\) examples \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and \(y_{i}\in\mathbb{R}\). The examples are generated from the following data generating distribution \(P_{\mathcal{D}}\), parameterized by a distribution \(\mathcal{D}\) over \((d\times d)\) positive semi-definite matrices. For each sequence of \(n\) in-context examples, we first sample a ground-truth weight vector \(\mathbf{w}^{\star}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\mathcal{N}( \mathbf{0},\mathbf{I})\in\mathbb{R}^{d}\) and a matrix \(\mathbf{\Sigma}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\mathcal{D}\). For \(i\in[n]\), we sample each \(\mathbf{x}_{i}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\). The label \(y_{i}\) for each \(\mathbf{x}_{i}\) is given by \(y_{i}=\mathbf{w}^{\star\top}\mathbf{x}_{i}\). Note that for much of our experiments \(\mathcal{D}\) is only supported on the identity matrix \(\mathbf{I}\in\mathbb{R}^{d\times d}\) and hence \(\mathbf{\Sigma}=\mathbf{I}\), but we also consider some distributions over ill-conditioned matrices which will give rise to ill-conditioned regression problems.
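The data-generating process \(P_{\mathcal{D}}\) above can be sketched as follows (our own minimal implementation; the function name `sample_prompt` is not from the paper):

```python
import numpy as np

def sample_prompt(n, d, Sigma=None, seed=0):
    """Sample one prompt from P_D: w* ~ N(0, I_d), x_i ~ N(0, Sigma),
    and noiseless labels y_i = <w*, x_i>."""
    rng = np.random.default_rng(seed)
    Sigma = np.eye(d) if Sigma is None else Sigma  # isotropic case Sigma = I
    w_star = rng.standard_normal(d)
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    y = X @ w_star
    return X, y, w_star

X, y, w_star = sample_prompt(n=40, d=20)
```

Passing an ill-conditioned `Sigma` reproduces the ill-conditioned regression setting discussed later.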
### Standard Methods for Solving Linear Regression
Our central research question is:
_Does the algorithm Transformers learn for linear regression resemble any known algorithm?_
To investigate this question, we first discuss various known algorithms for linear regression. We then compare them with Transformers empirically in §4 and theoretically in §5. For iterative algorithms, we care particularly about their rates of convergence (the number of steps required to reach \(\epsilon\) error).
For any time step \(t\), let \(\mathbf{X}^{(t)}=\begin{bmatrix}\mathbf{x}_{1}&\cdots&\mathbf{x}_{t}\end{bmatrix}^{\top}\) be the data matrix and \(\mathbf{y}^{(t)}=\begin{bmatrix}y_{1}&\cdots&y_{t}\end{bmatrix}^{\top}\) be the labels for all the datapoints seen so far. Note that since \(t\) can be smaller than the data dimension \(d\), \(\mathbf{X}^{(t)}\) can be singular. We now consider various algorithms for making predictions for \(\mathbf{x}_{t+1}\) based on \(\mathbf{X}^{(t)}\) and \(\mathbf{y}^{(t)}\). When it is clear from context, we will drop the superscript and refer to \(\mathbf{X}^{(t)}\) and \(\mathbf{y}^{(t)}\) as \(\mathbf{X}\) and \(\mathbf{y}\), where \(\mathbf{X}\) and \(\mathbf{y}\) correspond to all the datapoints seen so far.
Figure 2: Illustration of how Transformers are trained to do in-context linear regression.
Ordinary Least Squares. This method finds the minimum-norm solution to the objective:
\[\mathcal{L}(\mathbf{w}\mid\mathbf{X},\mathbf{y})=\frac{1}{2n}\|\mathbf{y}-\mathbf{X}\mathbf{w}\|_{2}^{2}. \tag{1}\]
The Ordinary Least Squares (OLS) solution has a closed form given by the Normal Equations:
\[\hat{\mathbf{w}}^{\rm{OLS}}=(\mathbf{X}^{\top}\mathbf{X})^{\dagger}\mathbf{X}^{\top}\mathbf{y} \tag{2}\]
where \(\mathbf{S}:=\mathbf{X}^{\top}\mathbf{X}\) and \(\mathbf{S}^{\dagger}\) is the pseudo-inverse (Moore, 1920) of \(\mathbf{S}\).
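A minimal sketch of Eq. (2), which is well-defined even in the underdetermined regime \(t<d\) where \(\mathbf{S}\) is singular:

```python
import numpy as np

def ols_min_norm(X, y):
    """Minimum-norm OLS solution w = (X^T X)^+ X^T y via the pseudo-inverse."""
    S = X.T @ X
    return np.linalg.pinv(S) @ (X.T @ y)

rng = np.random.default_rng(0)
t, d = 3, 5                        # fewer examples than dimensions
X = rng.standard_normal((t, d))
y = X @ rng.standard_normal(d)
w_hat = ols_min_norm(X, y)
```

Even though \(\mathbf{S}\) has rank at most 3 here, the pseudo-inverse yields the minimum-norm weight vector that interpolates all observed examples.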
Gradient Descent. Gradient descent (GD) finds the weight vector \(\hat{\mathbf{w}}^{\rm{GD}}\) starting from a randomly initialized \(\hat{\mathbf{w}}^{\rm{GD}}_{0}\) and using the iterative update rule:
\[\hat{\mathbf{w}}^{\rm{GD}}_{k+1}=\hat{\mathbf{w}}^{\rm{GD}}_{k}-\eta\nabla_{\mathbf{w}} \mathcal{L}(\hat{\mathbf{w}}^{\rm{GD}}_{k}\mid\mathbf{X},\mathbf{y}). \tag{3}\]
It is known that Gradient Descent requires \(\mathcal{O}\left(\kappa(\mathbf{S})\log(1/\epsilon)\right)\) steps to converge to an \(\epsilon\) error where \(\kappa(\mathbf{S})=\frac{\lambda_{\max}(\mathbf{S})}{\lambda_{\min}(\mathbf{S})}\) is the _condition number_. Thus, when \(\kappa(\mathbf{S})\) is large, Gradient Descent converges slowly (Boyd and Vandenberghe, 2004).
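A sketch of the update in Eq. (3), using the fixed step size \(\eta=1/\lambda_{\max}(\mathbf{S})\) (one safe choice; the paper does not fix a particular \(\eta\)) and starting from \(\hat{\mathbf{w}}_{0}=\mathbf{0}\) rather than a random initialization for simplicity:

```python
import numpy as np

def gd_linear_regression(X, y, steps):
    """Full-batch gradient descent on L(w) = ||y - Xw||^2 / (2n)."""
    n, d = X.shape
    S = X.T @ X / n
    lr = 1.0 / np.linalg.eigvalsh(S).max()   # eta = 1 / lambda_max(S)
    w = np.zeros(d)
    for _ in range(steps):
        grad = S @ w - X.T @ y / n           # gradient of the objective
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
w_star = rng.standard_normal(5)
y = X @ w_star
w_gd = gd_linear_regression(X, y, steps=500)
```

On this well-conditioned instance the iterates converge geometrically to \(\mathbf{w}^{\star}\); for ill-conditioned \(\mathbf{S}\), the \(\mathcal{O}(\kappa(\mathbf{S})\log(1/\epsilon))\) rate makes the same loop far slower.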
Online Gradient Descent. While GD computes the gradient with respect to the full data matrix \(\mathbf{X}\) at each iteration, Online Gradient Descent (OGD) is an online algorithm that only computes gradients on the newly received data point \(\{\mathbf{x}_{k},y_{k}\}\) at step \(k\):
\[\hat{\mathbf{w}}^{\rm{OGD}}_{k+1}=\hat{\mathbf{w}}^{\rm{OGD}}_{k}-\eta_{k}\nabla_{\bm {w}}\mathcal{L}(\hat{\mathbf{w}}^{\rm{OGD}}_{k}\mid\mathbf{x}_{k},y_{k}). \tag{4}\]
Picking \(\eta_{k}=\frac{1}{\|\mathbf{x}_{k}\|_{2}^{2}}\) ensures that the new weight vector \(\hat{\mathbf{w}}^{\rm{OGD}}_{k+1}\) makes zero error on \(\{\mathbf{x}_{k},y_{k}\}\).
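The normalized step size makes each OGD update interpolate the newest example exactly, as a one-line check confirms (a minimal sketch):

```python
import numpy as np

def ogd_step(w, x, y):
    """One OGD update on the newest example with eta_k = 1 / ||x||^2."""
    eta = 1.0 / (x @ x)
    # Gradient of (1/2)(w.x - y)^2 with respect to w is (w.x - y) x.
    return w - eta * (w @ x - y) * x

rng = np.random.default_rng(2)
w = np.zeros(3)
x, target = rng.standard_normal(3), 1.5
w_new = ogd_step(w, x, target)   # w_new now satisfies w_new . x = target
```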
Iterative Newton's Method. This method finds the weight vector \(\hat{\mathbf{w}}^{\rm{Newton}}\) by iteratively applying Newton's method to find the pseudo-inverse of \(\mathbf{S}=\mathbf{X}^{\top}\mathbf{X}\)(Schulz, 1933; Ben-Israel, 1965):
\[\mathbf{M}_{0}=\alpha\mathbf{S}\text{, where }\alpha=\frac{2}{\|\mathbf{S} \mathbf{S}^{\top}\|_{2}},\qquad\hat{\mathbf{w}}^{\rm{Newton}}_{0}=\mathbf{M}_{0}\mathbf{X}^{ \top}\mathbf{y}, \tag{5}\] \[\mathbf{M}_{k+1}=2\mathbf{M}_{k}-\mathbf{M}_{k}\mathbf{S}\mathbf{M}_{k},\qquad\qquad \hat{\mathbf{w}}^{\rm{Newton}}_{k+1}=\mathbf{M}_{k+1}\mathbf{X}^{\top}\mathbf{y}.\]
This computes an approximation of the pseudo-inverse using the moments of \(\mathbf{S}\). In contrast to GD, the Iterative Newton's method only requires \(\mathcal{O}(\log\kappa(\mathbf{S})+\log\log(1/\epsilon))\) steps to converge to an \(\epsilon\) error (Soderstrom and Stewart, 1974; Pan and Schreiber, 1991). Note that this is exponentially faster than the convergence rate of Gradient Descent.
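A sketch of Eq. (5). One caveat: we initialize with \(\alpha=1/\|\mathbf{S}\|_{2}^{2}\), a slightly more conservative choice than the \(\alpha\) in Eq. (5), which keeps \(\alpha\lambda^{2}\) strictly inside the convergence region \((0,2)\) for every eigenvalue \(\lambda\) of \(\mathbf{S}\):

```python
import numpy as np

def iterative_newton(X, y, iters):
    """Newton-Schulz iteration M_{k+1} = 2 M_k - M_k S M_k for the
    pseudo-inverse of S = X^T X, with w_k = M_k X^T y."""
    S = X.T @ X
    alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # conservative initialization
    M = alpha * S
    for _ in range(iters):
        M = 2 * M - M @ S @ M
    return M @ (X.T @ y)

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4))
w_star = rng.standard_normal(4)
y = X @ w_star
w_newton = iterative_newton(X, y, iters=20)
```

The error contracts quadratically, so a few dozen iterations already match the OLS solution to high precision on a well-conditioned instance like this one.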
### Solving Linear Regression with Transformers
We will use neural network models such as Transformers to solve this linear regression task. As shown in Figure 2, at time step \(t+1\), the model sees the first \(t\) in-context examples \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t}\), and then makes predictions for \(\mathbf{x}_{t+1}\), whose label \(y_{t+1}\) is not observed by the Transformers model.
We randomly initialize our models and then train them on the linear regression task to make predictions for every number of in-context examples \(t\), where \(t\in[n]\). Training and test data are both drawn from \(P_{\mathcal{D}}\). To make the input prompts contain both \(\mathbf{x}_{i}\) and \(y_{i}\), we follow the same setup as Garg et al. (2022) and zero-pad the \(y_{i}\)'s, and use the same decoder-only GPT-2 model (Radford et al., 2019) with softmax activation and causal attention mask (discussed later in Definition 1).
We now present the key mathematical details for the Transformer architecture, and how they can be used for in-context learning. First, the causal attention mask enforces that attention heads can only attend to hidden states of previous time steps, and is defined as follows.
**Definition 1** (Causal Attention Layer).: _A **causal** attention layer with \(M\) heads and activation function \(\sigma\) is denoted as \(\mathrm{Attn}\) on any input sequence \(\mathbf{H}=\begin{bmatrix}\mathbf{h}_{1},\cdots,\mathbf{h}_{N}\end{bmatrix}\in\mathbb{R}^{D\times N}\), where \(D\) is the dimension of hidden states and \(N\) is the sequence length. In the vector form,_
\[\tilde{\mathbf{h}}_{t}=[\mathrm{Attn}(\mathbf{H})]_{t}=\mathbf{h}_{t}+\sum_{m=1}^{M}\sum_{j =1}^{t}\sigma\left(\langle\mathbf{Q}_{m}\mathbf{h}_{t},\mathbf{K}_{m}\mathbf{h}_{j}\rangle\right) \cdot\mathbf{V}_{m}\mathbf{h}_{j}. \tag{6}\]
Vaswani et al. (2017) originally proposed the Transformer architecture with the Softmax activation function for the attention layers. Later works have found that replacing \(\mathrm{Softmax}(\cdot)\) with \(\frac{1}{4}\mathrm{ReLU}(\cdot)\) does not hurt model performance (Cai et al., 2022; Shen et al., 2023a; Wortsman et al., 2023). The Transformers architecture is defined by putting together attention layers with feed forward layers:
**Definition 2** (Transformers).: _An \(L\)-layer decoder-based transformer with causal attention layers is denoted as \(\mathrm{TF}_{\mathbf{\theta}}\) and is a composition of an MLP layer (with a skip connection) and a causal attention layer. For an input sequence \(\mathbf{H}^{(0)}\), the transformer's \(\ell\)-th hidden layer is given by_
\[\mathrm{TF}_{\mathbf{\theta}}^{\ell}(\mathbf{H}^{(0)}):=\mathbf{H}^{(\ell)} =\mathrm{MLP}_{\mathbf{\theta}_{\mathrm{mlp}}^{(\ell)}}\left(\mathrm{Att}_{\mathbf{ \theta}_{\mathrm{attn}}^{(\ell)}}(\mathbf{H}^{(\ell-1)})\right) \tag{7}\]
_where \(\mathbf{\theta}=\{\mathbf{\theta}_{\mathrm{mlp}}^{(\ell)},\mathbf{\theta}_{\mathrm{attn} }^{(\ell)}\}_{\ell=1}^{L}\) and \(\mathbf{\theta}_{\mathrm{attn}}^{(\ell)}=\{\mathbf{Q}_{m}^{(\ell)},\mathbf{K}_{m}^{(\ell )},\mathbf{V}_{m}^{(\ell)}\}_{m=1}^{M}\) consists of \(M\) heads at layer \(\ell\)._
In particular for the linear regression task, Transformers perform in-context learning as follows
**Definition 3** (Transformers for Linear Regression).: _Given in-context examples \(\{\mathbf{x}_{1},y_{1},\dots,\mathbf{x}_{t},y_{t}\}\), Transformers make predictions on a query example \(\mathbf{x}_{t+1}\) through a readout layer parameterized as \(\mathbf{\theta}_{\mathrm{readout}}=\{\mathbf{u},v\}\), and the prediction \(\hat{y}_{t+1}^{\mathrm{TF}}\) is given by_
\[\hat{y}_{t+1}^{\mathrm{TF}}:=\mathrm{ReadOut}\Big{[}\underbrace{ \mathrm{TF}_{\mathbf{\theta}}^{L}(\{\mathbf{x}_{1},\mathbf{y}_{1},\dots,\mathbf{x}_{t},\mathbf{y}_{ t},\mathbf{x}_{t+1}\})}_{\mathbf{H}^{(L)}}\Big{]}=\mathbf{u}^{\top}\mathbf{H}_{:,2t+1}^{(L)}+v. \tag{8}\]
To compare the rate of convergence of iterative algorithms to that of Transformers, we treat the layer index \(\ell\) of Transformers as analogous to the iterative step \(k\) of the algorithms discussed in §3.1. Note that for Transformers, we need to re-train the \(\mathrm{ReadOut}\) layer for every layer index \(\ell\) so that the per-layer predictions on the linear regression task can improve progressively (see §4.1 for experimental details).
### Lstm
While our primary goal is to analyze Transformers, we also consider the LSTM architecture (Hochreiter and Schmidhuber, 1997) to understand whether Transformers learn different algorithms than other neural sequence models trained to do linear regression. In particular, we train a unidirectional \(L\)-layer LSTM, which generates a sequence of hidden states \(\mathbf{H}^{(\ell)}\) for each layer \(\ell\), similarly to an \(L\)-layer Transformer. As with Transformers, we add a readout layer that predicts the \(\hat{y}_{t+1}^{\mathrm{LSTM}}\) from the final hidden state at the final layer, \(\mathbf{H}_{:,2t+1}^{(L)}\).
### Measuring Algorithmic Similarity
#### 3.4.1 Metrics
We propose two metrics to measure the similarity between linear regression algorithms.
Similarity of Errors. For a linear regression algorithm \(\mathcal{A}\), let \(\mathcal{A}(\mathbf{x}_{t+1}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})\) denote its prediction on the \((t+1)\)-th in-context example \(\mathbf{x}_{t+1}\) after observing the first \(t\) examples (see Figure 2). We write \(\mathcal{A}(\mathbf{x}_{t+1}):=\mathcal{A}(\mathbf{x}_{t+1}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})\) for brevity. The errors (i.e., residuals) on the data sequence are:1
Footnote 1: The indices run from 2 to \(n+1\) because we evaluate all cases where \(t\) ranges over \(1,\cdots,n\).
\[\mathcal{E}(\mathcal{A}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n+1})=\Big{[}\mathcal{ A}(\mathbf{x}_{2})-y_{2},\cdots,\mathcal{A}(\mathbf{x}_{n+1})-y_{n+1}\Big{]}^{\top} \in\mathbb{R}^{n}. \tag{9}\]
For any two algorithms \(\mathcal{A}_{a}\) and \(\mathcal{A}_{b}\), their similarity of errors, corresponding to the metric \(\mathcal{C}(\cdot,\cdot)\), is
\[\mathrm{SimE}(\mathcal{A}_{a},\mathcal{A}_{b})=\mathop{\mathbb{E}}_{\{\mathbf{x}_{i },y_{i}\}_{i=1}^{n+1}\sim P_{\mathcal{D}}}\ \ \mathcal{C}\Big{(}\mathcal{E}(\mathcal{A}_{a}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n+1} ),\mathcal{E}(\mathcal{A}_{b}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n+1})\Big{)} \tag{10}\]
where we use the cosine similarity as our correlation metric \(\mathcal{C}(\mathbf{u},\mathbf{v})=\frac{\langle\mathbf{u},\mathbf{v}\rangle}{\|\mathbf{u}\|_{2}\| \mathbf{v}\|_{2}}\). Here \(n\) is the total number of in-context examples and \(P_{\mathcal{D}}\) is the data generation process discussed previously.
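A sketch of Eq. (9) and the cosine comparison on a single prompt (the probe algorithms, plain OLS and a ridge-regularized variant, are our own illustrative choices, not the algorithms compared in the paper):

```python
import numpy as np

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def error_vector(predict, X, y):
    """Residuals A(x_{t+1}) - y_{t+1} for t = 1..n, where predict(X_t, y_t, x)
    is any regression algorithm given the first t in-context examples."""
    n = len(y) - 1
    return np.array([predict(X[:t], y[:t], X[t]) - y[t] for t in range(1, n + 1)])

rng = np.random.default_rng(4)
X = rng.standard_normal((21, 5))   # n = 20 in-context predictions
y = X @ rng.standard_normal(5)

ols = lambda Xt, yt, x: (np.linalg.pinv(Xt) @ yt) @ x
ridge = lambda Xt, yt, x: np.linalg.solve(Xt.T @ Xt + 0.1 * np.eye(5), Xt.T @ yt) @ x

sim_e = cosine(error_vector(ols, X, y), error_vector(ridge, X, y))
```

Averaging this cosine over prompts drawn from \(P_{\mathcal{D}}\) gives \(\mathrm{SimE}\).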
Similarity of Induced Weights. All standard algorithms for linear regression estimate a weight vector \(\hat{\mathbf{w}}\). While neural ICL models like Transformers do not explicitly learn such a weight vector, similar to Akyurek et al. (2022), we can _induce_ an implicit weight vector \(\tilde{\mathbf{w}}\) learned by any algorithm \(\mathcal{A}\) by fitting a weight vector to its predictions. To do this, for any fixed sequence of \(t\) in-context examples \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t}\), we sample \(T\gg d\) query examples \(\tilde{\mathbf{x}}_{k}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\), where \(k\in[T]\). For this fixed sequence of in-context examples \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t}\), we create \(T\) in-context prediction tasks and use the algorithm \(\mathcal{A}\) to make predictions \(\mathcal{A}(\tilde{\mathbf{x}}_{k}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})\). We define the induced data matrix and labels as
\[\tilde{\mathbf{X}}=\begin{bmatrix}\tilde{\mathbf{x}}_{1}^{\top}\\ \vdots\\ \tilde{\mathbf{x}}_{T}^{\top}\end{bmatrix}\qquad\tilde{\mathbf{Y}}=\begin{bmatrix} \mathcal{A}(\tilde{\mathbf{x}}_{1}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})\\ \vdots\\ \mathcal{A}(\tilde{\mathbf{x}}_{T}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})\end{bmatrix}. \tag{11}\]
Then, the induced weight vector for \(\mathcal{A}\) and these \(t\) in-context examples is:
\[\tilde{\mathbf{w}}_{t}(\mathcal{A}):=\tilde{\mathbf{w}}_{t}(\mathcal{A} \mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})=(\tilde{\mathbf{X}}^{\top}\tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^{\top}\tilde{\mathbf{Y}}. \tag{12}\]
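Eqs. (11)-(12) amount to probing an algorithm with random queries and regressing its predictions on those queries. A minimal NumPy sketch, assuming \(\mathbf{\Sigma}=\mathbf{I}\) for the query distribution; names are illustrative:

```python
import numpy as np

def induce_weights(alg, X_ctx, y_ctx, d, T=200, seed=0):
    # Eqs. (11)-(12): probe the algorithm on T >> d Gaussian queries and
    # fit a weight vector to its predictions: w~ = (X~'X~)^{-1} X~'Y~.
    rng = np.random.default_rng(seed)
    X_tilde = rng.standard_normal((T, d))           # queries ~ N(0, I)
    Y_tilde = np.array([alg(x, X_ctx, y_ctx) for x in X_tilde])
    w_tilde, *_ = np.linalg.lstsq(X_tilde, Y_tilde, rcond=None)
    return w_tilde
```

If the probed algorithm is exactly linear in its query (as OLS is), the induced vector recovers that linear map; for a Transformer it summarizes whatever function the model computes as its best linear approximation over the query distribution.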
We measure the similarity between two algorithms \(\mathcal{A}_{a}\) and \(\mathcal{A}_{b}\) by measuring the similarity of induced weight vectors \(\tilde{\mathbf{w}}_{t}(\mathcal{A}_{a})\) and \(\tilde{\mathbf{w}}_{t}(\mathcal{A}_{b})\). We define the similarity of induced weights between two algorithms as
\[\mathrm{SimW}(\mathcal{A}_{a},\mathcal{A}_{b})=\operatorname*{ \mathbb{E}}_{\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\sim P_{\mathcal{D}}}\ \ \frac{1}{n}\sum_{t=1}^{n}\mathcal{C}\Big{(}\tilde{\mathbf{w}}_{t}( \mathcal{A}_{a}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t}),\tilde{\mathbf{w}}_{t}( \mathcal{A}_{b}\mid\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t})\Big{)}.\]
#### 3.4.2 Best Matching Hyper-parameters Between Algorithms
Each algorithm we consider has its own hyper-parameter(s), for example the number of iterations for Iterative Newton and Gradient Descent, and the number of layers for Transformers (see Section 4.1). When comparing two algorithms, given a choice of hyper-parameters for the first algorithm, we compare with the hyper-parameters for the second algorithm that maximize algorithmic similarity. In other words, we measure whether there exist hyper-parameters for the second algorithm that make the two algorithms similar.
**Definition 4** (Hyper-Parameter Matching).: _Let \(\mathcal{M}\) be the metric for evaluating similarities between two algorithms \(\mathcal{A}_{a}\) and \(\mathcal{A}_{b}\), which have hyper-parameters \(p_{a}\in\mathcal{P}_{a}\) and \(p_{b}\in\mathcal{P}_{b}\), respectively. For a given choice of \(p_{a}\), we define the best-matching hyper-parameters of algorithm \(\mathcal{A}_{b}\) for \(\mathcal{A}_{a}\) as:_
\[p_{b}^{\mathcal{M}}(p_{a}):=\operatorname*{arg\,min}_{p_{b}\in \mathcal{P}_{b}}\mathcal{M}(\mathcal{A}_{a}(\cdot\mid p_{a}),\mathcal{A}_{b}( \cdot\mid p_{b})). \tag{13}\]
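Eq. (13) is a simple grid search once a dissimilarity between two configured algorithms is available. A sketch, with a toy dissimilarity standing in for something like \(1-\mathrm{SimE}\); all names are illustrative:

```python
def best_matching_hyperparam(dissim, p_a, grid_b):
    # Eq. (13): arg min over B's hyper-parameter grid of a dissimilarity
    # between A(. | p_a) and B(. | p_b); lower means more similar.
    best_p, best_score = None, float("inf")
    for p_b in grid_b:
        score = dissim(p_a, p_b)
        if score < best_score:
            best_p, best_score = p_b, score
    return best_p, best_score
```

For instance, a toy `dissim` in which layer \(\ell\) of one model matches \(3\ell\) steps of another (echoing, purely for illustration, the roughly-3-iterations-per-layer pattern reported later) would be matched to `p_b = 3 * p_a` by this search.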
The matching processes can be visualized as heatmaps, as shown in Figure 3, where the best-matching hyper-parameters are highlighted. This enables us to better compare the rates of convergence of the algorithms, and we will discuss these results further in §4.
## 4 Experimental Evidence
We mainly work with a Transformer-based GPT-2 model with 12 layers and 8 heads per layer. Alternative configurations with fewer heads per layer also support our findings, and we defer them to Appendix A. We initially focus on isotropic cases where \(\mathbf{\Sigma}=\mathbf{I}\) and later consider ill-conditioned \(\mathbf{\Sigma}\) in §4.3. Our training setup is exactly the same as Garg et al. (2022): models are trained with at most \(n=40\) in-context examples for \(d=20\) (with the same learning rate, batch size, etc.).
We claim that Transformers learn higher-order optimization methods in-context. We provide evidence that Transformers improve themselves with more layers in §4.1; Transformers share the same rate of convergence as Iterative Newton, which is exponentially faster than that of Gradient Descent, in §4.2; and they also perform well on ill-conditioned problems in §4.3. Finally, we contrast Transformers with LSTMs in §4.4.
### Transformers improve progressively over layers
Many known algorithms for linear regression, including GD, OGD, and Iterative Newton, are _iterative_: their performance progressively improves as they perform more iterations, eventually converging to a final solution. How could a Transformer implement such an iterative algorithm? von Oswald et al. (2022) propose that deeper _layers_ of the Transformer may correspond to more iterations of an iterative method; in particular, they show that there exist Transformer parameters such that each attention layer performs one step of Gradient Descent.
Following this intuition, we first investigate whether the predictions of a trained Transformer improve as the layer index \(\ell\) increases. For each layer of hidden states \(\mathbf{H}^{(\ell)}\) (defined in Definition 2), we re-train the ReadOut layer to predict \(y_{t}\) for each \(t\); the new predictions are given by \(\mathrm{ReadOut}^{(\ell)}\left[\mathbf{H}^{(\ell)}\right]\). Thus for each input prompt, there are \(L\) Transformer predictions parameterized by layer index \(\ell\). All parameters besides the ReadOut layer parameters are kept frozen.
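The probing procedure above can be sketched as follows. For illustration we fit each re-trained readout in closed form by least squares rather than by gradient training; this substitution, and all names below, are our own assumptions:

```python
import numpy as np

def retrain_readout(H, y):
    # Fit a fresh linear ReadOut^(l) on frozen layer-l hidden states H
    # (num_positions x hidden_dim) against targets y; everything else
    # in the model stays frozen.
    W, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W

def layerwise_mse(hidden_states, y):
    # One re-trained readout per layer; returns each layer's probing MSE,
    # so we can see whether deeper layers decode the target better.
    return [float(np.mean((H @ retrain_readout(H, y) - y) ** 2))
            for H in hidden_states]
```

If deeper layers carry progressively better estimates of the target, the per-layer probing MSE from `layerwise_mse` decreases with depth, which is the pattern the experiment looks for.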
As shown in Figure 1(a), as we increase the layer index \(\ell\), the prediction performance improves progressively. Hence, Transformers progressively improve their predictions over layers \(\ell\), similar to how iterative algorithms improve over steps. Such observations are consistent with language tasks, where Transformer-based language models also improve their predictions across layers (Geva et al., 2022; Chuang et al., 2023).
### Transformers are more similar to Iterative Newton's Method
Next, we test the more specific hypothesis that the iterative updates performed across Transformer layers are similar to the iterative updates for known iterative algorithms. Specifically, we test whether each layer \(\ell\) of the Transformer corresponds to performing \(k\) steps of some iterative algorithm, for some \(k\) depending on \(\ell\). We focus here on Gradient Descent and Iterative Newton's Method; we will discuss online algorithms in Section 4.4.
For each layer \(\ell\) of the Transformer, we measure the best-matching similarity (see Def. 4) with candidate iterative algorithms with the optimal choice of the number of steps \(k\). As shown in Figure 3, the Transformer has very high error similarity with Iterative Newton's method at all layers. Moreover, we see a clear _linear_ trend between layer 3 and layer 9 of the Transformer, where each layer appears to compute roughly 3 additional iterations of the Iterative Newton's method. This trend only stops at the last few layers because both algorithms converge to the OLS solution; Newton is known to converge to OLS (see §3.1), and we verify in Appendix A.1 that the last few layers of the Transformer also essentially compute OLS (see Figure 10 in the Appendix). We observe the same trends when using similarity of induced weights as our similarity metric (see Figure 7 in the Appendix).

Figure 3: **Heatmaps of Similarity. The best matching hyper-parameters are highlighted in yellow. Transformer layers show a linear trend with Iterative Newton steps but an exponential trend with Gradient Descent. This suggests Transformers and Iterative Newton have the same convergence rate, which is exponentially faster than Gradient Descent. See Figure 8 for an additional heatmap where Gradient Descent’s steps are shown in log scale: on this plot there is a linear correspondence between Transformers and Gradient Descent’s steps. This further strengthens the claim that Transformers have an exponentially faster rate of convergence than Gradient Descent.**
In contrast, even though Gradient Descent has a comparable similarity with the Transformers at later layers, their best matching follows an _exponential_ trend. As discussed in Section 3.1, for well-conditioned problems where \(\kappa\approx 1\), to achieve \(\epsilon\) error, the rate of convergence of Gradient Descent is \(\mathcal{O}(\log(1/\epsilon))\) while the rate of convergence of Iterative Newton is \(\mathcal{O}(\log\log(1/\epsilon))\). Therefore the rate of convergence of Iterative Newton is exponentially faster than that of Gradient Descent. The Transformer's _linear_ correspondence with Iterative Newton and its _exponential_ correspondence with Gradient Descent provide strong evidence that the rate of convergence of Transformers is similar to Iterative Newton's, i.e., \(\mathcal{O}(\log\log(1/\epsilon))\). We also note that it is not possible to significantly improve Gradient Descent's rate of convergence without using second-order methods: Nemirovski and Yudin (1983) showed a \(\Omega\big{(}\log(1/\epsilon)\big{)}\) lower bound on the rate of convergence of gradient-based methods for smooth and strongly convex problems.
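The claimed gap between the two rates can be checked numerically: count how many iterations each method needs to reach error \(\epsilon\) on a random well-conditioned instance. A sketch, where the choice \(\alpha=1/\|\mathbf{S}\|_2^2\) is one standard stable initialization we assume, not necessarily the paper's:

```python
import numpy as np

def gd_iters(S, b, eps):
    # w <- w - eta (S w - b), eta = 1/lambda_max; count steps to eps error.
    w_star = np.linalg.solve(S, b)
    eta = 1.0 / np.linalg.eigvalsh(S)[-1]
    w = np.zeros_like(b)
    for k in range(1, 100_000):
        w = w - eta * (S @ w - b)
        if np.linalg.norm(w - w_star) <= eps:
            return k
    return 100_000

def newton_iters(S, b, eps):
    # Iterative Newton: M <- 2M - M S M with M0 = S/||S||_2^2;
    # count steps until ||M b - w*|| <= eps.
    w_star = np.linalg.solve(S, b)
    M = S / np.linalg.norm(S, 2) ** 2
    for k in range(1, 100):
        M = 2.0 * M - M @ S @ M
        if np.linalg.norm(M @ b - w_star) <= eps:
            return k
    return 100
```

Tightening \(\epsilon\) by several orders of magnitude adds many Gradient Descent steps (the \(\log(1/\epsilon)\) behavior) but only an iteration or two of Newton (the \(\log\log(1/\epsilon)\) behavior, since the error is roughly squared each step).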
Overall, we conclude that a Transformer trained to perform in-context linear regression learns to implement an algorithm that is very similar to Iterative Newton's method, not Gradient Descent. Starting at layer 3, subsequent layers of the Transformer compute more and more iterations of Iterative Newton's method. This algorithm successfully solves the linear regression problem, as it converges to the optimal OLS solution in the final layers.
### Transformers perform well on ill-conditioned data
We repeat the same experiments with data \(\mathbf{x}_{i}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\) sampled from an ill-conditioned covariance matrix \(\mathbf{\Sigma}\) with condition number \(\kappa(\mathbf{\Sigma})=100\) and eigenbasis chosen uniformly at random. The first \(d/2\) eigenvalues of \(\mathbf{\Sigma}\) are 100, and the last \(d/2\) are 1.
As shown in Figure 4, the Transformer model's performance still closely matches Iterative Newton's Method with 21 iterations, the same as when \(\mathbf{\Sigma}=\mathbf{I}\) (see layers 10-12 in Figure 3). The convergence of higher-order methods has a mild logarithmic dependence on the condition number since they correct for the curvature. On the other hand, Gradient Descent's convergence is affected polynomially by conditioning. As \(\kappa(\mathbf{\Sigma})\) increases from 1 to 100, the number of steps required for GD's convergence increases significantly (see Fig. 4, where GD requires 800 steps to converge), making it impossible for a 12-layer Transformer to implement this many gradient updates. We also note that preconditioning the data by \((\mathbf{X}^{\top}\mathbf{X})^{\dagger}\) can make the data well-conditioned, but since the eigenbasis is chosen uniformly at random, with high probability there is no sparse pre-conditioner or any fixed pre-conditioner which works across the data distribution. Computing \((\mathbf{X}^{\top}\mathbf{X})^{\dagger}\) appears to be as hard as computing the OLS solution (Eq. 1)--in fact Sharan et al. (2019) conjecture that first-order methods such as gradient descent and its variants cannot avoid polynomial dependencies on the condition number in the ill-conditioned case.
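The condition-number dependence described above can be reproduced in miniature: build \(\mathbf{\Sigma}\) with a random eigenbasis, half the eigenvalues at \(\kappa\) and half at 1, and count iterations to a fixed tolerance. A hedged sketch, with dimensions and tolerances that are illustrative rather than the paper's:

```python
import numpy as np

def make_cov(d, kappa, rng):
    # Random eigenbasis; first d/2 eigenvalues kappa, the rest 1.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    eigs = np.concatenate([np.full(d // 2, float(kappa)), np.ones(d - d // 2)])
    return Q @ np.diag(eigs) @ Q.T

def gd_steps(S, b, eps=1e-6, max_iter=100_000):
    # Gradient Descent with step size 1/lambda_max; steps to eps error.
    w_star = np.linalg.solve(S, b)
    eta = 1.0 / np.linalg.eigvalsh(S)[-1]
    w = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        w = w - eta * (S @ w - b)
        if np.linalg.norm(w - w_star) <= eps:
            return k
    return max_iter

def newton_steps(S, b, eps=1e-6, max_iter=100):
    # Iterative Newton: M <- 2M - M S M, M0 = S / ||S||_2^2.
    w_star = np.linalg.solve(S, b)
    M = S / np.linalg.norm(S, 2) ** 2
    for k in range(1, max_iter + 1):
        M = 2.0 * M - M @ S @ M
        if np.linalg.norm(M @ b - w_star) <= eps:
            return k
    return max_iter
```

Raising \(\kappa\) from near 1 to 100 multiplies the Gradient Descent step count by orders of magnitude, while the Newton step count grows only by roughly \(\log\kappa\); this is the asymmetry that rules out a 12-layer model implementing enough gradient steps.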
See Appendix A.2 for detailed experiments on ill-conditioned problems. These experiments on ill-conditioned data further strengthen our hypothesis that Transformers are learning to perform higher-order optimization methods in-context, not Gradient Descent.
### LSTM is more similar to OGD than Transformers
As discussed in Section 3, LSTM is an alternative auto-regressive model widely used before the introduction of Transformers. Thus, a natural research question is: _If Transformers can learn in-context, can LSTMs do so as well? If so, do they learn the same algorithms?_ To answer this question, we train a 10-layer LSTM model, with 5.3M parameters, in an identical manner to the Transformers (with 9.5M parameters) studied in the previous sections.2
Footnote 2: While the LSTM has fewer parameters than the Transformer, we found in preliminary experiments that increasing the hidden dimension or number of layers in the LSTM would not substantively change our results.
Figure 5 plots the mean squared error of Transformers, LSTMs, and other standard methods as a function of the number of in-context (i.e., training) examples provided. While LSTMs can also learn linear regression in-context, they have much higher mean-squared error than Transformers. Their error rate is similar to Iterative Newton's Method after only 5 iterations, a point where it is far from converging to the OLS solution.

Figure 4: Transformer performance on ill-conditioned data.
LSTMs' inferior performance to Transformers can be explained by the inability of LSTMs to use deeper layers to improve their predictions. Figure 1 shows that LSTM performance does not improve across layers--a readout head fine-tuned for the first layer makes equally good predictions as the full 10-layer model. Thus, LSTMs seem poorly equipped to fully implement iterative algorithms.
Finally, we show that LSTMs behave more like an online learning algorithm than Transformers. In particular, their predictions are biased towards getting more recent training examples correct, as opposed to earlier examples, as shown in Figure 5. This property makes LSTMs similar to online Gradient Descent. In contrast, five steps of Newton's method have the same error on average for recent and early examples, showing that the LSTM implements a very different algorithm from a few iterations of Newton. Similarly, Table 1 shows that LSTMs are more similar to OGD than Transformers are, whereas Transformers are more similar to Newton and GD than LSTMs are. We hypothesize that since LSTMs have limited memory, they must learn in a roughly online fashion; in contrast, Transformers' attention heads can access the entire sequence of past examples, enabling them to learn more complex algorithms.
## 5 Mechanistic Evidence
Our empirical evidence demonstrates that Transformers behave much more similarly to Iterative Newton's method than to Gradient Descent. Iterative Newton is a higher-order optimization method, and is algorithmically more involved than Gradient Descent. We begin by examining this difference in complexity. As discussed in Section 3, the updates for Iterative Newton are of the form,
\[\hat{\mathbf{w}}_{k+1}^{\mathrm{Newton}}=\mathbf{M}_{k+1}\mathbf{X}^{\top}\mathbf{y}\quad \text{where}\quad\mathbf{M}_{k+1}=2\mathbf{M}_{k}-\mathbf{M}_{k}\mathbf{S}\mathbf{M}_{k} \tag{14}\]
and \(\mathbf{M}_{0}=\alpha\mathbf{S}\) for some \(\alpha>0\). We can express \(\mathbf{M}_{k}\) in terms of powers of \(\mathbf{S}\) by expanding iteratively, for example \(\mathbf{M}_{1}=2\alpha\mathbf{S}-4\alpha^{2}\mathbf{S}^{3},\mathbf{M}_{2}=4\alpha\mathbf{S}-12 \alpha^{2}\mathbf{S}^{3}+16\alpha^{3}\mathbf{S}^{5}-16\alpha^{4}\mathbf{S}^{7}\), and in general \(\mathbf{M}_{k}=\sum_{s=1}^{2^{k+1}-1}\beta_{s}\mathbf{S}^{s}\) for some \(\beta_{s}\in\mathbb{R}\) (see Appendix B.3 for detailed calculations). Note that \(k\) steps of Iterative Newton's requires computing \(\Omega(2^{k})\) moments of \(\mathbf{S}\). Let us contrast this with Gradient Descent. Gradient Descent updates for linear regression take the form,
\[\hat{\mathbf{w}}_{k+1}^{\mathrm{GD}}=\hat{\mathbf{w}}_{k}^{\mathrm{GD}}-\eta(\mathbf{S} \hat{\mathbf{w}}_{k}^{\mathrm{GD}}-\mathbf{X}^{\top}\mathbf{y}). \tag{15}\]
Like Iterative Newton, we can express \(\hat{\mathbf{w}}_{k}^{\mathrm{GD}}\) in terms of powers of \(\mathbf{S}\) and \(\mathbf{X}^{\top}\mathbf{y}\). However, after \(k\) steps of Gradient Descent, the highest power of \(\mathbf{S}\) is only \(O(k)\). This exponential separation is consistent with the exponential gap in terms of the parameter dependence in the convergence rate--\(\mathcal{O}\left(\kappa(\mathbf{S})\log(1/\epsilon)\right)\) steps for Gradient Descent compared to \(\mathcal{O}(\log\kappa(\mathbf{S})+\log\log(1/\epsilon))\) steps for Iterative Newton. Therefore, a natural question is whether Transformers can actually represent as complicated a method as Iterative Newton with only polynomially many layers.

| | Transformers | LSTM |
| --- | --- | --- |
| Newton | **0.991** | 0.920 |
| GD | **0.957** | 0.916 |
| OGD | 0.806 | **0.954** |

Table 1: **Similarity of errors between algorithms.** Transformers are more similar to full-observation methods such as Newton and GD; LSTMs are more similar to online methods such as OGD.

Figure 5: In the left figure, we measure model predictions with normalized MSE. Though the LSTM is seemingly most similar to Newton’s Method with only 5 steps, neither has converged at that point. OGD also shows a similar trend to the LSTM. In the center figure, we measure the model’s forgetting phenomenon (see Appendix A.4 for explanations), and find that both Transformers and not-yet-converged Newton have better memorization than the LSTM and OGD. In the right table, we find that Transformers are more similar to Newton and GD than LSTMs are, while the LSTM is significantly more similar to OGD.
Theorem 1 shows that this is indeed possible.
**Theorem 1**.: _There exist Transformer weights such that on any set of in-context examples \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) and test point \(\mathbf{x}_{\mathrm{test}}\), the Transformer predicts on \(\mathbf{x}_{\mathrm{test}}\) using \(\mathbf{x}_{\mathrm{test}}^{\top}\hat{\mathbf{w}}_{k}^{\mathrm{Newton}}\). Here \(\hat{\mathbf{w}}_{k}^{\mathrm{Newton}}\) are the Iterative Newton updates given by \(\hat{\mathbf{w}}_{k}^{\mathrm{Newton}}=\mathbf{M}_{k}\mathbf{X}^{\top}\mathbf{y}\) where \(\mathbf{M}_{j}\) is updated as_
\[\mathbf{M}_{j}=2\mathbf{M}_{j-1}-\mathbf{M}_{j-1}\mathbf{S}\mathbf{M}_{j-1},1\leq j\leq k,\quad \mathbf{M}_{0}=\alpha\mathbf{S},\]
_for some \(\alpha>0\) and \(\mathbf{S}=\mathbf{X}^{\top}\mathbf{X}\). The number of layers of the transformer is \(\mathcal{O}(k)\) and the dimensionality of the hidden layers is \(\mathcal{O}(d)\)._
Here we provide a sketch of the proof. We note that our proof uses full attention instead of causal attention and ReLU activations for the self-attention layers. The definitions of these and the full proof appear in Appendix B.
### Proof Sketch for Theorem 1
The constructive proof leverages some key operators which Akyurek et al. (2022) showed a single Transformer layer can implement. We summarize these in Proposition 1. We mainly use the \(\mathrm{mov}\) operator, which can copy parts of the hidden state from time stamp \(i\) to time stamp \(j\) for any \(j\geq i\); the \(\mathrm{mul}\) operator, which can do multiplications; and the \(\mathrm{aff}\) operator, which can be used to do addition and subtraction.
**Transformers Implement Initialization \(\mathbf{T}^{(0)}=\alpha\mathbf{S}\).** Given input sequence \(\mathbf{H}=\{\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\}\), we can first use the \(\mathrm{mov}\) operator from Proposition 1 so that the input sequence becomes \(\begin{bmatrix}\mathbf{x}_{1}&\cdots&\mathbf{x}_{n}\\ \mathbf{x}_{1}&\cdots&\mathbf{x}_{n}\end{bmatrix}\). We call each column \(\mathbf{h}_{j}\). With a full attention layer and normalized ReLU activations, one can construct two heads with query and key matrices of the form \(\mathbf{Q}_{1}^{\top}\mathbf{K}_{1}=-\mathbf{Q}_{2}^{\top}\mathbf{K}_{2}=\begin{bmatrix}\mathbf{I }_{d\times d}&\mathbf{O}_{d\times d}\\ \mathbf{O}_{d\times d}&\mathbf{O}_{d\times d}\end{bmatrix}\) and value matrices \(\mathbf{V}_{m}=n\alpha\begin{bmatrix}\mathbf{I}_{d\times d}&\mathbf{O}_{d\times d}\\ \mathbf{O}_{d\times d}&\mathbf{O}_{d\times d}\end{bmatrix}\) for some \(\alpha\in\mathbb{R}\).3 Combining the attention layer and skip connections, we end up with \(\mathbf{h}_{t}\leftarrow\begin{bmatrix}\mathbf{x}_{t}+\alpha\mathbf{S}\mathbf{x}_{t}\\ \mathbf{x}_{t}\end{bmatrix}\). Applying the \(\mathrm{aff}\) operator from Proposition 1 to perform subtraction, we obtain columns of the form \(\begin{bmatrix}\alpha\mathbf{S}\mathbf{x}_{t}\\ \mathbf{x}_{t}\end{bmatrix}\). We denote \(\mathbf{T}^{(0)}:=\alpha\mathbf{S}\) so that Transformers and Iterative Newton have the same initialization, and we call these columns \(\mathbf{h}_{t}^{(0)}\).
Footnote 3: The value matrices contain the number of in-context examples \(n\). There are ways to avoid this by applying Proposition 1 to count.
**Transformers Implement Newton Iteration.** We claim that we can construct layer \(\ell\)'s hidden states to be of the form
\[\mathbf{H}^{(\ell)}=\begin{bmatrix}\mathbf{h}_{1}^{(\ell)}&\cdots&\mathbf{h}_{n}^{(\ell)} \end{bmatrix}=\begin{bmatrix}\mathbf{T}^{(\ell)}\mathbf{x}_{1}&\cdots&\mathbf{T}^{(\ell)} \mathbf{x}_{n}\\ \mathbf{x}_{1}&\cdots&\mathbf{x}_{n}\end{bmatrix} \tag{16}\]
We prove this by induction: assuming the claim holds for layer \(\ell\), we establish it for layer \(\ell+1\). Let \(\mathbf{Q}_{m}=\tilde{\mathbf{Q}}_{m}\begin{bmatrix}\mathbf{O}_{d}&-\frac{n}{2}\mathbf{I}_{d} \\ \mathbf{O}_{d}&\mathbf{O}_{d}\end{bmatrix},\mathbf{K}_{m}=\tilde{\mathbf{K}}_{m}\begin{bmatrix} \mathbf{I}_{d}&\mathbf{O}_{d}\\ \mathbf{O}_{d}&\mathbf{O}_{d}\end{bmatrix}\) where \(\tilde{\mathbf{Q}}_{1}^{\top}\tilde{\mathbf{K}}_{1}:=\mathbf{I}\), \(\tilde{\mathbf{Q}}_{2}^{\top}\tilde{\mathbf{K}}_{2}:=-\mathbf{I}\) and \(\mathbf{V}_{1}=\mathbf{V}_{2}=\begin{bmatrix}\mathbf{I}_{d}&\mathbf{O}_{d}\\ \mathbf{O}_{d}&\mathbf{O}_{d}\end{bmatrix}\). A 2-head self-attention layer gives
\[\mathbf{h}_{t}^{(\ell+1)}=\begin{bmatrix}\left(\mathbf{T}^{(\ell)}-\frac{1}{2}\mathbf{T}^{ (\ell)}\mathbf{S}\mathbf{T}^{(\ell)}\right)\mathbf{x}_{t}\\ \mathbf{x}_{t}\end{bmatrix} \tag{17}\]
Note that all \(\mathbf{T}^{(\ell)}\) are symmetric. Passing through an MLP layer gives
\[\mathbf{h}_{t}^{(\ell+1)}\leftarrow\mathbf{h}_{t}^{(\ell+1)}+\begin{bmatrix}\mathbf{I}_{d} &\mathbf{O}_{d}\\ \mathbf{O}_{d}&\mathbf{O}_{d}\end{bmatrix}\mathbf{h}_{t}^{(\ell+1)}=\begin{bmatrix}\left(2 \mathbf{T}^{(\ell)}-\mathbf{T}^{(\ell)}\mathbf{S}\mathbf{T}^{(\ell)}\right)\mathbf{x}_{t}\\ \mathbf{x}_{t}\end{bmatrix} \tag{18}\]
We denote \(\mathbf{T}^{(\ell+1)}:=2\mathbf{T}^{(\ell)}-\mathbf{T}^{(\ell)}\mathbf{S}\mathbf{T}^{(\ell)}\), which is exactly the same form as the Iterative Newton update.
**Transformers can implement \(\hat{\mathbf{w}}_{\ell}^{\mathrm{TF}}=\mathbf{T}^{(\ell)}\mathbf{X}^{\top}\mathbf{y}\).** We insert columns with \([0,0,\cdots,y_{j}]^{\top}\) after each \(\mathbf{x}_{j}\) (see Figure 2 for an illustration) and keep them unchanged until reaching layer \(\ell\). Applying \(\mathrm{mov}\) and \(\mathrm{mul}\), we have columns \(\begin{bmatrix}\mathbf{\xi}\\ \mathbf{T}^{(\ell)}y_{j}\mathbf{x}_{j}\end{bmatrix}\) where \(\mathbf{\xi}\) are irrelevant quantities. Applying Lemma 1 for summation, we can gather \(\sum_{j=1}^{n}\mathbf{T}^{(\ell)}y_{j}\mathbf{x}_{j}=\mathbf{T}^{(\ell)}\mathbf{X}^{\top}\mathbf{y}\), which is again the same as Iterative Newton, and we call this \(\hat{\mathbf{w}}_{\ell}^{\mathrm{TF}}\).
**Transformers can make predictions on \(\mathbf{x}_{\mathrm{test}}\) via \(\left\langle\hat{\mathbf{w}}_{\ell}^{\mathrm{TF}},\mathbf{x}_{\mathrm{test}}\right\rangle\).** Now we can make predictions on the test query \(\mathbf{x}_{\mathrm{test}}\):
\[\begin{bmatrix}\mathbf{\xi}&\mathbf{x}_{\mathrm{test}}\\ \hat{\mathbf{w}}_{\ell}^{\mathrm{TF}}&\mathbf{x}_{\mathrm{test}}\end{bmatrix} \stackrel{{\mathrm{mov}}}{{\longrightarrow}}\begin{bmatrix}\mathbf{ \xi}&\mathbf{x}_{\mathrm{test}}\\ \hat{\mathbf{w}}_{\ell}^{\mathrm{TF}}&\mathbf{x}_{\mathrm{test}}\\ \mathbf{0}&\hat{\mathbf{w}}_{\ell}^{\mathrm{TF}}\\ 0&\left\langle\hat{\mathbf{w}}_{\ell}^{\mathrm{TF}},\mathbf{x}_{\mathrm{test}}\right\rangle \end{bmatrix} \tag{19}\]
A final readout layer can extract the prediction \(\left\langle\hat{\mathbf{w}}_{\ell}^{\mathrm{TF}},\mathbf{x}_{\mathrm{test}}\right\rangle\).
This completes the proof that Transformers can perform exactly Iterative Newton. Finally, we count the number of layers and the dimension of the hidden states. All operations in the proof require hidden states of dimension \(\mathcal{O}(d)\). Going from Transformer index \(\ell\) to \(\ell+1\) requires \(\mathcal{O}(1)\) attention layers. Hence, to perform \(k\) Iterative Newton updates, the Transformer requires \(\mathcal{O}(k)\) layers. This implies that the rate of convergence of Transformers solving linear regression in-context is the same as Iterative Newton's, \(\mathcal{O}(\log\log(1/\epsilon))\), which is consistent with our experimental results in §4.
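The arithmetic pipeline of the construction (initialize \(\mathbf{T}^{(0)}=\alpha\mathbf{S}\), apply \(L\) layers of \(\mathbf{T}\leftarrow 2\mathbf{T}-\mathbf{T}\mathbf{S}\mathbf{T}\), then predict \(\langle\mathbf{T}\mathbf{X}^{\top}\mathbf{y},\mathbf{x}_{\mathrm{test}}\rangle\)) can be mirrored numerically. This checks the algebra of the construction, not the attention-layer implementation itself; the default \(\alpha=1/\|\mathbf{S}\|_2^2\) is an assumed stable choice:

```python
import numpy as np

def transformer_newton_predict(X, y, x_test, L, alpha=None):
    # Numerical mirror of the construction: T^(0) = alpha*S, then L "layers"
    # each applying T <- 2T - T S T (Eqs. 17-18); the final prediction is
    # <T^(L) X'y, x_test>.
    S = X.T @ X
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2
    T = alpha * S
    for _ in range(L):
        T = 2.0 * T - T @ S @ T
    w_hat = T @ (X.T @ y)
    return float(x_test @ w_hat), w_hat
```

With enough layers, `w_hat` coincides with the OLS solution, matching the claim that the final layers of the construction (and, empirically, of trained Transformers) compute OLS.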
## 6 Conclusion and Discussion
In this work, we studied how Transformers perform in-context learning for linear regression. In contrast with the hypothesis that Transformers learn in-context by implementing gradient descent, our experimental results show that different Transformer layers match iterations of Iterative Newton's method _linearly_ and Gradient Descent _exponentially_. This suggests that Transformers share a similar rate of convergence to Iterative Newton's method but not Gradient Descent. Moreover, Transformers can perform well empirically on ill-conditioned linear regression, whereas first-order methods such as Gradient Descent struggle. Theoretically, we provide exact Transformer circuits that can implement \(k\)-steps of Iterative Newton's method with \(\mathcal{O}(k)\) layers to make predictions from in-context examples.
An interesting direction is to explore a wider range of higher-order methods that Transformers can implement. It also seems promising to extend our analysis to classification problems, especially given recent work showing that Transformers resemble SVMs in classification tasks [Li et al., 2023, Tarzanagh et al., 2023a]. Finally, a natural question is to understand the differences in the model architecture that make Transformers better in-context learners than LSTMs. Based on our investigations with LSTMs, we hypothesize that Transformers can implement more powerful algorithms because of having access to a longer history of examples. Investigating the role of this additional memory in learning appears to be an intriguing direction.
## Acknowledgement
We would like to thank the USC NLP Group and Center for AI Safety for providing compute resources. DF would like to thank Oliver Liu and Ameya Godbole for extensive discussions. RJ was supported by an Open Philanthropy research grant and a Google Research Scholar Award. VS was supported by NSF CAREER Award CCF-2239265 and an Amazon Research Award.
---

# EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization

Junha Song, Jungsoo Lee, In So Kweon, Sungha Choi

arXiv:2303.01904v4 (published 2023-03-03), http://arxiv.org/abs/2303.01904v4
###### Abstract
This paper presents a simple yet effective approach that improves continual test-time adaptation (TTA) in a memory-efficient manner. TTA may primarily be conducted on edge devices with limited memory, so reducing memory is crucial but has been overlooked in previous TTA studies. In addition, long-term adaptation often leads to catastrophic forgetting and error accumulation, which hinders applying TTA in real-world deployments. Our approach consists of two components to address these issues. First, we present lightweight meta networks that can adapt the frozen original networks to the target domain. This novel architecture minimizes memory consumption by decreasing the size of intermediate activations required for backpropagation. Second, our novel self-distilled regularization controls the output of the meta networks not to deviate significantly from the output of the frozen original networks, thereby preserving well-trained knowledge from the source domain. Without additional memory, this regularization prevents error accumulation and catastrophic forgetting, resulting in stable performance even in long-term test-time adaptation. We demonstrate that our simple yet effective strategy outperforms other state-of-the-art methods on various benchmarks for image classification and semantic segmentation tasks. Notably, our proposed method with ResNet-50 and WideResNet-40 takes 86% and 80% less memory than the recent state-of-the-art method, CoTTA.
## 1 Introduction
Despite recent advances in deep learning [15, 24, 23, 22], deep neural networks often suffer from performance degradation when the source and target domains differ significantly [8, 43, 38]. Among several tasks addressing such domain shifts, test-time adaptation (TTA) has recently received a significant amount of attention due to its practicality and wide applicability especially in on-device settings [65, 42, 32, 16]. This task focuses on adapting the model to unlabeled online data from the target domain without access to the source data.
While existing TTA methods show improved TTA performance, minimizing memory usage has been relatively under-explored, even though it is crucial for the applicability of TTA in on-device settings. For example, several studies [66, 42, 9] update entire model parameters to achieve large performance improvements, which may be impractical when the available memory is limited. Meanwhile, several TTA approaches update only the batch normalization (BN) parameters [65, 50, 17] to make the optimization efficient and stable. However, even updating only BN parameters is not memory efficient enough, since the amount of memory required for training models depends significantly on the size of the intermediate activations rather than on the learnable parameters [4, 14, 69]. Throughout the paper, activations refer to the intermediate features stored during forward propagation, which are used for gradient calculations during backpropagation. Fig. 1 (a) demonstrates this issue.

Figure 1: **(a) Memory cost comparison between TTA methods. The size of activations, not the parameters, is the primary memory bottleneck during training. (b) CIFAR-C adaptation performance. We perform the continual online adaptation on CIFAR-C dataset. The x- and y-axis are the average error of all corruptions and the total memory consumption including the parameters and activations, respectively. Our approach, EcoTTA, achieves the best results while consuming the least amount of memory, where K is the model partition factor used in our method.**
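To see why activations, not parameters, dominate, a back-of-envelope calculation helps. The caching rule below is a deliberate simplification we assume for illustration (each trainable conv+BN layer caches two feature maps, a BN-only update caches one per BN, and a frozen backbone with small attached modules caches one map per partition); real measurements such as those in Fig. 1 (a) depend on the exact ops:

```python
import numpy as np

def act_mb(shapes, caches_per_layer, bytes_per_elem=4):
    # Memory (MB) for cached activations: layer i stores caches_per_layer[i]
    # feature maps of shape shapes[i] (float32 by default).
    elems = sum(c * int(np.prod(s)) for s, c in zip(shapes, caches_per_layer))
    return elems * bytes_per_elem / 2**20

# Toy ResNet-like backbone: 4 stages of 4 conv+BN layers, batch size 64.
stage_maps = [(64, 64, 32, 32), (64, 128, 16, 16),
              (64, 256, 8, 8), (64, 512, 4, 4)]
shapes = [m for m in stage_maps for _ in range(4)]

full_update = act_mb(shapes, [2] * 16)          # conv + BN inputs everywhere
bn_only     = act_mb(shapes, [1] * 16)          # BN inputs still everywhere
meta_only   = act_mb(shapes, [1, 0, 0, 0] * 4)  # one light block per stage
```

Even in this crude model, updating only BN parameters still caches activation maps throughout the network, while confining the trainable modules to a few attachment points cuts the cached activations by a large factor, which is the intuition behind freezing the original networks.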
Moreover, a non-trivial number of TTA studies assume a stationary target domain [65, 42, 9, 57], but the target domain may continuously change in the real world (_e.g.,_ continuous changes in weather conditions, illuminations, and location [8] in autonomous driving). Therefore, it is necessary to consider long-term TTA in an environment where the target domain constantly varies. However, there exist two challenging issues: 1) catastrophic forgetting [66, 50] and 2) error accumulation. Catastrophic forgetting refers to degraded performance on the source domain due to long-term adaptation to target domains [66, 50]. Such an issue is important since the test samples in the real world may come from diverse domains, including the source and target domains [50]. Also, since target labels are unavailable, TTA relies on noisy unsupervised losses, such as entropy minimization [19], so long-term continual TTA may lead to error accumulation [75, 2].
To address these challenges, we propose memory-**E**fficient **C**ontinual **T**est-**T**ime **A**daptation (EcoTTA), a simple yet effective approach for 1) enhancing memory efficiency and 2) preventing catastrophic forgetting and error accumulation. First, we present a memory-efficient architecture consisting of frozen original networks and our proposed meta networks attached to the original ones. During test time, we freeze the original networks to discard the intermediate activations that occupy a significant amount of memory. Instead, we only adapt the lightweight meta networks to the target domain, composed of only one batch normalization and one convolution block. Surprisingly, updating only the meta networks, not the original ones, can result in significant performance improvement as well as considerable memory savings. Moreover, we propose a self-distilled regularization method to prevent catastrophic forgetting and error accumulation. Our regularization leverages the preserved source knowledge distilled from the frozen original networks to regularize the meta networks. Specifically, we control the output of the meta networks not to deviate significantly from the one extracted by the original networks. Notably, our regularization leads to negligible overhead because it requires no extra memory and is performed in parallel with the adaptation loss, such as entropy minimization.
Recent TTA studies require access to the source data _before model deployment_ [42, 9, 34, 1, 40, 50]. Similarly, our method uses the source data to warm up the newly attached meta networks for a small number of epochs before model deployment. If the source dataset is publicly available or the owner of the pre-trained model tries to adapt the model to a target domain, access to the source data is feasible [9]. Here, we emphasize that the pre-trained original networks are frozen throughout our process, and our method is applicable to any pre-trained model because it is agnostic to the architecture and pre-training method of the original networks.
Our paper presents the following contributions:
* We present novel meta networks that help the frozen original networks adapt to the target domain. This architecture reduces memory consumption by up to 86% by shrinking the activation sizes of the original networks.
* We propose a self-distilled regularization that controls the output of meta networks by leveraging the output of frozen original networks to preserve the source knowledge and prevent error accumulation.
* We improve both memory efficiency and TTA performance compared to existing state-of-the-art methods on 1) image classification tasks (_e.g._, CIFAR10/100-C and ImageNet-C) and 2) a semantic segmentation task (_e.g._, Cityscapes with weather corruption).
Figure 2: **Architecture for test-time adaptation.** We illustrate TTA methods: TENT [65], EATA [50], CoTTA [66], and Ours (EcoTTA). TENT and EATA update _multiple_ batch norm layers, in which large activations have to be stored for gradient calculation. In CoTTA, an entire network is trained with additional strategies for continual adaptation that require a significant amount of both memory and time. In contrast, our approach requires only a minimal amount of activations by updating _a few_ layers. Also, stable long-term adaptation is enabled by our proposed regularization, named self-distilled regularization.
## 2 Related Work
**Mitigating domain shift.** One of the fundamental issues of DNNs is the performance degradation due to the domain shift between the train (_i.e_. source) and test (_i.e_. target) distributions. Several research fields attempt to address this problem, such as unsupervised domain adaptation [64, 6, 53, 56, 46, 58] and domain generalization [76, 8]. In particular, domain generalization aims to learn invariant representations so as to cover the possible shifts of test data. These methods simulate the possible shifts using a single or multiple source datasets [76, 74, 39] or force the model to minimize its dependence on style information [52, 8]. However, it is challenging to handle all potential test shifts using the given source datasets [20]. Thus, instead of enhancing generalization ability during training time, TTA [65] overcomes the domain shift by directly adapting to the test data.
**Test-time adaptation.** Test-time adaptation allows the model to adapt to the test data (_i.e_., target domain) in a source-free and online manner [33, 62, 65]. Existing works improve TTA performance with sophisticated designs of unsupervised loss [48, 72, 42, 9, 45, 57, 5, 16, 1, 3, 12, 59] or enhance the usability of small batch sizes [36, 70, 31, 51, 40] considering streaming test data. They focus on improving the adaptation performance with a stationary target domain (_i.e_., single domain TTA setup). In such a setting, the model that finished adaptation to a given target domain is reset to the original model pre-trained with the source domain in order to adapt to the next target domain.
Recently, CoTTA [66] has proposed the continual TTA setup to address TTA under a continuously changing target domain, which also involves long-term adaptation. This setup frequently suffers from error accumulation [75, 2, 63] and catastrophic forgetting [66, 35, 50]. Specifically, performing long-term adaptation exposes the model to unsupervised loss from unlabeled test data for a long time, so errors accumulate significantly. Also, the model focuses on learning new knowledge and forgets the source knowledge, which becomes problematic when the model needs to correctly classify test samples similar to the source distribution. To address such issues, CoTTA [66] randomly restores the updated parameters to the source ones, while EATA [50] proposes a weight regularization loss.
**Efficient on-device learning.** Since edge devices are likely to be memory constrained (_e.g_., a Raspberry Pi with 512MB and an iPhone 13 with 4GB), it is necessary to take memory usage into account when deploying models on such devices [41]. TinyTL [4], a seminal work in on-device learning, shows that the activation size, not the learnable parameters, bottlenecks the training memory. Following this, recent on-device learning studies [4, 68, 69] targeting fine-tuning tasks attempt to decrease the size of intermediate activations. In contrast, previous TTA studies [65, 50] have overlooked this fact and instead focused on reducing learnable parameters. This paper, therefore, proposes a method that not only reduces the large activation sizes required for TTA but also improves adaptation performance.
## 3 Approach
Fig. 3 illustrates our simple yet effective approach which only updates the newly added meta networks on the target domain while regularizing them with the knowledge distilled from the frozen original network. This section describes how such a design promotes memory efficiency and prevents error accumulation and catastrophic forgetting which are frequently observed in long-term adaptation.
### Memory-efficient Architecture
**Prerequisite.** We first formulate the forward and the backward propagation. Assume that the \(i^{th}\) linear layer in the model consists of weight \(\mathcal{W}\) and bias \(b\), and the input and output features of this layer are \(f_{i}\) and \(f_{i+1}\), respectively. Given the forward propagation \(f_{i+1}=f_{i}\mathcal{W}+b\), the backward propagation from the \((i+1)^{th}\) layer to the \(i^{th}\) layer and the weight gradient are respectively formulated as:
\[\frac{\partial\mathcal{L}}{\partial f_{i}}=\frac{\partial\mathcal{L}}{\partial f _{i+1}}\mathcal{W}^{T},\quad\frac{\partial\mathcal{L}}{\partial\mathcal{W}}=f _{i}^{T}\frac{\partial\mathcal{L}}{\partial f_{i+1}}. \tag{1}\]
Eq. (1) means that learnable layers, whose weights \(\mathcal{W}\) need to be updated, must store the intermediate activations \(f_{i}\) to compute the weight gradient. In contrast, backward propagation through frozen layers can be accomplished without saving the activations, requiring only the weights \(\mathcal{W}\). Further descriptions are provided in Appendix A.
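To make Eq. (1) concrete, the following NumPy sketch (our illustration, not code from the paper) backpropagates through a frozen and a trainable linear layer; only the trainable layer needs its input activation \(f_{i}\) kept in memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer model: layer 1 is frozen, layer 2 is trainable.
x = rng.standard_normal((4, 3))
W1, b1 = rng.standard_normal((3, 5)), np.zeros(5)
W2, b2 = rng.standard_normal((5, 2)), np.zeros(2)

f1 = x @ W1 + b1                 # activation entering the trainable layer
f2 = f1 @ W2 + b2
dL_df2 = np.ones_like(f2)        # upstream gradient for the loss L = sum(f2)

# Trainable layer: the weight gradient (Eq. 1, right) needs the stored
# input activation f1, so f1 must be kept in memory after the forward pass.
dL_dW2 = f1.T @ dL_df2

# Frozen layer: propagating the gradient (Eq. 1, left) needs only the
# weights, never the input activation x, which can therefore be discarded.
dL_df1 = dL_df2 @ W2.T
dL_dx = dL_df1 @ W1.T
```

This is exactly why freezing the original networks lets EcoTTA discard their activations: the chain rule still flows through them, but no stored feature maps are required.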
TinyTL [4] shows that activations, rather than learnable parameters, occupy the majority of the memory required for training a model. Due to this fact, updating the entire model (e.g., CoTTA [66]) requires a substantial amount of memory. Also, updating only the parameters in batch normalization (BN) layers (e.g., TENT [65] and EATA [50]) is not effective enough, since these methods still store the large intermediate activations for _multiple_ BN layers. While previous studies fail to reduce memory because they retain these large activations, this work proposes a simple yet effective way to save a significant amount of memory by discarding them.
**Before deployment.** As illustrated in Fig. 3 (a, b), we first take a model pre-trained with any pre-training method. We divide the encoder of the pre-trained model into K parts and attach lightweight meta networks to each part of the original network. The details of how the model is divided into K parts are explained in the next section. Each group of meta networks consists of one batch normalization layer and one convolution block (_i.e_., Conv-BN-ReLU). Before deployment, we pre-train the meta networks on the source dataset \(\mathcal{D}_{s}\) for a small number of epochs (e.g., 10 epochs for the CIFAR dataset) while freezing the original networks. Such a warm-up process is completed before model deployment, as similarly done in several TTA works [9, 34, 40, 50]. Note that we do not require the source dataset \(\mathcal{D}_{s}\) during test time.
**Pre-trained model partition.** Previous TTA studies addressing domain shifts [9, 48] indicate that updating shallow layers is more crucial for improving adaptation performance than updating deep layers. Inspired by this finding, given that the encoder of the pre-trained model is split into the model partition factor K (_e.g._, 4 or 5) parts, we partition the shallow parts of the encoder more densely than the deep parts. Table 4c shows how performance changes as we vary the model partition factor K.
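As an illustration of this densely-shallow partitioning, a hypothetical helper could allocate residual blocks as follows (the paper only states the principle; the particular weighting rule below is our assumption):

```python
def partition_blocks(num_blocks, k):
    """Split num_blocks residual blocks into k contiguous parts, keeping the
    shallow parts small (dense) and the deep parts large.

    Illustrative only: the shallow-half-weight-1 / deep-half-weight-2 rule
    is ours, not the paper's exact partitioning scheme.
    """
    weights = [1] * ((k + 1) // 2) + [2] * (k // 2)
    unit = num_blocks // sum(weights)
    sizes = [w * unit for w in weights]
    sizes[-1] += num_blocks - sum(sizes)   # absorb any remainder at the end
    return sizes
```

For the 12 residual blocks of WideResNet-28 with K=4, this rule yields [2, 2, 4, 4], the densely-shallow split the ablation in Sec. 4 reports as most effective.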
**After deployment.** During test-time adaptation, we only adapt the meta networks to target domains while freezing the original networks. Following EATA [50], we apply the entropy minimization \(H(\hat{y})=-\sum_{c}p_{c}(\hat{y})\log p_{c}(\hat{y})\) to the samples whose entropy is less than the pre-defined entropy threshold \(H_{0}\), where \(\hat{y}\) is the prediction output of a test image from the test dataset \(\mathcal{D}_{t}\) and \(p(\cdot)\) is the softmax function. Thus, the main task loss for adaptation is defined as
\[\mathcal{L}^{ent}=\mathbb{I}_{\{H(\hat{y})<H_{0}\}}\cdot H(\hat{y}), \tag{2}\]
where \(\mathbb{I}_{\{\cdot\}}\) is an indicator function. In addition, in order to prevent catastrophic forgetting and error accumulation, we apply our proposed regularization loss \(\mathcal{R}^{k}\), which is described next in detail. Consequently, the overall loss of our method is formulated as,
\[\mathcal{L}^{total}_{\theta}=\mathcal{L}^{ent}_{\theta}+\lambda\ \sum_{k}^{K}\mathcal{R}^{k}_{\theta_{k}}, \tag{3}\]
where \(\theta\) and \(\theta_{k}\) denote the parameters of all meta networks and of the \(k\)-th group of meta networks, respectively, and \(\lambda\) balances the scale of the two loss functions. Note that our architecture requires less memory than previous works [66, 65] since we freeze the original networks and discard their intermediate activations. More specifically, our architecture uses 82% and 60% less memory on average than CoTTA and TENT/EATA, respectively.
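The overall objective of Eqs. (2)-(3), with the L1 regularization term \(\mathcal{R}^{k}\) described in the next subsection, can be sketched in NumPy as follows (an illustrative re-implementation under our own naming; the batch-mean reduction of the gated entropy is our assumption):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def eco_tta_loss(logits, meta_outs, frozen_outs, lam=0.5):
    """Gated entropy minimization (Eq. 2) plus the self-distilled L1
    regularization summed over the K groups (Eq. 3).

    meta_outs / frozen_outs: lists of the K intermediate outputs x~_k, x_k.
    Argument names are ours, not from an official implementation.
    """
    num_classes = logits.shape[1]
    h0 = 0.4 * np.log(num_classes)              # entropy threshold H0 (Sec. 4)
    p = softmax(logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    # Indicator gate of Eq. (2): high-entropy (unreliable) samples contribute 0.
    ent_loss = np.where(entropy < h0, entropy, 0.0).mean()
    # Mean absolute error between each meta output and its frozen counterpart.
    reg = sum(np.abs(m - f).mean() for m, f in zip(meta_outs, frozen_outs))
    return ent_loss + lam * reg
```

A confident prediction passes the entropy gate and is minimized further, while an uncertain one (entropy above \(0.4\ln C\)) is filtered out, so only reliable samples drive adaptation.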
### Self-distilled Regularization
The unsupervised loss from unlabeled test data \(\mathcal{D}_{t}\) is likely to provide a false signal (_i.e._, noise) to the model (\(\hat{y}\neq y_{t}\), where \(y_{t}\) is the ground-truth test label). Previous works have verified that long-term adaptation with unsupervised loss causes overfitting due to error accumulation [75, 2] and catastrophic forgetting [66, 35]. To prevent these critical issues, we propose a self-distilled regularization utilizing our architecture. As shown in Fig. 3, we regularize the output \(\tilde{x}_{k}\) of each \(k\)-th group of the meta networks not to deviate from the output \(x_{k}\) of the \(k\)-th part of the frozen original networks. Our regularization loss, which computes the mean absolute error (_i.e._, L1 loss), is formulated as follows:
\[\mathcal{R}^{k}_{\theta_{k}}=\left\|\tilde{x}_{k}-x_{k}\right\|_{1}. \tag{4}\]
Since the original networks are not updated, the outputs \(x_{k}\,(k=1,\ldots,K)\) extracted from them can be considered as containing the knowledge learned from the _source_ domain. Taking advantage of this fact, we let the output of the meta networks \(\tilde{x}_{k}\) be regularized with the knowledge distilled from the original networks. By preventing the adapted model from deviating significantly from the original model, we prevent 1) catastrophic forgetting by maintaining the source domain knowledge and 2) error accumulation by utilizing the class discriminability of the original model. Remarkably, unlike previous works [66, 50], our regularization does not require saving an additional copy of the original networks, which would incur extra memory usage. Moreover, it only needs a negligible amount of computational overhead because it is performed in parallel with the entropy minimization loss \(\mathcal{L}^{ent}\).

Figure 3: **Overview of our approach.****(a)** The encoder of the pre-trained model is divided into K parts (_i.e._, model partition factor K). **(b)** Before deployment, the meta networks are attached to each part of the original networks and pre-trained with source dataset \(\mathcal{D}_{s}\). **(c)** After the model is deployed, _only_ the meta networks are updated with unsupervised loss (_i.e._, entropy minimization) on target data \(\mathcal{D}_{t}\), while the original networks are frozen. To avoid error accumulation and catastrophic forgetting by the long-term adaptation, we regularize the output \(\tilde{x}_{k}\) of each group of the meta networks leveraging the output \(x_{k}\) of the _frozen_ original network, which preserves the source knowledge.
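A toy simulation (entirely our illustration, not an experiment from the paper) shows why pulling the meta outputs toward the frozen outputs curbs long-term drift under a noisy, biased unsupervised gradient:

```python
import numpy as np

def drift_after_adaptation(lam, steps=2000, lr=0.05, seed=0):
    """Toy scalar model of long-term adaptation: a 'meta output' x_meta is
    driven by a noisy, biased error signal (standing in for the unsupervised
    loss), while lam * |x_meta - x_frozen| pulls it back toward the frozen
    output, as in the self-distilled regularization."""
    rng = np.random.default_rng(seed)
    x_frozen, x_meta = 0.0, 0.0
    for _ in range(steps):
        noisy_grad = rng.normal(0.1, 1.0)               # biased, noisy signal
        reg_grad = lam * np.sign(x_meta - x_frozen)     # gradient of L1 pull
        x_meta -= lr * (noisy_grad + reg_grad)
    return abs(x_meta - x_frozen)
```

With \(\lambda=0\) the output drifts far from the frozen reference (mirroring error accumulation and forgetting), whereas \(\lambda>0\) keeps it bounded near the source behavior.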
## 4 Classification Experiments
We evaluate our approach on image classification tasks in the continual test-time adaptation setup with three datasets: CIFAR10-C, CIFAR100-C, and ImageNet-C.
**Experimental setup.** Following CoTTA [66], we conduct most experiments on the continual TTA task, where we continually adapt the deployed model to each corruption type sequentially without resetting the model. This task is more challenging but more realistic than the single domain TTA task [65], in which the adapted model is periodically reset to the original pre-trained model after finishing adaptation to each target domain, which requires additional domain information. Moreover, we evaluate our approach on the long-term TTA setup, which is detailed in Section 4.2.
Following the previous TTA studies [65, 66], we evaluate models with {CIFAR10, CIFAR10-C}, {CIFAR100, CIFAR100-C}, and {ImageNet, ImageNet-C} where the first and the second dataset in each bracket refers to the source and the target domain, respectively. The target domains include 15 types of corruptions (noise, blur, weather, and digital) with 5 levels of severity, which are widely used in conventional benchmarks [25].
**Implementation Details.** We evaluate our approach within the frameworks officially provided by previous state-of-the-art methods [66, 50]. For fair comparisons, we use the same pre-trained models, namely the WideResNet-28 and WideResNet-40 [71] models from RobustBench [11] and the ResNet-50 [24] model from TTT++ [42, 9]. Before deployment, we pre-train the meta networks on the source dataset using a cross-entropy loss and an SGD optimizer with a learning rate of 5e-2. Since the meta networks contain only a few layers, we pre-train them for a small number of epochs: 10 and 3 epochs for CIFAR and ImageNet, respectively. After deployment, similar to EATA [50], we use the same SGD optimizer with a learning rate of 5e-3. In Eq. (2), the entropy threshold \(H_{0}\) is set to \(0.4\times\ln C\), where \(C\) denotes the number of task classes. The batch size is 64 and 32 for CIFAR and ImageNet, respectively. We set the importance of the regularization \(\lambda\) in Eq. (3) to 0.5 to balance it with the entropy minimization loss. Additional implementation details can be found in Appendix C.
**Evaluation Metric.** For all the experiments, we report error rates calculated during testing and the memory consumption, including the model parameter and the activation stor
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**WideResNet-40 (AugMix)**} & \multicolumn{2}{c}{**WideResNet-28**} & \multicolumn{2}{c}{**ResNet-50**} \\
**Method** & **Avg. err.** & **Mem. (MB)** & **Avg. err.** & **Mem. (MB)** & **Avg. err.** & **Mem. (MB)** \\ \hline Source & 36.7 & 11 & 43.5 & 58 & 48.8 & 91 \\ BN Stats Adapt [49] & 15.4 & 11 & 20.9 & 58 & 16.6 & 91 \\ \hline Single do. TENT [65] & 12.7 & 188 & 19.2 & 646 & 15.0 & 925 \\ Continual TENT & 13.3 & 188 & 20.0 & 646 & 15.2 & 925 \\ TTT++ [42] & 14.6 & 391 & 20.3 & 1405 & 16.1 & 1877 \\ SWR\&NSP [9] & 12.1 & 400 & 17.2 & 1551 & 15.4 & 1971 \\ NOTE [17] & 13.4 & 188 & 20.2 & 646 & - & - \\ EATA [50] & 13.0 & 188 & 18.6 & 646 & 14.2 & 925 \\ CoTTA [66] & 14.0 & 409 & 17.0 & 1697 & 14.4 & 2066 \\ \hline
**Ours (K=4)** & 12.2 & 80 (↓80%, ↓58%) & 16.9 & 404 (↓76%, ↓37%) & 14.4 & 296 (↓86%, ↓68%) \\
**Ours (K=5)** & **12.1** & 92 (↓78%, ↓51%) & **16.8** & 471 (↓72%, ↓27%) & **14.1** & 498 (↓76%, ↓46%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of error rate (\%) on CIFAR-C.** We report the average error over 15 corruptions in _continual_ TTA and the memory requirement including model parameters and activation sizes. The lowest error is in bold, and the second lowest error is underlined. The memory reduction rates compared to CoTTA and TENT are presented sequentially in parentheses. WideResNet-40 was pre-trained with AugMix [26], a data-processing technique that increases the robustness of the model. Source denotes the pre-trained model without adaptation. Single domain (in short, single do.) TENT resets the model when adapting to a new target domain, so domain labels are required.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **ResNet-50 (AugMix)** & **ResNet-50** & \\
**Method** & **Avg. err.** & **Avg. err.** & **Total Mem. (MB)** \\ \hline Source & 74.36 & 82.35 & 91 \\ BN Stats Adapt [49] & 57.87 & 72.18 & 91 \\ Continual TENT [65] & 56.1 (0.6) & 66.2 (1.1) & 1486 \\ EATA [50] & 54.9 (2.3) & 63.8 (2.7) & 1486 \\ CoTTA [66] & 54.6 (3.9) & **62.6** (3.1) & 3132 \\ \hline
**Ours (K=4)** & 55.2 (3.0) & 64.6 (3.2) & **438** \\
**Ours (K=5)** & **54.4** (2.7) & 63.4 (3.0) & **274** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison of error rate (\(\%\)) on ImageNet-C with severity level 5.** Standard deviation for ten diverse corruption sequences is denoted by the parentheses values. The total memory refers to the sum of model parameters and activations.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Avg. err (\%) & & \multicolumn{2}{c}{**CIFAR10-C**} & \multicolumn{2}{c}{**CIFAR100-C**} \\
**Method** & **Mem. (MB)** & single do. & continual & single do. & continual \\ \hline BN Stats Adapt [49] & 91 & 16.6 & 16.6 & 44.5 & 44.5 \\ TinyTL\({}^{\dagger}\) [4] & 379 & 15.8 & 21.9 & 40.5 & 77.4 \\ RepNet\({}^{\dagger}\) [69] & 508 & 15.2 & 20.9 & 41.5 & 52.1 \\ AuxAdapt\({}^{\dagger}\) [73] & **207** & 16.0 & 16.7 & 44.0 & 45.8 \\
**Ours (K=4)** & 296 & **14.4** & **14.4** & **39.5** & **39.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison with methods for on-device learning.** The backbone is ResNet-50. \({}^{\dagger}\) denotes our own re-implemented models. single do. indicates the single domain TTA setup.
age. We demonstrate the memory efficiency of our work by using the official code provided by TinyTL [4].
### Comparisons
**Comparisons with TTA methods.** We compare our approach to competing TTA methods on extensive benchmarks and various pre-trained models. The results on CIFAR10/100-C are detailed in Table 1. The model partition factor K is set to 4 and 5. Our approach outperforms existing TTA methods with the lowest memory usage across all pre-trained models. Specifically, on WideResNet-40, our method achieves superior performance while requiring 80% and 58% less memory than CoTTA [66] and EATA [50], respectively, which are also designed for continual TTA. Approaches targeting single domain TTA [65, 42, 9] show poor performance due to error accumulation and catastrophic forgetting, as also observed for CoTTA. The error rates for each corruption type are provided in Appendix F.
Table 2 shows the experiments on ImageNet-C. Two ResNet-50 backbones from RobustBench [11] are leveraged. Following CoTTA, evaluations are conducted on ten diverse corruption-type sequences. We achieve comparable performance to CoTTA while utilizing 86% and 75% less memory with K=4 and 5, respectively. In addition, we observe that our approach shows superior performance when adopting a model pre-trained with strong data augmentation (_e.g.,_ AugMix [26]).
**Comparisons with on-device learning methods.** We compare our approach with methods for memory-efficient on-device learning. TinyTL [4] and RepNet [69] focus on _supervised_ on-device learning (_i.e._, they require labeled target data). However, since TTA assumes that we do not have access to target labels, directly applying such methods to TTA is infeasible. Therefore, we experimented by replacing the supervised loss (_i.e._, cross-entropy) with an unsupervised loss (_i.e._, entropy minimization) in TinyTL and RepNet. As shown in Table 3, they suffer from performance degradation in continual TTA, showing inferior performance compared to our proposed approach even in the single domain TTA setup.
Similar to ours, AuxAdapt [73] adds and updates a small network (_i.e._, ResNet-18) while freezing the pre-trained model. Unlike our approach, they only modify a prediction output, not intermediate features. While AuxAdapt requires the least memory usage, it fails to improve TTA performance in single domain TTA. Nevertheless, since the original model is frozen, it suffers less from catastrophic forgetting and error accumulation than TinyTL [4] and RepNet [69] in the continual TTA. Through the results, we confirm that our proposed method brings both memory efficiency and a significant performance improvement in both TTA setups.
### Empirical Study
**Architecture design.** An important design choice in our meta networks is injecting a single BN layer before the original networks and utilizing a residual connection with one conv block. Table 4b studies the effectiveness of the proposed design by comparing it with six different variants. From the results, we observe that using only either the conv block (ii) or the BN layer (iii) degrades performance: the error rate increases by 1.4% and 3.8%, respectively, on CIFAR100-C with WideResNet-40.
In design (i), we enforce both the BN parameters and the Conv layers in the meta networks to take the output of the original networks as inputs. Such a design brings a performance drop. We speculate that this is because the original network, which is not adapted to the target domain, lacks the ability to extract sufficiently meaningful features from the target image. Also, we observed a significant performance degradation after removing the residual connection in design (iv). In addition, since attention mechanisms [67, 30] have generally improved classification accuracy, we study whether attention mechanisms can further boost the TTA performance of our approach in designs (v, vi). The results show that it is difficult for the attention module to train well in the TTA setup using unsupervised learning, unlike when it is applied to supervised learning. An ablation study on each element of the meta networks can be found in Appendix D.

\begin{table}
\end{table}
Table 4: **Architecture ablation experiments.****(a,b)** We compare continual TTA performance for several memory-efficient designs. WRN refers to the WideResNet [71] backbone. **(c)** We report the performance of different designs for partitioning the model. The value next to the backbone’s name denotes the total number of residual blocks of the model.

Figure 4: **Ablation study of K.** We uniformly divide the encoder of the pre-trained model into the model partition factor K. The x-axis indicates the memory size including both the model parameter size and the activation size, while the y-axis indicates the average error rate. The values in parentheses show the rate of increase in model parameters compared to the original model.
**Number of blocks in each partition.** ResNet [24] consists of multiple residual blocks (_e.g._, BasicBlock and Bottleneck in PyTorch [54]). For instance, WideResNet-28 has 12 residual blocks. By varying the number of blocks in each part of the original networks, we analyze TTA performance in Table 4c. We observe that splitting the shallow parts of the encoder densely (_e.g._, 2, 2, 4, 4 blocks, from the shallow to the deep parts sequentially) brings more performance gain than splitting the deep layers densely (_e.g._, 4, 4, 2, 2 blocks). We suggest that this is because we modify the lower-level features more as we split the shallow layers densely. Our observation is aligned with the finding of previous TTA works [9, 48] that updating the shallow layers more than the deep layers improves TTA performance.
**Number of model partitions K.** Fig. 4 shows both the memory requirement and the adaptation performance according to the model partition factor K. With a small K (_e.g._, 1 or 2), the intermediate outputs are barely modified, making it difficult to achieve a reasonable level of performance. We achieve the best TTA performance with K of 4 or 5, as a greater number of intermediate features is adjusted. Meanwhile, we observe that the average error rate saturates and remains consistent when K is set to large values (_e.g._, 6, 7, or 8), even with the increased amount of activations and learnable parameters. Therefore, we set K to 4 and 5.
**Catastrophic forgetting.** We conduct experiments to confirm the catastrophic forgetting effect (Fig. 5a). After finishing adaptation to each corruption, we evaluate the model on clean target data (_i.e._, the test set of the CIFAR dataset) without updating the model. For TENT with no regularization, the error rates on the clean target data (_i.e._, clean error (%)) increase gradually, which can be seen as the phenomenon of catastrophic forgetting. In contrast, our approach consistently maintains the error rates on the clean target data, showing that our regularization loss effectively prevents catastrophic forgetting. These results indicate that our method can be reliably utilized across diverse domains, including the source and target domains.
**Error accumulation in long-term adaptation.** To evaluate the error accumulation effect, we repeat all the corruption sequences for 100 rounds. The results are described in Fig. 5b. For TENT, a gradual increase in error rates is observed in later rounds, even with small learning rates. For example, TENT [65] with a learning rate of 1e-5 achieves an error rate of 39.7% and reaches its lowest error rate of 36.5% after 8 rounds. However, it shows an increased error rate of 38.6% after 100 rounds due to overfitting. It suggests
Figure 5: **Regularization ablation experiments**. We conduct experiments with WideResNet-40 on CIFAR100-C. **(a)** We utilize a test set of the CIFAR-100 dataset to measure clean error after adapting to each corruption. Maintaining clean errors at a stable level indicates that our approach helps the model robust to catastrophic forgetting. **(b)** We simulate a long-term adaptation scenario by repeating 100 rounds of 15 corruption sequences. In the absence of regularization, error accumulation can lead to overfitting (_i.e._, the case of the error increases exponentially). However, our approach does not suffer from such an error accumulation. We set K to 5 in the above experiments.
\begin{table}
\begin{tabular}{c l c c c c c} \hline \hline & **Batch size** & **16** & **8** & **4** & **2** & **1** \\ \hline \multirow{3}{*}{\begin{tabular}{c} Non \\ training \\ \end{tabular} } & Source & 69.7 & 69.7 & 69.7 & 69.7 & 69.7 \\ & BN Stats Adapt [49] & 41.1 & 50.2 & 59.9 & 81.0 & 99.1 \\ & AdaptBN [55] & 39.1 & 41.2 & 45.2 & 49.0 & 54.0 \\ \hline \multirow{3}{*}{
\begin{tabular}{c} Training \\ \end{tabular} } & Con. TENT [65] & 40.9 & 47.8 & 58.6 & 82.2 & 99.0 \\ & Con. TENT+AdaptBN & 38.2 & 40.2 & 43.2 & 47.7 & 52.2 \\ \cline{1-1} \cline{2-7} & **Ours (K=5)** & 40.0 & 45.8 & 63.4 & 80.8 & 99.0 \\ \cline{1-1} & **Ours (K=5)+AdaptBN** & **36.9** & **39.3** & **42.2** & **46.5** & **51.8** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Experiments with small batch sizes.** We evaluate all baselines with WideResNet-40 on CIFAR100-C. Con. TENT is the abbreviation for continual TENT.
that without regularization, TTA methods eventually face overfitting in long-term adaptation [75, 2, 35]. Our method in the absence of regularization (\(\lambda=0\)) also causes overfitting. On the other hand, when self-distilled regularization is involved (\(\lambda>0\)), the performance remains consistent even in the long-term adaptation.
**Small batch size.** We examine the scalability of our approach with a TTA method designed for small batch sizes, namely adapting BN statistics (_i.e_., AdaptBN [55, 72]). When the batch size is too small, the estimated statistics can be unreliable [55]. Thus, AdaptBN calibrates the source and target statistics for the normalization in BN layers so as to alleviate the domain shift and preserve the discriminative structures. As shown in Table 5, training models with small batch sizes (e.g., 2 or 1) generally increases the error rates. However, this issue can be addressed by applying AdaptBN to our method. To be more specific, with a batch size of 1, we achieve an absolute improvement of 17.9% and 2.2% over Source and AdaptBN, respectively.
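A minimal sketch of such statistic calibration (the convex-mixing rule and the value of \(\alpha\) below are our assumptions; AdaptBN [55] defines its own calibration):

```python
import numpy as np

def calibrated_bn_stats(x, src_mean, src_var, alpha=0.9):
    """Blend running source statistics with current test-batch statistics
    before BN normalization, in the spirit of AdaptBN. The convex mixing
    and alpha are illustrative choices, not the cited method's exact rule."""
    tgt_mean = x.mean(axis=0)      # per-channel batch mean
    tgt_var = x.var(axis=0)        # per-channel batch variance
    mean = alpha * src_mean + (1.0 - alpha) * tgt_mean
    var = alpha * src_var + (1.0 - alpha) * tgt_var
    return mean, var
```

With a batch size of 1 the target variance collapses to zero, so a large \(\alpha\) keeps normalization anchored to the reliable source statistics while still injecting some target information.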
**Number of source samples for the meta networks.** Like previous TTA works [9, 42, 34, 40] including EATA [50], our approach requires access to the source data for pre-training the proposed meta networks before model deployment. To cope with the situation where only a subset of the source dataset is available, we study the TTA performance of our method according to the number of accessible source samples. The results are specified in Table 7, where we use WideResNet-40. We observe that our method outperforms the baseline model even with a small number of training samples (_e.g_., 10% or 20%), while showing comparable performance with excessively small amounts (_e.g_., 5%). Note that we still reduce memory usage by about 51% compared to EATA.
## 5 Segmentation Experiments
We investigate our approach in semantic segmentation. First, we create Cityscapes-C by applying the weather corruptions (brightness, fog, frost, and snow [25]) to the validation set of Cityscapes [10]. Then, to simulate continual distribution shifts, we repeat the four types of Cityscapes-C ten times. In this scenario, we conduct continual TTA using the publicly-available ResNet-50-based DeepLabV3+ [7], which is pre-trained on Cityscapes for domain generalization task [8] in semantic segmentation. For TTA, we use the batch size of 2. More details are specified in Appendix C.
**Results.** We report the results based on mean intersection over union (mIoU) in Table 6. The table demonstrates that our approach both minimizes memory consumption and performs long-term adaptation stably for semantic segmentation. Unlike continual TENT, our method avoids catastrophic forgetting and error accumulation, allowing us to achieve the highest mIoU score while using 66% less memory in the continual TTA setup. Additional experimental results can be found in Appendix B.
## 6 Conclusion
This paper proposed a simple yet effective approach that improves continual TTA performance and saves a significant amount of memory, which can be applied to edge devices with limited memory. First, we presented a memory-efficient architecture that consists of original networks and meta networks. This architecture requires much less memory size than the previous TTA methods by decreasing the intermediate activations used for gradient calculations. Second, in order to preserve the source knowledge and prevent error accumulation during long-term adaptation with noisy unsupervised loss, we proposed self-distilled regularization that controls the output of meta networks not to deviate significantly from the output of the original networks. With extensive experiments on diverse datasets and backbone networks, we verified the memory efficiency and TTA performance of our approach. In this regard, we hope that our efforts will facilitate a variety of studies that make test-time adaptation for edge devices feasible in practice.
**Acknowledgments.** We would like to thank Kyuwoong Hwang, Simyung Chang, and Byeonggeun Kim for their valuable feedback. We are also grateful for the helpful discussions from Qualcomm AI Research teams.
\begin{table}
\begin{tabular}{l|c|c c c c|c c c c|c c c c|c c c c|c} \hline \hline Time & & \multicolumn{16}{c|}{\(t\longrightarrow\)} & \\ \hline Round & & \multicolumn{4}{c|}{1} & \multicolumn{4}{c|}{4} & \multicolumn{4}{c|}{7} & \multicolumn{4}{c|}{10} & All \\ \hline Method & **Mem. (MB)** & **Brig.** & **Fog** & **Fros.** & **Snow** & **Brig.** & **Fog** & **Fros.** & **Snow** & **Brig.** & **Fog** & **Fros.** & **Snow** & **Brig.** & **Fog** & **Fros.** & **Snow** & **Mean** \\ \hline Source & 280 & 60.4 & 54.3 & 30.0 & 4.1 & 60.4 & 54.3 & 30.0 & 4.1 & 60.4 & 54.3 & 30.0 & 4.1 & 60.4 & 54.3 & 30.0 & 4.1 & 37.2 \\ BN Stats Adapt [49] & 280 & 69.1 & 61.0 & 44.8 & 39.1 & 69.1 & 61.0 & 44.8 & 39.1 & 69.1 & 61.0 & 44.8 & 39.1 & 69.1 & 61.0 & 44.8 & 39.1 & 53.6 \\ Continual TENT [65] & 2721 & 70.1 & 62.1 & 46.1 & 40.2 & 62.2 & 53.7 & 44.4 & 37.9 & 50.0 & 41.5 & 31.6 & 26.6 & 39.2 & 32.6 & 25.3 & 22.4 & 42.9 \\ \hline
**Ours (K=4)** & **918**(60) & **70.2** & **62.4** & **46.3** & **41.9** & **70.0** & **62.8** & **46.5** & **42.1** & **70.1** & **62.8** & **46.6** & **42.2** & **55.3** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Semantic segmentation results in continual test-time adaptation tasks.** We conduct experiments on Cityscapes [10] with four weather corruptions [25] applied. The four conditions are repeated ten times to simulate continual domain shifts. All results are evaluated based on DeepLabV3Plus-ResNet-50.
\begin{table}
\begin{tabular}{l|c|c|c c c} \hline \hline & **EATA [50]** & **Ours (K=5)** & \multicolumn{3}{c}{**\# of source samples**} \\ Target domain & (188MB) & (92MB) & 10k (20\%) & 5k (10\%) & 2.5k (5\%) \\ \hline CIFAR10-C & 13.0 & **12.1** & 12.4 & 12.9 & 13.1 \\ CIFAR100-C & 37.1 & **36.3** & 36.4 & 36.6 & 37.2 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Ablation of # of source samples to warm up the meta networks.** Before deployment, we pre-trained the meta networks using only a subset of the source dataset (_e.g_., 20%, 10%, and 5%). The memory usage (MB) of each method is also presented. |
2305.14739 | Trusting Your Evidence: Hallucinate Less with Context-aware Decoding | Language models (LMs) often struggle to pay enough attention to the input
context, and generate texts that are unfaithful or contain hallucinations. To
mitigate this issue, we present context-aware decoding (CAD), which follows a
contrastive output distribution that amplifies the difference between the
output probabilities when a model is used with and without context. Our
experiments show that CAD, without additional training, significantly improves
the faithfulness of different LM families, including OPT, GPT, LLaMA and
FLAN-T5 for summarization tasks (e.g., 14.3% gain for LLaMA in factuality
metrics). Furthermore, CAD is particularly effective in overriding a model's
prior knowledge when it contradicts the provided context, leading to
substantial improvements in tasks where resolving the knowledge conflict is
essential. | Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih | 2023-05-24T05:19:15Z | http://arxiv.org/abs/2305.14739v1 | # Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
###### Abstract
Language models (LMs) often struggle to pay enough attention to the input context, and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we present context-aware decoding (CAD), which follows a contrastive output distribution that amplifies the difference between the output probabilities when a model is used with and without context. Our experiments show that CAD, without additional training, significantly improves the faithfulness of different LM families, including OPT, GPT, LLaMA and FLAN-T5 for summarization tasks (e.g., 14.3% gain for LLaMA in factuality metrics). Furthermore, CAD is particularly effective in overriding a model's prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where resolving the knowledge conflict is essential.
## 1 Introduction
Language models (LMs) are remarkably effective in generating coherent and fluent continuations of a prompt or document prefix. During generation, they mostly rely on two sources of knowledge: (1) _prior knowledge_, which is learned during pretraining and stored implicitly within the model parameters; (2) _context knowledge_, which is passed as inputs in the prefix context (Chan et al., 2022). However, it remains an open question how a pretrained LM, particularly a vanilla LM without task-specific finetuning, balances these two knowledge sources during generation.
Previous research shows that LMs can fail to pay enough attention to new information introduced in the context knowledge. This can lead to hallucination in summarization (Maynez et al., 2020; Pagnoni et al., 2021), where the generated summaries include facts not present in the input document. Insufficient attention to context is especially problematic when the context knowledge contradicts the prior knowledge (Longpre et al., 2021; Zhou et al., 2023). For instance, when LLaMA (Touvron et al., 2023) is presented with a recent document "Argentina won the FIFA World Cups in 1978, 1986 and 2022..." in its context (Figure 1), it still predicts "Two" in response to the question "How many World Cups have Argentina won?", due in part to the outdated training data.
In this work, we present a simple context-aware decoding (CAD) method to encourage the LM to attend to its context during generation. As shown in Figure 1, CAD samples from a new output distribution, which amplifies the difference between output probabilities with and without the context document. This provides a new form of contrastive decoding (Li et al., 2022), which effectively downweights the prior knowledge when more relevant contextual information is provided. CAD can be used with off-the-shelf pretrained language models without any additional training.
Experimental results from summarization tasks show that context-aware decoding significantly enhances the generation faithfulness of various vanilla LMs including OPT (Zhang et al., 2022), GPT-Neo (Black et al., 2021), LLaMA (Touvron et al., 2023) and instruction-finetuned LMs such as FLAN (Chung et al., 2022). For instance, when applied to LLaMA-30B in CNN-DM, CAD leads to substantial improvement in both ROUGE-L (21%)
Figure 1: An illustration of context-aware decoding.
and summary factuality evaluation metrics (14.3%). More notably, CAD is especially beneficial for knowledge conflicting tasks, where the context contains information contradictory to the model's prior knowledge. CAD brings a 2.9x improvement to LLaMA-30B on a knowledge conflicts QA dataset Longpre et al. (2021). Furthermore, we observe that this gain brought by CAD increases as the model size grows in knowledge conflicts tasks. These results demonstrate the potential of CAD in mitigating hallucinations in text generation and overriding prior knowledge with reliable and trusted information.
## 2 Method
### Background
Given a language model \(\theta\), an input query \(\mathbf{x}\), and a context \(\mathbf{c}\) that contains some external knowledge _unfamiliar_ to or _in conflict_ with the model's prior knowledge, we ask our model \(\theta\) to generate a response \(\mathbf{y}\) given the query and context. The response can be directly sampled (autoregressively) from the probability distribution conditioned on query \(\mathbf{x}\) and context \(\mathbf{c}\):
\[y_{t}\sim p_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})\] \[\propto\exp\mathrm{logit}_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y} _{<t})\]
However, in cases where the context \(\mathbf{c}\) contains knowledge that is out-of-distribution with respect to \(\theta\), we hypothesize that the model can struggle to effectively attend to \(\mathbf{c}\) and overly rely on the prior knowledge encoded in \(\theta\). For instance, as illustrated in Figure 1, when the context \(\mathbf{c}\) states "Argentina won the FIFA World Cups in 1978, 1986 and 2022...", it contradicts the LM's outdated prior knowledge that Argentina has won the World Cup twice. The language model may still incorrectly predict "Two" even when presented with the context \(\mathbf{c}\) and the query \(\mathbf{x}\).
### Context-aware Decoding
To mitigate such issues, we factor out the prior knowledge from the model's original output distribution contrastively. Here, we model the prior knowledge as \(p_{\theta}(y_{t}\mid\mathbf{x},\mathbf{y}_{<t})\) and adjust the model's original output probability distribution using the pointwise mutual information (PMI) between the context \(\mathbf{c}\) and the generation \(y_{t}\), conditioned on \(\mathbf{x},\mathbf{y}_{<t}\). Formally, we have:
\[y_{t}\sim\tilde{p}_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})\] \[\propto p_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})\bigg{(} \frac{p_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})}{p_{\theta}(y_{t}\mid\mathbf{ x},\mathbf{y}_{<t})}\bigg{)}^{\alpha}\]
where the output probability is a product-of-experts of the original output probability and PMI weighted by \(\alpha\). Essentially, outputs that become much more likely when the context is included are preferred (Figure 1).
This expression is not a valid probability distribution and needs to be normalized across all possible values of \(y_{t}\). By rearranging the terms, we obtain the final form:
\[y_{t}\sim\mathrm{softmax}[(1+\alpha)\,\mathrm{logit}_{\theta}(y_{ t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})\] \[-\alpha\,\mathrm{logit}_{\theta}(y_{t}\mid\mathbf{x},\mathbf{y}_{<t})]\]
Larger \(\alpha\) means more weight on our adjustment (\(\alpha=0\) reduces to regular decoding).1 We refer to this simple method as context-aware decoding. From the adjusted output distribution \(\tilde{p}\), we can apply various sampling strategies, such as nucleus sampling Holtzman et al. (2019).
Footnote 1: If we identify an external knowledge \(\mathbf{c}\) conditionally independent to the generation, \(p_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})=p_{\theta}(y_{t}\mid\mathbf{x},\mathbf{ y}_{<t})\), even a non-zero \(\alpha\) would not have an impact to the original output distribution.
Essentially, context-aware decoding is just a contrastive ensemble between the logits of \(p_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})\) and \(p_{\theta}(y_{t}\mid\mathbf{x},\mathbf{y}_{<t})\). A similar contrastive objective is universal in image generation, where classifier-free diffusion models Ho and Salimans (2022) predict diffusion noise with \((1+\alpha)\mathbf{\epsilon}_{\theta}(\mathbf{x},\mathbf{c})-\alpha\mathbf{\epsilon}_{\theta}( \mathbf{x})\), with \(\mathbf{c}\) being a control to the image. In text generation, Malkin et al. (2021) propose coherence boosting with the same intuition, with a focus on contrasting the full input and a short premise-free input, promoting coherence w.r.t. the long context. Instead of using a single model \(\theta\) in this work, different models can also be used in the distribution adjustments to demote unwanted model behaviors or distill expert model's capability Liu et al. (2021); Li et al. (2022).
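The adjusted sampling rule above can be implemented directly on the two logit vectors. A minimal sketch (function name is ours), assuming the caller obtains logits from the same model once with and once without the context prepended:

```python
import math

def cad_probs(logits_with_ctx, logits_no_ctx, alpha=0.5):
    """Context-aware decoding distribution over the vocabulary:
    softmax[(1 + alpha) * logit(y|c,x,y<t) - alpha * logit(y|x,y<t)].
    alpha = 0 recovers the regular softmax over the contextual logits."""
    adj = [(1 + alpha) * a - alpha * b
           for a, b in zip(logits_with_ctx, logits_no_ctx)]
    m = max(adj)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in adj]
    total = sum(exps)
    return [e / total for e in exps]
```

Any sampling strategy (greedy, nucleus) can then be applied on top of the returned distribution, matching the paper's use of CAD as a drop-in replacement for the model's next-token distribution.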
## 3 Experimental Setup
We perform evaluation on tasks that require LMs to read and reason over contexts and produce outputs that are faithful to the contexts. Following prior work Zhang et al. (2023); Zhou et al. (2023), we evaluate the models using prompting.
### Datasets and Metrics
SummarizationWe conduct summarization experiments on two news datasets: CNN-DM (See et al., 2017) and XSUM (Narayan et al., 2018). We use ROUGE-L (Lin, 2004) to evaluate summarization quality. To measure the factual consistency of summaries, we adopt BERT-Precision (Pagnoni et al., 2021) as well as FactKB (Feng et al., 2023), which has been demonstrated to achieve high correlations with human judgment on the two summarization datasets.
Knowledge ConflictsWe evaluate performance on two knowledge conflict datasets: MemoTrap (Liu and Liu, 2023) and NQ-Swap (Longpre et al., 2021). MemoTrap is created to investigate whether language models can fall into memorization traps. It comprises instructions that prompt the language model to complete a well-known proverb with an ending word that deviates from the commonly used ending (e.g., _Write a quote that ends in the word "early": Better late than_ ). NQ-Swap is based on a QA dataset, natural questions (NQ) (Kwiatkowski et al., 2019), where the objective is to answer questions based on a reliable gold document. To generate NQ-Swap, Longpre et al. (2021) first identify questions in NQ with named entity answers, find the supportive document for each question and then replace the gold answer entity in the document with a random entity. A faithful LM should generate the replaced entity as the answer when given the question and modified document. We also include the original NQ dataset with the question and original document for evaluation. We use Exact Match (EM) as the evaluation metric for NQ-Swap, NQ and MemoTrap.
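Exact Match is typically computed after light answer normalization; this excerpt does not spell out the normalization used, so the sketch below follows the common SQuAD-style convention (lowercasing, punctuation and article removal, whitespace collapsing) as an assumption:

```python
import re
import string

def normalize_answer(text):
    """SQuAD-style normalization commonly used before Exact Match."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse whitespace

def exact_match(prediction, gold):
    """1 if the normalized prediction equals the normalized gold answer."""
    return int(normalize_answer(prediction) == normalize_answer(gold))
```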
In Table 1, we show illustrative examples of the contexts we aim to upweight for the model and the queries across different datasets. We hope LMs pay more attention to the source document in XSUM and NQ-Swap. On the other hand, we hope LMs focus more on the instruction in MemoTrap.
### Models and Baselines
We apply CAD to pretrained language models including OPT (13B and 30B) (Zhang et al., 2022), GPT-Neo (2.7B and 20B) (Black et al., 2021), LLaMA (13B and 30B) (Touvron et al., 2023) and instruction-finetuned language models such as FLAN-T5 (XL 3B and XXL 11B) (Chung et al., 2022).
CAD introduces a hyperparameter \(\alpha\) to control the adjustment level. We set \(\alpha=0.5\) for all models evaluated on the summarization datasets and \(\alpha=1\) for all models evaluated on the knowledge conflict datasets. We observed that \(\alpha=0.5\) generally yielded good results across all settings and all datasets, but a slightly higher \(\alpha\) is more effective in the knowledge conflict setting, where the prior knowledge needs to be factored out more. We investigate the effect of \(\alpha\) in Section 4.2.
For the baselines, we use regular decoding, following prior work (Longpre et al., 2021; Kwiatkowski et al., 2019): greedy decoding for knowledge conflict tasks and top-\(p\) sampling with \(p\)=0.9 for summarization tasks (Holtzman et al., 2019). For CAD, we use the same sampling strategies on top of the adjusted output probability distribution.
## 4 Results
### Main Results
SummarizationTable 2 reports the results on CNN-DM and XSUM. We observe that CAD outperforms the standard decoding algorithm by a large margin in all eight models across both datasets. Specifically, when applied to LLaMA-30B in CNN-DM, CAD leads to a 21% increase in ROUGE-L, a 14.3% increase in FactKB and a 7.8% increase in BERT-P. This result demonstrates that CAD could effectively improve the quality and factuality of the generated summaries from a diverse set of language models.
\begin{table}
\begin{tabular}{l l} \hline \hline & \multicolumn{1}{c}{**XSUM**} \\ \hline \(\mathbf{c}\) & Article: Prison Link Cymru had 1,099 referrals in 2015-16 and said some ex-offenders were living rough for up to a year before finding suitable accommodation... \\ \(\mathbf{x}\) & Summarize the article in one sentence. Summary: \\ \hline \multicolumn{2}{c}{**NQ-SWAP**} \\ \hline \(\mathbf{c}\) & Tesla CEO Elon Musk is now in charge of Twitter, \\ & CNBC has learned... \\ \(\mathbf{x}\) & Who is Twitter CEO now? \\ \hline \multicolumn{2}{c}{**MemoTrap**} \\ \hline \(\mathbf{c}\) & Write a quote that ends in the word “early”: \\ \(\mathbf{x}\) & Better late than \\ \hline \hline \end{tabular}
\end{table}
Table 1: An illustration of the inputs to CAD applied to each dataset. CAD upweights the context \(\mathbf{c}\) (in red) by sampling each token from \(\mathrm{softmax}[(1+\alpha)\,\mathrm{logit}_{\theta}(y_{t}\mid\mathbf{c},\mathbf{x},\mathbf{y}_{<t})-\alpha\,\mathrm{logit}_{\theta}(y_{t}\mid\mathbf{x},\mathbf{y}_{<t})]\).
Knowledge ConflictsOur results for the knowledge conflict datasets, NQ-SWAP and MemoTrap, as well as the original NQ are detailed in Table 3. CAD is significantly better than regular decoding in all settings, with the exception of a minor decrease observed for FLAN-T5 on the non-conflict NQ dataset.2 Despite this, CAD achieves substantially better performance on the knowledge conflict datasets, e.g., CAD improves GPT-Neo 20B by 54.4% on MemoTrap and by 128% on NQ-SWAP. This substantial improvement suggests that context-aware decoding is particularly beneficial for LMs to adhere to the given context, in scenarios where the model's prior knowledge contradicts the context knowledge.
Footnote 2: This slight decline can be attributed to the fact that this particular NQ dataset is included in the instruction-finetuning sets used by FLAN-T5, and hence, the model has been previously trained on it.
### Analysis
Qualitative analysisWe provide qualitative examples for XSUM and MemoTrap in Table 4. In XSUM, regular decoding generates text that is not mentioned in the article, whereas CAD produces output based exclusively on the information in the input article. For MemoTrap, standard decoding disregards the instruction and generates the memorized ending, while CAD adheres to the instruction within the given context and produces the desired output.
Effect of model sizeAs shown in Figure 2, the gain brought by CAD stays consistent across model sizes in CNN-DM. In MemoTrap and NQ-SWAP, this gain increases as the model size grows, indicating that larger LMs can have a greater tendency to rely on their prior knowledge instead of reading the contexts, thereby benefiting more from CAD.
Effect of adjustment level \(\alpha\)Context-aware decoding introduces a hyperparameter \(\alpha\), which serves to control the adjustment level of CAD (a small \(\alpha\) makes the distribution closer to the original next-token distribution). We conduct experiments with various values of \(\alpha\) and present the results in Figure 3. Across all three datasets, we find \(\alpha=0.5\) consistently provides robust improvements over regular decoding. Further increasing the value of \(\alpha\) yields additional improvement in tasks involving knowledge conflicts.
## 5 Related Work
Summarization FactualitySummarization models have shown a tendency to generate hallucinated texts (Maynez et al., 2020; Pagnoni et al., 2021). This has led to growing efforts to improve factual consistency, including applying attention to fact triples extracted from source documents (Cao et al., 2018; Zhu et al., 2021), optimizing summarization models towards factual consistency metrics (Nan et al., 2021; Cao and Wang, 2021), learning a post-editing error corrector (Dong et al., 2020) and removing noisy training samples (Kang and Hashimoto, 2020; Goyal and Durrett, 2021). However, all these methods require additional fine-tuning and are not directly suitable for zero-shot and few-shot prompting scenarios.
Knowledge ConflictsWhen presented with an updated document with conflicting knowledge, we expect language models to generate responses based on the provided contexts rather than relying solely on outdated parametric knowledge. This setting is especially valuable to retrieval-augmented language models (Khandelwal et al., 2020; Shi et al., 2023; Min et al., 2022; Yasunaga et al., 2023), where documents retrieved from external databases are used as additional input to provide LMs additional knowledge. However, simply adding documents does not always change the model predictions, as current LMs often overlook the contexts
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{**XSUM**} \\ \hline Article & He passed away peacefully in hospital on Tuesday after a short illness. Born in Tourmakeady, County Mayo, he worked as a teacher before securing a part in the premiere of the Brian Friel play Translations in 1980. Lally became a household name in Ireland for his role as Miley Byrne in the RTE soap opera Glenroe and later started in the BBC series Ballykissangel. He also appeared in the Hollywood movie Alexander and provided the voice for the Oscar-nominated, animated Irish film, The Secret of Kells. As a fluent Irish speaker and advocate of the language, Lally had roles in several Irish language films... \\ Regular & Westminister actor Pat \\ & Tuesday night aged 82 \\ CAD & Actor Lally, best known for Glenroe and Ballykissangel, has died in hospital on Tuesday \\ \hline \multicolumn{2}{c}{**MemoTrap**} \\ \hline Input & Write a quote that ends in the word “early”. Better late than \\ Regular & never \\ CAD & early \\ \hline \hline \end{tabular}
\end{table}
Table 4: Qualitative examples of contrast-aware decoding. The nonfactual or inconsistent texts are highlighted in yellow.
Figure 2: OPT models of varying sizes consistently benefit from CAD. The x-axis indicates the size of language models and the y-axis is the performance.
and rely heavily on their prior parametric knowledge (Longpre et al., 2021; Chen et al., 2022). Existing approaches for improving model's faithfulness to the context, such as the prompting-based method (Zhou et al., 2023), are limited in that they could only apply to large-scale instruction-finetuned LMs like OpenAI's text-davinci-003. In contrast, our work investigates a decoding strategy to tackle this problem, which is applicable to any LM.
Contrastive Decoding MethodsContrastive decoding methods have been extensively explored for text generation. Coherence boosting (Malkin et al., 2021) demotes a short context from a full context, focusing on the longer-range context for coherence and overall better generation quality. MMI-based decoding (Li et al., 2015) uses a contrastive formulation to improve output diversity in dialog generation. In this work, we adopt the same intuition and focus on analyzing the knowledge conflict scenarios where faithfulness to the context is particularly important but difficult for regular decoding methods. DExperts (Liu et al., 2021) demotes the output distribution of an _anti_-expert (e.g., exposed to toxic language) to help keep the generations free from the unwanted attributes. Contrastive decoding (Li et al., 2022) demotes an _amateur_ model (e.g., models with a very small number of parameters) to help distill the expert knowledge learned in the larger, more competitive models. In general, contrastive decoding has been shown to be a general way to control model outputs, which we reinforce by considering the new case of factual consistency with the textual context.
## 6 Conclusion
Off-the-shelf language models may pay insufficient attention to the supplied context relative to their learned prior knowledge, leading to generations that are unfaithful to the input context. We present context-aware decoding, a simple inference-time method that downweights the output probability associated with the model's prior knowledge to promote the model's attention to the contextual information. We experiment on two families of tasks that require strong attention to the context: summarization and knowledge conflict tasks. We show that CAD provides more reliable and factual outputs across different language models of various sizes.
|
2307.01203 | Near-field coded-mask technique and its potential for proton therapy
monitoring | Objective. Prompt-gamma imaging encompasses several approaches for online
monitoring of beam range or deposited dose distribution in proton therapy. We
test one of the imaging techniques - a coded mask approach - both
experimentally and via simulations. Approach. Two imaging setups have been
investigated experimentally. Each of them comprised a structured tungsten
collimator in a form of a MURA mask and a LYSO:Ce scintillation detector of
fine granularity. The setups differed in the detector dimensions and the
operation mode (1D or 2D imaging). A series of measurements with radioactive
sources have been conducted, testing the setups' performance of near-field
gamma imaging. Additionally, Monte Carlo simulations of a larger setup of the
same type were conducted, investigating its performance with a realistic gamma
source distribution occurring during proton therapy. Main results. The images
of point-like sources reconstructed from two smallscale prototypes' data using
the MLEM algorithm constitute the experimental proof of principle for the
near-field coded-mask imaging modality, both in the 1D and the 2D mode. Their
precision allowed us to calibrate out certain systematic offsets appearing due
to the misalignment of setup elements. The simulation of the full-scale setup
yielded a mean distal falloff retrieval precision of 0.72 mm in the studies for
beam energy range 89.5-107.9 MeV and with 1x10^8 protons (typical number for
single distal spots). The implemented algorithm of image reconstruction is
relatively fast - a typical procedure needs several seconds. Significance.
Coded-mask imaging appears a valid option for proton therapy monitoring. The
results of simulations let us conclude that the proposed fullscale setup is
competitive to the knife-edge-shaped and the multiparalell slit cameras
investigated by other groups. | Ronja Hetzel, Vitalii Urbanevych, Andreas Bolke, Jonas Kasper, Magdalena Kołodziej, Monika Kercz, Andrzej Magiera, Florian Mueller, Sara Müller, Magdalena Rafecas, Katarzyna Rusiecka, David Schug, Volkmar Schulz, Achim Stahl, Bjoern Weissler, Ming-Liang Wong, Aleksandra Wrońska | 2023-06-22T18:31:30Z | http://arxiv.org/abs/2307.01203v1 | # Near-field coded-mask technique and its potential for proton therapy monitoring
###### Abstract
_Objective._ Prompt-gamma imaging encompasses several approaches for online monitoring of beam range or deposited dose distribution in proton therapy. We test one of the imaging techniques - a coded mask approach - both experimentally and via simulations.
_Approach._ Two imaging setups have been investigated experimentally. Each of them comprised a structured tungsten collimator in a form of a MURA mask and a LYSO:Ce scintillation detector of fine granularity. The setups differed in the detector dimensions and the operation mode (1D or 2D imaging). A series of measurements with radioactive sources have been conducted, testing the setups' performance of near-field gamma imaging. Additionally, Monte Carlo simulations of a larger setup of the same type were conducted, investigating its performance with a realistic gamma source distribution occurring during proton therapy.
_Main results._ The images of point-like sources reconstructed from two small-scale prototypes' data using the MLEM algorithm constitute the experimental proof of principle for the near-field coded-mask imaging modality, both in the 1D and the 2D mode. Their precision allowed us to calibrate out certain systematic offsets appearing due to the misalignment of setup elements. The simulation of the full-scale setup yielded a mean distal falloff retrieval precision of \(0.72\,\mathrm{mm}\) in the studies for beam energy range \(89.5\)-\(107.9\,\mathrm{MeV}\) and with \(1\times 10^{8}\) protons (typical number for single distal spots). The implemented algorithm of image reconstruction is relatively fast - a typical procedure needs several seconds.
_Significance._ Coded-mask imaging appears a valid option for proton therapy monitoring. The results of simulations let us conclude that the proposed full-scale setup is competitive with the knife-edge-shaped and the multiparallel slit
cameras investigated by other groups.
keywords: coded mask, prompt-gamma imaging, proton therapy, range verification +
Footnote †: journal: Journal of Nuclear Physics
## 1 Introduction
There is a consensus in the proton therapy community that the implementation of methods that enable real-time monitoring of proton therapy would allow to better exploit the potential of this treatment modality and thus offer better and safer therapy to patients [1]. What requires verification is the spatial distribution of the dose delivered during therapy and its compliance with the one resulting from the treatment plan, preferably in a continuous manner during irradiation, on a single-spot (or at most a few-spot) basis. It is also of interest for the development of an online adaptive proton therapy, which is already used for photon irradiations [2]. Many of the proposed approaches rely on the detection of prompt-gamma (PG) radiation, where the spectral and spatial characteristics are correlated with the beam range [3; 4; 5; 6]. An overview of PG-based monitoring methods for proton therapy can be found in [7].
Several groups have been developing imaging setups featuring gamma cameras with passive collimation. The most mature projects involve the use of a camera combined with a collimator with a knife-edge-shaped slit. Such a setup has been tested in clinical conditions of pencil-beam scanning and a passively-scattered beam and proven to provide control of inter-fractional range changes with a precision of about \(2\,\mathrm{mm}\)[8; 9]. However, in view of the fact that the number of registered prompt-gamma quanta is one of the main limiting factors in prompt-gamma imaging (PGI), multi-slit systems have been investigated too. Not only do they register more gamma quanta than single-slit cameras, but they also offer a larger field of view. The performance of multi-slit setups was studied extensively via Geant4 simulations for different geometrical configurations in [10], and experimentally in [11]. However, no superiority with respect to the knife-edge-shaped camera could be shown. An extension of the latter was proposed in [12; 13], where a collimator with many knife-edge-shaped slits was studied. Although the obtained results were quite impressive (\(2\sigma=1\,\mathrm{mm}\) range retrieval precision), the studies were conducted at a beam energy of \(50\,\mathrm{MeV}\), i.e., below the lower limit of the clinically applied energy range. Unfortunately, the studies of that group were discontinued. The concept, however, was picked up and extended to a dual-head setup, enabling 3D-imaging [14]. Simulation results indicate the feasibility of using the setup for online range monitoring in proton therapy, with a position resolution better than \(2\,\mathrm{mm}\) across the whole field of view. The group is currently developing a prototype setup to verify their simulation results.
Yet a different approach has been presented in [15], where a setup consisting of a pixelated detector and a coded-mask collimator with a modified uniformly redundant array (MURA) pattern [16] has been studied via simulations. This kind of gamma collimation is widely used in astronomy and proven to work well also in reconstructing the positions of gamma sources (see, e.g., [17; 18]), though one needs to stress that those applications deal with far-field imaging and mainly point-like sources, which is in general a less demanding imaging scenario. A MURA collimator is undoubtedly easier to manufacture than the one with multiple knife-edge shaped slits. The setup
like the one proposed in [15] offers 2D-imaging with a much larger detection efficiency than the solutions discussed above, since half of the collimator pixels are open. The authors report an accuracy of range determination better than \(0.8\,\mathrm{mm}\), but this number holds for \(10^{10}\) impinging protons, which exceeds by 2 orders of magnitude the number typically applied in a single spot.
Coded-mask (CM) imaging is an extension of the well-known pinhole camera concept, widely used in various areas, from photography to space imaging [19]. In a pinhole camera, the detector is fully covered with an impenetrable material except for a small hole that encodes a source while projecting it onto the detector. Although it may provide good image resolution when there are no constraints on the irradiation time, it is not applicable to PGI in clinical conditions, where the number of protons delivered per irradiated spot, fixed by the dose prescribed to the patient, limits the number of emitted gammas. An almost completely opaque collimator further reduces the number of registered gamma quanta, leading to a situation in which the reconstructed image is strongly affected by statistical fluctuations.
In the CM approach, a detector is covered not with a single-hole shield, but with a mask consisting of a number of such holes forming a specific pattern. Usually, the number of open pixels is similar to the number of filled ones, so approximately 50% of the mask is opaque. In comparison to a single-slit camera, such a setup registers more photons, which increases the detection efficiency. The optimisation of mask patterns is an interesting problem that has been widely explored in the last few decades [16; 20]. In this work, we are using a MURA pattern [16], as it is beneficial in performance metrics such as signal-to-noise ratio (SNR) and image resolution. A specific MURA mask is characterized by a prime number (called the rank or order of the mask) which defines the number of pixels per dimension and the construction of the pattern of opaque and empty pixels.
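The quadratic-residue construction behind a MURA pattern is compact enough to sketch in a few lines. The following Python snippet is an illustration of the standard construction from [16], not code from our analysis chain; it generates the pattern of a given prime rank (here 467, the order referred to in Sec. 2.8.2) and clips its central \(31\times 31\) part, as done for the physical masks:

```python
import numpy as np

def quadratic_residues(p):
    # non-zero quadratic residues modulo the prime p
    return {(k * k) % p for k in range(1, p)}

def mura(p):
    """Standard MURA pattern of prime order p (1 = open, 0 = opaque),
    built from quadratic residues as in Gottesman & Fenimore [16]."""
    qr = quadratic_residues(p)
    C = [1 if i in qr else -1 for i in range(p)]
    A = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if i == 0:
                A[i, j] = 0            # first row fully opaque
            elif j == 0:
                A[i, j] = 1            # first column fully open
            elif C[i] * C[j] == 1:
                A[i, j] = 1
    return A

mask = mura(467)                       # full pattern of order 467
lo = (467 - 31) // 2                   # clip the central 31 x 31 pixels
central = mask[lo:lo + 31, lo:lo + 31]
```

By construction roughly half of the elements are open, which is the origin of the ~50% open fraction quoted above.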
In this work, we present the results of our experimental studies with two small-scale prototypes of detection setups exploiting the coded-mask technique, conducted using point-like radioactive sources. The two setups featured different detectors and mask patterns, allowing 1D- and 2D-imaging. The images are reconstructed using the maximum-likelihood expectation maximization (MLEM) algorithm [21]. We report the obtained point-spread function (PSF) for different source positions within the field of view and the system detection efficiency. Using the experimental results to benchmark the simulation application, we furthermore simulate the performance of a full-scale prototype, currently under development as a scatterer module within the **S**ilicon Photomultiplier and Scintillating **F**ibre-based **C**ompton **C**amera (SiFi-CC) project [22], including its capability to reconstruct a continuous linear gamma source present during proton therapy.
## 2 Materials and methods
### Detector components
#### 2.1.1 Scintillators
The measurements are conducted with two different scintillation detectors. In both cases, the Ce:LYSO scintillator was used as a sensitive material because of its large effective atomic number and large density, resulting in high efficiency
of gamma detection. Good availability and moderate price were additional arguments supporting this choice [24]. We use the small-scale prototype (SSP) of a SiFi-CC module, which consists of 64 Ce:LYSO fibers (see Fig. 1(a)). The fibers have a square cross-section of \(1\,\mathrm{mm}\times 1\,\mathrm{mm}\) and are \(100\,\mathrm{mm}\) long. For this measurement, they are arranged in two layers of 32 fibers each. Every fiber is individually wrapped in aluminum foil, and the fibers are held together in an aluminum frame. The pitch between neighbouring fibers, as well as between the two layers, is \(1.36\,\mathrm{mm}\).
The second scintillator used is a three-layered Ce:LYSO array developed for positron emission tomography (PET) imaging performed simultaneously with magnetic resonance imaging (MRI). The array has a base area of \(45\,\mathrm{mm}\times 48\,\mathrm{mm}\), a total height of \(15\,\mathrm{mm}\) and three layers (see Fig. 1(b)). Each layer consists of individual needles with a pitch of \(1.33\,\mathrm{mm}\), resulting in 3425 needles in total. The needles within a layer are optically separated by BaSO\({}_{4}\). Each layer is shifted by half a needle pitch with respect to the layer below to enable the identification of the layer, and thus of the depth-of-interaction in the array, by the footprint of the detected light. The height of the individual layers is optimized for uniform absorption of \(511\,\mathrm{keV}\) photons.
#### 2.1.2 Readout platform
As a readout system we use the Hyperion III platform [23; 25; 26]. It is developed by Hyperion Hybrid Imaging Systems as an MRI-compatible detector platform for PET systems and includes hardware, firmware and software for data acquisition, processing and analysis. We use sensor tiles equipped with digital silicon photomultipliers by PDPC (Philips Digital Photon Counting). The dimensions of one sensor tile are \((48\times 48)\,\mathrm{mm}^{2}\) and the whole tile holds \(6\times 6\) DPC-3200-22 [27; 28]. Each digital photon counter (DPC) has four readout channels for the detected number of photons and delivers one common timestamp. Each of the four readout channels contains 3200 single-photon avalanche diodes (SPADs). The DPCs are self-triggering if one of the four readout channels passes a given trigger and validation threshold based on sub-regions of the sensor. During these measurements, a DPC is triggered if on average \(3.0\pm 1.4\) photons are registered in one channel, and the average validation threshold for an event to be recorded to disk is \(53\pm 15\) photons [29].
Figure 1: (a) Small-scale prototype of the SiFi-CC. (b) Three-layered PET crystal array coupled to a sensor tile. Adapted from an image by Bjoern Weissler licensed under CC BY [23].
We use a validation time of 40 ns to accept a trigger and an integration time of 325 ns to collect photons on the sensor tile. The overvoltage of the silicon photomultipliers (SiPMs) is set to 3 V. As this is a digital tile, it is possible to disable the SPADs which produce a high number of dark counts. This inhibit fraction is set to 10 %. The surface of each sensor tile is covered with a glass plate of 1.1 mm. The sensor tiles are connected to a singles processing unit (SPU) which manages their voltage supply and feeds their data to the data acquisition and processing server (DAPS). During the measurement, the tiles are cooled by a 15 \({}^{\circ}\)C liquid cooling system.
#### 2.1.3 Masks
We perform measurements in one and two dimensions, i.e., we reconstruct an image along one axis or on a plane. For these two tasks, we use one- and two-dimensional versions of a MURA mask of rank 467, clipped to the \(31\times 31\) central pixels (see Fig. 2). The mask rank as well as the setup geometry have been optimised via Monte Carlo simulations before the experiment. To construct the physical masks, we use tungsten rods of \((2.26\times 2.26\times 20)\) mm\({}^{3}\) which are inserted into 3D-printed rasters made from Pro Grey Resin. The rod manufacturing reaches a precision of 0.1 mm. The resulting masks have a dimension of \((73.6\times 73.6)\) mm\({}^{2}\). The rasters have a total thickness of 13 mm and the holes to insert the rods are 10 mm deep. To prevent the rods from falling out, the assembled masks are wrapped in cling film.
### Radioactive sources
For image reconstruction, the experimental data were obtained with a radioactive \({}^{22}\)Na source with an activity of 2.89 MBq. The active material in that source covers an area of 1 mm \(\times\) 1 mm. As a \(\beta^{+}\)-emitter, \({}^{22}\)Na provides two photons of 511 keV emitted back-to-back, which can be used for electronic collimation. For calibration of the detectors we additionally used the 1275 keV gamma line of \({}^{22}\)Na and two more radioactive sources: a \({}^{137}\)Cs source with a gamma line at 662 keV with an activity of 1.73 MBq and a \({}^{133}\)Ba source with
Figure 2: Coded masks for 1D measurement with the small-scale prototype (a) and for 2D measurement with the three-layered PET array (b).
a prominent line at 356 keV and an activity of 1.52 MBq. During the measurement, the sources were placed in front of the detector on a grid allowing for easy repositioning of the source for different measurement configurations in 1 cm steps in the vertical and horizontal directions.
### Experiment: Setup 1D
#### 2.3.1 Experimental setup for imaging
In the 1D setup of our experiment, we aimed to reconstruct the source position along one axis only. We used the small-scale prototype as scintillation detector and the 1D coded mask (see Fig. 3(a)). In this experiment, the distance between the source plane and the front part of the detector is 236.5 mm, while the distance from the centre of the mask to the front part of the detector is 66.5 mm. Just like the mask patterns, the distances were optimised via simulations. The orientation of the bars forming the mask pattern is vertical, the same as that of the fibers. Both ends of the fibers are each coupled to one SiPM tile with an optical silicone pad of 0.4 mm thickness made of Elastosil RT 604. The pad has a size of \((8\times 48)\) mm\({}^{2}\), so it covers one row of DPCs and light-sharing between different readout channels is enabled. The other five DPC rows of the tile are not directly exposed to light from the fibers.
#### 2.3.2 Setup for energy and \(y\)-position calibration
As the light yield on the SiPMs depends heavily on the position of the interaction along the fiber, a position-dependent energy calibration is needed. For this, we use a fan beam collimator which is described in detail in [30; 31; 32]. The collimator consists of lead blocks with adjustable slits on two sides, so that particles emitted by a radioactive source placed in the centre leave the collimator in thin elongated beams. To calibrate the small-scale prototype with this setup, we use a \({}^{22}\)Na source and employ the three-layered PET crystal as coincidence detector on the other side of the collimator to enable electronic collimation. The used slit width of 1.9 mm leads to a beam width of 2.5 mm (FWHM) on the fibers. The coincidence window is set to 10 ns. The fibers of the small-scale prototype are irradiated at nine different positions in 10 mm steps.
### Experiment: Setup 2D
#### 2.4.1 Experimental setup for imaging
In the two-dimensional setup, we used the three-layered PET crystal as a detector with the 2D coded mask (see Fig. 2(b)) placed in front of it. The distance between the source plane and the front plane of the detector was 220 mm and the distance from the centre of the mask to the front plane of the detector was 50 mm (see Fig. 3(b)). The scintillator was coupled to the sensor tile over a 1.1 mm thick light guide glued to it with SCIONIX RTV 481.

Figure 3: The experimental setup for 1D measurements (a) and for 2D measurements (b).
#### 2.4.2 Setup for energy calibration
For calibration of the detector in this configuration, we use frontal irradiations of the scintillator with radioactive \({}^{22}\)Na, \({}^{137}\)Cs and \({}^{133}\)Ba sources.
### Analysis Chain for Experimental Data
#### 2.5.1 Preprocessing
As all DPCs are triggered independently, a clustering algorithm is needed to form events. We use a cluster window of 40 ns to combine triggered DPCs on one tile. In the following, we refer to these combined signals as one hit. The first timestamp recorded in one of the DPCs is used as the timestamp of the hit. If we use several tiles in one measurement, we apply a coincidence window of 10 ns to combine hits from the different tiles. We call this an event.
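The two-stage grouping described above can be viewed as a greedy time-window clustering applied twice. The snippet below is an illustrative Python sketch; the field names and data layout are assumptions for the example, not taken from the actual DAQ software:

```python
def cluster(items, window, key="t"):
    """Greedy time-window clustering: sort by timestamp and start a new
    group whenever an item arrives more than `window` ns after the first
    item of the current group; the group keeps the earliest timestamp."""
    groups, current = [], []
    for it in sorted(items, key=lambda x: x[key]):
        if current and it[key] - current[0][key] > window:
            groups.append(current)
            current = []
        current.append(it)
    if current:
        groups.append(current)
    return groups

# stage 1: combine DPC triggers on one tile into hits (40 ns window)
triggers = [{"t": 0, "dpc": 1}, {"t": 25, "dpc": 2}, {"t": 300, "dpc": 3}]
hits = cluster(triggers, window=40)
# stage 2 would apply the same function with a 10 ns coincidence window
# to the hit timestamps of different tiles to form events
```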
#### 2.5.2 Fiber identification in 1D setup
With its base area of \((1\times 1)\) mm\({}^{2}\), one fiber is smaller than the area covered by one readout channel, which is approximately \((4\times 4)\) mm\({}^{2}\). To identify the single fibers in which impinging gammas deposited energy, we used the light-sharing through the optical pad. Depending on the position of the fiber, one or two horizontally neighbouring DPCs are triggered, so there are four or eight signals which can be used to calculate a center of gravity (CoG). Plotting all CoGs in one histogram yields a so-called floodmap. When separating events by the number of triggered DPCs, one can identify light accumulation points on the floodmaps corresponding to interactions in individual fibers. These peaks are determined in projections of the floodmaps. To tag the hit fibers, the floodmaps are first divided into layers by horizontal lines and afterwards the fibers are separated within the layers by the centreline between two peaks, so that each fiber corresponds to a rectangular region on the floodmap. To consider an event valid, we demand that the same hit fiber is identified on the two sensor boards. This is the case for 84.8 % of all recorded events. More details on the fiber identification are presented in [33].
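The centre-of-gravity computation underlying the floodmaps is a simple weighted mean over the triggered channels. A minimal illustration (channel coordinates and inputs are assumed for the example):

```python
import numpy as np

def center_of_gravity(channel_xy, photons):
    """Photon-count-weighted centre of gravity of the triggered readout
    channels; accumulating these positions per event yields the floodmap."""
    w = np.asarray(photons, dtype=float)
    xy = np.asarray(channel_xy, dtype=float)
    return (w[:, None] * xy).sum(axis=0) / w.sum()

# two horizontally neighbouring channels sharing the light of one fiber
cog = center_of_gravity([(0.0, 0.0), (4.0, 0.0)], [1, 3])
```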
#### 2.5.3 Energy and \(y\)-position calibration in 1D setup
The calibration of the energy and the interaction position within a fiber is performed individually for each fiber. For this purpose, we used the nine measurements with a \({}^{22}\)Na source obtained with the fan beam collimator and employed the ELAR model described in detail in [24]. This provides a position-dependent energy calibration based on the measurement of the 511 keV-peak of \({}^{22}\)Na on both ends of the fibers. The description of the calibration of this data set can be found in [33].
#### 2.5.4 Needle identification in 2D setup
To identify the hit needle in the three-layered PET array, again floodmaps of the CoG positions are used. Due to the shift of the three layers with respect
to each other, all needles yield different light accumulation points on the floodmap and are thus distinguishable. Here, separate floodmaps for four different regions of interest (ROIs), dependent on the main channel (the channel with the highest photon count in the event), are employed:

1. only the main DPC containing the main channel is triggered,
2. the main DPC and the DPC vertically neighbouring the main channel are triggered,
3. the main DPC and the DPC horizontally neighbouring the main channel are triggered,
4. the main DPC and the vertically, horizontally and diagonally neighbouring DPCs are triggered.

If more than one ROI is valid for one event, all are evaluated. For each of the four floodmaps, the light accumulation points are identified on a two-dimensional grid and assigned to a needle ID. A predecessor of this algorithm is described in [34]. During the needle identification process, the event is assigned to the needle ID of the closest light accumulation point on the floodmap. If different ROIs yield different needle IDs, for each ROI a quality factor \(QF=d_{1}/(d_{1}+d_{2})\) is calculated, where \(d_{1}\) is the distance to the closest light accumulation point and \(d_{2}\) the distance to the second closest light accumulation point on the respective floodmap. The final needle ID is then taken from the ROI with the smallest quality factor. The three top rows and three bottom rows of needles, i.e., those with the highest and lowest \(y\)-coordinates, could not be resolved with this approach and were therefore not taken into account in the further steps of the analysis. An in-depth description of the procedure can be found in [35].
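The quality-factor-based selection of the final needle ID can be sketched as follows; the tuple layout of the ROI candidates is an assumption made for this illustration:

```python
def quality_factor(d1, d2):
    # QF = d1 / (d1 + d2): small when the closest accumulation point
    # is an unambiguous winner over the second closest one
    return d1 / (d1 + d2)

def pick_needle(candidates):
    """Return the needle ID of the ROI with the smallest quality factor;
    `candidates` holds one (needle_id, d1, d2) tuple per valid ROI."""
    return min(candidates, key=lambda c: quality_factor(c[1], c[2]))[0]

# ROI with needle 17: clear winner (QF = 0.1); ROI with needle 23: ambiguous
best = pick_needle([(17, 1.0, 9.0), (23, 2.0, 2.5)])
```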
#### 2.5.5 Energy calibration in 2D setup
The energy calibration is performed separately for each needle and for each of the four ROIs explained in Section 2.5.4. For each of these cases, we take the energy spectra of the calibration measurements with the radioactive sources and fit the 511 keV peak in the \({}^{22}\)Na spectrum, the 662 keV peak in the \({}^{137}\)Cs spectrum and the 356 keV peak in the \({}^{133}\)Ba spectrum with a Gaussian function plus a linear function as background approximation. The three peak positions are used to obtain a first linear calibration. After the needle responses have been individually calibrated, the energy spectra of all needles are added up. The statistics are then sufficient to also take the 1274.5 keV peak in the \({}^{22}\)Na spectrum into account. Its position is used to apply a global saturation correction to all data:
\[E(Q)=p_{2}\cdot\exp\left(-\frac{Q}{p_{1}}\right)\cdot\left[\exp\left(-\frac{Q} {p_{1}}\right)-\exp\left(-\frac{p_{0}}{p_{1}}\right)\right] \tag{1}\]
where \(Q\) represents the pre-calibrated energy and the \(p_{i}\) are the fitted parameters of the saturation correction. All the intermediate steps are reported in [35].
### Implementation of MLEM for image reconstruction
In order to reconstruct the image encoded with the CM, we are using the MLEM algorithm [21], while different approaches (like FISTA [15] or OSEM [36]) are also possible. MLEM utilises prior knowledge of the probability that a photon emitted at a particular position in the source plane is registered in a given detector pixel.
MLEM is a widely used iterative algorithm, examples of its application in PET can be found e.g. in [37; 38]. It serves for the reconstruction of Poissonian
data:
\[\mathbf{f}^{[k]}=\frac{\mathbf{f}^{[k-1]}}{\mathbf{S}}\left(\mathbf{H}^{T}\cdot \frac{\mathbf{l}}{\mathbf{H}\cdot\mathbf{f}^{[k-1]}}\right),\text{ for }k=1,2,\ldots \tag{2}\]
where \(\mathbf{l}\) is a vector of measured data, \(\mathbf{f}^{[k]}\) is the image estimate after \(k\)-th iteration (\(\mathbf{f}^{[0]}=1\)), \(\mathbf{H}\) is a system matrix and \(\mathbf{S}\) a normalisation term (or sensitivity map). Eq. 2 is written in vector form and all multiplications indicated by dots represent the vector (matrix) multiplications, while all other operations are element-wise.
An element \(\mathbf{\mathsf{H}}_{ij}\) of the system matrix is the probability that a particle originating from the \(j\)-th voxel of the source plane will be detected in the \(i\)-th voxel of the detector:
\[\mathbf{\mathsf{H}}_{ij}=p(V_{i}|O_{j}) \tag{3}\]
where \(V_{i}\) is the event that the particle was detected in detector voxel \(i\), and \(O_{j}\) is the event that the particle originated from voxel \(j\) of the source plane.
In order to obtain such a set of probabilities, we used Monte Carlo simulations. The chosen field of view is divided into pixels, and a simulation with a single point-like source placed in the centre of each pixel is performed. The detector response from each of these simulations gives one column of the system matrix, since the number of registered particles in each detector pixel is proportional to the probability from Eq. 3 (given that the number of simulated events in each voxel \(j\) is the same). A detailed explanation of the system-matrix generation procedure is given in Sec. 2.8.1.
After normalising each column of a system matrix by the sum of its elements, the elements \(\mathbf{\mathsf{H}}_{ij}\) can be interpreted as a conditional probability to register the particle in the detector voxel \(i\) given that the particle originated from the voxel \(j\) and was detected somewhere.
Note that both the image \(\mathbf{l}\) and the reconstructed object \(\mathbf{f}\) are one-dimensional vectors, even when we want to reconstruct 2D (or even 3D) objects. This allows us to use one- and two-dimensional setups with the same reconstruction algorithm.
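In vectorized form, one MLEM update of Eq. 2 amounts to a forward projection, an element-wise ratio, and a back projection. The following is a minimal NumPy sketch of the iteration, not the actual reconstruction code of this work:

```python
import numpy as np

def mlem(H, l, n_iter=100):
    """MLEM iterations of Eq. 2: H is the (n_det x n_src) system matrix,
    l the corrected data vector; the estimate starts from f^[0] = 1."""
    S = H.sum(axis=0)                      # sensitivity term (column sums)
    f = np.ones(H.shape[1])
    for _ in range(n_iter):
        proj = H @ f                       # forward projection H . f
        # element-wise ratio l / proj, guarding against empty bins
        ratio = np.divide(l, proj, out=np.zeros_like(proj), where=proj > 0)
        f = f / S * (H.T @ ratio)          # element-wise update of Eq. 2
    return f
```

With an identity system matrix the update reproduces the data after a single iteration, which is a convenient sanity check.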
Figure 4: Preprocessing components for the second layer of the three-layered PET array according to Eq. 4 represented as two-dimensional histograms: (a) Background, i.e. a measurement without both a mask and a source, (b) No mask - experimental data from a measurement with a single point-like source placed in the centre of field of view (FOV) without a mask. Both (a) and (b) histograms have been normalized to the total measurement time so each pixel’s value is in counts per second. Efficiency (c) is calculated using Eq. 4 and is normalized to the maximal value among all three layers.
### Efficiency and background corrections
Having a calibrated detector response, we perform a data correction step before proceeding with image reconstruction. For this, we make use of two auxiliary elements: a background measurement \(\mathbf{l}_{\mathrm{bg}}\), performed without a radioactive source and without a mask, and a detection efficiency map. For the latter, we register the detector response \(\mathbf{l}_{\mathrm{no-mask}}\) when irradiating the detector without a mask, with a source placed far enough away to consider the gamma flux uniform over the surface of the detector. With those data we can construct an efficiency map \(\boldsymbol{\epsilon}\) reflecting the relative detection efficiency of the individual detector elements (fibers or needles):
\[\boldsymbol{\epsilon}=\frac{\mathbf{l}_{\mathrm{no-mask}}-\mathbf{l}_{ \mathrm{bg}}}{\max\{\mathbf{l}_{\mathrm{no-mask}}^{(i)}-\mathbf{l}_{\mathrm{bg }}^{(i)}\}}, \tag{4}\]
where the index \(i\) runs over all detector elements.
This step is performed once for each detection setup; the resulting efficiency map and \(\mathbf{l}_{\mathrm{bg}}\) are used to correct each registered data set that serves as input for image reconstruction. In the correction, we subtract the background from each raw data vector \(\mathbf{l}_{\mathrm{raw}}\) and divide it by the efficiency map:
\[\mathbf{l}=\frac{\mathbf{l}_{\mathrm{raw}}-\mathbf{l}_{\mathrm{bg}}}{ \boldsymbol{\epsilon}}. \tag{5}\]
The resulting vector \(\mathbf{l}\) is ready to be used with the MLEM algorithm according to Eq. 2 together with the system matrix prepared beforehand (see Sec. 2.8.1).
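The corrections of Eqs. 4 and 5 reduce to a few element-wise array operations. A minimal NumPy sketch (illustrative, not our analysis code) could read:

```python
import numpy as np

def efficiency_map(l_no_mask, l_bg, eps=1e-6):
    """Relative detection efficiency per element (Eq. 4); eps is the small
    offset added to all elements to avoid a later division by zero."""
    diff = np.asarray(l_no_mask, float) - np.asarray(l_bg, float)
    return diff / diff.max() + eps

def correct(l_raw, l_bg, eff):
    """Background subtraction and efficiency correction (Eq. 5)."""
    return (np.asarray(l_raw, float) - np.asarray(l_bg, float)) / eff

# toy example: the second element is twice as efficient as the first
eff = efficiency_map([2.0, 4.0], [0.0, 0.0])
l = correct([3.0, 5.0], [1.0, 1.0], eff)   # both corrected values agree
```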
We demonstrate the preparation and application of the efficiency correction in Fig. 4, taking the second layer of the three-layered PET array as an example. The background data \(\mathbf{l}_{\mathrm{bg}}\) are presented in the histogram in Fig. 4(a), where each bin value is the number of counts per second in the corresponding crystal. Fig. 4(b) presents the results of a measurement with a single point-like source but without a mask (\(\mathbf{l}_{\mathrm{no-mask}}\) in Eq. 4), with the same units as the \(\mathbf{l}_{\mathrm{bg}}\) histogram. By evaluating Eq. 4 for these two histograms, we obtain the efficiency map presented in Fig. 4(c). One can notice that the three histograms share similarities in their patterns, reflecting the DPC structure of the sensor tile. While other algorithms can achieve a more homogeneous efficiency distribution, in our approach the inhomogeneity cancels out when the data are corrected with the efficiency map, and it therefore does not compromise the image reconstruction. In order to avoid singularities resulting from a division by zero, we add a small value (\(10^{-6}\)) to all elements of the efficiency map.
In Fig. 5 we demonstrate all intermediate steps from Eq. 5, applied to the same second layer of the three-layered PET array: the raw detector response \(\mathbf{l}_{\mathrm{raw}}\) in panel (a) is followed by the background-free histogram \(\mathbf{l}_{\mathrm{raw}}-\mathbf{l}_{\mathrm{bg}}\) and finally the background-subtracted and efficiency-corrected detector response \(\mathbf{l}\) is shown in panel (c) which serves as an input for the MLEM algorithm. This data is taken in one of the full-fledged measurements, with a coded mask and a point-like source placed at \((-20,0)\,\mathrm{mm}\). Fig. 5 shows how this procedure transforms the seemingly messy raw detector response into the histogram with a recognizable mask pattern (see Fig. 2(b)) projected onto it.
### Simulations
#### 2.8.1 Generation of system matrix
MLEM requires a system matrix (SM) which, in this work, is calculated prior to the reconstruction. For both the 1D and 2D setups, we utilize Monte Carlo simulations in order to obtain the corresponding probabilities, which are the elements of the system matrix. The FOV was chosen to be of the same size as the mask, i.e., \(70\,\mathrm{mm}\) (in each dimension for 2D), and is divided into 100 pixels (in each dimension). By performing simulations of \(10^{6}\) \(\gamma\)-particles with a point source placed subsequently in the centre of each pixel of the FOV, we obtain the system matrix elements column by column. Such simulated statistics result in a statistical uncertainty of the matrix elements below 1.5 %. All simulated particles have the same energy of \(4.4\,\mathrm{MeV}\), as our investigations revealed that our approach to SM calculation is not sensitive to energy. During the simulation, each element of the system matrix is incremented by the energy deposit obtained in the corresponding detector bin. As a result, we are working with accumulated energy values that are then transformed into probabilities through a normalization process.
A system matrix for a one-dimensional coded-mask setup (described in Sec. 2.3) has a size of \(32\times 100\) elements and is shown in Fig. 6. We see that its elements reflect the pattern (or its central part) of the coded mask (see Fig. 2(a)). With shifting the source position (horizontal axis in Fig. 6), the pattern moves vertically in the histogram, which corresponds to the translation of the mask shadow along the detector plane.
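Assembling the system matrix from the per-pixel simulated responses then reduces to stacking columns and normalizing each by its sum, as described in Sec. 2.6. A minimal NumPy sketch (illustrative, not the actual generation code):

```python
import numpy as np

def build_system_matrix(responses):
    """Stack simulated detector responses (one per FOV pixel) as columns
    of H and normalize each column by its sum, so that H_ij matches the
    conditional probability of Eq. 3."""
    H = np.column_stack([np.asarray(r, dtype=float) for r in responses])
    col_sums = H.sum(axis=0)
    return H / np.where(col_sums > 0, col_sums, 1.0)  # avoid 0-division

# toy example: two FOV pixels, two detector bins
H = build_system_matrix([[1.0, 1.0], [2.0, 0.0]])
```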
For our simulations, we use Geant4 v10.6 with the physics list QGSP_BIC_HP and electromagnetic option 4. The generation of each matrix column (one simulation) takes around 25 s on an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz utilizing 6 CPU cores in parallel, which means a full matrix for the 1D setup (100 simulations) can be generated within 10 min. In the case of a 2D system matrix with similar conditions, the number of auxiliary simulations is \(10\,000\) (100 in each direction). Consequently, the total time increases proportionally to about 16 h.
Figure 5: Histograms demonstrating preprocessing steps applied to each experimental data set before image reconstruction: (a) raw data normalized by the total measurement time, (b) the same data after background subtraction, (c) background-free histogram divided by the efficiency map (Fig. 4(c)) - the processed data which are the input of the reconstruction. All three histograms show data with the coded mask and a point-like source placed at \((-20,0)\,\mathrm{mm}\) for the second layer of the three-layered PET array only, the same procedure was applied for all detector layers.
#### 2.8.2 Setup for 1D full-scale prototype
In this paper, we are showing experimental results for point-like sources only. Those laboratory experiments allowed us to estimate the performance of the prototype detectors (energy and spatial resolutions, capabilities, limitations, etc.) and to draw conclusions about the near-field imaging capabilities of the setups based on them. However, the ultimate goal of the whole project is to come up with an approach applicable in proton therapy. This requires imaging of continuous source distributions, with a particular focus on the distal part of the gamma production depth profile, with its falloff occurring close to the Bragg peak position [3]. This scenario is clearly more demanding than the case of point-like sources. In fact, the small-scale prototypes presented in this paper (Fig. 3(a)) are not suitable for that task; their small surface areas limit the spatial information available as input for image reconstruction (i.e. limited views, incomplete sampling). However, this does not necessarily imply that the technology is not well-suited for range monitoring. Encouraged by the results presented in this study, we started the construction of a larger detector within the SiFi-CC project, with a design very similar to the small-scale prototype used in the 1D setup. It consists of 7 layers with 55 fibers each; the fiber size is \((1.94\times 1.94\times 100)\,\mathrm{mm}^{3}\) and the pitch is \(2.01\,\mathrm{mm}\). We tested via Monte Carlo simulations, configured similarly as in Sec. 2.8.1, how the detector - called henceforth the full-scale prototype - performs in a coded-mask setup, i.e. combined with a structured collimator. In Geant4, we coupled that detector with a larger section of the 467th-order MURA mask, with the same pixel size as the one presented in Fig. 2(a), but with dimensions extended to the 51 central bins of the full array (instead of 31) in the horizontal direction and 45 pixels in the vertical direction. 
The detector-to-mask and mask-to-source distances remained the same as those used for the 1D setup of the small-scale prototype.
In the Monte Carlo simulations, a poly(methyl methacrylate) (PMMA) phantom of the dimensions \((60\times 60\times 90)\,\mathrm{mm}^{3}\) and density \(1.19\,\mathrm{g/cm}^{3}\) was irradiated with a proton beam. The phantom was centered with respect to our FOV and the beam impinged from the positive direction of the \(x\)-axis, along the longest
Figure 6: System matrix for 1D setup. The horizontal axis corresponds to FOV pixels and the vertical axis corresponds to the detector elements.
axis of the phantom. A schematic of the setup geometry is presented in Fig. 7. In the simulations, we recorded energy deposits in individual fibers. The processes of production and detection of scintillation photons are not simulated, but their effect is taken into account by smearing the obtained energy deposits with a resolution function, found via more realistic simulations and benchmarked with laboratory tests with single fibers [22; 24]:
\[\frac{\sigma_{E}(E)}{E}=a+b\cdot E^{-1/2}+c\cdot E^{-3/2}, \tag{6}\]
where \(a=0.0322\), \(b=0.6730\) and \(c=-0.0013\).
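Applying Eq. 6 to the recorded energy deposits can be sketched as follows. This is an illustrative Python snippet, not the actual simulation post-processing; the energy units follow the parametrization of Eq. 6:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 0.0322, 0.6730, -0.0013   # parameters of Eq. 6

def sigma_rel(E):
    # relative resolution sigma_E(E) / E from Eq. 6
    return a + b * E**-0.5 + c * E**-1.5

def smear(E_dep):
    """Gaussian smearing of simulated energy deposits with the
    resolution function of Eq. 6."""
    E = np.asarray(E_dep, dtype=float)
    return rng.normal(E, sigma_rel(E) * E)

smeared = smear(np.full(10000, 511.0))   # smear a batch of 511 keV deposits
```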
The simulations were conducted for the following beam energies: 85.9, 90.7, 95.1 and 107.9 MeV, corresponding to Bragg peak depths in the phantom of 5.0, 5.5, 6.0 and 7.5 cm, respectively, where the latter were determined using SRIM [39]. For each beam energy, a set of 1000 simulations, each with \(10^{7}\) protons, was performed. It is worth noting that in proton therapy the typical spot strengths range from a few times \(10^{7}\) for proximal spots up to about \(2\times 10^{8}\) for distal ones [40]. Having separate simulations with \(10^{7}\) protons each, we were able to investigate the effect of statistics on image reconstruction, the resulting resolutions and uncertainties, and to assess the feasibility of beam range shift detection on the basis of data from a single spot or an iso-energy layer. For each file simulated for a beam with a kinetic energy of \(T_{p}=85.9\) MeV, around 840 000 prompt gammas (PGs) are generated, out of which our detector registers about 15 000, i.e. about 1.8 %. The overall detection efficiency (including geometrical acceptance) is thus \(1.5\times 10^{-3}\). Those numbers remain similar for the other studied beam energies.
A system matrix for this setup was generated assuming one-dimensional reconstruction. A FOV of 130 mm along the \(x\)-direction was assumed and divided into 200 pixels. In this case, unlike in the studies with radioactive sources, the gamma source is not monoenergetic. Nevertheless, when generating the system matrix, we shot \(10^{6}\) 4.4 MeV gamma particles from each FOV bin, since our earlier studies showed that the resulting system matrix is not very strongly
Figure 7: Simulated geometry for the full-scale prototype (top view).
energy-dependent. To test this, in a preliminary work we investigated whether the reconstruction of a point source is influenced by the energy used to generate the system matrix. Namely, we generated one-dimensional system matrices with various energies ranging from \(0.5\) to \(5\,\mathrm{MeV}\), in increments of \(0.5\,\mathrm{MeV}\). We then reconstructed the same simulation data obtained from a point-like source with a sample energy value of \(4.4\,\mathrm{MeV}\) and a position of \((-10,0)\,\mathrm{mm}\). The results showed that the reconstructed source position varied by only \(0.03\,\mathrm{mm}\), and the width of the fitted Gaussian curve exhibited a variation of just 3 %. These findings indicate that the reconstruction process is nearly independent of the energy used for the system matrix generation. The observed differences can be attributed to statistical fluctuations.
We evaluated the image reconstruction performance by inspecting the uncertainty of the distal fall-off position determination (DFPD), which is calculated in a way similar to that defined in [41]; the procedure is demonstrated in Fig. 8. Namely, we take the reconstructed profile (blue dots in the figure) and interpolate it with a cubic spline, obtaining a smooth function (green line). Subsequently, we subtract from the function its minimum value and then normalize it by the resulting function maximum. In this way, we obtain a normalized reconstructed profile described by a smooth function with values between \(0\) and \(1\). We find all the points where this function equals \(0.5\) (red cross and orange stars) and take the left-most one (red cross) as our estimate of the distal falloff position. The same is done for the MC source distribution, so that we can compare the two values.
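The DFPD steps above can be sketched in a few lines of Python. This is an illustrative sketch (the Gaussian pre-smoothing mentioned in the caption of Fig. 8 is omitted, and the synthetic test profile is an assumption made for the example):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def distal_falloff(x, profile):
    """Interpolate the reconstructed profile with a cubic spline,
    normalize it to [0, 1], and return the left-most 0.5 crossing
    as the estimate of the distal fall-off position."""
    cs = CubicSpline(x, profile)
    xf = np.linspace(x[0], x[-1], 5001)
    y = cs(xf)
    y = (y - y.min()) / (y.max() - y.min())          # normalize to [0, 1]
    crossings = np.where(np.diff(np.sign(y - 0.5)) != 0)[0]
    return float(xf[crossings].min())                # left-most 0.5 point

# synthetic profile whose falling edge has its half height at x = -10 mm
x = np.linspace(-65.0, 65.0, 131)
profile = 1.0 / (1.0 + np.exp(-(x + 10.0) / 2.0))
dfp = distal_falloff(x, profile)
```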
### Performance evaluation of MLEM
In order to examine the performance of MLEM, we have prepared sample reconstruction results based on experimental data from the 2D coded-mask setup with the three-layered PET detector. The very same data as shown in Fig. 4(c)
Figure 8: DFPD procedure. The blue dots show the reconstructed image, and the green line is the reconstructed image smeared with a Gaussian filter (kernel size 2 bins) and interpolated with a cubic spline. The reconstruction was performed on a particular sample of simulation data for the beam energy \(95.1\,\mathrm{MeV}\) with \(10^{8}\) simulated protons. Orange stars are rejected candidates for the distal fall-off position (DFP) value and the red cross specifies the accepted estimation.
(but for all three detector layers) were loaded into a single vector and used as an input for the image reconstruction. With a system matrix of size \(10000\times 3425\) and an image vector of size 3425, 100 iterations take \(6.51\,\mathrm{sec}\) on an Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz.
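For reference, the multiplicative MLEM update that such a reconstruction iterates can be written in a few lines of NumPy. This is a generic textbook formulation under illustrative names, not our optimized implementation; it assumes every image pixel is seen by at least one detector bin (nonzero sensitivity).

```python
import numpy as np

def mlem(system_matrix, data, n_iter=100):
    """Generic MLEM update: image <- image / s * A^T (y / (A image)),
    with the sensitivity image s = A^T 1."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(data, dtype=float)
    sensitivity = A.sum(axis=0)          # A^T applied to a vector of ones
    image = np.ones(A.shape[1])          # uniform initial image
    for _ in range(n_iter):
        forward = A @ image              # forward projection
        ratio = np.divide(y, forward, out=np.zeros_like(y), where=forward > 0)
        image *= (A.T @ ratio) / sensitivity
    return image
```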
Results after 100 iterations are shown in Fig. 9 as a 2D histogram (a) and its projections on the \(x\)- and \(y\)- axes (panels (b) and (c), respectively). In the 2D histogram, we observe a clear image without artefacts that exhibits a single peak. The peak is close to the designed source position (marked as a green cross) with a small shift to the upper left. Numerical evaluation of the reconstruction quality was performed via a Gaussian fit - its results are visible in Fig. 9(b) and (c) as a green solid line. The peak width in the reconstructed image (here 1.7 mm and 1.5 mm for \(x\)- and \(y\)-direction) depends on the number of MLEM iterations and will further decrease when using a higher number of iterations.
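The peak position and width quoted above come from a Gaussian fit to each projection; a minimal sketch of such a fit using SciPy (with illustrative names, not our analysis code) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def fit_peak(x, projection):
    """Fit a Gaussian to a 1D projection of the reconstructed image and
    return the estimated peak position (mean) and width (sigma)."""
    # initial guess: peak height, location of maximum, rough width
    p0 = [projection.max(), x[np.argmax(projection)], (x[-1] - x[0]) / 10.0]
    popt, _ = curve_fit(gaussian, x, projection, p0=p0)
    return popt[1], abs(popt[2])
```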
Evaluation of the reconstruction performance is straightforward in the case of point-like sources: comparing the reconstructed peak position with the designed one and, using Gaussian fitting, determining the peak width and position. In the case of continuous source distributions, we are using a universal image quality index (UQI) [42] that allows comparing two vectors \(\boldsymbol{\mathsf{X}}\) and \(\boldsymbol{\mathsf{Y}}\), representing in our case the reconstructed image and the expected source distribution. Its application in the near-field coded-aperture imaging was demonstrated in [43]. The index is defined as follows:
\[\text{UQI}=\frac{2\text{cov}(\boldsymbol{\mathsf{X}},\boldsymbol{\mathsf{Y}}) }{\text{var}(\boldsymbol{\mathsf{X}})+\text{var}(\boldsymbol{\mathsf{Y}})} \frac{2\boldsymbol{\bar{\mathsf{X}}}\boldsymbol{\bar{\mathsf{Y}}}}{\boldsymbol {\bar{\mathsf{X}}}^{2}+\boldsymbol{\bar{\mathsf{Y}}}^{2}}, \tag{7}\]
where cov and var are covariance and variance functions, and \(\boldsymbol{\bar{\mathsf{X}}},\boldsymbol{\bar{\mathsf{Y}}}\) are the means of the vector components. UQI takes values in the range \([-1,1]\), where 1 corresponds to identical images and \(-1\) to inverted ones. Vectors \(\boldsymbol{\mathsf{X}}\) and \(\boldsymbol{\mathsf{Y}}\) are standardised before calculating UQI, namely we apply:
\[\boldsymbol{\mathsf{X}}^{\prime}_{n}=\frac{\boldsymbol{\mathsf{X}}_{n}- \boldsymbol{\mathsf{X}}_{min}}{\boldsymbol{\mathsf{X}}_{max}-\boldsymbol{ \mathsf{X}}_{min}} \tag{8}\]
to each vector, where \(n\) loops over all vector elements. UQI is beneficial in the case of simulations because the true source distribution is known, unlike in the case of experimental data. When investigating how the UQI depends on the number of performed iterations, we first observe a steep rise up to about 800 iterations; in that range, subsequent iterations significantly improve the image quality. For larger numbers of iterations, a slow decrease is observed due to the amplification of statistical fluctuations inherent to MLEM. The maximum corresponds to the optimal number of iterations, at which the reconstructed image is as close to the true one as possible, given that UQI is used as the metric. However, the maximum is rather broad: for the presented example, the UQI does not drop below 0.95 of its maximum value for numbers of iterations in the range 361-1628. In this range, the reconstructed image does not change significantly, thus it is not critical to run exactly the number of MLEM iterations corresponding to the maximum UQI; a number close to it suffices. This has important implications for image reconstruction using experimental data, where no reference image is available: even if the number of iterations deduced from the simulations is not identical to the a priori unknown optimal number of iterations for the experimental data, it is likely to be close enough, i.e. the reconstructed image will be close to the best possible reconstruction.
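A direct transcription of Eqs. (7) and (8) into NumPy may look as follows (function names are ours; the standardisation of Eq. (8) is applied to both vectors before evaluating Eq. (7)):

```python
import numpy as np

def standardise(v):
    """Min-max standardisation of Eq. (8)."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def uqi(x, y):
    """Universal image quality index of Eq. (7) for two standardised vectors."""
    x, y = standardise(x), standardise(y)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    # luminance term: 2 mean(x) mean(y) / (mean(x)^2 + mean(y)^2)
    luminance = 2.0 * x.mean() * y.mean() / (x.mean() ** 2 + y.mean() ** 2)
    return (2.0 * cov / (x.var() + y.var())) * luminance
```

Identical vectors give a UQI of 1, while an inverted copy gives \(-1\), matching the range quoted above.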
## 3 Results
### Calibration results
#### 3.1.1 Energy and \(y\)-position calibration in 1D setup
A detailed description of the calibration results including the intermediate steps is presented in [33]. The average energy resolution of the fibers is \(\sigma_{E}/E=7.7(4)\%\) and the average position resolution along the fibers is \(34\pm 3\,\mathrm{mm}\) (FWHM) at \(511\,\mathrm{keV}\).
#### 3.1.2 Energy calibration in 2D setup
The intermediate results of the calibration procedure of the 2D setup can be found in [35]. The overall energy resolution obtained at \(511\,\mathrm{keV}\) is \(\sigma_{E}/E=5.35\%\). For 14 additional needles, the calibration process failed because their spectra could not be fitted satisfactorily. Events with hits in those needles were excluded from the further analysis.
### Results from imaging experiments
We performed a series of five measurements with the three-layered PET array, the 2D coded mask and a point-like source placed at different positions. In all cases, the measurement time was \(1201.2\pm 0.2\,\mathrm{sec}\) and the number of registered hits was about \(2\times 10^{8}\). Combined results of all image reconstructions, based on the collected data, after 100 iterations, are presented in Fig. 10(a) as contour plots. Each contour plot consists of ellipses bounding the 38, 68, 86 and \(95\,\%\) confidence regions for a particular reconstructed peak. In addition, the green point inside the smallest ellipse (for each reconstruction) marks the reconstructed peak position, while the red cross is the designed source position. Similarly to the results from Fig. 9(a) (which are also presented here), all reconstructions exhibit offsets of the reconstructed peak positions with respect to the designed source positions, and all of them are in the same direction (upper left). For the measurements where the source was off the \(x\)-axis (source coordinates \((-20,-20)\,\mathrm{mm}\) and \((-20,20)\,\mathrm{mm}\)) we observe that the major axis of both
Figure 9: 2D reconstruction based on experimental data from a measurement with the three-layered PET array for a point-like source located at (-20, 0) mm, after 100 iterations. (a) 2D reconstructed image with green cross marker specifying the designed source position; (b) and (c) projections of 2D image onto \(x\)- and \(y\)-axis, respectively. Parameters of Gaussian fits are listed above the corresponding figures.
groups of ellipses is rotated, and the rotation is symmetrical with respect to the \(x\)-axis. Nevertheless, the offset of the reconstructed peak position is not symmetrical with respect to the origin but always points in the same direction. This hints towards a systematic effect rather than an artefact of the reconstruction.
The latter conclusion is further supported by Fig. 10(b), where we present the residuals of the reconstructed source positions as a function of the designed source position for the \(x\)- and \(y\)-axes (blue circles and red crosses, respectively). Error bars represent standard deviations of the fitted Gaussian functions; peak positions were determined in the same fit. For \(x=-20\,\mathrm{mm}\) and for \(y=0\,\mathrm{mm}\) we have three measurements, so in order to present all results in one plot, the points have been slightly shifted around the designed values. In the figure it is clearly visible that all residuals for the \(x\) position have the same sign and very similar values; the same applies to the residuals of the reconstructed \(y\)-coordinate. The average residual value for the \(x\)-coordinate is \(-1.23\pm 0.08\,\mathrm{mm}\) and for the \(y\)-coordinate it is \(0.73\pm 0.45\,\mathrm{mm}\) (stated uncertainties are standard deviations). We assume that this constant offset originates from a small misalignment in our setup, e.g. that the source holder was not placed perfectly centrally with respect to the detector.
We also performed nine measurements with the one-dimensional setup and the point-like sources. First, data for the \({}^{22}\)Na source placed on the \(x\)-axis at the positions \(-30\), \(-20\), \(-10\), \(0\) and \(20\,\mathrm{mm}\) were taken. Two additional positions with the same source off the \(x\)-axis were examined: \((-20, 20)\,\mathrm{mm}\) and \((0, 20)\,\mathrm{mm}\). In addition, two experiments with the \({}^{137}\)Cs source were performed for \(x=-20\) and \(0\,\mathrm{mm}\). A sample reconstructed image is presented in Fig. 11(a) along with a Gaussian fit (solid line) and the designed source position (vertical dashed line). Fit parameters are listed at the top of the figure. In Fig. 11(b) we demonstrate the residuals for all 1D reconstructions. As previously, the reconstructed source position is taken from the Gaussian fit and the error bars represent standard deviations of the fitted functions. All reconstructed source positions are consistently shifted towards negative values, i.e., the residuals have
Figure 10: (a) 2D contour plots for a set of 5 reconstructed images from separate measurements for different source positions. Contour lines indicate 38, 68, 86 and 95\(\,\%\) confidence regions for the reconstructed source position. The red cross specifies a designed source position in the corresponding experiment. (b) Residuals of the reconstructed peak positions for \(x\)- and \(y\)-coordinates of the source for all measurements performed with the three-layered PET array (2D setup). Reconstructed peak positions and uncertainties are the mean and the standard deviation obtained by fitting Gaussian functions to the reconstructed images.
very similar, negative values with an average of \(-1.03\pm 0.14\) mm. The observed offset of the reconstructed source position is independent of that position and is constant throughout the FOV. The offset is consistent with the 2D measurement within \(1\,\sigma\), again pointing to a small misalignment of the source. We present the whole set of reconstructed images in Fig. 11(c). The mean standard deviation of the Gaussian fits for all source positions is \(1.14\pm 0.18\) mm.
### Results of simulations
In this section we discuss the results of simulations with the full-scale prototype for 1D imaging. Fig. 12(a) shows a reconstructed image (blue filled area) of the gamma vertex distribution resulting from the simulation of \(10^{8}\) protons of \(90.7\) MeV interacting with the PMMA phantom. The image was smeared with a Gaussian filter with a kernel of 2 pixels. The shown reconstructed image is the result of 795 MLEM iterations, which corresponds to the maximum of the UQI. The image is smooth, without significant noise artefacts. There are small peaks in the tail, behind the main one, but each of them is less than half as high as the main peak, so their presence does not affect the DFPD. The orange line represents a depth profile constructed from Monte Carlo true information, in which entries are weighted by the initial gamma energies, and the profile is subsequently
Figure 11: (a) Result of 1D image reconstruction of experimental data with a point-like \({}^{22}\)Na source placed at \((0,0)\) mm after 100 iterations. Fit parameters are listed on the top. (b) Residuals of the reconstructed peak position for all experiments with the 1D setup. Reconstructed peak positions have been obtained by fitting a Gaussian function to the reconstructed data, and error bars are standard deviations of fitted functions. Points grouped around \(x=-20\) mm and \(x=0\) mm correspond to the same \(x\)-coordinate, but differ in source \(y\)-position or source type. Colour code and shape of markers are identical as in (c) (see legend). (c) Reconstructed images for all investigated source positions and types after 100 iterations (markers) and their Gaussian fits (solid lines).
interpolated to create a smooth line. In general, the reconstructed main peak position is very close to the true one and the shape of the gamma depth profile is reflected properly. The DFP, determined for this particular sample reconstructed image, is \(-7.64\,\mathrm{mm}\) and for the MC truth it is \(-8.20\,\mathrm{mm}\), which means that the offset is below \(0.6\,\mathrm{mm}\).
Offsets of the determined DFP for all considered energies are shown in Fig. 12(b) as a function of the initial number of protons impinging on the target. The presented means and resolutions of the DFP were obtained from 50 bootstrap samples taken out of 1000 samples corresponding to \(10^{7}\) protons each. We see that starting from \(5\times 10^{8}\) protons, the DFP mean values are very stable and only the error bars shrink. The same data are presented numerically in Table 1. Starting from the statistics of \(5\times 10^{8}\) protons, the DFP resolutions, calculated as the standard deviation of 50 bootstrap samples, are less than \(0.6\,\mathrm{mm}\) for all beam energies.
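The resampling scheme behind these numbers can be sketched generically as follows. This is an illustration under assumed names, not the analysis code: `estimator` stands in for the full reconstruction-plus-DFPD chain, and the base samples stand in for the \(10^{7}\)-proton data subsets.

```python
import numpy as np

def bootstrap_estimate(base_samples, estimator, n_draw, n_bootstrap=50, rng=None):
    """Draw `n_draw` base samples with replacement, combine them, apply
    `estimator`, and repeat `n_bootstrap` times; return the mean of the
    estimates and their standard deviation (taken as the resolution)."""
    if rng is None:
        rng = np.random.default_rng()
    estimates = []
    for _ in range(n_bootstrap):
        idx = rng.integers(0, len(base_samples), size=n_draw)
        combined = np.sum([base_samples[i] for i in idx], axis=0)
        estimates.append(estimator(combined))
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std()
```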
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Proton energy & \(85.9\,\mathrm{MeV}\) & \(90.7\,\mathrm{MeV}\) & \(95.1\,\mathrm{MeV}\) & \(107.9\,\mathrm{MeV}\) \\ \hline \hline \(N_{p}\) & \multicolumn{4}{|c|}{DFP in mm} \\ \hline \(1\times 10^{8}\) & -3.64 \(\pm\) 0.64 & -8.02 \(\pm\) 0.53 & -13.56 \(\pm\) 0.85 & -29.03 \(\pm\) 0.87 \\ \(5\times 10^{8}\) & -3.73 \(\pm\) 0.33 & -8.22 \(\pm\) 0.34 & -14.01 \(\pm\) 0.46 & -29.12 \(\pm\) 0.40 \\ \(1\times 10^{9}\) & -3.70 \(\pm\) 0.24 & -8.26 \(\pm\) 0.20 & -13.94 \(\pm\) 0.28 & -29.12 \(\pm\) 0.26 \\ \(2\times 10^{9}\) & -3.72 \(\pm\) 0.17 & -8.27 \(\pm\) 0.19 & -14.06 \(\pm\) 0.17 & -29.18 \(\pm\) 0.25 \\ MC & -3.36 & -8.20 & -13.51 & -28.89 \\ Bragg peak & -5.00 & -10.00 & -15.00 & -30.00 \\ \hline \end{tabular}
\end{table}
Table 1: DFPs for different energies of the proton beam and different numbers of protons \(N_{p}\). Values in the table are the means and the standard deviations calculated from 50 bootstrap samples, expressed in mm. "MC" represents the DFP of the Monte Carlo gamma profile for \(N_{p}=10^{10}\) protons. The statistical uncertainty for the "MC" estimation is so small (below \(0.04\,\%\)) that it can be neglected.
Figure 12: (a) A sample reconstructed gamma depth profile for a beam energy of \(T_{p}=90.7\,\mathrm{MeV}\) and \(N_{p}=10^{8}\) protons on target. The blue-filled area shows the reconstructed image after 795 iterations and the orange line is the true source distribution (both images have been normalized to unity in their maxima). The red cross points to the distal fall-off estimation, which is \(x=-8.04\,\mathrm{mm}\). The reconstructed image was smeared with a Gaussian filter with a kernel size of 2 bins. (b) Residuals of the reconstructed DFP as a function of the number of protons in the beam. Each group of four markers (four different energies) corresponds to the same number of protons in the beam. Error bars represent the DFP resolution, obtained as the standard deviation of results for 50 bootstrap samples.
## 4 Discussion
We performed a set of measurements varying the source position for two setups: one- and two-dimensional. In the two-dimensional setup, we observe that after 100 iterations the reconstructed image has an evident single peak without significant noise artefacts (see Fig. 9). The peak position and its \(\sigma\)-value were calculated via a Gaussian fit to both the \(x\)- and \(y\)-projections of the image. The obtained reconstructed peak position is slightly shifted with respect to the designed position in both the horizontal and vertical directions. Having reconstructed the images for all measurements at different source positions (Fig. 10(a)), we see that all images have similar offsets in the same direction. The small standard deviation of those offsets (\(\Delta x=-1.23\pm 0.08\,\mathrm{mm}\) and \(\Delta y=0.73\pm 0.45\,\mathrm{mm}\)) hints towards a systematic origin rather than a method artefact. They are likely caused by a misalignment of the setup elements. In fact, our experiment showed that a set of measurements with radioactive sources, like the one performed, can be used to detect setup misalignments and correct for them. Furthermore, for determining a proton range in a clinical setup, one would rely on detecting the relative positions corresponding to different proton ranges, so a constant shift does not hinder a precise range-shift determination.
Results from the one-dimensional setup show similar outcomes to the 2D case, with only one dimension instead of the \(x\)- and \(y\)-projections. Also here, after 100 iterations (Fig. 11(a)), a very clear peak without large noise artefacts is visible. In this particular case, the peak position is shifted on average by \(-1.03\pm 0.14\,\mathrm{mm}\) with respect to the designed position, which is still within one standard deviation (\(\sigma=1.2\,\mathrm{mm}\)) (see Fig. 11(b)). It is consistent with the shift in the \(x\)-direction determined from the 2D measurements, reinforcing the assumption of a misalignment of the setup elements.
While the three-layered PET array coupled with a structured collimator provides good-quality two-dimensional images of a radioactive source, we show that we also achieve a good reconstruction in one dimension with our 1D setup. A 1D gamma depth profile is already sufficient for the determination of a range shift in a clinical scenario, and a 1D setup requires fewer readout channels (and is therefore more cost-effective) than a full 2D setup, if the system is scaled up to extend its FOV.
All reconstructions performed for experimental data were stopped after 100 iterations. For a point-like source, the number of iterations (after a certain point) does not change the reconstructed image much, in the sense that the peak position remains the same and only its width is reduced with each subsequent iteration. In Fig. 13 we show the dependence of the peak width on the number of iterations for a sample 2D reconstruction (solid blue and dashed red lines) and a 1D reconstruction (dashed-dotted green line). It is clearly visible that the peak widths steadily decrease with the number of iterations. In the 1D reconstruction, a large number of iterations reduces the peak to a few bins, which hinders the Gaussian fit; the steps in the figure result from fit instabilities rather than a real change of the peak width.
In order to test the method with source distributions relevant for proton therapy monitoring via PGI, we investigated a setup featuring a larger detector, the full-scale prototype, and evaluated its performance via simulations of its response to PGs originating from a PMMA phantom irradiated with a proton beam. The simulations were performed for four proton energies. We evaluate our results by
comparing the DFP values obtained from the reconstructed images with the true ones (obtained from the Monte Carlo truth distribution). Among other possibilities, like fitting a sigmoid function to the distal edge of the profile, we choose the \(50\,\%\) location to define the DFP. Due to the smoothness and steepness of our reconstructed profiles, this is a method that is both very simple and very robust for assigning a range value to the profile. The dependence of the precision of the reconstructed DFP on the number of impinging protons is presented graphically in Fig. 12(b) and numerically in Table 1. For \(10^{8}\) protons, the error of the distal fall-off estimation is within \(0.6\,\mathrm{mm}\) for all considered energies, and the average precision of the DFP estimation is \(0.72\,\mathrm{mm}\). This value is compared with precision values reported by other groups working with different setups (both simulation and experimental) in Table 2. In the comparison, we include, besides our results, also those obtained with a two-dimensional coded mask setup [15], multi-parallel slit (MPS) simulation [10] and experimental [11] results, as well as clinical results obtained with the knife-edge slit (KES) design [8; 9]. Although, in terms of precision, our results outperform most of the compared works, we admit that our simulation model does not take into account certain effects which could deteriorate the result, e.g. the neutron background. However, the authors of [15] showed that this contribution can be efficiently eliminated by employing a Discrete Cosine Transform. We also do not take into account other effects, e.g. fully realistic resolutions, the time structure of a clinical proton beam leading to high prompt gamma rates, as well as statistics reduction due to the dead time of the detector and the finite throughput of the data acquisition system, which are obviously included in the experimental results of [8; 9; 11].
We suppose that these effects will not deteriorate our results significantly, as the energy resolution is validated against experiments and we could show in [22] that our system should be able to deal with clinical rates.
Figure 13: \(\sigma\)-value of the fitted Gaussian as a function of the number of iterations for: \(x\)- and \(y\)- projections of 2D reconstruction for a source at \((-20,0)\,\mathrm{mm}\) (solid blue and dashed red lines, respectively) and 1D reconstruction for a source at \((0,0)\,\mathrm{mm}\) (dashed-dotted green line).
## 5 Conclusions
So far, the use of coded-mask systems for proton therapy monitoring via PGI has been considered only theoretically, i.e., via Monte Carlo simulations. In this paper, we show experimental results of a practical implementation and evaluate its performance using the MLEM algorithm for image reconstruction. In the first step, we used small-scale prototypes of the detector and tested the image reconstruction framework with point-like sources. The experimental results confirmed that near-field coded-mask imaging is feasible with gamma sources, and our image reconstruction framework is able to reconstruct source positions well with both the 1D and 2D approaches: a clear peak is always visible, and the reconstructed images are free from artefacts. The offsets between the designed and reconstructed source positions are close to each other for each particular setup, which indicates their systematic nature.
The second step comprised Monte Carlo simulations with a realistic source distribution, i.e. one obtained from a PMMA phantom irradiated with a proton beam. Here, the simulated detector had a larger size of \(110.6\times 100\) mm\({}^{2}\) and was coupled with a larger mask compared to the small-scale prototypes. This not only led to a larger FOV, but also increased the setup's sensitivity to the details of the imaged source distribution. Our investigation, conducted for different beam energies in the range 85.9-107.9 MeV and varying statistics, shows promising results. The reconstructed images resembled the Monte Carlo truth PG depth profiles. At a statistics of \(1\times 10^{8}\) impinging protons, the mean precision of the beam range estimation in the investigated beam energy range was 0.72 mm (\(1\sigma\)), which makes the setup competitive with other PGI approaches with passive collimation, such as KES or MPS investigated by other groups. Due to the promising results of the simulation of the full-scale prototype, we proceed with the construction of a detector with exactly this design.
## Acknowledgements
The work presented in this paper was supported by the Polish National Science Centre (grants 2017/26/E/ST2/00618 and 2019/33/N/ST2/02780). The exchange of staff and students between Poland and Germany was possible thanks to the support of the Polish National Agency for Academic Exchange (NAWA) as well as the German Academic Exchange Service (DAAD) (project-ID 57562042). In this context, the project on which this report is based was funded by the German Federal Ministry of Education and Research (BMBF). Sensor tile and electronics were provided by the Department of Physics of Molecular Imaging
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Source & Energy [MeV] & \(N_{p}\) & Precision [mm] \\ \hline CM simulation (this work) & 85.9 - 107.9 & \(10^{8}\) & 0.72 \\ CM simulation [15] & 122.7 & \(10^{8}\) & 2.1 \\ MPS experiment [11] & 95.09 & \(3.8\times 10^{8}\) & 1.2 \\ MPS simulation [10] & 160 & \(10^{8}\) & 1.30 - 1.66 \\ KES clinical [9] & 100 - 160 &? & 0.7 - 1.3 \\ KES clinical [8] &? &? & 2.0 \\ \hline \end{tabular}
\end{table}
Table 2: Precision (one standard deviation) of beam range estimation by different groups and different PGI approaches.
Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University. They were developed within the European Union's Horizon 2020 research and innovation programme under grant agreement No 667211. When working on the project, Ronja Hetzel was supported from the German Research Foundation (DFG) project number 288267690, and Andreas Bolke from the DFG Grant COMMA, project number 383681334.
The authors are responsible for the content of this publication.
# Meta-MeTTa: an operational semantics for MeTTa

Lucius Gregory Meredith, Ben Goertzel, Jonathan Warrell, Adam Vandervorst

2023-05-26, http://arxiv.org/abs/2305.17218v1
###### Abstract
We present an operational semantics for the language MeTTa.
## 1 Introduction and motivation
We present the operational semantics for the language MeTTa. MeTTa is designed as a language in which humans and AGIs write the behavior of AGIs; it is being jointly developed by SingularityNet.io and F1R3FLY.io as part of SingularityNet.io's OpenCog and Hyperon projects [3][2]. One of the principal motivations of this document is to help developers of MeTTa clients know what constitutes a correct and compliant implementation. The document serves roughly the same function as the JVM specification or Ethereum's Yellow Paper [12].
## 2 Towards a common language for computational dynamics
Three of the most successful branches of scientific discourse all agree on the shape of a model adequate for expressing and effecting computation. Physics, computer science, and mathematics all use the same standard shape. A model adequate for computation comes with an algebra of states and "laws of motion."
One paradigmatic example from physics is Hilbert spaces and the Schroedinger equation. In computer science and mathematics the algebra of states is further broken down into a monad (the free algebra of states) and an algebra of the monad recorded as some equations on the free algebra.
Computer science represents laws of motion, aka state transitions, as rewrite rules exploiting the structure of states to determine transitions to new states. Mainstream mathematics is a more recognizable generalization of physics, coding state transitions, aka behavior, via morphisms (including automorphisms) between state spaces.
But all three agree to a high degree of specificity on what ingredients go into a formal presentation adequate for effecting computation.
### Examples from computer science
Since Milner's seminal Functions as processes paper, the gold standard for a presentation of an operational semantics is to present the algebra of states via a grammar (a monad) and a structural congruence (an algebra of the monad), and the rewrite rules in Plotkin-style SOS format [8][9].
#### \(\lambda\)-calculus
#### Algebra of States
\(Term[V]::=V\)
\(\mid\)\(\lambda V.Term[V]\)
\(\mid\)\((Term[V]\ Term[V])\)
The structural congruence is the usual \(\alpha\)-equivalence, namely that \(\lambda x.M\)\(\equiv\)\(\lambda y.(M\{y/x\})\) when \(y\) not free in \(M\).
It is evident that \(Term[V]\) is a monad and imposing \(\alpha\)-equivalence gives an algebra of the monad.
#### \(\pi\)-calculus
#### Algebra of States
\(Term[N]::=\)0
\(\mid\)\(\mathsf{for}(N\gets N)Term[N]\)
\(\mid\)\(N!(N)\)
\(\mid\)\((\mathsf{new}\ N)Term[N]\)
\(\mid\)\(Term[N]\mid Term[N]\)
\(\mid!Term[N]\)
The structural congruence is the smallest equivalence relation including \(\alpha\)-equivalence, making \((Term[N],\,|\,,\mathsf{0})\) a commutative monoid, and respecting
\((\mathsf{new}\ x)(\mathsf{new}\ x)P\equiv(\mathsf{new}\ x)P\)
\((\mathsf{new}\ x)(\mathsf{new}\ y)P\equiv(\mathsf{new}\ y)(\mathsf{new}\ x)P\)
\(((\mathsf{new}\ x)P)|Q\equiv(\mathsf{new}\ x)(P|Q),x\notin\mathsf{FN}(Q)\)
Again, it is evident that \(Term[N]\) is a monad and imposing the structural congruence gives an algebra of the monad.
#### Transitions

The rewrite rules divide into a core rule and rules specifying when rewrites apply in context.
\[\begin{array}{c}\mbox{\sc comm}\\ \mathsf{for}(y\gets x)P\;|\;x!(z)\to P\{z/y\}\\ \\ \mbox{\sc par}\\ \frac{P\to P^{\prime}}{P|Q\to P^{\prime}|Q}\\ \\ \mbox{\sc new}\\ \frac{P\to P^{\prime}}{(\mathsf{new}\;x)P\to(\mathsf{new}\;x)P^{\prime}}\\ \\ \mbox{\sc struct}\\ \frac{P\equiv P^{\prime}\quad P^{\prime}\to Q^{\prime}\quad Q^{\prime}\equiv Q}{P\to Q}\end{array}\]
For details see [8].
### rho-calculus
#### Algebra of States

Note that the rho-calculus is different from the \(\lambda\)-calculus and the \(\pi\)-calculus because it is _not_ dependent on a type of variables or names. However, it does give us the opportunity to expose how ground types, such as Booleans, numeric and string operations, are imported into the calculus. The calculus does depend on the notion of a 0 process. In fact, this could be any builtin functionality. The language rholang, derived from the rho-calculus, imports all literals as _processes_. Note that this is in the spirit of the \(\lambda\)-calculus and languages derived from it: Booleans, numbers, and strings are terms, on the same level as \(\lambda\) terms.
So, the parameter to the monad for the rho-calculus is the collection of builtin processes. Naturally, for all builtin processes other than 0 there have to be reduction rules. For brevity, we take \(Z=\{0\}\).
\[\begin{array}{c}\mbox{\sc PROCESS}\\ Term[Z]::=Z\;\mid\;\mbox{\sf for}(Name[Z]\gets Name[Z])Term[Z]\;\mid\; Name[Z]!(Term[Z])\\ \mid*Name[Z]\;\mid\;Term[Z]|Term[Z]\end{array}\]
\[\begin{array}{c}\mbox{\sc name}\\ Name[Z]::=@\,Term[Z]\end{array}\]
#### Transitions

The rewrite rules divide into a core rule and rules specifying when rewrites apply in context.
\[\begin{array}{c}\mbox{\sc comm}\\ \frac{x_{t}\equiv_{N}x_{s}}{\mathsf{for}(y\gets x_{t})P\;|\;x_{s}!(Q)\to P\{@Q/y\}}\end{array}\]
#### The JVM
While its complexity far exceeds the presentations above, the JVM specification respects this same shape. Here is an example from the specification of what the operation aaload does [6].
#### Register machines and WYSIWYG semantics

One important point distinguishes the JVM from the previous three examples. The first three examples are examples of WYSIWYG operational semantics in the sense that the states _are_ the terms of the calculi. In the case of the JVM, the terms in the language are only part of the state, which includes the stack,
Figure 1: AALOAD instruction specification
the heap, and several registers. WYSIWYG models make static analysis dramatically simpler. Specifically, an analyzer only has to look at terms in the language.
This phenomenon is not restricted to the JVM. Even the famous SECD machine is not strictly WYSIWYG [1]. In fact, the register based states are not strictly a monad, but contain a monad.
## 3 A presentation of the semantics of MeTTa
A presentation of the semantics of MeTTa must therefore provide a monad describing the algebra of states, a structural equivalence quotienting the algebra of states, and some rewrite rules describing state transitions. Such a description is the minimal description that meets the standard for describing models of computation.
Note that to present such a description requires at least that much expressive power in the system used to formalize the presentation. That is, the system used to present a model of computation is itself a model of computation admitting a presentation in terms of an algebra of states and some rewrites. This is why a meta-circular evaluator is a perfectly legitimate presentation. That is, a presentation of MeTTa's semantics in MeTTa is perfectly legitimate. Meta-circular presentations are more difficult to unpack, which is why such presentations are typically eschewed, but they are admissible. In fact, a meta-circular evaluator may be the most pure form of presentation.
But, this fact has an important consequence. No model that is at least Turing complete can be "lower level" than any other.
### Rationale for such a presentation
The rationale for such a presentation is not simply that this is the way it's done. Instead, the benefits include
* an effective (if undecidable) notion of program equality;
* an independent specification allowing independent implementations;
* meta-level computation, including type checking, model checking, macros, computational reflection, etc.
#### 3.1.1 Effective program equality
Of course, the notion we are calling program equality is called bisimulation in the literature. One of the key benefits of having a notion of bisimulation explicitly spelled out is that it makes possible both by-hand and automated proofs of correctness of implementations of MeTTa. We illustrate this in a later section of the paper where we specify a compilation from MeTTa to the rho-calculus and rholang and provide both a clear statement of what it means for the compiler to be correct and a proof that it is so.
One of the motivations for providing the example is to foster within the MeTTa developer community a development methodology known as correct-by-construction. Some of the benefits of correct-by-construction include avoiding bytecode injection attacks, avoiding concurrency issues, and in general avoiding technical debt.
With a clearly spelled out operational semantics, the correct-by-construction software development methodology becomes a real practice and not just an ideal to be striven for, except when we're under deadline, and we're always under deadline!
#### 3.1.2 Independent implementations
Hand in hand with being able to prove an implementation correct is the ability to support multiple independent implementations, each of which is provably in compliance with the specification. Pioneered by efforts like the JVM, this approach has been remarkably effective in modern projects, like Ethereum, where the Yellow Paper made it possible for many independent teams to implement compliant Ethereum clients. This network of clients is regularly responsible for the deployment and correct execution of millions of dollars in transactions each month.
Perhaps more salient to the MeTTa developer community, it means that no one project needs to do all the development of MeTTa clients. This spreads not only the cost of development around, but the risk. In a word, it makes MeTTa more robust against the failure of any one project or team. Correct-by-construction methodology and the tools of operational semantics dramatically enhance the already proven power of open source development to decentralize cost and risk.
In short, scaling is not just about performance and throughput; it's also about adoption. Adoption at scale is a very error prone process, as anyone familiar with the histories of a wide range of computer-based technologies will attest. Linux users, for example, will bear witness to the historical lag between device drivers for Windows and Mac versus the drivers compliant to the same specs that ran on Linux. Correct-by-construction dramatically reduces the number of errors in multiple independent implementations, and thus makes scaling through adoption a much more tractable proposition.
### Meta-level computation
Presumably, efforts by human developers to develop provably correct compilation schemes from MeTTa to other computational models are just the beginning. Over time, the hope is that many different kinds of intelligences will model, and hence make amenable to adaptation, MeTTa's model of computation. In particular, AGIs will seek to do the same, at dramatically different scales and timeframes.
### MeTTa Operational Semantics
The complexity of MeTTa's operational semantics sits somewhere between the simplicity of the \(\lambda\)-calculus and the enormity of the JVM. Note that MeTTa is not WYSIWYG; however, it would not be much of a stretch to define a version of MeTTa that is.
_Terms_
\[Term::= \left(Term\;[Term]\right)\] \[| \left\{Term\;[Term]\right\}\] \[| \left(Term\;|\;[Receipt]\;.\;[Term]\right)\] \[| \left\{Term\;|\;[Receipt]\;.\;[Term]\right\}\] \[| Atom\] \[| \left(\right)\] \[| \left\{\right\}\]
We impose the equation \(\left\{\ldots,t,u,\ldots\right\}=\left\{\ldots,u,t,\ldots\right\}\), making terms of this form multisets. Note that for multiset comprehensions this amounts to non-determinism in the order of the terms delivered, but they are still streams. We use \(\left\{Term\right\}\) to denote the set of terms that are (extensionally or intensionally) defined multisets, and \(\left(Term\right)\) to denote the set of terms that are (extensionally or intensionally) defined lists.
We assume a number of polymorphic operators, such as ++ which acts as union on multisets and append on lists and concatenation on strings, and :: which acts as cons on lists and the appropriate generalization for the other data types.
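The polymorphic behavior of ++ and :: can be sketched directly. The function names `concat` and `cons` below are ours, and multisets are modelled with `collections.Counter`; this is an illustration of the intended overloading, not a prescribed implementation.

```python
from collections import Counter

def concat(a, b):
    """++ : union on multisets, append on lists, concatenation on strings."""
    # Counter.+, list.+, and str.+ each already implement the intended operation.
    return a + b

def cons(x, xs):
    """:: : cons on lists, with the natural generalization to the other types."""
    if isinstance(xs, Counter):
        return Counter([x]) + xs      # add one occurrence to a multiset
    if isinstance(xs, str):
        return x + xs                 # prepend to a string
    return [x] + xs                   # cons onto a list
```

For example, `concat(Counter("aab"), Counter("ab"))` yields the multiset with three `a`s and two `b`s, while `concat([1], [2, 3])` is `[1, 2, 3]`.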
#### 3.3.1 Extensionally vs intensionally defined spaces
We make a distinction between extensionally defined spaces and terms, where each element of the space or term has been explicitly constructed, versus intensionally defined spaces and terms where elements are defined by a rule. The latter we call comprehensions.
We adopt this design for numerous reasons:
* it provides an explicit representation for bindings;
* it provides an explicit representation for infinite terms and spaces;
* it provides an explicit scope for access to remotely accessed data.
_Explicit bindings._ Comprehensions provide a superior framework for the explicit representation of bindings. For example, they significantly generalize the \(\mathsf{let}\) and \(\mathsf{letrec}\) constructs. In particular, the generally accepted semantics for \(\mathsf{let}\) and \(\mathsf{letrec}\) do not extend smoothly to streams, while comprehensions were effectively made for infinitary structures, including streams.
_Infinite terms and spaces._ In fact, since the advent of set comprehensions, and continuing through SQL's \(\mathsf{SELECT-FROM-WHERE}\) to Haskell's do-notation and Scala's for-comprehensions, the general mechanism for describing intensionally specified collections has proven to be a powerful abstraction. Specifically, we now know that comprehensions are syntactic sugar for the monadic operations, which makes them an exceptionally flexible framework for representing a notion of binding across a wide range of computational phenomena.
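The claim that comprehensions desugar to monadic operations can be checked in miniature for the list monad; `unit` and `bind` are the standard names, and the example is ours.

```python
# The list monad's unit and bind.
def unit(x):
    return [x]

def bind(m, k):
    out = []
    for v in m:
        out.extend(k(v))     # flatten the results of applying k to each element
    return out

# The comprehension [x + y for x in [1, 2] for y in [10, 20]]
# desugars to nested binds terminated by unit:
sugar = [x + y for x in [1, 2] for y in [10, 20]]
desugared = bind([1, 2], lambda x: bind([10, 20], lambda y: unit(x + y)))
```

Both expressions compute `[11, 21, 12, 22]`; each `for` clause becomes one `bind`, and the result expression becomes the `unit`.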
_Remotely accessed data._ Whether accessing data from a distributed atom space, a resource on the Internet, or a foreign function across a memory boundary, remotely accessed data comes with distinct failure modes. Data providers can be offline or otherwise inaccessible. Data can be ill-formatted or peppered with an array of hazards, from buffer or register overflows to triggers of divergent computation.
Providing a scope for these failure modes has a distinct advantage for defensive computation. Beyond that, however, are issues related to fair merging of divergent behavior. The famous example from PCF is logical disjunction. An evaluation strategy for disjunction that evaluates both arguments diverges if either argument diverges. A strategy that returns true as soon as the first argument it evaluates is true diverges less often. This generalizes to a wide range of situations involving the integration of multiple foreign sources of data.
While there is no one-size-fits-all solution, Oleg Kiselyov has provided a natural mechanism in the monad transformer LogicT [4]. It provides a policy language for describing merge policies, an elegant solution that fits perfectly with comprehensions.
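The fairness at stake can be illustrated with a round-robin interleaving of possibly-infinite streams, in the spirit of LogicT's fair disjunction. The function `fair_merge` is our name and a deliberately naive policy, not Kiselyov's API: no single divergent source starves the rest.

```python
import itertools
from collections import deque

def fair_merge(*gens):
    """Round-robin interleaving of generators; exhausted sources are dropped."""
    queue = deque(gens)
    while queue:
        g = queue.popleft()
        try:
            yield next(g)
        except StopIteration:
            continue            # an exhausted source is not re-queued
        queue.append(g)         # a productive source goes to the back

def evens():
    n = 0
    while True:                 # a divergent (infinite) source
        yield n
        n += 2

merged = list(itertools.islice(fair_merge(evens(), iter("abc")), 8))
```

Here `merged` is `[0, 'a', 2, 'b', 4, 'c', 6, 8]`: the finite source is fully drained even though its partner never terminates, which is exactly the property a naive left-to-right evaluation lacks.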
_Contexts._ We use McBride's notion of the derivative of a polynomial functor to calculate 1-holed contexts for extensionally defined terms. We use \(K\) to range over \(\partial Term\).
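For intuition, here is the simplest instance of this idea: the one-holed contexts of a list, where a context is the pair of prefix and suffix around the hole. The names `one_hole_contexts` and `plug` are ours.

```python
def one_hole_contexts(xs):
    """All ways to split a list into (context, focus)."""
    return [((xs[:i], xs[i + 1:]), xs[i]) for i in range(len(xs))]

def plug(context, value):
    """Fill the hole of a context with a value, rebuilding a whole term."""
    prefix, suffix = context
    return prefix + [value] + suffix

contexts = one_hole_contexts([1, 2, 3])
```

Plugging each focus back into its own context is the identity, which is the defining property of \(K[u]\): for every `(ctx, focus)` in `contexts`, `plug(ctx, focus)` is `[1, 2, 3]` again.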
_Term sequences_ \([Term]::=\epsilon\) \(|\)\(Term\) \(|\)\(Term\) \([Term]\)
_Bindings_ \(Receipt::=ReceiptLinearImpl\) \(|\)\(ReceiptRepeatedImpl\) \(|\)\(ReceiptPeekImpl\) \([Receipt]::=Receipt\) \(|\)\(Receipt;[Receipt]\) \(ReceiptLinearImpl::=[LinearBind]\) \(LinearBind::=[Name]\)\(NameRemainder\gets AtomSource\) \(AtomSource::=Atom\) \(|\)\(Name?!\) \(|\)\(Name!?([Term])\) \([LinearBind]::=LinearBind\) \(|\)\(LinearBind\ \&\ [LinearBind]\) \(ReceiptRepeatedImpl::=[RepeatedBind]\) \(RepeatedBind::=[Name]\)\(NameRemainder\Leftarrow Atom\)
\[[RepeatedBind]::=RepeatedBind\] \[|\ \ RepeatedBind\ \&\ [RepeatedBind]\] \[ReceiptPeekImpl::=[PeekBind]\] \[PeekBind::=[Name]\ NameRemainder\ \leftarrow\ Atom\] \[[PeekBind]::=PeekBind\] \[|\ \ PeekBind\ \&\ [PeekBind]\] \[TermRemainder::=...\ TermVar\] \[|\ \ \epsilon\] \[NameRemainder::=...\ @TermVar\] \[|\ \ \epsilon\]
_Literals and builtins_
\[Atom::= Ground\] \[|\ \ Builtin\] \[|\ Var\] \[Name::=\_\] \[|\ \ Var\] \[|\ \ \forall Var\] \[|\ \ @Term\]
\[[Name]::=\epsilon\] \[|\ \ Name\] \[|\ \ Name,[Name]\] \[BoolLiteral::=\mbox{true}\] \[|\ \ \mbox{false}\] \[Ground::=BoolLiteral\] \[|\ \ LongLiteral\] \[|\ \ StringLiteral\] \[|\ \ UriLiteral\] \[Builtin::=\ =\] \[|\ \ \mbox{transform}\] \[|\ \ \mbox{addAtom}\] \[|\ \ \mbox{remAtom}\] \[TermVar::=\_\] \[|\ \ Var\]
\[State::=\langle\{Term\},\{Term\},\{Term\},\{Term\}\rangle\]
We will use \(S,T,U\) to range over states and \(\mathsf{i}:=\pi_{1}\), \(\mathsf{k}:=\pi_{2}\), \(\mathsf{w}:=\pi_{3}\), and \(\mathsf{o}:=\pi_{4}\) for the first, second, third, and fourth projections as accessors for the components of states. Substitutions are ranged over by \(\sigma\), and, as is standard, substitution application is written postfix, e.g. \(t\sigma\).
A state should be thought of as consisting of \(4\)_registers_:
* \(\mathsf{i}\) is the input register where queries are issued;
* \(\mathsf{k}\) is the knowledge base;
* \(\mathsf{w}\) is a workspace;
* \(\mathsf{o}\) is the output register.
We separate the input, workspace, and output registers to allow for coarse-graining of bisimulation. An external agent cannot necessarily observe the transitions related to the workspace.
_State contexts._ We lift term contexts to states. Thus, if \(t=K[u]\) and \(u\in S_{r}\) for \(r\in\{\mathsf{i},\mathsf{k},\mathsf{w},\mathsf{o}\}\), then we write \(K[S]\) for the state in which \(t\) replaces \(u\) in \(S_{r}\).
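The four-register state admits a direct record representation. The sketch below is ours (a `dataclass` over `Counter` multisets), intended only to fix the shape of \(\langle i,k,w,o\rangle\), not to prescribe a data layout.

```python
from collections import Counter
from dataclasses import dataclass, replace

@dataclass
class State:
    """A MeTTa state <i, k, w, o>: four multisets of terms."""
    i: Counter    # input register: queries are issued here
    k: Counter    # knowledge base
    w: Counter    # workspace (not externally observable)
    o: Counter    # output register

empty = State(Counter(), Counter(), Counter(), Counter())
# Issuing a query adds a term to the input register and touches nothing else.
queried = replace(empty, i=empty.i + Counter({"(f 1)": 1}))
```

Because `replace` builds a new record, each transition of the rewrite rules below can be read as a pure function from `State` to `State`.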
### Rewrite Rules
\[\begin{array}{l}\text{\sc Query}\\ \frac{\sigma_{i}=\mathsf{unify}(t^{\prime},t_{i}),k=\{(=t_{1}\ u_{1}),\ldots,(=t_ {n}\ u_{n})\}\ \mbox{\tt{++}}\ k^{\prime},\mbox{\sc insensitive}(t^{\prime},k^{\prime})}{ \langle\{K[t^{\prime}]\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,\{K[u_{1}\sigma_{1}] \}\ \mbox{\tt{++}}\ \ldots\ \mbox{\tt{++}}\ \{K[u_{n}\sigma_{n}]\}\ \mbox{\tt{++}}\ w,o\rangle}\\ \\ \frac{\sigma_{i}=\mathsf{unify}(u,t_{i}),k=\{(=t_{1}\ u_{1}),\ldots,(=t_{n}\ u_{n})\}\ \mbox{\tt{++}}\ k^{\prime},\mbox{\sc insensitive}(u,k^{\prime})}{ \langle i,k,\{K[u]\}\ \mbox{\tt{++}}\ w,o\rangle\rightarrow\langle i,k,\{K[u_{1}\sigma_{1}] \}\ \mbox{\tt{++}}\ \ldots\ \mbox{\tt{++}}\ \{K[u_{n}\sigma_{n}]\}\ \mbox{\tt{++}}\ w,o\rangle}\\ \\ \end{array}\]
\[\begin{array}{l}\text{\sc Transform}\\ \frac{\sigma_{i}=\mathsf{unify}(t,t_{i}),k=\{K_{1}[t_{1}],\ldots,K_{n}[t_{n}] \}\ \mbox{\tt{++}}\ k^{\prime},\mbox{\sc insensitive}(t,k^{\prime})}{\langle\{( \mbox{\sc transform}\ t\ u)\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,\{K_{1}[u\sigma_{1}] \}\ \mbox{\tt{++}}\ \ldots\ \mbox{\tt{++}}\ \{K_{n}[u\sigma_{n}]\}\ \mbox{\tt{++}}\ w,o\rangle}\\ \\ \end{array}\]
\[\begin{array}{l}\text{\sc AddAtom1}\\ \langle\{(\mbox{\tt{addAtom}}\ t)\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k\ \mbox{\tt{++}}\ \{t\},w,\{()\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \end{array}\]
\[\begin{array}{l}\text{\sc AddAtom2}\\ \langle i_{1},k_{1},w_{1},o_{1}\rangle\rightarrow\langle i_{2},k_{2},w_{2},o_ {2}\rangle,k_{3}=\{(\mbox{\tt{addAtom}}\ t)\}\ \mbox{\tt{++}}\ k_{1}\\ \langle i_{1},k_{3},w_{1},o_{1}\rangle\rightarrow\langle i_{2},\{(\mbox{\tt{ addAtom}}\ t),t\}\ \mbox{\tt{++}}\ k_{2},w_{2},\{()\}\ \mbox{\tt{++}}\ o_{2}\rangle\\ \\ \end{array}\]
\[\begin{array}{l}\text{\sc RemAtom1}\\ \langle\{(\mbox{\sc remAtom}\ t)\}\ \mbox{\tt{++}}\ i,\{t\}\ \mbox{\tt{++}}\ k,w,o\rangle\rightarrow\langle i,k,w,\{()\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \end{array}\]
\[\begin{array}{l}\text{\sc RemAtom2}\\ \langle i_{1},k_{1},w_{1},o_{1}\rangle\rightarrow\langle i_{2},k_{2},w_{2},o_ {2}\rangle,k_{3}=\{(\mbox{\sc remAtom}\ t)\}\ \mbox{\tt{++}}\ \{t\}\ \mbox{\tt{++}}\ k_{1}\\ \langle i_{1},k_{3},w_{1},o_{1}\rangle\rightarrow\langle i_{2},\{(\mbox{\sc remAtom }\ t)\}\ \mbox{\tt{++}}\ k_{2},w_{2},\{()\}\ \mbox{\tt{++}}\ o_{2}\rangle\\ \\ \end{array}\]
\[\begin{array}{l}\text{\sc Output}\\ \frac{\text{\sc insensitive}(u,k)}{\langle i,k,\{u\}\ \mbox{\tt{++}}\ w,o\rangle\rightarrow\langle i,k,w,\{u\}\ \mbox{\tt{++}}\ o\rangle}\\ \\ \end{array}\]
where \(\mbox{\sc insensitive}(t,k)\) means that \((=t^{\prime}\ u)\in k\Rightarrow\neg\mathsf{unify}(t,t^{\prime})\).
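A toy run of the Query and Output rules can make the control flow tangible. In the sketch below, syntactic equality stands in for unification, equations are encoded as tuples, and the names `step_query` and `step_output` mirror the rule names above; all of these encoding choices are ours.

```python
def insensitive(t, k):
    """No equation (= t' u) in k has a head t' matching t (here: equality)."""
    return all(not (eq[0] == "=" and eq[1] == t) for eq in k)

def step_query(i, k, w, o):
    """Query: move a query from the input register to all matching bodies in w."""
    for t in i:
        bodies = [eq[2] for eq in k if eq[0] == "=" and eq[1] == t]
        if bodies:
            return [x for x in i if x != t], k, w + bodies, o
    return None

def step_output(i, k, w, o):
    """Output: a workspace term no equation is sensitive to moves to the output."""
    for u in w:
        if insensitive(u, k):
            return i, k, [x for x in w if x != u], o + [u]
    return None

kb = [("=", "(f 1)", "one"), ("=", "(f 1)", "uno")]
s = step_query(["(f 1)"], kb, [], [])   # both bodies land in the workspace
s = step_output(*s)                     # one insensitive term moves to output
```

After the query step the workspace holds both `"one"` and `"uno"`, reflecting the non-deterministic fan-out of Query; the output step then publishes a term the knowledge base can no longer rewrite.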
## 4 Ground literals and builtins
As with all practical programming languages, MeTTa hosts a number of computational entities and operations that are already available on the vast majority of platforms on which an implementation of the language may be written and/or run. Here we describe the ground literals and builtin operations that every compliant MeTTa implementation must provide.
### Ground literals
As the grammar spells out, every compliant implementation of MeTTa must provide:
* Booleans;
* signed and unsigned 64-bit integers;
* 64-bit floating point numbers;
* strings.
### Polymorphic operations
Every compliant MeTTa implementation must provide the following polymorphic operations:
* \(*:A\times A\to A\) for \(A\) ranging over Booleans, integers, and floating point;
* \(+:A\times A\to A\) for \(A\) ranging over Booleans, integers, floating point, and strings.
### Transition rules
\[\begin{array}{l}\text{\sc BoolAdd1}\\ \langle\{(+\ b_{1}\ b_{2})\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,w,\{b_{1}||b_{2}\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \text{\sc BoolAdd2}\\ \frac{w=\{(+\ b_{1}\ b_{2})\}\ \mbox{\tt{++}}\ w^{\prime}}{\langle i,k,w,o\rangle\rightarrow\langle i,k,w^{\prime},\{b_{1}||b_{2}\}\ \mbox{\tt{++}}\ o\rangle}\\ \\ \text{\sc BoolMult1}\\ \langle\{(*\ b_{1}\ b_{2})\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,w,\{b_{1}\&b_{2}\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \text{\sc BoolMult2}\\ \frac{w=\{(*\ b_{1}\ b_{2})\}\ \mbox{\tt{++}}\ w^{\prime}}{\langle i,k,w,o\rangle\rightarrow\langle i,k,w^{\prime},\{b_{1}\&b_{2}\}\ \mbox{\tt{++}}\ o\rangle}\\ \\ \text{\sc NumAdd1}\\ \langle\{(+\ n_{1}\ n_{2})\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,w,\{n_{1}{+}n_{2}\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \text{\sc NumAdd2}\\ \frac{w=\{(+\ n_{1}\ n_{2})\}\ \mbox{\tt{++}}\ w^{\prime}}{\langle i,k,w,o\rangle\rightarrow\langle i,k,w^{\prime},\{n_{1}{+}n_{2}\}\ \mbox{\tt{++}}\ o\rangle}\\ \\ \text{\sc NumMult1}\\ \langle\{(*\ n_{1}\ n_{2})\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,w,\{n_{1}{*}n_{2}\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \text{\sc NumMult2}\\ \frac{w=\{(*\ n_{1}\ n_{2})\}\ \mbox{\tt{++}}\ w^{\prime}}{\langle i,k,w,o\rangle\rightarrow\langle i,k,w^{\prime},\{n_{1}{*}n_{2}\}\ \mbox{\tt{++}}\ o\rangle}\\ \\ \text{\sc StrAdd1}\\ \langle\{(+\ s_{1}\ s_{2})\}\ \mbox{\tt{++}}\ i,k,w,o\rangle\rightarrow\langle i,k,w,\{s_{1}{+}s_{2}\}\ \mbox{\tt{++}}\ o\rangle\\ \\ \text{\sc StrAdd2}\\ \frac{w=\{(+\ s_{1}\ s_{2})\}\ \mbox{\tt{++}}\ w^{\prime}}{\langle i,k,w,o\rangle\rightarrow\langle i,k,w^{\prime},\{s_{1}{+}s_{2}\}\ \mbox{\tt{++}}\ o\rangle}\end{array}\]
## 5 Bisimulation
Since the operational semantics is expressed as a transition system we recover a notion of bisimulation. There are two possible ways to generate the notion of bisimulation in this context. One uses the Leifer-Milner-Sewell approach of deriving a bisimulation from the rewrite rules [5]. However, the technical apparatus is very heavy to work with. The other is to adapt barbed bisimulation developed for the asynchronous \(\pi\)-calculus to this setting [10].
The reason we need to use some care in developing the notion of bisimulation is that there are substitutions being generated and applied in many of the rules. So, a single label
will not suffice. However, taking a query in the input space as a barb will. This notion of barbed bisimulation will provide a means of evaluating the correctness of compilation schemes to other languages. We illustrate this idea in the section on compiling MeTTa to the rho-calculus.
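To fix intuitions, here is one standard shape such a definition can take, with the barb taken to be a query in the input register. The clauses below are our paraphrase of the usual (weak) barbed bisimulation, not a definition this paper commits to:

```latex
% Barb: S exhibits the query t when t occurs, under some context K,
% in the input register of S.
S \downarrow_{t} \;\iff\; \exists K.\; K[t] \in \mathsf{i}(S)

% A symmetric relation \mathcal{R} is a weak barbed bisimulation when
% S \mathrel{\mathcal{R}} T implies:
%  (1) S \downarrow_{t} \implies T \rightarrow^{*} T'
%      \text{ with } T' \downarrow_{t}
%  (2) S \rightarrow S' \implies T \rightarrow^{*} T'
%      \text{ with } S' \mathrel{\mathcal{R}} T'
```

Two states are barbed bisimilar when some such \(\mathcal{R}\) relates them; this is the notion a compiler-correctness proof can target.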
## 6 The cost of transitions
### Network access tokens
If you're reading this, chances are that you know what an Internet-facing API is, and why it might need to be protected from denial of service attacks. But, just in case you're one of the "normies" that don't know what these terms refer to, let's you, me, Sherman, and Mr. Peabody all take a trip in the WayBack Machine way back to 2005.
In those days there was still a naivete about the infinite potential of free and open information. QAnon, deep fakes, ChatGPT and other intimations that the Internet might just be the modern equivalent of the Tower of Babel were not yet even a gleam in their inventors' eyes. Companies would regularly set up network services that anyone with an Internet connection could access, from anywhere in the world (dubbed Internet-facing). Such services were accessed by sending requests in a particular, well defined format (deriving from the software term application program interface, or API) to an Internet address served by machines in the network service the organization had set up.
It was quickly discovered that such Internet-facing APIs were vulnerable to attack. If a single bad actor sent thousands or millions of requests to the service, or a botnet of millions sent a few requests each to the service, it was possible for the service to become bogged down and unresponsive to legitimate requests. Now, in reality, all this was discovered long before 2005. But, by 2005 a practice for dealing with this kind of attack was more or less well established.
The solution is simple. The network proprietor issues a digital token. A request with a given token embedded in it is honored, up to some number of requests per token. This practice is less onerous and costly than having to issue and maintain authorization credentials for login challenges. Many, many companies do this and have done so for the better part of two decades. It is not just software and digital-service companies like Google and Microsoft that issue tokens like this; media companies, such as The New York Times and The Guardian, also employ the practice. (The hyperlinks above are to their token distribution pages.) The practice is ubiquitous and well accepted. It is intrinsic to the functionality of an open network such as the Web.
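The quota scheme just described, honor a request carrying an issued token up to a fixed number of uses, and allow outright invalidation, can be sketched in a few lines. The class `TokenGate` and all names here are illustrative, not any provider's actual API.

```python
class TokenGate:
    """Honor requests per token up to a quota; revoked tokens are refused."""

    def __init__(self, quota):
        self.quota = quota
        self.uses = {}            # token -> requests served so far
        self.revoked = set()

    def issue(self, token):
        self.uses[token] = 0

    def revoke(self, token):
        self.revoked.add(token)

    def honor(self, token):
        if token not in self.uses or token in self.revoked:
            return False          # unknown or invalidated token
        if self.uses[token] >= self.quota:
            return False          # quota exhausted
        self.uses[token] += 1
        return True

gate = TokenGate(quota=2)
gate.issue("abc123")
outcomes = [gate.honor("abc123") for _ in range(3)]
```

Here `outcomes` is `[True, True, False]`: the third request exceeds the quota, and a subsequent `revoke` would refuse even unused tokens.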
Also, it is important to note that many of these services allow for storage of digital content on their networks. However, bad actors can still abuse the services by repeatedly uploading illegal content (like child pornography, copyrighted material, or even nuclear secrets). So, an entity offering Internet-enabled services must reserve the right to invalidate these tokens should it discover they are being abused in this or other ways. These utility tokens are essential to comply with a whole host of very good laws.
### Ethereum's big idea
Satoshi's discovery of a new class of economically secured, leaderless distributed consensus protocols, embodied in proof-of-work and, elsewhere, in proof-of-stake and other consensus algorithms, was a pretty good idea. It led to the Bitcoin network. Buterin's suggestion that Satoshi's consensus be applied to the state of a virtual machine instead of a ledger was a really good idea, and it led to the Ethereum network. It creates a distributed computer that runs everywhere and nowhere in particular. Less poetically, every node in the network runs a copy of the virtual machine, and the consensus protocol ensures that all the copies agree on the state of that machine.
Like the Internet-facing APIs launched all throughout the 00's and beyond, Ethereum's distributed computer is accessible to anyone with an Internet connection. And, as such, without protection would be vulnerable to denial of service attacks. In fact, it's potentially even more vulnerable because a request to the Ethereum distributed computer is a piece of code. This code could, in principle, run forever, or take up infinite storage space. Vitalik's clever idea, building on the established practice of network access tokens, is to require tokens for each computational or storage step to prevent such abuses.
### MeTTa effort objects
MeTTa takes the same approach. Transitions in the operational semantics cost a computational resource (effort objects, or EOs, for short) that are "purchased" with tokens. This section reprises the operational semantics with the cost of each step spelled out in terms of the structure of EOs.
#### 6.3.1 Resource-bounded Rewrite Rules
We assume a polymorphic cost function # taking values in the domain of EOs. We assume the domain of EOs supports notions of \(+\) and \(-\) making it a _commutative_ group. We use \(\text{EOs}_{\perp}\) to denote EOs extended with \(\perp\) to indicate no EOs assigned. We assume a term representation of elements of EOs.
Additionally, we adopt the notation \(t_{\chi(p,eos)}\) to indicate a pair consisting of a term and an element of the domain of \(\text{EOs}_{\perp}\), signed with a private key \(p\); and assume we can lift the usual functions (e.g., \(\mathsf{unify}\) and \(\mathsf{insensitive}\)) to these signed terms either by projecting to the term component, or by some other more sophisticated mechanism.
Furthermore, we expand the state to include a fifth register consisting of an element of \(\{Term\}\), written \(eos\), which in turn contains terms of the form \((h(p)\;eo)\), where \(h(p)\) is a function of a private key \(p\), and \(eo\) is an element of the domain EOs.
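The EO bookkeeping behind the resource-bounded rules can be sketched concretely. In this sketch, which is entirely our illustration, EOs are modelled as integers (a commutative group under \(+\) and \(-\)), the \(eos\) register as a map from \(h(p)\) to a balance, and the concrete choices of SHA-256 for \(h\) and term size for \(\#\) are stand-in assumptions.

```python
import hashlib

def h(private_key: str) -> str:
    """A stand-in for the function h(p) of a private key (here: SHA-256)."""
    return hashlib.sha256(private_key.encode()).hexdigest()

def cost(term: str) -> int:
    """A stand-in for the polymorphic cost function # (here: term size)."""
    return len(term)

def charge(eos: dict, key_hash: str, amount: int) -> dict:
    """Deduct `amount` EOs from the (h(p) eo) entry, refusing to go negative."""
    balance = eos.get(key_hash, 0)
    if balance < amount:
        raise ValueError("insufficient EOs for this transition")
    return {**eos, key_hash: balance - amount}

eos = {h("alice-key"): 10}
eos = charge(eos, h("alice-key"), cost("(+ 1 2)"))   # the term costs 7 EOs
```

After the charge, the balance under `h("alice-key")` is 3; a transition whose cost exceeds the remaining balance is refused, mirroring the side conditions of the rules below.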
\[\begin{array}{l}\text{\sc Query}\\ \frac{\begin{array}{c}\sigma_{i}=\mathsf{unify}(t^{\prime},t_{i}),\quad k=\{(=t_{1}\ u_{1}),\ldots,(=t_{n}\ u_{n})\}\ \mbox{\tt{++}}\ k^{\prime},\quad\mbox{\sc insensitive}(t^{\prime},k^{\prime})\\ eos=\{(h(p)\ e^{\prime})\}\ \mbox{\tt{++}}\ eos^{\prime},\quad eos^{\prime\prime}=\{(h(p)\ ((e+e^{\prime})-\#(t^{\prime})))\}\ \mbox{\tt{++}}\ eos^{\prime}\end{array}}{\langle\{K[t^{\prime}]_{\chi(p,e)}\}\ \mbox{\tt{++}}\ i,k,w,o;eos\rangle\xrightarrow{\#(t^{\prime})}\langle i,k,\{K[u_{1}\sigma_{1}]\}\ \mbox{\tt{++}}\ \ldots\ \mbox{\tt{++}}\ \{K[u_{n}\sigma_{n}]\}\ \mbox{\tt{++}}\ w,o;eos^{\prime\prime}\rangle}\end{array}\]
Likewise, the builtin operations must come with a cost.
\(\begin{array}{l}\text{\sc BoolAdd1}\\ \frac{eos=\{(h(p)\;e^{\prime})\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;((e+e^{\prime})-(\#(b_{1})+\#(b_{2}))))\}++\;eos^{\prime}}{\langle\{(+\;b_{1}\;b_{2})_{\chi(p,e)}\}++\;i,k,w,o;eos\rangle\xrightarrow{\#(b_{1})+\#(b_{2})}\langle i,k,w,\{b_{1}||b_{2}\}++\;o;eos^{\prime\prime}\rangle}\\ \\ \text{\sc BoolAdd2}\\ \frac{\begin{array}{c}w=\{(+\;b_{1}\;b_{2})_{\chi(p,e)}\}++\;w^{\prime}\\ eos=\{(h(p)\;e)\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;(e-(\#(b_{1})+\#(b_{2}))))\}++\;eos^{\prime}\end{array}}{\langle i,k,w,o;eos\rangle\xrightarrow{\#(b_{1})+\#(b_{2})}\langle i,k,w^{\prime},\{b_{1}||b_{2}\}++\;o;eos^{\prime\prime}\rangle}\end{array}\)
\(\begin{array}{l}\text{\sc BoolMult1}\\ \frac{eos=\{(h(p)\;e^{\prime})\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;((e+e^{\prime})-(\#(b_{1})+\#(b_{2}))))\}++\;eos^{\prime}}{\langle\{(*\;b_{1}\;b_{2})_{\chi(p,e)}\}++\;i,k,w,o;eos\rangle\xrightarrow{\#(b_{1})+\#(b_{2})}\langle i,k,w,\{b_{1}\&b_{2}\}++\;o;eos^{\prime\prime}\rangle}\end{array}\)
\(\begin{array}{l}\text{\sc BoolMult2}\\ \frac{\begin{array}{c}w=\{(*\;b_{1}\;b_{2})_{\chi(p,e)}\}++\;w^{\prime}\\ eos=\{(h(p)\;e)\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;(e-(\#(b_{1})+\#(b_{2}))))\}++\;eos^{\prime}\end{array}}{\langle i,k,w,o;eos\rangle\xrightarrow{\#(b_{1})+\#(b_{2})}\langle i,k,w^{\prime},\{b_{1}\&b_{2}\}++\;o;eos^{\prime\prime}\rangle}\end{array}\)
NumAdd1 \(((e+e^{\prime})-(\#(n_{1})+\#(n_{2})))>0\)
\(\underline{eos=\{(h(p)\;e^{\prime})\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;((e+e^{\prime})-(\#(n_{1})+\#(n_{2}))))\}++\;eos^{\prime}}\)
\(\langle\{(+\;n_{1}\;n_{2})_{\chi(p,e)}\}++i,k,w,o;eos\rangle\xrightarrow{\#(n_{1})+\#(n_{2})}\langle i,k,w,\{n_{1}+n_{2}\}++o;eos^{\prime\prime}\rangle\)
NumAdd2 \(w=\{(+\;n_{1}\;n_{2})_{\chi(p,e)}\}++\;w^{\prime}\)
\(\underline{eos=\{(h(p)\;e)\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;(e-(\#(n_{1})+\#(n_{2}))))\}++\;eos^{\prime}}\)
\(\langle i,k,w,o;eos\rangle\xrightarrow{\#(n_{1})+\#(n_{2})}\langle i,k,w^{\prime},\{n_{1}+n_{2}\}++o;eos^{\prime\prime}\rangle\)
NumMult1 \(((e+e^{\prime})-\#(n_{1})+\#(n_{2}))>0\)
\(\underline{eos=\{(h(p)\;e^{\prime})\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;((e+e^{\prime})-\#(n_{1})+\#(n_{2})))\}++\;eos^{\prime}}\)
\(\langle\{(*\;n_{1}\;n_{2})_{\chi(p,e)}\}++\,i,k,w,o;eos\rangle\xrightarrow{\#(n_{1})+\#(n_{2})}\langle i,k,w,\{n_{1}*n_{2}\}++\,o;eos^{\prime\prime}\rangle\)
NumMult2 \(w=\{(*\;n_{1}\;n_{2})_{\chi(p,e)}\}++\;w^{\prime}\)
\(\underline{eos=\{(h(p)\;e)\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;(e-\#(n_{1})+\#(n_{2})))\}++\;eos^{\prime}}\)
\(\langle i,k,w,o;eos\rangle\xrightarrow{\#(n_{1})+\#(n_{2})}\langle i,k,w^{\prime},\{n_{1}*n_{2}\}++\,o;eos^{\prime\prime}\rangle\)
StrAdd1 \(((e+e^{\prime})-\#(s_{1})+\#(s_{2}))>0\)
\(\underline{eos=\{(h(p)\;e^{\prime})\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;((e+e^{\prime})-\#(s_{1})+\#(s_{2})))\}++\;eos^{\prime}}\)
\(\langle\{(+\;s_{1}\;s_{2})_{\chi(p,e)}\}++\,i,k,w,o;eos\rangle\xrightarrow{\#(s_{1})+\#(s_{2})}\langle i,k,w,\{s_{1}+s_{2}\}++\,o;eos^{\prime\prime}\rangle\)
StrAdd2 \(w=\{(+\;s_{1}\;s_{2})_{\chi(p,e)}\}++\;w^{\prime}\)
\(\underline{eos=\{(h(p)\;e)\}++\;eos^{\prime},\qquad eos^{\prime\prime}=\{(h(p)\;(e-\#(s_{1})+\#(s_{2})))\}++\;eos^{\prime}}\)
\(\langle i,k,w,o;eos\rangle\xrightarrow{\#(s_{1})+\#(s_{2})}\langle i,k,w^{\prime},\{s_{1}+s_{2}\}++\,o;eos^{\prime\prime}\rangle\)
## 7 Compiling MeTTa to rho
In this section we illustrate the value of having an operational semantics by developing a compiler from MeTTa to the rho-calculus, and from resource-bounded MeTTa to rholang. The essence of the translation is to use a channel for each of the registers.
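As a minimal sketch of the register-to-channel idea, with Python queues standing in for rho-calculus channels and only the numeric rules modeled (all names are illustrative, not part of the compiler):

```python
from collections import deque

# Registers of a MeTTa state <i, k, w, o> modeled as channels (queues).
def make_state(inputs):
    return {"i": deque(inputs), "k": deque(), "w": deque(), "o": deque()}

def step(state):
    """Fire one NumAdd1/NumMult1-style reduction: a redex at the head of the
    input register produces its value on the output register."""
    if not state["i"]:
        return False
    op, n1, n2 = state["i"].popleft()
    state["o"].append(n1 + n2 if op == "+" else n1 * n2)
    return True

s = make_state([("+", 1, 2), ("*", 3, 4)])
while step(s):
    pass
# s["o"] now holds [3, 12]
```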
### 7.1 MeTTa to the rho-calculus
In the translation given below we employ two semantic functions. One, written \([\![-]\!]_{C}\), translates the _configuration_, serializing the contents of the registers onto their corresponding channels; the other, written \([\![-]\!]_{E}\), translates the _evaluation_, producing the processes that carry out the transitions of the machine.
The astute reader will notice that there is a question about the well-definedness of \([\![-]\!]_{E}\): there may be multiple transitions out of a single state. The actual function takes the sum over all possible transitions:
\[[\![S]\!]_{E}:=\sum_{r\in\{r\,\mid\,S\xrightarrow{r}S^{\prime}\}}[\![S]\!]_{r}\]
where \([\![-]\!]_{r}\) are defined below.
The meaning of a MeTTa computation is given as the composition of the configuration and evaluation functions. That is,
\[[\![\langle i,k,w,o\rangle]\!]_{M}=[\![\langle i,k,w,o\rangle]\!]_{C}\mid[\![\langle i,k,w,o\rangle]\!]_{E}\]
The reason for this factorization of the semantics is to facilitate the proof of correctness, as we will see below.
#### 7.1.1 Space configuration
\[\begin{array}{rcl}[\![\langle\{t\}\texttt{++}\,i,k,w,o\rangle]\!]_{C}(i,k,w,o)&=&i!([\![t]\!])\mid[\![\langle i,k,w,o\rangle]\!]_{C}(i,k,w,o)\\ [\![\langle i,\{t\}\texttt{++}\,k,w,o\rangle]\!]_{C}(i,k,w,o)&=&k!([\![t]\!])\mid[\![\langle i,k,w,o\rangle]\!]_{C}(i,k,w,o)\\ [\![\langle i,k,\{t\}\texttt{++}\,w,o\rangle]\!]_{C}(i,k,w,o)&=&w!([\![t]\!])\mid[\![\langle i,k,w,o\rangle]\!]_{C}(i,k,w,o)\\ [\![\langle i,k,w,\{t\}\texttt{++}\,o\rangle]\!]_{C}(i,k,w,o)&=&o!([\![t]\!])\mid[\![\langle i,k,w,o\rangle]\!]_{C}(i,k,w,o)\end{array}\]
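A sketch of what the configuration function computes: each term held in a register becomes a send on that register's channel, and the configuration is the parallel composition of the sends. Process syntax is rendered as strings purely for illustration.

```python
# Sketch of the configuration function [[-]]_C. Registers are Python lists;
# the resulting "process" is a string in rho-calculus-like notation.
def translate_config(i, k, w, o):
    sends = [f"{chan}!({t})"
             for chan, reg in (("i", i), ("k", k), ("w", w), ("o", o))
             for t in reg]
    return " | ".join(sends) if sends else "0"  # empty configuration: nil

proc = translate_config(["(+ 1 2)"], [], [], ["3"])
# proc == "i!((+ 1 2)) | o!(3)"
```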
**Space evaluation**
\[[\![\langle\{t^{\prime}\}\texttt{++}\,i,k,w,o\rangle]\!]_{E}(i,k,w,o)=\cdots\]
### 7.2 Correctness of the translation
Theorem 7.1 (MeTTa2rho correctness): \[S_{1}\approx S_{2}\iff[\![S_{1}]\!]_{M}\approx[\![S_{2}]\!]_{M}\]
Proof (sketch): Essentially, the translation is correct by construction. Intuitively, there is a bisimulation relation between MeTTa computations and their translations in the rho-calculus. This bisimulation may be composed with a bisimulation between MeTTa computations to yield a bisimulation between the rho-translations, and vice versa. The bisimulation bridging the two domains is effectively represented in the translation itself: for each kind of state, the left-hand side of each bisimulation pair is the left-hand side of the _definition_ of the evaluation semantic function, and the right-hand side of the pair is the right-hand side of the definition. Hence, correct by construction. The formal proof uses terms in the input, working, and output registers as barbs for the notion of bisimulation on MeTTa computations, while their translations via the configuration function serve as barbs for the bisimulation in the rho-calculus.
### 7.3 Resource-bounded MeTTa to rholang
#### 7.3.1 Space configuration
\[\begin{array}{rcl}[\![\langle\{t\}\texttt{++}\,i,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)&=&i!([\![t]\!])\mid[\![\langle i,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)\\ [\![\langle i,\{t\}\texttt{++}\,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)&=&k!([\![t]\!])\mid[\![\langle i,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)\\ [\![\langle i,k,\{t\}\texttt{++}\,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)&=&w!([\![t]\!])\mid[\![\langle i,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)\\ [\![\langle i,k,w,\{t\}\texttt{++}\,o;eos\rangle]\!]_{C}(i,k,w,o,c)&=&o!([\![t]\!])\mid[\![\langle i,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)\\ [\![\langle i,k,w,o;\{(h(p)\;e)\}\texttt{++}\,eos\rangle]\!]_{C}(i,k,w,o,c)&=&c!(([\![h(p)]\!]\;e))\mid[\![\langle i,k,w,o;eos\rangle]\!]_{C}(i,k,w,o,c)\end{array}\]
**Space evaluation**
\[[\![\langle\{t^{\prime}_{\chi(p,e)}\}\texttt{++}\,i,k,w,o;eos\rangle]\!]_{E}(i,k,w,o,c)=\cdots\]
\([\![\langle\{(\mbox{addAtom}\;t)_{\chi(p,e)}\}\mbox{ \small{++}}\;i,k,w,o;eos\rangle]\!]_{E}(i,k,w,o,c)\)
\(=\)
for\(((\mbox{addAtom}\;[\![t]\!])_{\chi(p,e)}\gets i\;\&\;([\![h(p)]\!]\;e^{\prime})\gets c)\{\)
if\(((([\![e]\!]+[\![e^{\prime}]\!])-[\![\#(t)]\!])>0)\{\)
\(k!([\![t]\!])\mid[\![\langle i,k,w,o;eos\rangle]\!]_{E}(i,k,w,o,c)\}\)
else\(\{i!((\mbox{addAtom}\;[\![t]\!])_{\chi(p,e)})\mid c!(([\![h(p)]\!]\;e^{\prime}))\}\}\)
\([\![\langle i,\{(\mbox{addAtom}\;t)\}\mbox{ \small{++}}\;k,w,o;eos\rangle]\!]_{AddAtom2}(i,k,w,o,c)\)
\(=\)
for\(((\mbox{addAtom}\;[\![t]\!])\gets k)\{k!([\![t]\!])\mid[\![\langle i,\{(\mbox{addAtom}\;t)\}\mbox{ \small{++}}\;k,w,o;eos\rangle]\!]_{E}(i,k,w,o,c)\}\)
\([\![\langle\{(\mbox{remAtom}\;t)_{\chi(p,e)}\}\mbox{ \small{++}}\;i,\{t\}\mbox{ \small{++}}\;k,w,o;eos\rangle]\!]_{RemAtom1}(i,k,w,o,c)\)
\(=\)
for\(((\mbox{remAtom}\;[\![t]\!])_{\chi(p,e)}\gets i\;\&\;([\![h(p)]\!]\;e^{\prime})\gets c)\{\)
if\(((([\![e]\!]+[\![e^{\prime}]\!])-[\![\#(t)]\!])>0)\{\)
for\(([\![t]\!]\gets k)\{o!([\![()]\!])\mid[\![\langle i,k,w,o;eos\rangle]\!]_{E}(i,k,w,o,c)\}\}\)
else\(\{i!((\mbox{remAtom}\;[\![t]\!])_{\chi(p,e)})\mid c!(([\![h(p)]\!]\;e^{\prime}))\}\}\)
\([\![\langle i,\{(\mbox{remAtom}\;t)\}\mbox{ \small{++}}\;\{t\}\mbox{ \small{++}}\;k,w,o;eos\rangle]\!]_{RemAtom2}(i,k,w,o,c)\)
\(=\)
for\(((\mbox{remAtom}\;[\![t]\!])\gets k)\{\mbox{for}([\![t]\!]\gets k)\{o!([\![()]\!])\}\mid[\![\langle i,\{(\mbox{remAtom}\;t)\}\mbox{ \small{++}}\;k,w,o;eos\rangle]\!]_{E}(i,k,w,o,c)\}\)
### 7.4 Correctness of the translation
Theorem 7.2 (Resource-bounded MeTTa2rho correctness): \[S_{1}\approx S_{2}\iff[\![S_{1}]\!]_{M}\approx[\![S_{2}]\!]_{M}\]
Proof (sketch): While all the resource accounting adds to the complexity of the translation, the proof essentially reprises the previous one.
## 8 Conclusion and future work
We have presented two versions of the operational semantics for MeTTa, one that is fit for private implementations that have some external security model, and one that is fit for running in a decentralized setting.
This semantics does not address typed versions of MeTTa. An interesting avenue is to apply Meredith and Stay's OSLF to this semantics to derive a type system for MeTTa that includes spatial and behavioral types [11].
# On Translation-Invariant Matrix Product States and advances in MPS representations of the \(W\)-state

Petr Klimov, Richik Sengupta, Jacob Biamonte

2023-06-28, http://arxiv.org/abs/2306.16456v2
###### Abstract
This work is devoted to the study of Translation-Invariant (TI) Matrix Product State (MPS) representations of quantum states with periodic boundary conditions (PBC). We pursue two directions: we introduce new methods for constructing TI MPS representations for a certain class of TI states and study their optimality in terms of their bond dimension. We pay particular attention to the \(n\)-party \(W\)-state and construct a TI MPS representation of bond dimension \(\left\lfloor\frac{n}{2}\right\rfloor+1\) for it. We generalize the approach implemented for the \(W\)-state to obtain TI MPS representations of a larger class of states satisfying several structural conditions. We further study properties of this class and show that we can always achieve a bond dimension of \(n\) for TI MPS representations of states in this class. In the framework of studying optimality of TI MPS representations with PBC, we study the optimal bond dimension \(d(\psi)\) for a given state \(\psi\). In particular we introduce a deterministic algorithm for the search of \(d(\psi)\) for an arbitrary state. Using numerical methods, we verify the optimality of our previous construction for the \(n\)-party \(W\)-state for small \(n\).
## 1 Introduction
The concept of MPS (Matrix Product State) is an important concept in quantum theory, especially in quantum information theory. In recent years, MPS representations have become an essential tool for investigating condensed matter physics, quantum information theory, and quantum field theory. The concept arose naturally in the study of tensor networks and is one of the most studied tensor-network structures [1, 2, 3, 4]. It is also sometimes referred to as tensor trains [5]. Moreover, similar concepts are actively used in other areas like classical and quantum machine learning [6, 7, 8, 9, 10].
In general, for a quantum state \(\psi\) (or simply for a vector \(\psi\in(\mathbb{C}^{2})^{\otimes n}\)), the MPS representation is written as
\[|\psi\rangle=\sum_{i_{1},\ldots,i_{N}=1}^{2}\mathrm{tr}\left[A_{i_{1}}^{[1]}A_ {i_{2}}^{[2]}\cdots A_{i_{N}}^{[N]}\right]\!|i_{1},i_{2},\ldots,i_{N}\rangle\:,\]
where \(A_{i}^{[k]}\) are complex matrices. This representation is also sometimes called MPS with PBC (periodic boundary conditions) to distinguish it from the analogous MPS representation with OBC (open boundary conditions).
A convenient and interesting class of MPS representations to consider is TI (translationally invariant) MPS with periodic boundary conditions (PBC) [1]. This representation can be used when the matrices are site-independent, i.e. \(A_{i}^{[k]}=A_{i}\) for all \(k\). This type of MPS representation uses a significantly smaller number of matrices, though sometimes at the cost of an increase in the dimension of the matrices in the representation, often referred to as the bond dimension. It has been proved that any TI state \(\psi\) admits a TI MPS representation with PBC, and moreover an upper bound on \(d(\psi)\) (the minimal possible dimension of the matrices in a TI MPS representation with PBC for \(\psi\)) can be obtained that depends on the dimension of the state \(\psi\) [1].
However, a big unsolved problem is obtaining exact estimates, or improving existing estimates, of the dimension \(d(\psi)\), at least for certain classes of TI states, for example the \(W\)-state. In [1], a TI MPS representation with PBC was provided for the \(W\)-state of order \(n\) with bond dimension of the order \(O(n)\) with constant factor \(1\). Moreover, in [1] it was hypothesized that the bond dimension for the MPS representation of the \(W\)-state is lower bounded by \(\Omega(n^{\frac{1}{3}})\). In [11], a weaker form of this hypothesis was proved, which states that for all \(\delta>0\) the bond dimension of the \(W\)-state is lower bounded by \(\Omega(n^{\frac{1}{3+\delta}})\).
In **Theorem 1** of the current paper we construct a TI MPS representation with PBC of the \(W\)-state of dimension \(n\) with matrices of size \(\left\lfloor\frac{n}{2}\right\rfloor+1\). This improves the previous result known to the authors on the TI MPS representation with PBC for the \(W\)-state with minimum bond dimension.
In **Theorem 2** we extend the method previously developed by us in the previous theorem to build TI MPS representation with PBC for a large class of "sparse" states. This generalization provides a framework to search for more optimal TI MPS representations with PBC.
In **Theorem 3** we demonstrate that all TI MPS representations with PBC, constructed using the approach outlined in Theorem 2, can be transformed into a lower-dimensional representation with matrices of size \(n\times n\). Furthermore, this new reduced representation can be obtained directly from the original representation using the formulae specified in the theorem.
As previously mentioned, any TI state \(\psi\) admits a TI MPS representation with PBC. Taking this fact into account, we explicitly construct a search algorithm for \(d(\psi)\) for an arbitrary state \(\psi\) in **Theorem 4**. The algorithm allows us to deterministically obtain \(d(\psi)\) for a TI MPS with PBC representation of a given state.
The results of our numerical experiments for obtaining \(d(\psi)\) for the \(W\)-state using the algorithm from Theorem 4 hints towards the possibility that the estimate in Theorem 1 could be optimal.
## 2 Notation
Let us formulate the main definitions and objects with which we will work. The standard basis in the vector space \(\mathbb{C}^{2}\) comprises of vectors \((1,0)\) and \((0,1)\). Quantum computation uses the Dirac notation, where they are denoted by \(|0\rangle\) and \(|1\rangle\), respectively, and they are said to form the _computational basis_.
We consider the \(n\) tensor power of the space \(\mathbb{C}^{2}\) denoted by \((\mathbb{C}^{2})^{\otimes n}\), whose basis comprises all possible \(n\) tensor products of the standard basis vectors of \(\mathbb{C}^{2}\) of the form \(|i_{1}\rangle\otimes|i_{2}\rangle\ldots|i_{n}\rangle\), where \((i_{1},\ldots,i_{n})\in\{0,1\}^{n}\). In Dirac notation these products are denoted as \(|i_{1}\ldots i_{n}\rangle\). In other words, formal strings of length \(n\) comprising of numbers \(0\) and \(1\) encode the corresponding basis vectors. In total, there are \(2^{n}\) basis vectors in \((\mathbb{C}^{2})^{\otimes n}\).
We call an arbitrary vector from \((\mathbb{C}^{2})^{\otimes n}\) with unit norm a _quantum state_. We can interpret vectors with non-unit norm as non-normalized quantum states. Unless stated otherwise, the theorems in this paper apply both to normalized and non-normalized quantum states; when a result holds specifically for one of the two, we say so explicitly.
Every \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) can be decomposed in the computational basis with complex coefficients as:
\[|\psi\rangle=\sum_{(i_{1},\ldots,i_{n})\in\{0,1\}^{n}}c_{(i_{1},\ldots,i_{n}) }\,|i_{1}i_{2}\ldots i_{n}\rangle \tag{1}\]
or in a more compressed form
\[|\psi\rangle=\sum_{I\in\{0,1\}^{n}}c_{I}\,|I\rangle\,. \tag{2}\]
Thus, we denote the coefficients of the vector \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) as \(c_{I}^{\psi}\) or simply \(c_{I}\), if the state is clear from the context.
We call the quantum state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\)_translationally invariant_ or _TI state_, if the coefficients do not change under cyclic shifts of the basis vectors, i.e.
\[c_{(i_{1},i_{2},\ldots,i_{n})}=c_{(i_{2},i_{3},\ldots,i_{n},i_{1})}\ \forall(i_{1},\ldots,i_{n})\in\{0,1\}^{n}. \tag{3}\]
Let us denote the set of all TI states from \((\mathbb{C}^{2})^{\otimes n}\) as \(TI\text{-}(\mathbb{C}^{2})^{\otimes n}\).
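Condition (3) can be checked directly on a table of coefficients; a small sketch:

```python
from itertools import product

# Direct check of condition (3): a TI state's coefficient function is
# constant on each orbit of cyclic shifts.
def is_translation_invariant(c):
    return all(c[bits] == c[bits[1:] + bits[:1]] for bits in c)

n = 3
# coefficients of the (non-normalized) W-state of order 3: 1 on weight-1 strings
w_coeffs = {bits: (1 if sum(bits) == 1 else 0)
            for bits in product((0, 1), repeat=n)}
broken = dict(w_coeffs)
broken[(1, 0, 0)] = 7   # perturb one element of an orbit
# is_translation_invariant(w_coeffs) holds; is_translation_invariant(broken) fails
```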
For fixed \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) its coefficients can be considered as functions \(c:\{0,1\}^{n}\to\mathbb{C}\) or as a tensor. The latter makes it possible to consider Matrix Product States (MPS) representations of the given state. There are various forms of MPS representations, we will work in this paper with TI MPS representations with Periodic Boundary Conditions (PBC) of TI states.
Following is a general mathematical formulation of MPS representation with PBC for any \(\psi\in(\mathbb{C}^{2})^{\otimes n}\).
**Definition 1**.: MPS representation with PBC for \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) has the form
\[|\psi\rangle=\sum_{i_{1},\ldots,i_{n}=0}^{1}\Tr\Bigl[A_{i_{1}}^{[1]}A_{i_{2}}^{[2]}\cdots A_{i_{n}}^{[n]}\Bigr]|i_{1},i_{2},\ldots,i_{n}\rangle\,, \tag{4}\]
where \(A_{i_{k}}^{[k]}\) are complex matrices of dimension \(d_{k}\times d_{k+1}\) (with indices taken cyclically, so \(d_{n+1}=d_{1}\)).
When the matrices \(A_{i_{k}}^{[k]}=A_{i_{k}}\) for all \(k=\overline{1,n}\), they are said to be "site-independent"; we call such an MPS representation with PBC a TI (translationally invariant) MPS representation with PBC. In this case the dimensions of the matrices \(A_{i_{k}}\) coincide, that is, \(d_{k}=d\). The dimension \(d\) is also called the _bond dimension_. Given that an MPS representation with PBC is uniquely determined by the matrices used in the representation, we can reformulate the definition of TI MPS representation with PBC in the following convenient way.
**Definition 2**.: TI MPS representation with PBC of bond dimension \(d\) for the TI state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) is a pair of \(d\times d\) complex matrices \(A_{0}\) and \(A_{1}\), such that
\[\Tr(A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}})=c_{(i_{1},\ldots,i_{n})}\quad\forall(i _{1},\ldots,i_{n})\in\{0,1\}^{n}. \tag{5}\]
Note that by TI MPS with PBC we mean site-independent MPS representations for TI states with periodic boundary conditions. Some authors mention it explicitly, but conventionally it is omitted.
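Definition 2 can be exercised numerically. Note that, by cyclicity of the trace, any pair \((A_{0},A_{1})\) produces a translationally invariant coefficient function; the matrices below are random and serve only as an illustration.

```python
import numpy as np

# The coefficient c_{(i1,...,in)} = Tr(A_{i1} ... A_{in}) of Definition 2.
def mps_coefficient(A0, A1, bits):
    M = np.eye(A0.shape[0])
    for b in bits:
        M = M @ (A1 if b else A0)
    return np.trace(M)

rng = np.random.default_rng(1)
A0, A1 = rng.normal(size=(2, 3, 3))   # random 3x3 pair, fixed seed
bits = (0, 1, 1, 0, 1)
shifted = bits[1:] + bits[:1]
# by cyclicity of the trace:
# mps_coefficient(A0, A1, bits) == mps_coefficient(A0, A1, shifted)
```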
Due to the cyclicity of the trace, finding the maximal number of unique equations in (5) is equivalent to solving the famous necklace problem from combinatorics. Using Polya's enumeration theorem [12, 13], the maximal number of unique equations is:
\[\frac{1}{n}\sum_{p|n}\varphi(p)2^{n/p}, \tag{6}\]
where \(\varphi(p)\) is the Euler totient function.
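The count (6) can be cross-checked against a brute-force enumeration of binary necklaces:

```python
from math import gcd
from itertools import product

def totient(p):
    return sum(1 for a in range(1, p + 1) if gcd(a, p) == 1)

def necklace_count(n):
    # (1/n) * sum over divisors p of n of phi(p) * 2^{n/p}   -- formula (6)
    return sum(totient(p) * 2 ** (n // p)
               for p in range(1, n + 1) if n % p == 0) // n

def brute_force(n):
    # count orbits of {0,1}^n under cyclic shifts via canonical representatives
    reps = {min(bits[k:] + bits[:k] for k in range(n))
            for bits in product((0, 1), repeat=n)}
    return len(reps)
```

For \(n=4\) both functions return \(6\), the six binary necklaces \(0000\), \(0001\), \(0011\), \(0101\), \(0111\), \(1111\).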
TI MPS representations with PBC allow us to store information about the coefficients in two matrices and obtain them by taking the traces of the products of these matrices. There are also other forms of expressing coefficients in terms of MPS.
**Proposition 1**.: Let \(A_{0}\) and \(A_{1}\) determine the TI MPS representation with PBC for TI state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\). Then \(A_{0}^{\prime}=\sqrt[n]{\lambda}A_{0}\) and \(A_{1}^{\prime}=\sqrt[n]{\lambda}A_{1}\) determine TI MPS representation with PBC for \(\lambda\psi\) where \(\sqrt[n]{\lambda}\in\mathbb{C}\) can be any of the \(n\)-th roots of \(\lambda\).
Proof.: From linearity of trace we have
\[\Tr(A_{i_{1}}^{\prime}A_{i_{2}}^{\prime}\ldots A_{i_{n}}^{\prime})=\Tr(\sqrt[ n]{\lambda}A_{i_{1}}\sqrt[n]{\lambda}A_{i_{2}}\ldots\sqrt[n]{\lambda}A_{i_{n}})= \lambda\Tr(A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}}).\]
This proposition in particular allows one to search for TI MPS representations with PBC for quantum states in non-normalized form and obtain the representations of quantum states in the normalized form essentially for free. Specifically, the matrices for the normalized states can be obtained by multiplying a suitable constant factor to matrices obtained for the non-normalized states.
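A quick numerical illustration of Proposition 1, with random matrices and a fixed seed:

```python
import numpy as np

# Proposition 1, numerically: scaling both matrices by an n-th root of lambda
# scales every coefficient of the represented state by lambda.
def coeff(B0, B1, bits):
    M = np.eye(B0.shape[0])
    for b in bits:
        M = M @ (B1 if b else B0)
    return np.trace(M)

rng = np.random.default_rng(2)
A0, A1 = rng.normal(size=(2, 3, 3))
n, lam = 4, 5.0
s = lam ** (1.0 / n)            # a real n-th root of lambda
bits = (0, 1, 0, 1)
# coeff(s * A0, s * A1, bits) == lam * coeff(A0, A1, bits)
```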
Some quantum states are important in quantum theory, one such state is the \(W\)-state [14].
**Definition 3**.: The \(W\)-state of order \(n\) is defined as
\[|W_{n}\rangle=\frac{1}{\sqrt{n}}\sum_{\substack{(i_{1},\ldots,i_{n})\in\{0,1\}^{n}\\ \sum_{j=1}^{n}i_{j}=1}}|i_{1}\ldots i_{n}\rangle \tag{7}\]
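For concreteness, the \(W\)-state as an explicit vector, with basis strings enumerated in lexicographic order:

```python
import numpy as np
from itertools import product

# The normalized W-state of order n as a vector in (C^2)^{tensor n}.
def w_state(n):
    psi = np.zeros(2 ** n)
    for idx, bits in enumerate(product((0, 1), repeat=n)):
        if sum(bits) == 1:
            psi[idx] = 1.0 / np.sqrt(n)
    return psi

psi = w_state(3)
# exactly 3 nonzero amplitudes, each 1/sqrt(3); the vector has unit norm
```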
It is easy to see that the \(W\)-state is a TI state. The problem of finding a TI MPS representation with PBC for the \(W\)-state is formalized by the following definition:
**Definition 4**.: A TI MPS representation with PBC of bond dimension \(d\) for the \(W\)-state is a pair of \(d\times d\) complex matrices \(A_{0}\) and \(A_{1}\) such that
\[\Tr(A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}})=\begin{cases}0&\forall(i_{1},\ldots,i_ {n})\in\{0,1\}^{n}\text{ such that }\sum\limits_{j=1}^{n}i_{j}\neq 1\\ \frac{1}{\sqrt{n}}&\forall(i_{1},\ldots,i_{n})\in\{0,1\}^{n}\text{ such that }\sum\limits_{j=1}^{n}i_{j}=1\end{cases} \tag{8}\]
Proposition 1 allows us to search for TI MPS representations with PBC for a state multiplied by some constant instead of the original.
**Remark 1**.: We can find TI MPS representation with PBC of bond dimension \(d\) for non-normalized (multiplied by \(\sqrt{n}c\) where \(c\in\mathbb{C}/\{0\}\)) \(W\)-state of order \(n\) using complex matrices \(A_{0}\) and \(A_{1}\) such that
\[\Tr(A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}})=\begin{cases}0&\forall(i_{1},\ldots,i_ {n})\in\{0,1\}^{n}\text{ such that }\sum\limits_{j=1}^{n}i_{j}\neq 1\\ c&\forall(i_{1},\ldots,i_{n})\in\{0,1\}^{n}\text{ such that }\sum\limits_{j=1}^{n}i_{j}=1 \end{cases} \tag{9}\]
From our previous discussion, it is easy to see that we can obtain a TI MPS representation with PBC for the (normalized) \(W\)-state using \(A^{\prime}_{0}=\frac{1}{\sqrt[n]{c\sqrt{n}}}A_{0}\), \(A^{\prime}_{1}=\frac{1}{\sqrt[n]{c\sqrt{n}}}A_{1}\) with \(c=Tr(A_{0}^{n-1}A_{1})\).
**Definition 5**.: We call \(d(\psi)\) the minimal bond dimension \(d\) such that there exists TI MPS representation with PBC for the TI state \(\psi\).
From [1] it follows that \(d(\psi)\) is well defined (since a TI MPS representation with PBC exists for every TI state). If we have some state that is defined for each \(n\) (such as the \(W\)-state) then we can consider the function \(d(\psi(n))\), which for every fixed \(n\) is equal to the minimum dimension \(d\) such that for the corresponding state of order \(n\) there exists a TI MPS representation with PBC of bond dimension \(d\). In general, the problem of determining \(d(\psi)\) for specific states and its asymptotics, as well as the construction of matrices \(A_{0}\) and \(A_{1}\) on which the best scaling is achieved, is a difficult problem.
In paper [1], it was shown that for the \(W\)-state \(d(n)=O(n)\) with constant factor \(1\). In [11] it was proved that \(\forall\delta>0,\ d(n)=\Omega(n^{\frac{1}{3+\delta}})\). At the same time, the question regarding the exact asymptotics of \(d(\psi(n))\) and the constant factor remains open.
## 3 Approaches to building a TI MPS
In [15] it was shown that for any quantum state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) we can construct an MPS representation with PBC. Moreover, in [1] it was shown that for any state it is possible to construct an MPS with Open Boundary Conditions (OBC). Using Theorem 3 from [1] which connects TI MPS with PBC to MPS with OBC, it follows that we can construct TI MPS representation with PBC for any TI state.
However, using these general techniques, the bond dimension of the matrices in the resulting representation grows rapidly. For example, for MPS with OBC the dimension of the constructed matrices is \(d(\psi)=O(2^{\frac{n}{2}})\), and for TI MPS with PBC it is \(d(\psi)=O(n2^{\frac{n}{2}})\).
As far as is known to the authors of the article, there is no previously explored general way to find an exact estimate for the asymptotics of \(d(\psi)\) for a given state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) or a method for constructing an MPS representation with the dimensions of the matrices that will not grow too fast and whose dimensions will be closer to the theoretical \(d(\psi)\). Moreover, the way to build TI MPS with PBC using MPS with OBC from [1] has the dimensions of the matrices in the representation at best \(O(n)\) with constant factor 1. In principle, we cannot construct representations with better asymptotics or constant factor this way.
If we want to find a better TI MPS representation with PBC, in terms of the bond dimension, for a particular state, we need to look for a solution to the system of equations (5). This system contains an exponential number of equations that depend in a non-trivial way on the traces of various products of the matrices, and in the general case it is quite difficult to solve. Therefore, methods for finding optimal TI MPS with PBC are very important, and it is useful to have ways of constructing MPS at least for some classes of states.
Returning to the \(W\)-state, most of the coefficients in its basis decomposition are \(0\), which motivates using, for example, a nilpotent matrix as one of the matrices, so that a large number of the coefficients computed as \(\Tr(A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}})\) vanish. As we will show below, we can pick the matrix unit \(E_{i,j}\) (a matrix whose \((i,j)\)-th entry is one and the rest of the entries are zero) as one of the matrices and still obtain a considerably good solution.
Based on the Remark 1, it is easy to see that constructing a TI MPS representation with PBC for some non-normalized state allows one to construct a TI MPS representation with PBC of the same bond dimension with the original matrices, normalized by some constant.
Below we construct a TI MPS representation with PBC for the \(W\)-state using the idea of considering a matrix unit as one of the matrices. In order to do that we first prove the following lemma:
**Lemma 1** (Matrix unit lemma).: Let \(A\) be a \(d\times d\) matrix with elements from an arbitrary
field, \(r\in\mathbb{N}\). Then
\[E_{jk}A^{r}E_{jk}=(A^{r})_{kj}E_{jk}, \tag{10}\]
where \((A^{r})_{kj}\) is the \((k,j)-\)th entry of the matrix \(A^{r}\).
Proof.: \[E_{jk}A^{r}E_{jk}=E_{jk}(\sum\limits_{i,l}(A^{r})_{il}E_{il})E_{jk}=E_{jk}(\sum \limits_{i}(A^{r})_{ij}E_{ik})=(A^{r})_{kj}E_{jk}.\]
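A quick numerical check of the lemma (with 0-based indices, so \(E(j,k)\) has its single \(1\) at row \(j\), column \(k\)):

```python
import numpy as np

# Lemma 1, numerically: E_{jk} A^r E_{jk} = (A^r)_{kj} E_{jk}.
def E(j, k, d):
    M = np.zeros((d, d))
    M[j, k] = 1.0
    return M

rng = np.random.default_rng(3)
d, r, j, k = 4, 3, 0, 2
A = rng.normal(size=(d, d))
Ar = np.linalg.matrix_power(A, r)
lhs = E(j, k, d) @ Ar @ E(j, k, d)
rhs = Ar[k, j] * E(j, k, d)
# lhs and rhs coincide entrywise
```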
Now we are ready to formulate the theorem which provides a TI MPS representation with PBC for the \(W\)-state with lower bond dimension than ones known to the authors so far.
**Theorem 1**.: _For arbitrary \(n\in\mathbb{N}\) we can have the following TI MPS representation with PBC for the non-normalized \(W\)-state of order \(n\) using \(\left(\left\lfloor\frac{n}{2}\right\rfloor+1\right)\times\left(\left\lfloor \frac{n}{2}\right\rfloor+1\right)\) matrices:_
\[A_{0}=\begin{pmatrix}1&1&1&\dots&x(n)\\ 1&0&0&\ddots&0\\ 0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&0\\ 0&0&\dots&1&0\end{pmatrix},\quad A_{1}=\begin{pmatrix}0&0&0&\dots&1\\ 0&0&0&\ddots&0\\ 0&0&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&0\\ 0&0&0&\dots&0\end{pmatrix},\]
_where \(x(n)\in\mathbb{C}\) is one of the roots of the equation \(\text{Tr}\ (A_{0}^{n})=0\)._
_Furthermore, matrices \(A_{0}^{\prime}=\frac{2^{-\frac{n-\left\lfloor\frac{n}{2}\right\rfloor-2}{n}} }{\sqrt{n}}A_{0}\), \(A_{1}^{\prime}=\frac{2^{-\frac{n-\left\lfloor\frac{n}{2}\right\rfloor-2}{n}} }{\sqrt{n}}A_{1}\) determine a TI MPS representation with PBC for the normalized \(W\)-state of order \(n\)_
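The construction can be checked numerically for small \(n\); the following sketch is independent of the argument below. It uses the observation that a closed walk of length \(n<2d\) can traverse the \(x\)-entry of \(A_{0}\) at most once, so \(\Tr(A_{0}(x)^{n})\) is affine in \(x\) and the root \(x(n)\) can be found by solving a linear equation.

```python
import numpy as np
from itertools import product

# Numerical sanity check of the Theorem 1 construction for the W-state.
def theorem1_matrices(n):
    d = n // 2 + 1
    A0 = np.zeros((d, d))
    A0[0, :d - 1] = 1.0                             # first row of ones ...
    A0[np.arange(1, d), np.arange(d - 1)] = 1.0     # ... plus a subdiagonal
    def tr_pow(x):
        A = A0.copy(); A[0, d - 1] = x
        return np.trace(np.linalg.matrix_power(A, n))
    f0, f1 = tr_pow(0.0), tr_pow(1.0)
    A0[0, d - 1] = -f0 / (f1 - f0)                  # x(n): root of Tr(A0^n) = 0
    A1 = np.zeros((d, d)); A1[0, d - 1] = 1.0       # the matrix unit E_{1,d}
    return A0, A1

def check_w_traces(n):
    """Verify all 2^n trace equations of the non-normalized W-state."""
    A0, A1 = theorem1_matrices(n)
    const = 2.0 ** (n - n // 2 - 2)
    for bits in product((0, 1), repeat=n):
        M = np.eye(A0.shape[0])
        for b in bits:
            M = M @ (A1 if b else A0)
        want = const if sum(bits) == 1 else 0.0
        assert abs(np.trace(M) - want) < 1e-8
    return const
```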
Proof.: Here, \(A_{0}=(\sum\limits_{k=1}^{d-1}E_{1,k}+E_{k+1,k})+xE_{1d},A_{1}=E_{1d}\), where \(d=\left\lfloor\frac{n}{2}\right\rfloor+1\).
Without loss of generality, from (9) it follows that we have to verify that the following holds:
\[\text{Tr}(A_{i_{1}}A_{i_{2}}\dots A_{i_{n}})=\begin{cases}0&\forall(i_{1}, \dots,i_{n})\in\{0,1\}^{n}\text{ such that }\sum\limits_{j=1}^{n}i_{j}\neq 1\\ const&\forall(i_{1},\dots,i_{n})\in\{0,1\}^{n}\text{ such that }\sum\limits_{j=1}^{n}i_{j}=1 \end{cases}\]
and prove that \(const=2^{n-\left\lfloor\frac{n}{2}\right\rfloor-2}\).
We consider the following cases based on the number and positions of \(A_{1}\) in the products:
1. (**single \(A_{1}\))**\(I\in\{0,1\}^{n}\) such that \(\sum\limits_{j=1}^{n}i_{j}=1\).
Due to the cyclicity of the trace, our expression will have the form
\[\text{Tr}\big{(}A_{0}^{n-1}A_{1}\big{)}\]
Further,
\[\Tr\big(A_{0}^{n-1}A_{1}\big)=\Tr\big(A_{0}^{n-1}E_{1d}\big)=(A_{0}^{n-1})_{d1}.\]
In order to prove the needed result, we need to consider the following two properties of our matrices:
**Descending Row property:** For \(i\geq 2\), since \((A_{0})_{ij}=\delta_{ij+1}\) it follows that:
\[(A_{0}^{k+1})_{il}=\sum_{j}(A_{0})_{ij}(A_{0}^{k})_{jl}=(A_{0}^{k})_{i-1\,l}\ \ \text{for}\ \ \forall k\in\mathbb{N},\ i=\overline{1,k+1}. \tag{11}\]
Essentially, the rows of the matrix \(A_{0}\) "descend" upon taking subsequent powers.
**Column summation property:** Let \((C_{1},C_{2},\ldots,C_{d})\) be the columns of a matrix \(B\). It can be directly checked that the columns of \(BA_{0}\) are \((C_{1}+C_{2},C_{1}+C_{3},C_{1}+C_{4},\ldots,C_{1}+C_{d},xC_{1})\).
From the column summation property it follows that upon taking subsequent powers of \(A_{0}\) the variable \(x\) "migrates" to the position \((1,1)\) in \(d-1\) steps (multiplications), and from the descending row property it follows that \(x\) migrates to the position \((d,1)\) in another \(d-1\) steps. Prior to taking \(2d-2\) steps, all entries in the \((d,1)\) position are constants. This can be summarised as
\[(A_{0}^{m})_{d1}=\begin{cases}0,\ m=\overline{1,d-2}\\ 1,\ m=\overline{d-1,d}\\ 2^{k},\ m=d+k,\ k=\overline{1,d-2}\end{cases} \tag{12}\]
Taking \(m=n-1\), we conclude that the necessary \(d\) is such that \(n-1\leq 2d-2\), i.e. \(\frac{n+1}{2}\leq d\). Taking \(d=\left\lfloor\frac{n}{2}\right\rfloor+1\), we get \(const=\Tr(A_{0}^{n-1}A_{1})=(A_{0}^{n-1})_{d1}=2^{n-\left\lfloor\frac{n}{2}\right\rfloor-2}\).
2. (**no \(A_{1}\)**) \(I=\{0\}^{n}\). From the column summation property it directly follows that \((A_{0}^{m+1})_{dd}=x(n)(A_{0}^{m})_{d1}\). \[(A_{0}^{m})_{dd}=\begin{cases}0,\ m=\overline{1,d-2}\\ x(n),\ m=\overline{d-1,d}\\ 2^{k}x(n),\ m=d+k,\ k=\overline{1,d-2}\end{cases} \tag{13}\] From the above it follows that if \(\frac{n+1}{2}\leq d\), \(\Tr(A_{0}A_{0}\ldots A_{0})\) is either \(0\) or contains at least one term \(2^{l}x(n)\). Owing to the column summation property, no negative terms arise in the equation \(\Tr(A_{0}A_{0}\ldots A_{0})=0\). Since we are working over the field of complex numbers, the equation always has a solution. This solution determines the value of \(x(n)\).
3. **(consecutive \(A_{1}\))** \(I\in\{0,1\}^{n}\) such that \(\exists k:i_{k}=i_{k+1}=1\) or \(i_{1}+i_{n}=2\). From the nilpotency of \(A_{1}\), the trace in this case is trivially \(0\).
* **(sparse \(A_{1}\))** \(I\in\{0,1\}^{n}\) such that \(\not\exists\ k:i_{k}=i_{k+1}=1\) and \(i_{1}+i_{n}\neq 2\). Up to a cyclic shift, which leaves the trace invariant, every such \(I\) with \(l>0\) ones has the form \(I=(1,\underbrace{0,\ldots,0}_{p_{1}},1,\underbrace{0,\ldots,0}_{p_{2}},\ldots,1,\underbrace{0,\ldots,0}_{p_{l}})\) with \(p_{m}>0\) and \(\sum p_{m}=n-l\). The corresponding trace factorises as \(\prod_{m=1}^{l}(A_{0}^{p_{m}})_{d1}\); for \(l=1\) this reproduces the constant computed above, while for \(l\geq 2\) at least one \(p_{m}\leq d-2\), so by (12) the product vanishes, in agreement with the coefficients of the \(W\)-state. **Theorem 2**.: _Let \(A_{1}=\lambda E_{j,k}\) with \(j\neq k\) and \(\lambda\in\mathbb{C}\setminus\{0\}\), and let \(A_{0}\) be an arbitrary matrix. Then \(A_{0}\) and \(A_{1}\) form a TI MPS representation with PBC of \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) if and only if:_ 1) _\(c_{I}=0\) whenever \(I\) contains two (cyclically) consecutive ones;_ 2) _\(\operatorname{Tr}A_{0}^{n}=c_{(0,\ldots,0)}\) and_ \[\lambda^{l}\prod_{m=1}^{l}(A_{0}^{p_{m}})_{k,j}=c_{I}\quad\text{for all}\ I=(1,\underbrace{0,\ldots,0}_{p_{1}},1,\underbrace{0,\ldots,0}_{p_{2}},\ldots,1,\underbrace{0,\ldots,0}_{p_{l}}),\] (15)
_where \(p_{m}>0,\ l>0\) and \(\sum p_{m}=n-l\)._
Proof.: As usual, for \(A_{0}\) and \(A_{1}\) to be a TI MPS representation with PBC of the state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) it is necessary and sufficient for each of the following equations to hold:
\[\Tr(A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}})=c_{(i_{1},\ldots,i_{n})}\quad\forall(i_ {1},\ldots,i_{n})\in\{0,1\}^{n}. \tag{16}\]
Let us divide the system of equations (16) into 3 classes of equations and let us prove that the conditions (15) are necessary and sufficient for (16) to hold using the approach that was used in the theorem above.
1. (**no \(A_{1}\)**) \(I=\{0\}^{n}\). The first point in condition 2) coincides with the corresponding equation in (16).
2. **(consecutive \(A_{1}\))** \(I\in\{0,1\}^{n}\) such that \(\exists k:i_{k}=i_{k+1}=1\) or \(i_{1}+i_{n}=2\). In analogy to Theorem 1, condition 1) is necessary and sufficient for the corresponding equation due to the nilpotency of \(A_{1}\).
3. **(sparse \(A_{1}\))** \(I\in\{0,1\}^{n}\) such that \(\not\exists\ k:i_{k}=i_{k+1}=1,i_{1}+i_{n}\neq 2\). In a manner similar to the deduction in Theorem 1 and using 1, we arrive at \[\Tr(A_{1}A_{0}^{p_{1}}A_{1}A_{0}^{p_{2}}\ldots A_{1}A_{0}^{p_{l}})=\lambda^{l}(A_{0}^{p_{1}})_{kj}(A_{0}^{p_{2}})_{kj}\ldots(A_{0}^{p_{l}})_{kj}.\] (17) It follows that the second point in condition 2) is equivalent to this class of equations being true.
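The identity (17) uses only the fact that \(A_{1}\) is a scaled matrix unit, so it can be sanity-checked numerically with an arbitrary \(A_{0}\). The sketch below (with arbitrarily chosen dimension, scaling, and indices) compares both sides for several exponent tuples \((p_{1},\ldots,p_{l})\).

```python
import numpy as np

rng = np.random.default_rng(0)

d, lam, j, k = 4, 0.7, 1, 3          # A_1 = lam * E_{j,k}, 0-based indices, j != k
A0 = rng.standard_normal((d, d))      # arbitrary A_0
A1 = np.zeros((d, d))
A1[j, k] = lam

def lhs(ps):
    """Tr(A_1 A_0^{p_1} A_1 A_0^{p_2} ... A_1 A_0^{p_l})."""
    M = np.eye(d)
    for p in ps:
        M = M @ A1 @ np.linalg.matrix_power(A0, p)
    return np.trace(M)

def rhs(ps):
    """lam^l * prod_m (A_0^{p_m})_{k,j}, the right-hand side of (17)."""
    out = lam ** len(ps)
    for p in ps:
        out *= np.linalg.matrix_power(A0, p)[k, j]
    return out

checks = [abs(lhs(ps) - rhs(ps)) for ps in [(2,), (1, 3), (2, 2, 1), (1, 1, 1, 2)]]
```

Each entry of `checks` is zero up to floating-point error, confirming that the trace collapses to a product of single matrix entries.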
This theorem opens up a simpler and more efficient way for us to construct TI MPS representations for a large class of states, which has a number of advantages over the general methods for constructing TI MPS representations.
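As a sanity check on the \(W\)-state construction above, the sketch below verifies the pattern (12) numerically and then solves \(\operatorname{Tr}(A_{0}^{n})=0\) for \(x(n)\). The explicit form of \(A_{0}\) used here — first row \((1,\ldots,1,x)\) with ones on the subdiagonal — is an assumption reconstructed from the descending-row and column-summation properties, not a form quoted verbatim from the text.

```python
import sympy as sp

def build_A0(d, x):
    """A_0 implied by the two properties: first row (1, ..., 1, x),
    ones on the subdiagonal, zeros elsewhere."""
    A = sp.zeros(d, d)
    for j in range(d - 1):
        A[0, j] = 1
    A[0, d - 1] = x
    for i in range(1, d):
        A[i, i - 1] = 1
    return A

x = sp.symbols('x')

# Pattern (12): (A_0^m)_{d1} = 0 for m <= d-2, 1 for m in {d-1, d},
# and 2^k for m = d+k, k = 1..d-2 (independent of x in this range).
d = 5
A0 = build_A0(d, x)
pattern = {}
P = sp.eye(d)
for m in range(1, 2 * d - 1):
    P = P * A0
    pattern[m] = sp.expand(P[d - 1, 0])

# Solving Tr(A_0^n) = 0 for x(n), taking n = 5 and d = floor(n/2)+1 = 3.
n = 5
trace_poly = sp.expand((build_A0(3, x) ** n).trace())  # a linear polynomial in x here
x_n = sp.solve(trace_poly, x)
```

For \(n=5\) this yields a single nonzero solution \(x(5)\), consistent with the claim that the trace equation always has a solution over \(\mathbb{C}\).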
In the formulation of Theorem 2 we can interchange 0 and 1 (i.e. \(A_{0}\leftrightarrow A_{1}\) and \(I=(i_{1},i_{2},\ldots,i_{n})\leftrightarrow I^{\prime}=(1-i_{1},1-i_{2}, \ldots,1-i_{n})\)). The proof of the theorem will remain the same. All results below will also hold upon such interchange.
If the state \(\psi\) satisfies the system of equations (15) with matrices \(A_{0}\) and \(A_{1}\), then we can denote \(\gamma_{i}=(A_{0}^{i})_{k,j}\), and all equations except one become equations in the \(n-1\) variables \(\gamma_{1},\ldots,\gamma_{n-1}\). This naturally leads to the question of whether we can use lower-dimensional matrices instead of the ones guaranteed by Theorem 2. It turns out that for an arbitrary state admitting a TI MPS representation with PBC with a scaled matrix unit, one can always construct an \(n\)-dimensional representation.
**Theorem 3** (Canonical construction of the TI MPS representation with PBC with a scaled matrix unit).: _Let \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) admit a TI MPS representation with PBC with a scaled matrix unit, with matrices \(B_{1}=\lambda E_{j,k},B_{0}\), where \(\lambda\in\mathbb{C}\setminus\{0\}\). Then \(\psi\) admits a TI MPS representation with PBC with a scaled matrix unit using matrices of dimension \(n\times n\) of the following form:_
\[A_{0}=\lambda\begin{pmatrix}0&\gamma_{n-1}&\gamma_{n-2}&\gamma_{n-3}&\dots&\gamma_ {1}\\ 0&\omega&1&0&\ddots&0\\ 0&0&0&1&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\ddots\\ 0&0&\dots&0&0&0\end{pmatrix},\quad A_{1}=\lambda\begin{pmatrix}0&0&0&0&\dots&0 \\ 0&0&0&0&\ddots&0\\ 0&0&0&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\ddots\\ \vdots&\ddots&\ddots&\ddots&0&0\\ 1&0&\dots&0&0&0\end{pmatrix}\]
_where \(\gamma_{1},\dots,\gamma_{n-1},\omega\in\mathbb{C}\) satisfy \(\operatorname{Tr}A_{0}^{n}=\operatorname{Tr}B_{0}^{n}=(\lambda\omega)^{n}\), \((A_{0}^{q})_{1,n}=(B_{0}^{q})_{k,j}=\lambda^{q}\gamma_{q}\) for all \(q=\overline{1,n-1}\)._
Proof.: From the assumptions of the theorem, we have a matrix pair \(B_{1}=\lambda E_{jk}\) and \(B_{0}\), which together with \(\psi\) satisfy the conditions of Theorem 2. We can consider according to Remark 1 a similar TI MPS representation with PBC with a scaled matrix unit for \(\dfrac{1}{(\lambda)^{n}}\psi\) using the matrices \(C_{1}=\dfrac{1}{\lambda}B_{1}=E_{j,k}\) and \(C_{0}=\dfrac{1}{\lambda}B_{0}\). It suffices for us to prove that the matrices \(\dfrac{1}{\lambda}A_{1}=E_{n,1}\) and \(\dfrac{1}{\lambda}A_{0}\) for some \(\gamma_{1},\dots,\gamma_{n-1}\) and \(\omega\) determine a TI MPS representation with PBC for \(\dfrac{1}{(\lambda)^{n}}\psi\). To prove this, we need to check that Theorem 2 holds for \(D_{0}=\dfrac{1}{\lambda}A_{0},D_{1}=\dfrac{1}{\lambda}A_{1}\) and \(\psi^{\prime}=\dfrac{1}{(\lambda)^{n}}\psi\). Additional properties of the relation between the matrices \(A_{0}\) and \(B_{0}\) will be proved in a related way in the process.
We can rewrite \(D_{0}\) in the language of matrix units
\[D_{0}=\sum_{j=1}^{n-1}\gamma_{n-j}E_{1,j+1}+\sum_{j=2}^{n-1}E_{j,j+1}+\omega E _{2,2}.\]
We can check the following conditions of Theorem 2 for \(D_{0},D_{1}\) and \(\psi^{\prime}\).
1. (\(c_{I}^{\psi^{\prime}}=0\) if two (cyclically) consecutive ones appear in \(I\)). From the assumptions of the theorem, \(B_{0}\) and \(B_{1}\) determine a TI MPS representation with PBC with a scaled matrix unit for \(\psi\). Therefore, owing to condition 1) from Theorem 2, which states that \(c_{I}^{\psi}=0\), it follows that \(c_{I}^{\psi^{\prime}}=\dfrac{1}{(\lambda)^{n}}c_{I}^{\psi}=0\).
2. (\(\operatorname{Tr}D_{0}^{n}=c_{(0,0,\dots,0)}^{\psi^{\prime}}\)) All the terms in the matrix unit decomposition of \(D_{0}\) other than \(\omega E_{2,2}\) are scaled matrix units \(E_{k,j}\) with \(k<j\). Therefore, due to \(E_{rs}E_{sl}=E_{rl}\), it follows that \(\operatorname{Tr}(D_{0}^{n})=\omega^{n}\), since the other matrix units in the decomposition cannot contribute to the diagonal. Therefore, to fulfill the condition, we only need to put \(\omega=\sqrt[n]{c_{(0,\ldots,0)}^{\psi^{\prime}}}\). Additionally, the condition from the theorem, \(\operatorname{Tr}A_{0}^{n}=\operatorname{Tr}B_{0}^{n}=(\lambda\omega)^{n}\), can easily be obtained by expressing \(A_{0}\) as \(\lambda D_{0}\) and \(B_{0}\) as \(\lambda C_{0}\).
3. (\(\prod\limits_{m=1}^{l}(D_{0}^{p_{m}})_{1,n}=c_{I}^{\psi^{\prime}}\))
We already have a pair of matrices \(C_{1}\) and \(C_{0}\), which together with \(\psi^{\prime}\) satisfies the conditions of Theorem 2 (with the scaling factor of the matrix unit \(C_{1}\) being \(1\)) and hence \(\prod\limits_{m=1}^{l}(C_{0}^{p_{m}})_{k,j}=c_{I}^{\psi^{\prime}}\) for all \(I=(1,\underbrace{0,\ldots,0}_{p_{1}},1,\underbrace{0,\ldots,0}_{p_{2}}, \ldots,1,\underbrace{0,\ldots,0}_{p_{l}})\) where \(p_{m}>0,l>0\) and \(\sum p_{m}=n-l\).
Assume \(\gamma_{q}=(C_{0}^{q})_{k,j}\), then to prove this condition for \(D_{1}\) and \(D_{0}\) it is sufficient to prove that \((D_{0}^{q})_{1,n}=\gamma_{q}\) when \(q=\overline{1,n-1}\).
Note that
\[D_{0}=\gamma_{n-(2-1)}E_{1,2}+\gamma_{n-(3-1)}E_{1,3}+\ldots \gamma_{2}E_{1,n-1}+\\ +\gamma_{n-(n-1)}E_{1,n}+E_{2,3}+E_{3,4}+\ldots+E_{n-1,n}+\omega E _{22} \tag{18}\]
It is evident that
\[(D_{0}^{q})_{1,n} =(D_{0}(\sum\limits_{j=2}^{n-1}E_{j,j+1}+\omega E_{2,2})^{q-1})_{ 1,n}\] \[=(\gamma_{n-(n-(q-1)-1)}E_{1,n-(q-1)}E_{n-(q-1),n})_{1,n} \tag{19}\] \[=\gamma_{q}.\]
The first equality follows from the rule of matrix unit multiplication and the fact that in (18) for all \(E_{i,j}\) we have \(j>1\). The second equality holds since we only have to pick \(E_{n-1,n}\) from the last bracket as we are concerned with the element \((D_{0}^{q})_{1,n}\), each multiplication starting from the right reduces the row number of the matrix unit by \(1\). For \(\omega\) (the coefficient of \(E_{22}\)) to enter the expression (19), \(2\geq n-(q-2)\) or \(q\geq n\) which does not hold as we have \(q=\overline{1,n-1}\). After \(q-1\) multiplications we are left with the term \(E_{n-(q-1),n}\). Thus, \((A_{0}^{q})_{1,n}=((\lambda D_{0})^{q})_{1,n}=\lambda^{q}\gamma_{q}\).
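The canonical construction can be verified numerically end to end. The sketch below takes \(\lambda=1\) and a random complex \(B_{0}\) for simplicity (so the state is simply the one defined by the coefficients \(c_{I}=\operatorname{Tr}(B_{i_{1}}\ldots B_{i_{n}})\), which automatically satisfy condition 1) of Theorem 2 by nilpotency of \(B_{1}\)), builds the \(n\times n\) matrices of Theorem 3, and compares all \(2^{n}\) coefficients.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, d, j, k = 5, 3, 0, 2                 # B_1 = E_{j,k} (lambda = 1), 0-based, j != k
B0 = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B1 = np.zeros((d, d), dtype=complex)
B1[j, k] = 1.0

# Canonical n x n matrices of Theorem 3 (with lambda = 1):
gamma = [np.linalg.matrix_power(B0, q)[k, j] for q in range(1, n)]   # gamma_q
omega = np.trace(np.linalg.matrix_power(B0, n)) ** (1.0 / n)         # any n-th root
A0 = np.zeros((n, n), dtype=complex)
for q in range(1, n):
    A0[0, n - q] = gamma[q - 1]          # first row: (0, gamma_{n-1}, ..., gamma_1)
for i in range(1, n - 1):
    A0[i, i + 1] = 1.0                   # superdiagonal units E_{j,j+1}, j = 2..n-1
A0[1, 1] = omega                          # omega * E_{2,2}
A1 = np.zeros((n, n), dtype=complex)
A1[n - 1, 0] = 1.0                        # E_{n,1}

def coeff(mats, bits):
    """Tr(M_{i_1} ... M_{i_n}) for a bit string (i_1, ..., i_n)."""
    M = np.eye(mats[0].shape[0], dtype=complex)
    for b in bits:
        M = M @ mats[b]
    return np.trace(M)

err = max(abs(coeff([B0, B1], I) - coeff([A0, A1], I))
          for I in itertools.product((0, 1), repeat=n))
```

All \(2^{n}\) coefficients agree up to floating-point error, as the proof predicts.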
Thus, for all states for which it is possible to construct a TI MPS representation with PBC with scaled matrix unit, it is possible to do so using matrices of size \(n\times n\). This gives rise to additional possibilities for the construction or verification of TI MPS with PBC. Let us consider an example to illustrate this.
**Example 1**.: A scaled upper-shift matrix \(A_{0}=\frac{1}{\sqrt[2n]{n}}\sum\limits_{k=1}^{n-1}E_{k,k+1}\) and a scaled matrix unit \(A_{1}=\frac{1}{\sqrt[2n]{n}}E_{n,1}\) determine a TI MPS representation with PBC for the \(W\)-state of order \(n\).
Below we provide sketches of several proofs of this fact to illustrate how the developed apparatus can be used:
1) We can use the definition of TI MPS with PBC and leverage the upper-shift matrix property of "raising the rows" of the matrix unit. This is not a very complicated proof, but would require some minimal verification nonetheless.
2) We can use Theorem 2. It boils down to considering fairly obvious properties of the powers of the upper-shift matrix making the proof almost trivial.
3) Finally, we can use the fact that this example is exactly a special case of the construction in Theorem 3 with coefficients that are directly obtained from the representation from Theorem 1. Theorem 3 provides a less efficient representation than Theorem 1, which however does not prevent one from using this approach here in purely formal terms and getting the results immediately.
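A brute-force verification of Example 1 in the spirit of sketch 1): all \(2^{n}\) coefficients \(\operatorname{Tr}(A_{i_{1}}\ldots A_{i_{n}})\) are computed directly and compared against the \(W\)-state amplitudes, which are \(1/\sqrt{n}\) on every weight-one basis vector and \(0\) elsewhere.

```python
import itertools
import numpy as np

n = 6
s = n ** (-1.0 / (2 * n))                      # the scaling 1 / n^{1/(2n)}
A0 = s * np.diag(np.ones(n - 1), 1)            # scaled upper-shift matrix
A1 = np.zeros((n, n))
A1[n - 1, 0] = s                               # scaled matrix unit E_{n,1}

def coeff(bits):
    M = np.eye(n)
    for b in bits:
        M = M @ (A1 if b else A0)
    return np.trace(M)

# Compare every trace against the W-state coefficient of that basis vector.
err = max(abs(coeff(I) - (1 / np.sqrt(n) if sum(I) == 1 else 0.0))
          for I in itertools.product((0, 1), repeat=n))
```

The check passes because \(s^{n}=n^{-1/2}\) and the only nonvanishing unscaled traces are those of weight-one strings.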
**Remark 3**.: Theorem 3 guarantees that a representation can be constructed with matrices of dimension \(n\) (which implies that \(d(\psi)\leq n\)) for those states \(\psi\) that admit a TI MPS representation with PBC with a scaled matrix unit. Thus, it allows one to construct a more efficient representation once any representation with a scaled matrix unit has been obtained. Finally, we can look for a representation directly in the canonical form of Theorem 3. This potentially allows us to search for a solution in a class of matrices of a fairly simple form with a reduced number of degrees of freedom. This especially makes sense if we know that a representation with a matrix unit exists for a given state.
Let us put together the differences between the usual TI MPS with PBC search and the search for representations using matrix units.
1. The resulting system of equations is significantly simpler. Due to the nilpotency of \(A_{i}\), a large number of equations automatically become true for TI states for which 1) holds in Theorem 2. The remaining equations include only a single element of the matrix \(A_{0}\) raised to various powers, the maximum of which is \(n.\) In other words, everything depends on \((A_{0})_{k,j},(A_{0}^{2})_{k,j},\ldots,(A_{0}^{n-1})_{k,j}\) (except for an equation for the trace of the \(n\)-th power of the matrix \(A_{0}\)). Thus, we get rid of the need to consider the traces of different products of these matrices and pass to a simpler system of equations.
2. All equations in the system (except one), when using Theorem 2, depend on the number of all possible \(I=(1,\underbrace{0,\ldots,0}_{p_{1}},1,\underbrace{0,\ldots,0}_{p_{2}},\ldots,1,\underbrace{0,\ldots,0}_{p_{l}})\). After discarding identical and incompatible ones, and using the fact that any permutation of \((p_{1},p_{2},\ldots,p_{l})\) does not change the LHS of the equation \[\lambda^{l}\prod_{m=1}^{l}(A_{0}^{p_{m}})_{k,j}=c_{I},\] (20) we can estimate the number of equations from above by the number of partitions of \(n=\sum_{m=1}^{l}p_{m}+l\). This number is asymptotically bounded from above as \(O(\frac{e^{c_{0}\sqrt{n}}}{n})\), where \(c_{0}=\pi\sqrt{\frac{2}{3}}\), by the Hardy-Ramanujan asymptotic partition formula [16]. This is significantly lower than the number of different equations in (5), which is \(O(\frac{1}{n}\sum_{p|n}\varphi(p)2^{n/p})\) by (6). This becomes clearer when comparing the asymptotics of the logarithms of the number of equations; in the case of the classical formulation, it grows as \(O(n)\), while in the case of Theorem 2 it is bounded from above by \(O(\sqrt{n})\). Furthermore, we can reduce the dimensionality of the matrices in a representation to \(n\), if the current representation has a bond dimension greater than \(n\), using Theorem 3. In fact, those states for which such an approach is applicable are guaranteed to allow the construction of representations with a small number of variables and smaller matrix dimensions compared with those guaranteed in the general case, which improves the quality of these representations and simplifies the search.
3. In itself, the use of matrix units is convenient in that the matrix has only one non-zero element, which potentially facilitates its storage in memory. With another sufficiently sparse matrix paired with it, we obtain an MPS that can be stored efficiently in terms of memory. For any state allowing such a representation, there exists a representation whose matrices contain \(O(n)\) non-zero elements, according to Theorem 3, which is far smaller than the potential maximum of \(O(d^{2})\).
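The two counts compared in item 2 — the number of partitions of \(n\) versus the binary-necklace count \(\frac{1}{n}\sum_{p|n}\varphi(p)2^{n/p}\) — can be tabulated directly. The helpers below are a straightforward sketch, not an optimized implementation.

```python
from functools import lru_cache
from math import gcd

@lru_cache(maxsize=None)
def partitions(n, largest=None):
    """Number of partitions of n into parts of size at most `largest`."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, largest) + 1))

def euler_phi(m):
    """Euler's totient, by direct counting (fine for small m)."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def necklaces(n):
    """(1/n) * sum_{p | n} phi(p) * 2^{n/p}: binary necklaces of length n."""
    return sum(euler_phi(p) * 2 ** (n // p) for p in range(1, n + 1) if n % p == 0) // n

# Partition count grows like e^{c0 sqrt(n)} / n; necklace count like 2^n / n.
table = {m: (partitions(m), necklaces(m)) for m in (8, 16, 24, 32)}
```

Already at \(n=16\) the gap is an order of magnitude (231 partitions versus 4116 necklaces), and it widens exponentially.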
The set of TI states that admit a TI MPS representation with PBC with a scaled matrix unit as one of the matrices forms a subset of all TI states. This subset is not empty (e.g., by Theorem 1). But not all TI states have such a representation. For example, we can consider \(\psi\) with \(c_{I}^{\psi}=1\ \forall I\). It can be observed that \(\psi\) is a \(TI\) state, but at the same time does not satisfy condition 1) in Theorem 2, and therefore does not admit a TI MPS representation with PBC with a scaled matrix unit.
Those TI states \(\psi\) which allow a TI MPS representation with a scaled matrix unit must satisfy condition 1) in Theorem 2; that is, the basis vectors \(|i_{1}i_{2}\ldots i_{n}\rangle\) with non-zero coefficients must be "sparse". In this case the TI state itself does not have to be highly sparse in the usual sense of the word, although, according to the above, a large number of coordinates vanish. From the point of view of certain heuristic considerations (in particular, the natural possibility to nullify many coordinates), this way of obtaining TI MPS representations should suit well those states which are
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & TI MPS with PBC in general form by definition & TI MPS with PBC with scaled matrix unit by Theorem 2 \\ \hline Searching for a representation & Solving a large system of equations on the traces of the product of matrices & Solving a system of elementary polynomial equations for a certain position in the matrix in different degrees \\ \hline For which states can be constructed & All & A certain class of "sparse" states in terms of basis vectors with nonzero coefficients \\ \hline The dimensionality of the best known general construction for all states that admit such a representation & \(O(n2^{\frac{n}{2}})\) & \(O(n)\) \\ \hline Logarithm of the number of equations to find TI MPS & \(O(n)\) & \(O(\sqrt{n})\) \\ \hline Number of representation matrix elements to store & \(2d^{2}\) & \(d^{2}+1\), by canonical construction can be reduced to \(O(n)\) \\ \hline \end{tabular}
\end{table}
Table 1: Table comparing the straightforward construction of TI MPS with PBC in the general case and TI MPS with PBC with scaled matrix unit
sparse in the usual sense of the word (for example, the \(W\)-state, for which such a construction works well since it has only \(n\) nonzero coordinates out of the \(2^{n}\) possible).
This leads to the following open problem:
**Problem 1**.: Does there exist an explicit description of the set of all TI states \(\psi\in(\mathbb{C}^{2})^{\otimes n}\), which have TI MPS representation with PBC in which one of the matrices is a scaled matrix unit \(\lambda E_{i,j},\ i\neq j\) (which is equivalent to satisfying the conditions of Theorem 2)?
We can describe this set as precisely those TI states \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) for which there exists a matrix \(A_{0}\) satisfying the conditions of Theorem 2. However, this description still involves the search for such a matrix, so a simpler, explicit description of this set is preferable.
## 4 Algorithmic finding of the optimal bond dimension in the general case
The methods described above for finding a TI MPS representation with PBC for the \(W\)-state, or more generally for a certain class of TI states, do not work for all states. Furthermore, they are not guaranteed to be optimal for \(d(\psi)\) and, as far as is known to the authors, deterministic methods to construct optimal TI MPS representations with PBC do not exist at the moment. We build a simple method to determine the optimal \(d(\psi)\) and estimate its asymptotic running time.
To this end, we will need Hilbert's Nullstellensatz [17, 18]. We state its weak form below:
**Weak Nullstellensatz**.
_The ideal \(I\subset k[X_{1},\ldots,X_{n}]\) contains \(1\) if and only if the polynomials in \(I\) do not have any common zeros in \(K^{n}\), where \(K\) is an algebraically closed field extension of \(k\)._
This theorem establishes that a system of polynomial equations has no solution if and only if the reduced Gröbner basis of the ideal it generates contains \(1\), which makes it possible to check the existence of solutions for systems of polynomial equations deterministically via the construction of Gröbner bases [19].
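A small illustration of this solvability test using sympy's Gröbner basis routine: the system \(\{xy-1,\ x\}\) has no common zeros over \(\mathbb{C}\) (so its basis is \(\{1\}\)), while \(\{xy-1,\ x-1\}\) is solvable.

```python
import sympy as sp

x, y = sp.symbols('x y')

# No common zeros over C: x*y = 1 forces x != 0, contradicting x = 0,
# and indeed y*(x) - (x*y - 1) = 1, so 1 lies in the ideal.
G_empty = sp.groebner([x * y - 1, x], x, y, order='lex')

# A solvable system: its unique common zero is x = 1, y = 1.
G_solv = sp.groebner([x * y - 1, x - 1], x, y, order='lex')

has_solution_empty = 1 not in G_empty.exprs   # False: the basis is {1}
has_solution_solv = 1 not in G_solv.exprs     # True
```

The basis of the solvable system comes out as \(\{x-1,\ y-1\}\), directly exhibiting the solution.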
Since the system of equations 5 that the matrices \(A_{0}\) and \(A_{1}\) must satisfy in order to be TI-MPS with PBC for a given state \(\psi\) is a polynomial system of equations in the elements of these matrices, we can formulate the following theorem.
**Theorem 4**.: _For a TI state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\) one can deterministically find \(d(\psi)\) with time complexity \(O(n^{2^{Mn^{2}2^{n}(1+o(1))}})\), where \(M\) is some constant, according to the following algorithm:_
_Starting from \(d=1\), we iteratively search for the Gröbner basis for the system (5) with respect to the matrices \(A_{0}\) and \(A_{1}\) with arbitrary elements, that is, with \(2d^{2}\) unknowns._
_If this basis contains \(1\), we move to \(d+1\)._
_In the case of absence of \(1\) in the basis, \(d(\psi)=d\)._
Proof.: We know that \(d(\psi)\in O(n2^{\frac{n}{2}})\) (which follows from Theorem 1 and Theorem 3 from [1]).
According to the algorithm described in the theorem, we certainly reach a \(d\) for which there is a TI MPS with PBC representation with matrices of this dimension, which requires no more than \(O(n2^{\frac{n}{2}})\) updates of \(d\) (since \(d(\psi)\in O(n2^{\frac{n}{2}})\)). It follows from the Weak Nullstellensatz that the algorithm is correct, that is, the first time the Gröbner basis does not contain \(1\), we have non-trivial solutions, that is, matrices \(A_{0}\) and \(A_{1}\) that determine a TI MPS representation with PBC. The transition of the algorithm from the minimum possible \(d\) to the next minimal unchecked \(d+1\) guarantees that the first \(d\) found is optimal.
Let us estimate the asymptotics of the algorithm. The complexity of the Buchberger algorithm to find the Gröbner basis can be estimated as \(O(k^{2^{m+o(m)}})\) with a maximum degree of polynomials \(k\) and number of variables \(m\) [20]. In our case, at all steps \(k=n\) (which follows from the fact that in the product of \(n\) matrices all elements are polynomials of degree \(n\) in the matrix elements), and \(m=2d^{2}\). Thus, the time complexity of each step is \(O(n^{2^{2d^{2}+o(d^{2})}})\). In the worst case, the complexity of all steps is \(O(\sum\limits_{d=1}^{Cn2^{\frac{n}{2}}}n^{2^{2d^{2}+o(d^{2})}})\), where \(C\) is a constant, which can be roughly bounded from above by \(O(Cn2^{\frac{n}{2}}\,n^{2^{2(Cn2^{\frac{n}{2}})^{2}(1+o(1))}})\), which can be estimated as \(O(n^{2^{Mn^{2}2^{n}(1+o(1))}})\), where \(M\) is a constant.
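The first iteration of the algorithm can be illustrated for the (unnormalised, for simplicity) \(W\)-state with \(n=3\): at \(d=1\), the trace equations in the two scalar unknowns have Gröbner basis \(\{1\}\), so the algorithm rejects \(d=1\) and moves on to \(d=2\), where a representation exists by Theorem 1. The code below sketches one step of the search, not the full algorithm.

```python
import itertools
import sympy as sp

n = 3
# Unnormalised W-state targets: c_I = 1 for weight-one I, 0 otherwise.
target = {I: (1 if sum(I) == 1 else 0) for I in itertools.product((0, 1), repeat=n)}

def system(d):
    """Trace equations Tr(A_{i_1}...A_{i_n}) - c_I for symbolic d x d matrices."""
    A = [sp.Matrix(d, d, sp.symbols(f'a{i}_0:{d * d}')) for i in (0, 1)]
    eqs = set()                      # cyclic shifts of I yield identical equations
    for I, c in target.items():
        M = sp.eye(d)
        for b in I:
            M = M * A[b]
        eqs.add(sp.expand(M.trace() - c))
    vars_ = [v for Ai in A for v in Ai]
    return list(eqs), vars_

eqs1, vars1 = system(1)              # d = 1: four equations in two scalars
G1 = sp.groebner(eqs1, *vars1, order='grevlex')
d1_infeasible = (G1.exprs == [1])    # True: no 1-dimensional TI MPS with PBC exists
```

Here \(a_{0}^{3}=0\) forces \(a_{0}=0\), contradicting \(a_{0}^{2}a_{1}=1\), so \(1\) lies in the ideal, exactly as the Weak Nullstellensatz criterion detects.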
As we can see, the estimates of the asymptotics of the time to find \(d(\psi)\) are very large, which tells us both about the potential need to improve this estimate and the possible search for more efficient algorithms. In this regard, the following question can be formulated.
**Problem 2.** Are there more efficient algorithms in terms of the complexity of obtaining \(d(\psi)\) than the one presented in Theorem 4?
A potential improvement in the algorithm running time estimate from Theorem 4 can occur both based on the fact that the initial estimate was obtained from very general estimates for intermediate parts of the algorithm, and based on the special structure of the system of equations obtained from conditions which allow for TI MPS representations with PBC. This may also allow us to potentially modify the original algorithm. Nevertheless, it is also possible to use approaches that do not use the Nullstellensatz.
Despite the fact that we have built an algorithm for obtaining \(d(\psi)\), we still do not know an algorithm for finding the optimal TI MPS with PBC, that is, a TI MPS with PBC in which the matrix dimensions are equal to \(d(\psi)\).
**Problem 3.** Is there an algorithm for constructing a TI MPS representation with PBC, in which the dimensions of the matrices are optimal (i.e. equal to \(d(\psi)\)) for a given TI state \(\psi\in(\mathbb{C}^{2})^{\otimes n}\)?
Despite the fact that we have built an algorithm for finding \(d(\psi)\) for an arbitrary TI state, there are currently no known results on the asymptotic behavior of \(d(n)\) for the \(W\)-state. The current results [21] are such that \(d(n)\) can be bounded from below as \(\Omega(n^{\frac{1}{3+\delta}})\) for any \(\delta>0\), which was proved in [11]. At the same time, the conjecture that \(d(n)\in\Omega(n^{\frac{1}{3}})\) [1] remains unproved. The current best estimate from above known to the authors is the result of Theorem 1, that is, \(d(n)\in O(n)\) with constant factor \(\frac{1}{2}\).
The approach from Theorem 4 allows us to search for \(d(\psi)\) for any state. We present a table with the results of computing this number for the \(W\)-state for small \(n\).
We used Theorem 4 along with the fact that there already exists a representation for the \(W\)-state with bond dimension \(\left\lfloor\frac{n}{2}\right\rfloor+1\). In this way, we could start with \(d=1\) and continue up to this bond dimension in our computation. We carried out the Gröbner basis computation using _Wolfram Mathematica_ [22].
It turns out that for all cases except \(n=6\) the obtained value of \(d(n)\) coincides with the dimension of the representation obtained in Theorem 1. For \(n=6\), the dimension of the optimal representation is either three or four; this ambiguity is due to computational difficulties in evaluating the Gröbner basis for \(n=6\) and \(d=3\). The results obtained hint that the representation of Theorem 1 may be optimal both asymptotically and in terms of the constant factor. Moreover, it may even attain the exact value of \(d(n)\).
The above facts lead to the following problems.
**Problem 4.** Is the estimate obtained in Theorem 1 optimal? Or, in other words, is it true that \(d(n)=\left\lfloor\frac{n}{2}\right\rfloor+1\) for the \(W\)-state? In particular, is it true that \(d(6)=4\)?
**Problem 5.** Are the estimates obtained in Theorem 1 asymptotically optimal? Or, in other words, is it possible to obtain better estimates for \(d(n)\) than \(O(n)\) with a constant factor \(\frac{1}{2}\)?
Also, by optimizing the calculations, \(d(n)\) for the \(W\)-state can be computed for a number of further small values of \(n\), which can be useful in practice.
It can be noted that, when calculating specific values of \(d(n)\) for the \(W\)-state, we see that \(d(n)\leq d(n+1)\), which leads us to the following problem.
**Problem 6.** Is \(d(n)\) for a \(W\)-state monotonically non-decreasing?
A similar consideration may help to construct more efficient ways to calculate \(d(n)\).
\begin{table}
\begin{tabular}{|c|c|} \hline \(n\) & \(d(n)\) \\ \hline
2 & 2 \\
3 & 2 \\
4 & 3 \\
5 & 3 \\
6 & 3 - 4 \\
7 & 4 \\
8 & 5 \\
9 & 5 \\ \hline \end{tabular}
\end{table}
Table 2: Table with \(d(n)\) for \(W\)-state for small \(n\).
## 5 Conclusion
In this work, we explore the theory of Matrix Product States (MPS), with a particular focus on the construction of translationally invariant MPS representations with periodic boundary conditions (PBC) for given states. We primarily focus on determining the optimal bond dimension \(d(\psi)\) for a given MPS representation.
Starting with an analysis of the \(W\) state, we develop a representation using the MPS formalism and discuss its implications, particularly with respect to the relationship between the bond dimension and the size of the state. This has repercussions for computational efficiency.
Subsequently, we generalize our approach to examine arbitrary states. By appealing to the Weak Nullstellensatz and the concept of Gröbner bases, we develop a deterministic algorithm to find the optimal bond dimension for arbitrary states. Although the time complexity of this algorithm, as it stands, is rather prohibitive, it provides a starting point for future refinements and enhancements.
Despite the successful development of the algorithm, we have also highlighted that there is currently no known method for constructing an optimal TI MPS representation with PBC, a promising area for future research. In parallel, exploring the possibility of refining the complexity estimate of the algorithm, which remains an open question, is equally interesting.
Our investigations further hint that the representation developed for the \(W\) state may be optimal, at least for the range of small values of \(n\) we have explored. The potential optimality of the representation and its implications are yet to be fully realized and call for further exploration.
Our analysis of specific \(d(n)\) for the \(W\) state suggests that \(d(n)\) is a non-decreasing function, a property that, if generalizable, could offer additional tools for constructing more efficient algorithms to calculate \(d(n)\).
The research presented in this work opens up intriguing possibilities for future research, particularly in algorithmic optimization, quantum state representation, and the use of algebraic methods in quantum information theory. While several important questions remain open, we believe that the concepts, methods, and findings discussed here provide a stepping-stone towards a deeper understanding and more effective utilization of Matrix Product States in quantum computation and quantum information theory.
# Small Black String Thermodynamics

Jyotish Kumar, Sudhaker Upadhyay, Himanshu Kumar Sudhanshu

2023-08-22 · arXiv:2308.11695v1 · http://arxiv.org/abs/2308.11695v1
###### Abstract
We consider a cylindrically symmetric solution for the field equations of the Einstein-Hilbert action with a negative cosmological constant in four dimensions. The small statistical fluctuation in the equilibrium thermodynamics of the black string solution is investigated. The small black string under the influence of small statistical fluctuation also follows the first law of thermodynamics. The behaviour of the equation of state for the black string changes significantly due to the fluctuation. The fluctuation causes instability to the small-sized black string only. Assuming the black string behaves as a fluid, the compressibility of the black string is inversely proportional to the fluctuation parameter.
Black string; Thermal fluctuation; Stability and compressibility
## 1 Literature survey and motivation
The radiation and evaporation of black holes [1; 2] merge three areas of physics: quantum theory, general relativity and thermodynamics. Even though black holes cannot be observed directly, there is no doubt regarding their existence. It is found that the entropy of a black hole depends on the horizon area. Nowadays, it is confirmed that the entropy of large black holes is directly proportional to the horizon area [3; 4]. However, for the entropy of small black holes, small statistical fluctuations around equilibrium play an essential role [5; 6]. It is well known that the leading-order correction to the entropy due to small statistical fluctuations around equilibrium is always logarithmic. Recently, the thermodynamics of various small black holes under the influence of small statistical fluctuations has been investigated extensively [7; 8; 9; 10; 11; 12].
In general, black hole solutions are axisymmetric solutions characterised by four parameters: mass, angular momentum, charge and the cosmological constant. There are two possibilities for such axisymmetric solutions. One is a solution with spherical symmetry, like the Schwarzschild solution, which has been studied extensively. The other is the cylindrically symmetric solution (black or cosmic string), which has not been studied as extensively. However, cylindrical symmetry has been found to be crucial in various contexts of general relativity, such as the static solutions of Levi-Civita [13; 14] and Chazy-Curzon [15; 16], and the stationary solutions of Lewis [17]. Cylindrical symmetry is used to study cosmic strings in astrophysics [18]. The general cylindrically symmetric static solution of Einstein's equation with a cosmological constant was found by Linet in Ref. [19], which was not a black hole solution. Lemos found the first black hole solution with cylindrical symmetry in Ref. [20]. In general, a black string and a cylindrical black hole are different, but they mimic each other in 4 dimensions.
The spherically symmetric black holes were explored first. The prolate gravitational collapse of cylindrical and other similar objects was studied later, during the formulation of the hoop conjecture, which postulates that horizon formation is possible only if the circumference of the compressed mass is less than its Schwarzschild circumference in every direction. This excluded the possibility of the formation of a cylindrical black hole. In fact, the hoop conjecture is valid only in the absence of a cosmological constant; cylindrically symmetric black holes do exist in the presence of a negative cosmological constant. Black strings have been constructed for 4-dimensional Einstein gravity with a negative cosmological constant [20]. The thermodynamic properties of a small black string, where the change in entropy due to thermal fluctuations plays a significant role, have not been studied yet. This provides the motivation for the present investigation, which will bridge this gap.
In this paper, we consider a black string with a negative cosmological constant and first revisit its thermodynamics in the regime where small statistical fluctuations of the entropy are irrelevant and the entropy follows the area law. Since small statistical fluctuations play a vital role in the thermodynamics of small black strings, we then include the corresponding modification to the entropy of the black string; the entire thermodynamics of the black string is corrected as a result. We further investigate how the thermal equations of state change under thermal fluctuations. Graphically, we observe that the additional entropy terms are significant for black strings with small horizons. On the other hand, the fluctuation decreases the internal energy of the black string, an effect that is significant for larger black strings. The corrected Helmholtz free energy matches its uncorrected counterpart for large horizon radius. Finally, the small statistical fluctuation destabilizes black strings of small horizon radius, whereas without thermal fluctuation the black string remains stable.
The paper is organised as follows. In section 2, we recapitulate the model describing a black string with a cosmological constant and, following black string mechanics, discuss the basic thermodynamics of the system. In section 3, we investigate the impact of small statistical fluctuations on the equilibrium thermodynamics of the black string. The stability and fluid compressibility of the black string are discussed in section 4, where the black string is treated as a fluid system. Finally, we conclude in section 5.
## 2 Black string and its thermodynamics
The Einstein-Hilbert action for the four-dimensional uncharged black string in the presence of cosmological constant (setting gravitational constant, \(G=1\)) is given by [21]
\[I=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}(R+6\alpha^{2}), \tag{1}\]
where \(g\) is the determinant of the metric \(g_{\mu\nu}\), \(R\) is the Ricci scalar curvature, and \(\alpha^{2}=-\frac{\Lambda}{3}>0\) denotes the negative cosmological constant.
Varying the Einstein-Hilbert action (1) with the metric yields the field equation as
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=3\alpha^{2}g_{\mu\nu}, \tag{2}\]
where \(R_{\mu\nu}\) is the Ricci tensor.
The general metric solution of the field equation (2) for a cylindrically symmetric, time-independent spacetime \((t,r,\phi,z)\) is given by [20; 21]
\[ds^{2}=-f(r)dt^{2}+g(r)dr^{2}+r^{2}d\phi^{2}+\alpha^{2}r^{2}dz^{2}, \tag{3}\]
here, \(-\infty<t<+\infty\), \(0\leq r<+\infty\), \(0\leq\phi\leq 2\pi\), and \(-\infty<z<+\infty\). The metric function has the following form [21]:
\[f(r)=g^{-1}(r)=\left(\alpha^{2}r^{2}-\frac{4M}{\alpha r}\right), \tag{4}\]
where \(M\) is an integration constant related to the Arnowitt-Deser-Misner (ADM) mass density of the black string. The singularity at \(r=0\) is the cosmological one, and for the moment we are not interested in such singularity conditions.
The event horizon of the black string is obtained from the root of the horizon condition \(f(r_{+})=0\). For positive \(M\), there is a horizon at \(r_{+}=\frac{1}{\alpha}\left(4M\right)^{\frac{1}{3}}\) in the positive \(r\) direction, while there is a naked singularity in the negative \(r\) direction. The situation is reversed for negative \(M\). This property is a new feature of such a black string.
To study the thermodynamics of the black string solution, we employ the Euclidean method [22; 23]. The absence of a conical singularity in the Euclidean section of the spacetime (3) requires that the Euclidean time have a period \(T^{-1}\), which satisfies
\[T=\left.\frac{f^{\prime}(r)}{4\pi}\right|_{r=r_{+}}=\frac{3\alpha^{2}r_{+}}{4 \pi}. \tag{5}\]
This is just the Hawking temperature of the black plane (string) solution calculated from the surface gravity. Here, a prime (\({}^{\prime}\)) denotes the derivative with respect to \(r\). In terms of the mass of the uncharged black string, the Hawking temperature becomes \(T=\left(3\alpha/2\pi\right)\left(M/2\right)^{1/3}\), varying as \(M^{1/3}\), in contrast to regular Schwarzschild black holes. This indicates that, owing to its different topological structure, the black string has a thermodynamical configuration different from that of black holes.
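The horizon and temperature relations above can be verified symbolically. The following is a minimal sketch (assuming the Python `sympy` package; it is not part of the paper):

```python
import sympy as sp

# Sketch (assuming sympy): horizon radius from f(r_+) = 0 and Hawking
# temperature T = f'(r_+)/4π, including its M^{1/3} scaling.
r, M, alpha = sp.symbols('r M alpha', positive=True)

f = alpha**2*r**2 - 4*M/(alpha*r)            # metric function, Eq. (4)
r_plus = (4*M)**sp.Rational(1, 3)/alpha      # root of f(r_+) = 0
assert sp.simplify(f.subs(r, r_plus)) == 0

T = (sp.diff(f, r)/(4*sp.pi)).subs(r, r_plus)
assert sp.simplify(T - 3*alpha**2*r_plus/(4*sp.pi)) == 0   # Eq. (5)
# In terms of the mass: T = (3α/2π)(M/2)^{1/3}
assert sp.simplify(T - sp.Rational(3, 2)*alpha/sp.pi*(M/2)**sp.Rational(1, 3)) == 0
```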
The black string mimics a thermodynamical system and therefore obeys the first law of thermodynamics [24; 25]
\[dM=TdS_{0}, \tag{6}\]
where \(S_{0}\) is the equilibrium entropy of the black string. Here, we note that for the first law of black string thermodynamics, the work term (pressure and volume) is absent [24; 25].
However, one can have such a work term in extended phase space [24]. Using Eqs. (5) and (6), the first law of thermodynamics leads to
\[S_{0}=\int\frac{dM}{T}=\frac{\pi\alpha r_{+}^{2}}{2}. \tag{7}\]
In terms of the mass of the black string, the entropy \(S_{0}=(\pi/2\alpha)(4M)^{2/3}\) varies as \(M^{2/3}\), again a configuration different from that of a black hole.
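The first-law integration can be checked symbolically; a minimal sketch (assuming `sympy`, not the paper's code), with the mass expressed via the horizon condition as \(M=\alpha^{3}r_{+}^{3}/4\):

```python
import sympy as sp

# Sketch (assuming sympy): S0 = ∫ dM/T reproduces Eq. (7).
r, s, alpha = sp.symbols('r s alpha', positive=True)

M_of_r = alpha**3*r**3/4                 # inverted horizon condition
T = 3*alpha**2*r/(4*sp.pi)               # Hawking temperature, Eq. (5)

S0 = sp.integrate((sp.diff(M_of_r, r)/T).subs(r, s), (s, 0, r))
assert sp.simplify(S0 - sp.pi*alpha*r**2/2) == 0

# Equivalently, in terms of the mass: S0 = (π/2α)(4M)^{2/3}
S0_mass = (sp.pi/(2*alpha))*(4*M_of_r)**sp.Rational(2, 3)
assert sp.simplify(S0_mass - S0) == 0
```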
The area law \(S_{0}=\sigma/4\) then implies that the area of the event horizon of the black string, \(\sigma\), is
\[\sigma=2\pi\alpha r_{+}^{2}=\frac{\pi}{2\alpha}\left(4M\right)^{\frac{2}{3}}. \tag{8}\]
The thermodynamic volume of the black string can be calculated from the area (entropy) relation as
\[V=\int\sigma dr_{+}=\frac{2}{3}\pi\alpha r_{+}^{3}. \tag{9}\]
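The area law and volume relations admit a quick symbolic check (assuming `sympy`; an illustrative sketch only):

```python
import sympy as sp

# Sketch (assuming sympy): area law σ = 4 S0 and volume V = ∫ σ dr_+,
# matching Eqs. (8)-(9).
r, s, alpha = sp.symbols('r s alpha', positive=True)

S0 = sp.pi*alpha*r**2/2
sigma = 4*S0
assert sp.simplify(sigma - 2*sp.pi*alpha*r**2) == 0

V = sp.integrate(sigma.subs(r, s), (s, 0, r))
assert sp.simplify(V - sp.Rational(2, 3)*sp.pi*alpha*r**3) == 0
```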
We can now estimate how the equilibrium thermodynamics changes due to small statistical fluctuations, which become significant for quantum black bodies.
## 3 Non-equilibrium thermodynamics of black string
It is well known that the thermodynamics of black bodies receives a logarithmic correction that plays a crucial role for small black holes. We therefore anticipate that the thermodynamics of the black string follows the same trend. The logarithmic correction to the entropy due to statistical fluctuations around equilibrium is given by [26, 27]
\[S=S_{0}-\beta\ln(S_{0}T^{2}), \tag{10}\]
where \(\beta\) is a label parameter, equal to \(1/2\) when fluctuations are present and vanishing for a system at equilibrium.
For the given values of Hawking temperature (5) and equilibrium entropy (7), the entropy for the uncharged black string under thermal fluctuation reads
\[S=\frac{\pi\alpha r_{+}^{2}}{2}-\beta\ln\left(\frac{9\alpha^{5}r_{+}^{4}}{32 \pi}\right). \tag{11}\]
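The argument of the logarithm in the corrected entropy is \(S_{0}T^{2}\); a quick symbolic check (assuming `sympy`, illustrative only):

```python
import sympy as sp

# Sketch (assuming sympy): S0*T^2 evaluates to 9α⁵r_+⁴/(32π),
# the logarithm's argument in the corrected entropy above.
r, alpha = sp.symbols('r alpha', positive=True)

S0 = sp.pi*alpha*r**2/2
T = 3*alpha**2*r/(4*sp.pi)

assert sp.simplify(S0*T**2 - 9*alpha**5*r**4/(32*sp.pi)) == 0
```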
To visualise the effect of the correction on the entropy, we plot the equilibrium entropy and the fluctuation-corrected entropy of the black string in Fig. 1. The correction term does not produce a significant change for a large radial circumference of the horizon, but for a small horizon radius the corrected entropy is substantially modified and grows prominent as the horizon shrinks to a point. The plot shows that the entropy takes only positive values both with and without thermal corrections.
The internal energy \(E\) of the black string under thermal fluctuation at equilibrium is computed as
\[E=\int TdS=\frac{\alpha^{3}r_{+}^{3}}{4}-\beta\left(\frac{3\alpha^{2}r_{+}}{\pi} \right). \tag{3.3}\]
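The integration \(E=\int TdS\) with the corrected entropy can be verified symbolically (a sketch assuming `sympy`, not the paper's code):

```python
import sympy as sp

# Sketch (assuming sympy): E = ∫ T dS with the corrected entropy
# reproduces the internal-energy expression above.
r, s, alpha, beta = sp.symbols('r s alpha beta', positive=True)

T = 3*alpha**2*r/(4*sp.pi)
S = sp.pi*alpha*r**2/2 - beta*sp.log(9*alpha**5*r**4/(32*sp.pi))

E = sp.integrate((T*sp.diff(S, r)).subs(r, s), (s, 0, r))
E_expected = alpha**3*r**3/4 - beta*3*alpha**2*r/sp.pi

assert sp.simplify(E - E_expected) == 0
```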
The behaviour of the above expression is depicted in Fig. 2, which shows the effect of fluctuations on the equilibrium internal energy.
The plot shows that the internal energy is an increasing function of the horizon radius. Interestingly, the fluctuation decreases the value of the internal energy of the black string, an effect that is not significant for small black strings. The internal energy of a small black string becomes negative once thermal fluctuation is introduced, which is quite an interesting result. This suggests that the thermal fluctuation takes energy from the system and dissipates it away, since the system does no work.

Figure 1: The plot of entropy (\(S\)) with horizon radius (\(r_{+}\)) of the black string.

Figure 2: The plot of internal energy (\(E\)) with horizon radius (\(r_{+}\)) of the black string.
The Helmholtz free energy for the non-equilibrium black string can be obtained from the relation \(F=-\int SdT\). Using the Hawking temperature (5) and the entropy (11), this leads to
\[F=-\frac{\alpha^{3}r_{+}^{3}}{8}+\frac{\beta\alpha^{2}r_{+}}{4\pi}\left[\ln \left(\frac{9\alpha^{5}r_{+}^{4}}{32\pi}\right)^{3}-6\right]. \tag{12}\]
Now, the behaviour of expression (12) is depicted in Fig. 3.
Here, we observe that the Helmholtz free energy of a small black string shows inverse power-law behaviour, whereas it matches its equilibrium counterpart for a large horizon radius. A negative Helmholtz free energy may indicate that the final energy of the black string is smaller than its original energy, in agreement with the behaviour of the internal energy.
For the given Helmholtz free energy (12) and volume (9), the thermodynamic pressure of the black string in the extended phase space (identifying the negative cosmological constant with the equilibrium pressure) is calculated from \(P=-\left(\frac{\partial F}{\partial V}\right)\) as
\[P=\frac{3\alpha^{2}}{16\pi}+\frac{3\beta\alpha}{8\pi^{2}r_{+}^{2}}\ln\left( \frac{32\pi}{9\alpha^{5}r_{+}^{4}}\right). \tag{13}\]
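The pressure can be cross-checked directly from \(dF=-SdT\), i.e. \(P=S\,(dT/dr_{+})/(dV/dr_{+})\); a symbolic sketch (assuming `sympy`):

```python
import sympy as sp

# Sketch (assuming sympy): P = -dF/dV with dF = -S dT, along the r_+
# parametrization, reproduces Eq. (13).
r, alpha, beta = sp.symbols('r alpha beta', positive=True)

T = 3*alpha**2*r/(4*sp.pi)
S = sp.pi*alpha*r**2/2 - beta*sp.log(9*alpha**5*r**4/(32*sp.pi))
V = sp.Rational(2, 3)*sp.pi*alpha*r**3

P = S*sp.diff(T, r)/sp.diff(V, r)
# Eq. (13), rewritten using ln(32π/9α⁵r⁴) = -ln(9α⁵r⁴/32π):
P_expected = 3*alpha**2/(16*sp.pi) \
    - (3*beta*alpha/(8*sp.pi**2*r**2))*sp.log(9*alpha**5*r**4/(32*sp.pi))

assert sp.simplify(P - P_expected) == 0
```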
We plot the pressure (13) as a function of the horizon radius in Fig. 4. The pressure due to thermal fluctuation is asymptotically large for a small black string, meaning that thermal fluctuation induces more stress on a system of small black strings.
Figure 3: The plot of Helmholtz free energy (\(F\)) with horizon radius (\(r_{+}\)) of the black string.
However, for a large black string the pressure is constant under both equilibrium and non-equilibrium conditions.
The Gibbs free energy is another substantial quantity for a thermodynamical system, defined by \(G=F+PV\). From the volume (9), the Helmholtz free energy (12) and the pressure (13), the Gibbs free energy for the non-equilibrium system is calculated as
\[G=\frac{\beta\alpha^{2}r_{+}}{2\pi}\left[\ln\left(\frac{9\alpha^{5}r_{+}^{4}}{ 32\pi}\right)^{3}-6\right]. \tag{12}\]
The comparison of the equilibrium and non-equilibrium black string is depicted in Fig. 5. The Gibbs free energies of the black string in the equilibrium and non-equilibrium states behave significantly differently. Without the thermal correction, the Gibbs free energy is zero, meaning that no further work can be extracted. In contrast, under thermal fluctuation the Gibbs free energy depends on the horizon radius and becomes increasingly negative for small black strings, suggesting that the thermal fluctuation renders the process spontaneous.
## 4 Stability and compressibility
With the help of the specific heat, the stability and the phase transition during the Hawking evaporation process can be analysed. To study the stability of the black string, we follow the sign of the (specific) heat capacity: a positive heat capacity indicates that the black string is in a stable phase, while a negative one signals an unstable phase. We calculate the heat capacity from the definition \(C=\left(\frac{\partial E}{\partial T}\right)\). For the given expressions (5) and (11), the heat capacity of the black string reads
Figure 4: The plot of thermodynamic pressure (\(P\)) with horizon radius (\(r_{+}\)) of the black string.
\[C=\left(\frac{\partial E}{\partial T}\right)=\pi\alpha r_{+}^{2}-4\beta. \tag{10}\]
Here, the heat capacity is positive (\(C>0\)) except when \(r_{+}^{2}<4\beta/\pi\alpha\). For the vanishing value of \(\beta\) (no thermal correction), the heat capacity is always positive and follows a parabolic curve. As the horizon radius increases, the black string undergoes a phase transition from an unstable to a stable state.
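The heat capacity and the stability threshold can be checked symbolically (a sketch assuming `sympy`, not the paper's code):

```python
import sympy as sp

# Sketch (assuming sympy): C = T dS/dT with the corrected entropy,
# and the sign change at r_+^2 = 4β/(πα).
r, alpha, beta = sp.symbols('r alpha beta', positive=True)

T = 3*alpha**2*r/(4*sp.pi)
S = sp.pi*alpha*r**2/2 - beta*sp.log(9*alpha**5*r**4/(32*sp.pi))

C = sp.simplify(T*sp.diff(S, r)/sp.diff(T, r))
assert sp.simplify(C - (sp.pi*alpha*r**2 - 4*beta)) == 0

r_crit = sp.sqrt(4*beta/(sp.pi*alpha))   # stability threshold
assert sp.simplify(C.subs(r, r_crit)) == 0
```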
To specify the regions of stability and instability, we plot the heat capacity as a function of \(r_{+}\) in Fig. 6.
The figure shows that the fluctuation causes instability for a small radius of the black string, in the region \(r_{+}^{2}<4\beta/\pi\alpha\). However, no discontinuity is found in the heat capacity of the black string.

Figure 5: The plot of Gibbs free energy (\(G\)) with horizon radius (\(r_{+}\)) of the black string.

Figure 6: The plot of specific heat (\(C\)) with horizon radius (\(r_{+}\)) of the black string.
Since we treat the black string as a fluid system, it is natural to derive its compressibility, \(K\). This can be calculated with the help of the volume (9) and the pressure (13) as
\[K=-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)=\frac{4\pi^{2}r_{+}^{ 2}}{\beta\alpha}\left[2-\ln\left(\frac{9\alpha^{5}r_{+}^{4}}{32\pi}\right) \right]^{-1}. \tag{12}\]
Here, we see that the compressibility of the black string depends inversely on the fluctuation parameter.
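The compressibility expression follows from the chain rule along the \(r_{+}\) parametrization; a symbolic sketch (assuming `sympy`):

```python
import sympy as sp

# Sketch (assuming sympy): K = -(1/V) dV/dP, with r_+ as the common
# parameter, reproduces the compressibility expression above
# (using ln(32π/9α⁵r⁴) = -ln(9α⁵r⁴/32π)).
r, alpha, beta = sp.symbols('r alpha beta', positive=True)

V = sp.Rational(2, 3)*sp.pi*alpha*r**3
P = 3*alpha**2/(16*sp.pi) \
    + (3*beta*alpha/(8*sp.pi**2*r**2))*sp.log(32*sp.pi/(9*alpha**5*r**4))

K = -sp.diff(V, r)/(V*sp.diff(P, r))
K_expected = 4*sp.pi**2*r**2 \
    / (beta*alpha*(2 + sp.log(32*sp.pi/(9*alpha**5*r**4))))

assert sp.simplify(K - K_expected) == 0
```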
## 5 Final remarks
It is possible to extend the spherical black hole solution to a cylindrical black hole (black string) solution [20]. Considering black string solutions in asymptotically AdS space, the CFT dual to such solutions can also be analysed [28, 29]. We have considered a black string with a negative cosmological constant and, assuming the system is in equilibrium, discussed its thermodynamics.
Small statistical fluctuations may play an essential role in the thermodynamics of cylindrical black hole solutions with a negative cosmological constant (black strings), so we have considered the corresponding modification to the entropy of the black string. The modified entropy eventually feeds into the full thermodynamics of the black string. We studied the change in the behaviour of the thermal equations of state due to thermal fluctuations. We found that the correction term of the entropy becomes significant for black strings with small horizons; the thermal fluctuation introduces more disorder into a system of small black strings. On the other hand, the fluctuation decreases the internal energy of the black string, an effect that is significant for larger black strings. The Helmholtz free energy becomes negative due to thermal fluctuation for a small black string. The pressure of the black string under thermal fluctuation was also calculated in the extended phase space and becomes very high for a small black string. We have also calculated the Gibbs free energy of the black string under thermal fluctuation, which matches its equilibrium counterpart for a sufficiently large horizon radius.
Finally, we have calculated the specific heat of the black string and found that small statistical fluctuations cause instability for black strings of small radius, whereas without thermal fluctuation the black string remains stable. The compressibility of the black string depends inversely on the fluctuation parameter. Since the cylindrical black hole (black string) in \(4D\) has a rich structure, the present investigation can serve as a theoretical laboratory that may provide hints about the underlying interaction between geometry and quantum physics. Investigating the \(P-v\) criticality and quasi-normal modes of black string solutions under the influence of small statistical fluctuations will also be interesting. This is the subject of future investigations.
# Likely detection of \(\gamma\)-ray pulsations of PSR J1717\(+\)4308A in NGC 6341 and implication of the \(\gamma\)-ray millisecond pulsars in globular clusters

P. Zhang, Y. Xing, Z. Wang, W. Wu, Z. Chen (2023-01-30; arXiv:2301.12697)
###### Abstract
We report our analysis results for the globular cluster (GC) NGC 6341 (M92), as a millisecond pulsar (MSP) J1717\(+\)4308A has recently been reported found in this GC. The data used are from the Large Area Telescope onboard the _Fermi Gamma-ray Space Telescope (Fermi)_. We detect \(\gamma\)-ray pulsations of the MSP at a \(4.4\sigma\) confidence level (the corresponding weighted H-test value is \(\sim\)28.4). This MSP, the fourth \(\gamma\)-ray pulsar found in a GC, does not have significant off-pulse emission and has \(\gamma\)-ray luminosity and efficiency \(1.3\times 10^{34}\) erg s\({}^{-1}\) and \(1.7\%\) respectively. In order to have a clear view on the properties of the known GC \(\gamma\)-ray MSPs, we re-analyze the _Fermi_ LAT data for the other three ones. These four MSPs share the properties of either having high \(\dot{E}\) (\(\sim 10^{36}\) erg s\({}^{-1}\)) or being in the GCs that contain only limited numbers of known MSPs. In addition, we find that PSRs J1823\(-\)3021A and B1821\(-\)24, in NGC 6624 and NGC 6626 respectively, have detectable off-pulse \(\gamma\)-ray emission and PSR J1835\(-\)3259B in NGC 6652 does not. Using the obtained off-pulse spectra or spectral upper limits, we constrain the numbers of other MSPs in the four GCs. The results are consistent with the numbers of the radio pulsars reported in them. While at least in NGC 6624 and NGC 6626, the contribution of other MSPs to their observed \(\gamma\)-ray emission can not be ignored, our study indicates that the presence of a bright MSP could be the dominant factor for whether a GC is detectable at \(\gamma\)-rays or not.
Gamma-rays(637); Globular star clusters(656); Pulsars (1306)
## 1 Introduction
Thanks to the successful launch of the Large Area Telescope (LAT) onboard _the Fermi Gamma-ray Space Telescope_ (_Fermi_; Atwood et al., 2009), it has been found that Galactic globular clusters (GCs) can have significant \(\gamma\)-ray emission. The GC 47 Tucanae was the first one reported with \(\gamma\)-ray emission (Abdo et al., 2009), followed by Terzan 5 (Kong et al., 2010). Up to now, \(\gamma\)-ray emissions from approximately 35 GCs have been reported (Tam et al., 2011; Eger and Domainko, 2012; Zhou et al., 2015; Zhang et al., 2016; Tam et al., 2016; Abdollahi et al., 2020; Song et al., 2021; Wu et al., 2022; Yuan et al., 2022; Fermi-LAT collaboration et al., 2022). It is generally recognized that millisecond pulsars (MSPs) contained in the GCs are the sources of the emissions, which is supported by the fact that there are 257 MSPs (spin period \(P\leq 30\) ms) found in 36 GCs (based on the GC pulsar table1) and that \(\gamma\)-ray pulsations of three of them have been detected in the LAT data.
Footnote 1: https://naic.edu/~pfreire/GCpsr.html
The first reported case of \(\gamma\)-ray pulsations was PSR J1823\(-\)3021A in NGC 6624, detected at a significance of \(\sim 7\sigma\) (Freire et al., 2011). Then in NGC 6626 (M28), \(\gamma\)-ray pulsations of PSR B1821\(-\)24 were reported at significances of \(\sim 4.3\sigma\) and \(5.4\sigma\) by Wu et al. (2013) and Johnson et al. (2013) respectively. Very recently, Zhang, Xing, & Wang (2022) reported the detection of \(\gamma\)-ray pulsations (at a \(5.4\sigma\) significance level) of a newly discovered MSP J1835\(-\)3259B in NGC 6652 (Gautam et al., 2022). The properties of these three MSPs are summarized in Table 1. They share the similarities of having relatively high spin-down energies \(\dot{E}\) and \(\gamma\)-ray luminosities among the brightest of all known \(\gamma\)-ray MSPs (Zhang et al., 2022), and they
probably contribute a dominant fraction to the \(\gamma\)-ray emission of each of the respective host GCs.
More such \(\gamma\)-ray MSPs may be found in GCs, provided accurate radio ephemerides are available. We note that Pan et al. (2020) reported the discovery of PSR J1717+4308A in the GC NGC 6341 (M92) from observations with the Five-hundred-meter Aperture Spherical radio Telescope (FAST). The MSP, having \(P=3.16\) ms, is in an eclipsing binary system with a \(\sim\)0.2-d orbital period. While this MSP is the only pulsar found thus far in the GC, \(\gamma\)-ray emission from the GC has been detected, and in the Data Release 3 (DR3) of the fourth _Fermi_-LAT source catalog (4FGL-DR3; Fermi-LAT collaboration et al., 2022) the \(\gamma\)-ray source is named 4FGL J1716.8+4310. Given these facts and the detailed ephemeris available for PSR J1717+4308A (Pan et al., 2020, 2021), we have carried out an analysis of the _Fermi_-LAT data to search for the \(\gamma\)-ray pulsations of the MSP. We report the result, a 4.4\(\sigma\) significance detection of the \(\gamma\)-ray pulsations, in this paper.
In addition, considering that the \(\gamma\)-ray emissions of GCs arise from their MSPs, high-energy studies provide a probe of the MSP populations in GCs that complements the regular radio campaigns. The numbers of MSPs in GCs have been estimated by comparing the isotropic \(\gamma\)-ray luminosity of a GC with the average \(\gamma\)-ray luminosity of MSPs (Abdo et al., 2009, 2010; Hui et al., 2011; Lloyd et al., 2018; de Menezes et al., 2019; Zhang et al., 2020; Song et al., 2021). Recently, Wu et al. (2022) added the \(\gamma\)-ray spectral information of the MSPs to the estimation, which presumably provides better constraints on the numbers of MSPs in more than 20 GCs. The numbers estimated in the latter work indicate that for approximately 10 GCs, only one MSP may be required to explain the observed \(\gamma\)-ray emission. Combined with the likely dominant contributions of the three detected GC MSPs to the \(\gamma\)-ray emissions of their respective host GCs (Freire et al., 2011; Wu et al., 2013; Johnson et al., 2013; Zhang et al., 2022), an interesting question arises as to whether the \(\gamma\)-ray emissions of most GCs are similarly due to the presence of one bright \(\gamma\)-ray MSP (i.e., not reflecting the MSP numbers). This possibility may explain why some GCs with high encounter rates do not have detectable \(\gamma\)-ray emission (e.g., see Wu et al., 2022); it would rather be because they do not contain one sufficiently bright MSP. To investigate this possibility, we also conducted timing analysis of the LAT data for the three known GC \(\gamma\)-ray MSPs. For each of them, the off-pulse emission was obtained so as to set a constraint on the number of other MSPs in each host GC (see, e.g., Johnson et al., 2013). This part of the analysis and the results are also reported in this paper.
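The simplest form of the population estimate described above divides a GC's \(\gamma\)-ray luminosity by an average per-MSP luminosity. A toy illustration (the numbers below are assumed for illustration, not taken from the paper; the cited works refine this with spectral information):

```python
# Toy illustration of the estimate N ≈ L_GC / <L_MSP>; both numbers
# below are assumptions, not values quoted in the paper.
L_gc = 5.0e34         # assumed isotropic gamma-ray luminosity of a GC (erg/s)
L_msp_mean = 2.0e33   # assumed average gamma-ray luminosity per MSP (erg/s)

N_msp = L_gc / L_msp_mean
print(N_msp)  # → 25.0
```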
\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
Name & \(P\) & \(\dot{P}/10^{-20}\) & \(\dot{E}/10^{34}\) & \(L_{\gamma}/10^{34}\) \\
 & (ms) & (s s\({}^{-1}\)) & (erg s\({}^{-1}\)) & (erg s\({}^{-1}\)) \\
\hline
J1823\(-\)3021A & 5.44 & 338 & 83 & \(7.0\pm 1.8^{\rm a}\) \\
B1821\(-\)24 & 3.05 & 161 & 220 & \(4.0\pm 1.0^{\rm b}\) \\
J1835\(-\)3259B & 1.83 & 6.65 & 43 & \(5.04\pm 0.44^{\rm c}\) \\
J1717+4308A & 3.16 & 6.11 & 7.7 & \(1.30\pm 0.17\) \\
\hline
\end{tabular}
\end{table}
Table 1: Properties of GC MSPs with \(\gamma\)-ray pulsations detected
Figure 1: TS maps of a \(3^{\circ}\times 3^{\circ}\) region centered at 4FGL J1716.8+4310 in 0.1–500 GeV, where the _left_ was based on 4FGL-DR3, showing a \(\gamma\)-ray source located south-east of the center, and the _right_ was calculated when the new source was considered and removed (while 4FGL J1716.8+4310 was kept). The position of the new source and its \(3\sigma\) uncertainty are marked by the green plus and circle respectively, and the white dashed circle indicates the tidal radius of NGC 6341. The pixel scale of the both TS maps is \(0.\!\!^{\circ}1\,{\rm pixel}^{-1}\).
Below we first describe the data analysis for PSR J1717+4308A in NGC 6341 (M92) and report the results in Section 2. We describe the analysis for the three GCs that contain a known \(\gamma\)-ray MSP and present the results in Section 3. In Section 4, we discuss the results and implications for MSPs in GCs and provide a summary.
## 2 Data analysis and results for PSR J1717+4308A
### LAT data selection and best-fit model
We selected 14 years of _Fermi_-LAT Pass 8 _Front+Back_ events (\(\rm{evclass}=128\) and \(\rm{evtype}=3\)) in the time range from 2008 August 4 16:29:16.8 (MJD 54682.69) to 2022 September 25 22:22:29.0 (MJD 59847.93) and the energy range 0.1-500 GeV. The region of interest (RoI) was \(20^{\circ}\times 20^{\circ}\), centered at the position of 4FGL J1716.8+4310 (R.A. = \(17^{\rm h}16^{\rm m}48^{\rm s}.79\), Decl. = +43\({}^{\circ}\)10\({}^{\prime}\)22\(\farcs\)78). Events with zenith angles \(<90^{\circ}\) were kept to avoid limb contamination from the Earth. We used the expression "DATA_QUAL \(>\) 0 & LAT_CONFIG == 1" to filter out events flagged as "bad" and keep high-quality events in the good time intervals. The instrument response function "P8R3_SOURCE_V3" and the software package Fermitools-2.0.19 were used in the analysis.
Based on 4FGL-DR3, we constructed a model file using the script make4FGLxml.py2. The model file contained the spectral forms of the catalog sources around 4FGL J1716.8+4310 and the respective parameters. All spectral parameters of the sources within \(5^{\circ}\) of the target were set free, and for the sources within \(5^{\circ}-10^{\circ}\) of the target and those beyond \(10^{\circ}\) but identified as variables, their normalization parameters were set free. In addition, the normalizations of the two diffuse emission components (Galactic and extragalactic) were also set free. All other parameters of the sources in the RoI were fixed at the values provided in 4FGL-DR3. The target is described with a log-parabola (LP) model, \(dN/dE=N_{0}(E/E_{b})^{-[\alpha+\beta\log(E/E_{b})]}\), in 4FGL-DR3. The catalog parameter values are given in Table 2.
Footnote 2: https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/
The binned maximum likelihood analysis was performed to the whole data. In the results we found a new \(\gamma\)-ray source that was not previously reported or listed in the _Fermi_-LAT catalogs. As shown in the Test Statistic (TS) map (Figure 1) we calculated for the source region, this source is located south-east of the target and had a TS value of \(\sim\)40. Using tool _gtfindsrc_, its position was determined to be R. A. = \(17^{\rm h}18^{\rm m}29^{\rm s}.79\), Decl. = \(+42^{\circ}38^{\prime}37^{\prime\prime}.38\) (J2000.0). We added a point source with a spectral form of power-law (PL), \(dN/dE=N_{0}(E/E_{0})^{-\Gamma}\), at its position in our model file. Performing the likelihood analysis again, the source could be well fitted and removed in the TS map (Figure 1). Detailed analyses and results for this new source will be reported elsewhere (Zhang et al. in preparation). The best-fit parameter values for 4FGL J1716.8+4310, after adding the new source in our model file, are given in Table 2. Comparing them with the catalog values, there are small differences but within the uncertainties.
As the emission should have an MSP origin, we also considered the typical pulsar model, a PL with an exponential cutoff (PLEC; Abdo et al., 2013), \(dN/dE=N_{0}(E/E_{0})^{-\Gamma}\exp[-(E/E_{c})^{b}]\). Replacing the LP with this PLEC in our model file for 4FGL J1716.8+4310, we performed the binned likelihood analysis again, with the parameter \(b\) fixed at 2/3 (Abdo et al., 2013). The resulting best-fit parameters are given in Table 2. The TS value from this PLEC model is nearly equal to that from the LP model, and thus this model can also be
\begin{table}
\begin{tabular}{c c c c c}
\hline\hline
Model & \multicolumn{4}{c}{Parameter values} \\
\hline
LP & \(\alpha\) & \(\beta\) & \(E_{b}\) (GeV) & TS \\
 & \(2.03(31)^{*}\) & \(0.45(24)^{*}\) & \(1.17^{*}\) & – \\
 & \(1.88(13)\) & \(0.71(11)\) & \(1.17(16)\) & \(63.7\) \\
\hline
PLEC & \(\Gamma\) & \(b^{\dagger}\) & \(E_{c}\) (GeV) & \\
 & \(1.88(17)\) & \(0.67\) & \(3.00(52)\) & \(63.9\) \\
\hline
\end{tabular}
\({}^{*}\)Parameter values of the LP model given in 4FGL-DR3, among which \(E_{b}\) does not have an uncertainty. \({}^{\dagger}\)Value was fixed at 2/3 in our analysis.
\end{table}
Table 2: Likelihood analysis results for NGC 6341
Figure 2: \(\gamma\)-ray spectrum in 0.1–500 GeV obtained for 4FGL J1716.8+4310. The best-fit LP and PLEC model spectra are shown as the yellow dashed and black solid lines respectively.
used for the emission of the source. The average photon flux obtained was \((2.43\pm 0.31)\times 10^{-9}\,\rm photon\,cm^{-2}\,s^{-1}\) in 0.1\(-\)500 GeV, giving an integrated energy flux of \((1.51\pm 0.19)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\). Using a distance of 8.50\(\pm\)0.07 kpc for NGC 63413, the \(\gamma\)-ray luminosity is \(L_{\gamma}\simeq(1.30\pm 0.17)\times 10^{34}\,\rm erg\,s^{-1}\) (assuming isotropic emission).
Footnote 3: https://people.smp.uq.edu.au/HolgerBaumgardt/globular/fits/ngc6341
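The quoted luminosity follows from \(L_{\gamma}=4\pi d^{2}G\), with \(G\) the energy flux; a quick numeric sanity check (not the paper's code, using a standard cm-per-kpc conversion):

```python
import math

# Sanity check of L_γ = 4π d² G from the quoted flux and distance.
kpc_cm = 3.0857e21            # centimetres per kiloparsec
G_flux = 1.51e-12             # integrated energy flux (erg cm^-2 s^-1)
d = 8.50*kpc_cm               # adopted distance to NGC 6341 (cm)

L = 4*math.pi*d**2*G_flux     # isotropic gamma-ray luminosity (erg/s)
print(f"{L:.2e}")             # ≈ 1.3e34 erg/s, matching the quoted value
```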
### Spectrum extraction
We extracted a spectrum of 4FGL J1716.8+4310 from the whole LAT data set. The energy range 0.1-500 GeV was divided into 12 equal logarithmically spaced bins, and the likelihood analysis was performed on the data in each bin. In this analysis, only the normalizations of the sources within 5\({}^{\circ}\) of the target, which included the new nearby source we found, were set as free parameters, while all other parameters were fixed at the values obtained in the likelihood analysis above. The obtained spectrum is shown in Figure 2, in which we kept the spectral data points whose values are at least twice their uncertainties and otherwise show 95% flux upper limits. Comparing the best-fit LP and PLEC models to the spectrum, the latter appears to fit the spectrum equally well. Given this and the nearly equal TS values of the two models, we adopted the PLEC model in the following data analysis.
### Timing Analysis
Since the \(\gamma\)-ray emission from NGC 6341 is weak (TS\(\simeq\)64) and there are nearby sources with comparable brightnesses, we followed Abdo et al. (2010) and selected the events within an aperture radius of \(max[6.68-1.76\log_{10}(E_{MeV}),1.3]^{\circ}\simeq 3\fdg 16\) in 0.1-500 GeV so as to maximize the signal-to-noise ratio. The selected \(\gamma\)-ray photons were assigned weights, which were the probabilities of originating from
Figure 3: Timing analysis results for PSR J1717+4308A in 0.1-500 GeV. Panel A: weighted \(\gamma\)-ray pulse profile. The on-pulse phase ranges, \(\rm P_{on_{1}}\) and \(\rm P_{on_{2}}\), and off-pulse range \(\rm P_{off}\) are marked as pink, green, and blue shaded regions respectively. The background count level is shown as the pink dash-dotted line. Panel B: two-dimensional phaseogram, where the color bar indicates the probabilities of \(\gamma\)-ray photons originating from NGC 6341 (the largest probability value is 85.6%). Panel C: weighted cumulative H-test curve over the whole time period of the _Fermi_-LAT data. Panel D: schematic radio pulse profile of PSR J1717+4308A drawn based on that reported in Pan et al. (2020) for comparison.
NGC 6341 given by tool _gtsrcprob_. The weighted photons were then phase-folded using the ephemeris for PSR J1717+4308A provided in Pan et al. (2020, 2021), by employing Tempo2 (Hobbs et al., 2006) with the _Fermi_ plug-in (Ray et al., 2011). The folded pulse profile in 16 phase bins is shown in Panel A of Figure 3, for which the counts and uncertainties of the phase bins were calculated using the method described in Abdo et al. (2013). The two-dimensional phaseogram is shown in Panel B of Figure 3, and the largest probability value of the photons was 85.6%.
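The energy-dependent aperture of Abdo et al. (2010) used for the event selection above can be sketched as a one-line function; at the 0.1 GeV lower bound it yields the quoted \(\simeq 3\fdg 16\):

```python
import math

def aperture_radius_deg(E_MeV):
    """Energy-dependent aperture: max[6.68 - 1.76*log10(E_MeV), 1.3] deg."""
    return max(6.68 - 1.76 * math.log10(E_MeV), 1.3)

r_100MeV = aperture_radius_deg(100.0)  # ≈ 3.16 deg at the 0.1 GeV bound
```

At high energies the radius saturates at the 1.3-degree floor.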
Following the method described by Kerr (2011), the H-test statistic was calculated using the probabilities as the weights. The curve of the cumulative H-test value over the whole _Fermi_-LAT data time period is shown in Panel C of Figure 3. The maximum H-test value was \(\sim\)28.4, corresponding to a \(p\)-value of \(1.17\times 10^{-5}\) (\(\simeq 4.4\sigma\)). It can be noted that the H-test curve is flat and close to zero before year 2014. We suspect that, since the ephemeris for the MSP was derived from the FAST observations conducted from 2017 October and the MSP is in a redback binary (Pan et al., 2020), a type showing high and random orbital variability (see, e.g., Ridolfi et al., 2016), the phases of the photons before year 2014 could not be accurately determined.
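The weighted H-test of Kerr (2011) builds the de Jager H statistic from weighted Fourier coefficients, \(H=\max_{m\leq 20}(Z^{2}_{m}-4m+4)\), with the asymptotic false-alarm probability \(p\simeq e^{-0.4H}\) (de Jager & Büsching 2010), which reproduces the quoted \(p\simeq 1.17\times 10^{-5}\) for \(H=28.4\). A minimal sketch (not the actual analysis code):

```python
import numpy as np

def weighted_htest(phases, weights, m_max=20):
    """Weighted H-test (Kerr 2011): cumulative weighted Fourier power
    Z^2_m, penalized per de Jager: H = max_m (Z^2_m - 4m + 4)."""
    w = np.asarray(weights, dtype=float)
    ang = 2.0 * np.pi * np.asarray(phases, dtype=float)
    norm = 2.0 / np.sum(w ** 2)
    z2, best = 0.0, -np.inf
    for k in range(1, m_max + 1):
        z2 += norm * (np.sum(w * np.cos(k * ang)) ** 2 +
                      np.sum(w * np.sin(k * ang)) ** 2)
        best = max(best, z2 - 4 * k + 4)
    return best

def htest_pvalue(H):
    """Asymptotic false-alarm probability, p ~ exp(-0.4*H)."""
    return float(np.exp(-0.4 * H))

# Sanity demo: pulsed phases give a far larger H than uniform ones.
rng = np.random.default_rng(1)
w = np.ones(1000)
H_uniform = weighted_htest(rng.random(1000), w)
H_pulsed = weighted_htest(np.mod(0.5 + 0.02 * rng.standard_normal(1000), 1.0), w)
```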
### Phase-resolved Analysis
Given the folded pulse profile, we divided it into on-pulse and off-pulse regions. For the former, two peak phase regions, \(P_{\rm on_{1}}\) and \(P_{\rm on_{2}}\), were chosen as the phase ranges 0.62-0.88 and 1.00-1.12 respectively. The latter, \(P_{\rm off}\), was chosen as the phase range 0.44-0.62. The likelihood analysis was performed on the data in the three phase ranges respectively, in which the PLEC model was used. In \(\rm P_{on_{1}}\), the resulting parameters were \(\Gamma=2.05\pm 0.28\) and \(E_{c}=3.00\pm 0.66\) GeV (\(b=0.67\) was fixed), consistent with those from the whole data (Table 2); the photon flux was \((6.6\pm 1.3)\times 10^{-9}\)\(\rm photon\,cm^{-2}\,s^{-1}\) with a TS value of \(\sim\)49. In \(\rm P_{on_{2}}\), the TS value was only \(\sim\)25 (the photon flux was \((4.6\pm 1.4)\times 10^{-9}\,\rm photon\,cm^{-2}\,s^{-1}\)), and in \(\rm P_{off}\), TS\(\sim\)1 was obtained. The nearly zero TS value in \(\rm P_{off}\) is expected, as the counts in the phase bins are close to the background value (=32.8; cf., Figure 3); the latter was estimated from diffuse sources and neighboring point sources following Abdo et al. (2013).
We calculated the corresponding TS maps for the three phase intervals, which are shown in Figure 4. The TS maps support the timing analysis results. A simple direct comparison is that while the phase intervals of \(\rm P_{on_{2}}\) and \(\rm P_{off}\) are 0.12 and 0.18, the TS values are 25 and 1 respectively. The \(\gamma\)-ray emission from NGC 6341 thus seems to originate, within the current measurement precision, exclusively from PSR J1717+4308A. In the future, as more data are collected and the sensitivity improves, it will become clear whether there is significant off-pulse emission from this GC (see the case of NGC 6624 below in Section 3.2).
## 3 Data Analysis for Three Known GC \(\gamma\)-ray MSPs
In order to separate a known pulsar's emission from that of the host GC, we carried out a similar data analysis for NGC 6624 and NGC 6626 as for NGC 6341. The analysis results for NGC 6652 were taken from those reported in Zhang et al. (2022).
For each of the two target GCs, the LAT data of similar time periods in the same energy range (0.1-500 GeV) were selected. Likelihood analysis was performed on the whole data set to determine the best-fit parameters, in which a PLEC model was used. We then updated the model files for each of them.
Figure 4: TS maps in 0.1–500 GeV calculated from the data in two on-pulse (_left_ and _middle_ panels) and one off-pulse (_right_ panel) phase ranges. The tidal radius region of NGC 6341 is marked as the white dashed circle, and the new \(\gamma\)-ray source is also indicated as that in Figure 1. The pixel scale of the TS maps is \(0\fdg 1\,\rm pixel^{-1}\).
### Timing Analysis
To construct the pulse profiles of PSRs J1823\(-\)3021A (Biggs et al., 1994) and B1821\(-\)24 (Lyne et al., 1987), in NGC 6624 and NGC 6626 respectively, we selected photons within an aperture radius of 6\({}^{\circ}\) centered at each of the positions given in Fermi-LAT collaboration et al. (2022). The ephemerides of the two MSPs are provided by the Fermi Science Support Center\({}^{4}\). The weights of the photons were the probabilities of originating from a target calculated using tool _gtsrcprob_, and their pulse phases were assigned by employing Tempo2 with the _Fermi_ plug-in. Given the \(\sim\)14-yr long time period of the data, we updated the ephemerides by running Tempo2. The resulting folded pulse profiles and two-dimensional phaseograms for PSRs J1823\(-\)3021A and B1821\(-\)24 are shown in Figure 5. The weighted H-test values were \(\sim\)1240 and \(\sim\)213 for J1823\(-\)3021A and B1821\(-\)24 respectively.
Footnote 4: [https://fermi.gsfc.nasa.gov/ssc/data/access/lat/ephems/](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/ephems/)
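Phase assignment itself was done with Tempo2 and the full ephemerides; schematically, a barycentric arrival time maps to spin phase through a Taylor-expanded timing model. A sketch with made-up frequency parameters (NOT the actual ephemeris values):

```python
import numpy as np

def spin_phase(t, t0, f0, f1=0.0):
    """Phase from a Taylor-expanded timing model:
    phi(t) = f0*(t - t0) + 0.5*f1*(t - t0)^2  (mod 1)."""
    dt = np.asarray(t, dtype=float) - t0
    return np.mod(f0 * dt + 0.5 * f1 * dt ** 2, 1.0)

# Hypothetical MSP parameters: f0 = 300 Hz, a tiny spin-down term
t = np.linspace(0.0, 86400.0, 1000)          # one day of arrival times, s
phases = spin_phase(t, 0.0, 300.0, -1.0e-15)
```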
### Phase-resolved analysis
Based on the pulse profiles we obtained, which are very similar to those shown in Freire et al. (2011) and Johnson et al. (2013), we defined the on-pulse (P\({}_{\rm on}\)) and off-pulse (P\({}_{\rm off}\)) phase ranges: P\({}_{\rm on}\simeq\)0.56-1.13 and 0.94-1.63, and P\({}_{\rm off}\simeq\)0.13-0.56 and 0.63-0.94, respectively for the MSPs in NGC 6624 and NGC 6626. Binned likelihood analysis was performed on the data sets respectively, where the PLEC model was used. The results are given in Table 3. Significant off-pulse emission from each of the GCs was detected, while we note that in the initial discovery of \(\gamma\)-ray pulsations of PSR J1823\(-\)3021A, which used 2 yr of _Fermi_-LAT data, no off-pulse emission was detected from NGC 6624.
We calculated the 0.1-500 GeV TS maps to examine the appearances of the off-pulse emissions, which are
Figure 5: Pulse profiles and two-dimensional phaseograms obtained for PSRs J1823\(-\)3021A (_left_) and B1821\(-\)24 (_right_) respectively. Based on the pulse profiles, the on-pulse (P\({}_{\rm on}\)) and off-pulse (P\({}_{\rm off}\)) phase ranges are defined.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline GC & \(F_{\gamma}/10^{-11}\) & \(L_{\gamma}/10^{34}\) & \(\Gamma\) & \(E_{c}\) \\ & (erg cm\({}^{-2}\) s\({}^{-1}\)) & (erg s\({}^{-1}\)) & & (GeV) \\ \hline NGC 6624 & & & & \\ P\({}_{\rm on}\) & 2.2(0.3) & 16.8(3.0) & 1.9(0.4) & 3.0(0.3) \\ P\({}_{\rm off}\) & 0.5(0.1) & 4.2(0.9) & 1.5(0.2) & 3.0(0.1) \\ NGC 6626 & & & & \\ P\({}_{\rm on}\) & 2.6(0.2) & 9.0(0.7) & 1.4(0.1) & 1.0(0.2) \\ P\({}_{\rm off}\) & 1.1(0.4) & 3.8(1.4) & 1.4(0.5) & 1.3(1.1) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Likelihood analysis results in the on-pulse and off-pulse phase ranges for NGC 6624 and NGC 6626
shown in Figure 6. Within NGC 6624, TS\(\sim\)50 emission is present, while there is also some residual emission (TS\(\sim\)30) north-east of the GC. The best-fit parameters determined for the off-pulse emission are consistent with those obtained for the on-pulse one, although with large uncertainties. As already illustrated by Johnson et al. (2013) in their analysis for NGC 6626, we also found off-pulse emission similarly appearing off the GC (Figure 6). Using tool _gtfindsrc_, we determined its position: R. A. = 18\({}^{h}\)25\({}^{m}\)07\({}^{s}\).0, Decl. = \(-\)24\({}^{\circ}\)46'05\({}^{\prime\prime}\).9 (J2000.0), with a 2\(\sigma\) uncertainty of 0\(\fdg\)11. This position, consistent with that obtained by Johnson et al. (2013) within the uncertainties, is \(\sim\)0\(\fdg\)11 away from the center of NGC 6626 while within the GC's tidal radius. Johnson et al. (2013) discussed the possible origin of the emission. Since our purpose is to constrain the number of other MSPs in the GC, we performed the binned likelihood analysis on the off-pulse data using this position. The results (where TS\(\simeq\)139) are given in Table 3, and it can be noted that the cutoff energy \(E_{c}\) was poorly constrained. We also tested a PL model in the analysis, and obtained a TS value of \(\simeq\)132 (photon index \(\Gamma\simeq 2.35\pm 0.08\)). Comparing the results from the PLEC and PL models, the spectral curvature of the emission has a significance of only \(\sim\)2.6\(\sigma\). In any case, given our study purpose and the source spectrum (extracted below), we adopted the results from the PLEC model.
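The quoted offset of the off-pulse source from the GC center (\(\sim\)0\(\fdg\)11) is a plain great-circle separation. A sketch converting the sexagesimal position above and computing separations; no GC center coordinates are quoted in the text, so any comparison position would have to be supplied separately:

```python
import math

def hms_to_deg(h, m, s):
    """Right ascension in hours/minutes/seconds -> degrees."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def dms_to_deg(d, m, s):
    """Declination in degrees/arcmin/arcsec -> degrees (d carries the sign)."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation (haversine formula), all angles in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2.0) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a)))

# Off-pulse source position quoted in the text (J2000.0):
ra_src = hms_to_deg(18, 25, 7.0)     # ≈ 276.279 deg
dec_src = dms_to_deg(-24, 46, 5.9)
```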
Pulse-phase resolved analysis for the emission of NGC 6652 was conducted by Zhang et al. (2022), and in a small off-pulse phase range they defined, no emission was detected (TS\(\simeq\)1). Using the results, we calculated a
Figure 6: TS maps in 0.1–500 GeV calculated from the off-pulse phase ranges for NGC 6624 (PSR J1823\(-\)3021A; _left_), NGC 6626 (PSR B1821\(-\)24; _middle_), and NGC 6652 (PSR J1835\(-\)3259B; _right_). In each panel, the center and the region of the tidal radius for the GC are marked by the green plus and dashed circle respectively. The pixel scale of the maps is 0\(\fdg\)1 pixel\({}^{-1}\). For the off-pulse emission of NGC 6626, its position and 2\(\sigma\) uncertainty are marked by the black cross and dash-dotted circle respectively.
Figure 7: \(\gamma\)-ray spectra in 0.1–500 GeV obtained for NGC 6624 (_left_ panel) and NGC 6626 (_right_ panel) and those during the on-pulse phase range of PSRs J1823\(-\)3021A and B1821\(-\)24 respectively. In each panel, the best-fit PLEC model spectra are shown as the gray dashed and blue solid lines respectively.
TS map in 0.1-500 GeV and included it in Figure 6. The TS map confirms the non-detection of any emission in the off-pulse phase range.
We extracted spectra of NGC 6624 and NGC 6626 from the whole data as well as from the data in the on-pulse and off-pulse phase ranges of PSRs J1823\(-\)3021A and B1821\(-\)24 respectively. For NGC 6652, we obtained its spectral upper limits in the off-pulse phase range. The energy range of 0.1-500 GeV was divided into 12 equal logarithmically spaced energy bins, and the maximum likelihood analysis was performed on the data in each energy bin. The obtained spectra of NGC 6624 and NGC 6626, as well as the on-pulse ones, are shown in Figure 7, and the off-pulse ones of NGC 6624 and NGC 6626 and the spectral upper limits of NGC 6652 are shown in Figures 8 & 9. For the spectra, we only kept the flux data points with TS\(\geq\)4; otherwise, upper limits at a 95% confidence level were derived.
## 4 Discussion and Summary
### Psr J1717+4308A in the GC NGC 6341
Given the discovery and timing results for the MSP J1717+4308A in the GC NGC 6341 reported by Pan et al. (2020, 2021), we have conducted a detailed analysis of the _Fermi_-LAT data. A new \(\gamma\)-ray source close to the GC has been found. With this new source included in our analysis, we have studied the emission of the GC and found that it can be fitted with the PLEC model, one typically used for describing pulsars' emission. A 4.4\(\sigma\) pulsational signal has been detected at PSR J1717+4308A's spin period, although the H-test curve indicates that the early data before approximately year 2014 did not contribute to the significance of the signal. This problem likely arises because the ephemeris of the MSP was established from recent radio observations. When more _Fermi_-LAT data are collected in the near future, the significance would be
Figure 8: _Left_ panels: off-pulse \(\gamma\)-ray spectra of PSR J1823\(-\)3021A (upper) and B1821\(-\)24 (lower). The best-fit models, which take the number of MSPs into account, are shown as red curves. In the lower panel, an example is shown when the addition of one more MSP causes the model spectrum (blue dashed curve) greater than an upper-limit data point. _Right_ panels: \(\chi^{2}\) values from 1000 runs of spectral fitting, among which the 5% smallest values are marked as golden data points. The 5% limits (black lines) give a range of 1–5 and 1–6 respectively for the numbers of MSPs in NGC 6624 (upper) and NGC 6626 (lower).
expected to increase accordingly, and the detection of the pulsational signal will be more firmly confirmed.
Based on the pulse profile of the MSP we have obtained, phase-resolved analysis of the \(\gamma\)-ray data has been conducted. No significant emission has been detected in the off-pulse phase range, and the TS maps calculated from the on-pulse and off-pulse phase ranges support the detection of the pulsations of the MSP.
Considering that there is negligible emission in the off-pulse phase range, the phase-averaged luminosity of the MSP is \(L_{\gamma}\simeq 1.3\times 10^{34}\,\rm erg\,s^{-1}\). The derived spin-down energy of the MSP is \(\dot{E}=7.7\times 10^{34}\,\rm erg\,s^{-1}\), giving a \(\gamma\)-ray efficiency \(\eta\) of 0.17. This efficiency value is typical for \(\gamma\)-ray MSPs (e.g., Wu et al., 2022). In Figure 10, we show this pulsar's value along with those of the other three known GC \(\gamma\)-ray MSPs, for which the luminosities in the P\({}_{\rm on}\) phase ranges of PSRs J1823\(-\)3021A and B1821\(-\)24 (Table 3) and the phase-averaged luminosity of PSR J1835\(-\)3259B (given no significant off-pulse emission from this pulsar) are used. In addition, the figure shows all the other radio pulsars in 27 GCs whose \(\dot{E}\) values can be calculated, i.e., those with positive \(\dot{P}\) values given in the GC pulsar table (note that the observed \(\dot{P}\) values are mostly caused by dynamical effects and thus the \(\dot{E}\) values are rough estimates; see, e.g., Freire et al., 2017 and references therein). Among the GCs, 16 have reported \(\gamma\)-ray emission and the other 11 do not. To plot the radio pulsars in the figure, we assume 1) the off-pulse \(\gamma\)-ray luminosities of NGC 6624 and NGC 6626 (Table 3) as the luminosities of the other known radio pulsars in the two GCs (Abbate et al., 2022; Douglas et al., 2022; and references therein), and 2) the \(\gamma\)-ray luminosity of each of the 12 GCs (in which no \(\gamma\)-ray pulsars are identified) as that of each of their radio pulsars (where the values are from Wu et al., 2022). In NGC 6652, PSR J1835\(-\)3259B is the only pulsar with \(\dot{E}\) (DeCesar et al., 2015; Gautam et al., 2022). As can be seen, while the previous three MSPs are among those with the highest \(\dot{E}\), PSR J1717+4308A has a comparably low \(\dot{E}\).
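The quoted efficiency follows directly from the two numbers above, \(\eta=L_{\gamma}/\dot{E}\):

```python
def gamma_ray_efficiency(L_gamma, E_dot):
    """eta = L_gamma / E_dot, the fraction of spin-down power in gamma rays."""
    return L_gamma / E_dot

# Quoted values for PSR J1717+4308A (both in erg/s):
eta = gamma_ray_efficiency(1.3e34, 7.7e34)  # ≈ 0.17
```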
One common feature of the four \(\gamma\)-ray pulsars is that they are either expected to be bright given their high \(\dot{E}\) (cf., PSR B1821\(-\)24), or their host GCs are observed in radio or predicted at \(\gamma\)-rays (Wu et al., 2022) to contain only a few pulsars (i.e., possibly less contamination from other pulsars when searching for pulsational signals). This feature could explain the detectability of the \(\gamma\)-ray pulsations of these four pulsars.
### Implication for \(\gamma\)-ray emissions of GC MSPs
With the likely \(\gamma\)-ray detection of PSR J1717+4308A, we take these four MSPs as representative cases and investigate the implications for the MSPs that would exist in the four host GCs. We use the off-pulse spectra of NGC 6624 and NGC 6626 and the spectral upper limits for NGC 6652 and NGC 6341 (since the latter two have not been found with significant off-pulse \(\gamma\)-ray emission) and perform the fitting to the spectra or upper limits. The method and related MSP samples used have been fully described in Wu et al. (2022), and the parameters for the four GCs are given in Table 4. From fitting the off-pulse spectra, the numbers of other MSPs in NGC 6624 and NGC 6626 are constrained to be 1-5 and 1-6 respectively (Figure 8). These numbers match well those of the known radio pulsars (cf., Figure 10; Abbate et al., 2022; Douglas et al., 2022; and references therein), while it should be noted that several pulsars in the two GCs do not have spin-down rate infor
Figure 9: Constraints on the numbers of MSPs in NGC 6652 (_left_) and NGC 6341 (_right_) from their spectral upper limits. The blue dashed line in each panel shows an example when the model spectrum is greater than some upper-limit data points; the constraints on the numbers of MSPs are thus obtained.
mation\({}^{5}\) (i.e., they are not shown in Figure 10). Further, there is a caveat that the off-pulse emissions are assumed not to contain any contribution from the two MSPs, which may not be true and could lead to overestimates of the numbers of other MSPs in the two GCs. The numbers of other MSPs in NGC 6652 and NGC 6341 are constrained to be \(\leq\)2 and \(\leq\)1 respectively. These constraints also match well the numbers of known pulsars in the two GCs, which have thus far been reported to contain 2 and 1 pulsars respectively (including PSR J1835\(-\)3259B and PSR J1717+4308A; Gautam et al., 2022; Pan et al., 2020).
Footnote 5: [https://www.naic.edu/~pfreire/GCpsr.html](https://www.naic.edu/~pfreire/GCpsr.html)
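The number-of-MSPs constraint described above (the method of Wu et al., 2022) can be schematized as scanning an integer number \(N\) of template MSP spectra against the detected data points and upper limits: \(\chi^{2}\) is computed over the detections, and \(N\) is rejected once the model overshoots any upper limit. The numbers below are synthetic, not the real spectra:

```python
import numpy as np

def constrain_n_msp(template, flux, err, is_ul, n_max=10):
    """Chi^2 of N copies of a template MSP spectrum against detected points;
    N is rejected as soon as the model overshoots any upper-limit point."""
    chi2_by_n = {}
    for n in range(1, n_max + 1):
        model = n * template
        if np.any(model[is_ul] > flux[is_ul]):
            break  # model exceeds an upper limit; larger N only worse
        det = ~is_ul
        chi2_by_n[n] = float(np.sum(((flux[det] - model[det]) / err[det]) ** 2))
    return chi2_by_n

# Synthetic example: data built from 3 x template; last two bins are limits.
template = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.5])
flux = 3.0 * template
err = 0.3 * flux
is_ul = np.array([False, False, False, False, True, True])
flux[is_ul] = [3.5, 1.8]  # synthetic 95% upper limits
chi2 = constrain_n_msp(template, flux, err, is_ul)
```

In this toy case the scan allows \(N=1\)-3 and prefers \(N=3\), mirroring how the 5% lowest \(\chi^{2}\) values bound \(N_{\rm MSP}\) in Table 4.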
Thus, for the four GCs, the estimated MSP numbers roughly match the observed ones (which may suggest that the detectable pulsars in them have likely all been found by radio and \(\gamma\)-ray surveys). The results help indicate that while the emission from one bright MSP can dominate the observed \(\gamma\)-ray emission of the host GC, the contribution from other MSPs (at least for NGC 6624 and NGC 6626) is not negligible. The estimation of the number of \(\gamma\)-ray MSPs in a GC should be carried out with caution, probably on a case-by-case basis by taking into account the information provided by radio observations. We have
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Name & Age\({}^{*}\) & Dist\({}^{\dagger}\) & \(\chi^{2}\)/N & \(N_{\rm MSP}^{\gamma}\) & \(N_{\rm MSP}^{5\%}\) \\ & (Gyr) & (kpc) & & & \\ \hline NGC 6624 & 11.7 & 8.0 & 2.6/5 & 1 & 1–5 \\ NGC 6626 & 14.0 & 5.4 & 4.3/5 & 4 & 1–6 \\ NGC 6652 & 11.3 & 9.5 & & \(\leq\)2 & \\ NGC 6341 & 12.8 & 8.5 & & \(\leq\)1 & \\ \hline \end{tabular} Notes. \(N_{\rm MSP}^{5\%}\) provides a range given by 5% lowest \(\chi^{2}\) values in 1000 fitting runs (see Wu et al., 2022 for details). \({}^{*}\) Ages of NGC 6624 and NGC 6626 are from Oliveira et al. (2020), and Kerber et al. (2018) respectively, and those of NGC 6652 and NGC 6341 from VandenBerg et al. (2013). \({}^{\dagger}\) Distances are from Baumgardt and Vasiliev (2021).
\end{table}
Table 4: Numbers of other \(\gamma\)-ray MSPs (\(N_{\rm MSP}^{\gamma}\)) estimated from the off-pulse spectra or upper limits for the four GCs
Figure 10: \(\gamma\)-ray efficiencies of PSRs J1823\(-\)3021A, B1821\(-\)24, J1835\(-\)3259B, and J1717+4308A. Other known radio pulsars in the GCs are included; for those in the GCs with \(\gamma\)-ray emissions, the \(\gamma\)-ray luminosities of the GCs (but for NGC 6624 and NGC 6626, the \(\gamma\)-ray luminosities in the P\({}_{\rm off}\) phase ranges given in Table 3) are used, and for those in the GCs without \(\gamma\)-ray emissions (marked by downward arrows), a flux upper limit of \(10^{-12}\,{\rm erg\,s^{-1}\,cm^{-2}}\) is used. The \(\gamma\)-ray luminosities and flux upper limit are derived and given in Wu et al. (2022).
also learned from the four GCs that the presence of one bright MSP can be the dominant factor in whether a GC has detectable \(\gamma\)-ray emission or not.
We might extend this reasoning to all the GCs, that is, the pulsars found in them are close to complete, given the particular effort that has been made towards finding GC pulsars in the past with different facilities (e.g., Pan et al., 2021; Abbate et al., 2022; Douglas et al., 2022; Gautam et al., 2022). Then, according to Figure 10, for which a companion table (Table 5) is made to provide more detailed information, we should make an effort to search for \(\gamma\)-ray MSPs in Terzan 5, NGC 6266, NGC 6440, 47 Tuc, and probably NGC 5904 and NGC 6752 as well; the latter two contain known MSPs with \(\dot{E}\) similar to that of PSR J1717+4308A. Here we consider those pulsars falling in the range of \(\eta\sim\)0.1-1. Also presented in Figure 10 (as well as in Table 5) are 11 GCs containing known radio pulsars but without detectable \(\gamma\)-ray emission, for which a flux upper limit of \(10^{-12}\,\rm erg\,cm^{-2}\,s^{-1}\) is used (estimated from the work in Wu et al., 2022). It can be seen that there is one MSP, PSR J1312+1810E, in NGC 5024 with \(\dot{E}>\)10\({}^{35}\,\rm erg\,s^{-1}\). No emission has been found from this GC (e.g., Yuan et al., 2022), while it can be noted that the distance to the GC is large, \(\sim\)18 kpc (Harris, 1996; Baumgardt and Vasiliev, 2021), which may explain the non-detection at \(\gamma\)-rays of the GC (or the MSP). In any case, this MSP is of interest as its \(\eta\) already reaches \(\sim 10^{-3}\).
### Summary
We have detected the \(\gamma\)-ray pulsations of PSR J1717+4308A in the GC NGC 6341 (M92) at a 4.4\(\sigma\) confidence level. No significant off-pulse emission from the MSP has been found, which suggests that the MSP's emission likely dominates the observed \(\gamma\)-ray emission of the GC. The detection adds a fourth member to the group of GC \(\gamma\)-ray MSPs. Based on the four cases, we may suggest that the detectability of a \(\gamma\)-ray MSP in a GC relies on a sufficiently high \(\dot{E}\) (maybe greater than \(10^{35}\,\rm erg\,s^{-1}\)) and probably a low number of other pulsars as well.
We have re-analyzed the _Fermi_-LAT data for the three previously known GC \(\gamma\)-ray MSPs and obtained their off-pulse spectra and spectral upper limits. Fitting the spectra or upper limits with the method used in Wu et al. (2022), including the spectral upper limits on NGC 6341, we have constrained the numbers of other MSPs in the four GCs. The numbers are consistent with those of known radio pulsars in the four GCs, suggesting that most of the pulsars in them have probably already been found. While, at least in NGC 6624 and NGC 6626, the \(\gamma\)-ray emission from other pulsars is not negligible, the four GCs show that a bright MSP can be the dominant factor in whether a GC is detectable at \(\gamma\)-rays or not. Our study also suggests a few targets for finding more GC \(\gamma\)-ray MSPs.
We thank the anonymous referee for helpful comments. This work is supported in part by the National Natural Science Foundation of China No. 12163006 and 12233006, the Basic Research Program of Yunnan Province No. 202201AT070137, and the joint foundation of Department of Science and Technology of Yunnan Province and Yunnan University (202201BF070001-020). Z. W. acknowledges the support by the Basic Research Program of Yunnan Province (No. 202201AS070005), the National Natural Science Foundation of China (12273033), and the Original Innovation Program of the Chinese Academy of Sciences (E085021002).
|
2301.03942 | Effects of surface roughness on the propulsive performance of pitching
foils | The hydrodynamic influence of surface texture on static surfaces ranges from
large drag penalties (roughness) to potential performance benefits (shark-like
skin). Although it is of wide-ranging research interest, the impact of
roughness on flapping systems has received limited attention. In this work, we
explore the effect of roughness on unsteady performance of a harmonically
pitching foil through experiments using foils with different surface roughness,
at a fixed Strouhal number and within the Reynolds number (Re) range of
15k-30k. The foils' surface roughness is altered by changing the distribution
of spherical-cap shaped elements over the propulsor area. We find that the
addition of surface roughness does not improve the performance compared to a
smooth surface over the Re range considered. The analysis of the flow fields
shows near identical wakes regardless of the foil's surface roughness. The
performance reduction mainly occurs due to an increase in profile drag.
However, we find that the drag penalty due to roughness is reduced from 76% for
a static foil to 16% for a flapping foil at the same mean angle of attack, with
the strongest decrease measured at the highest Re. Our findings highlight that
the effect of roughness on dynamic systems is very different than that on
static systems and thereby cannot be accounted for by only using information
obtained from static cases. This also indicates that the performance of
unsteady, flapping systems is more robust to the changes in surface roughness. | Rodrigo Vilumbrales-Garcia, Melike Kurt, Gabriel D. Weymouth, Bharathram Ganapathisubramani | 2023-01-10T12:47:39Z | http://arxiv.org/abs/2301.03942v1 | # Effects of surface roughness on the
###### Abstract
The hydrodynamic influence of surface texture on static surfaces ranges from large drag penalties (roughness) to potential performance benefits (_shark-like skin_). Although it is of wide-ranging research interest, the impact of roughness on flapping systems has received limited attention. In this work, we explore the effect of roughness on unsteady performance of a harmonically pitching foil through experiments using foils with different surface roughness, at a fixed Strouhal number and within the Reynolds number (\(Re\)) range of \(15k-30k\). The foils' surface roughness is altered by changing the distribution of spherical-cap shaped elements over the propulsor area. We find that the addition of surface roughness does not improve the performance compared to a smooth surface over the \(Re\) range considered. The analysis of the flow fields shows near identical wakes regardless of the foil's surface roughness. The performance reduction mainly occurs due to an increase in profile drag. However, we find that the drag penalty due to roughness is reduced from 76% for a static foil to 16% for a flapping foil at the same mean angle of attack, with the strongest decrease measured at the highest \(Re\). Our findings highlight that the effect of roughness on dynamic systems is very different from that on static systems and thereby cannot be accounted for by only using information obtained from static cases. This also indicates that the performance of unsteady, flapping systems is more robust to changes in surface roughness.
Flapping foils, surface roughness, propulsive performance
## 1 Introduction
Surface roughness is ever-present in engineering applications leveraging fluid-structure interactions. Its implications for the flow and the consequent drag generation have been widely studied in the literature. From the influence of roughness in pipe flow (Achenbach 1971) to its effects on the trajectory of
a golf ball (Chowdhury _et al._, 2016), roughness plays a vital role in any application involving fluid-structure interaction considerations. For example, surface roughness can be detrimental to the performance of wind turbines. Sagol _et al._ (2013) found that the accumulation of contamination agents on the blades leads to a reduction in power extraction, while Ehrmann _et al._ (2017) reported a performance decrease linked to an increase in roughness density and height. On the other hand, the use of roughness elements can lead to a drag reduction and certain performance gains for unsteady propulsion systems. Previous studies inspired by swimmers and flyers show that, from shark skin or dolphin skin (Dean & Bhushan, 2010; Wainwright _et al._, 2019) to feathers on a wing of a gliding bird (Van Bokhorst _et al._, 2015), roughness in varying shapes and texture modifies the fluid flow over propulsor surfaces, leading to a reduction in drag or a decrease in flow separation. In engineering applications, Gad-el Hak & Bushnell (1991) analysed the effects of roughness turbulators and found a \(C_{L}/C_{D}\) increase when compared to a smooth foil for \(Re\leq 100000\). Also, the use of surface riblets can lead to a decrease in skin friction when aligned in the flow direction (Bechert _et al._, 1997), achieving a drag reduction of up to 8% (Walsh, 1982). When configured properly, surface roughness can be beneficial: it can reduce drag production and potentially improve the overall performance. Surface roughness can also have detrimental effects. Tailoring the surface roughness for improved performance requires a better understanding of the effect of the shape, size, and area distribution of roughness elements on both the force production and the flow.
The drag-reduction potential of surface roughness on aquatic swimmers has been explored mainly for static surfaces. For example, sharks can reduce their skin friction when their riblets are aligned with the flow (Dean & Bhushan, 2010). Bixler & Bhushan (2013) pointed out that the riblets lift and pin the vortices generated in the viscous sublayer, leading to a decrease in drag. Bechert _et al._ (2000) observed a drag reduction for interlocking 3D riblets. Afroz _et al._ (2016) concluded that 'shark-like' textures can act as a passive flow separation control mechanism. Du _et al._ (2022) found smaller separated regions and adverse pressure gradients for the flow over a foil covered with tilted biomimetic shark scales. The effect of the shape and size of the rough elements was analysed by Domel _et al._ (2018), highlighting the importance of the denticle shape, as they found a drag reduction only for the smallest of the three sizes considered. Although surface roughness has shown promising potential for static bodies, its role in unsteady systems is still not clear. Shark-skin surfaces have been shown to increase the self-propelled swimming speed and reduce the drag of a flapping foil (Oeffner & Lauder, 2012; Domel _et al._, 2018), but only when small denticles are used, whereas larger elements can lead to an increase in drag. Wen _et al._ (2014) reported a reduction in energy consumption due to the formation of stronger leading-edge vortices. Guo _et al._ (2021) found that, for static foils towed at a constant velocity, the roughness elements resulted in a considerably thicker boundary layer when compared to the smooth foil, while, for static foils in acceleration, the changes due to roughness in the wake characteristics were considerably smaller.
Most previous work concludes that shark-inspired surfaces can improve the performance of an unsteady body, but the potential benefit is strongly dependent on the shape and size of shark denticles, which often appear in highly complex geometries. Therefore, it remains to be seen whether such performance improvements can be achieved with simple, commercially available roughness elements located on the surface of an unsteady foil in harmonic motion.
In this study, we analyse experimentally the effects of surface roughness on the propulsive performance of a pitching foil by using simple roughness elements. In Section 2, we define the methodology and experimental setup used to actuate three different foils with varying roughness characteristics. We investigate the effects of Reynolds number in the range of \(15,000\leqslant Re\leqslant 30,000\) and report the propulsive performance of a pitching aerofoil in terms of thrust production (\(C_{X}\)) and efficiency (\(\eta\)). In Section 3, we detail the force and flow measurement results obtained for flapping foils, and draw a comparison between dynamic and static foil cases.
## 2 Experimental setup and methodology
Force and flow measurements are conducted in a recirculating water flume at the University of Southampton, with a test section of 8.1 m length, 1.2 m width and 0.9 m depth. A surface plate is installed at the foil tip and the foil is placed right above the bottom wall to prevent tip vortex formation and enforce nominally two-dimensional flow over the foil, as shown in Figure 1A.
Three foils with a rectangular planform and a NACA0012 cross-section were 3D-printed, with a chord length of \(c=0.16\,m\) and an aspect ratio of \(AR=2.5\). Spherical-cap-shaped roughness elements with a width (diameter) of \(W=0.05c\) and a height of \(H=0.01c\) (Dean & Bhushan, 2010) were placed on the pressure and suction sides of the foils. As shown in Figure 1C, in addition to the smooth foil, two different roughness levels are considered by varying the area occupied by the spherical-cap elements to 36% and 70% of the foil planform area.
Each foil was actuated by a stepper motor (Applied Motions STM23S) in sinusoidal pitching motion, about a point located \(0.08c\) from the leading edge.
Figure 1: Schematics of the experimental setup in the water flume (A), the actuation arm (B), foils with three different roughness area coverage ratio (C), and the forces acting on the foil (D).
The prescribed motion is defined by \(\theta(t)=\theta_{0}\sin(2\pi f_{0}t)\), where \(\theta_{0}\) is the pitching amplitude and \(f_{0}\) is the flapping frequency. The pitching amplitude \(\theta_{0}\), Strouhal number \(St=2Af_{0}/U\), and reduced frequency \(k=2\pi f_{0}c/U\) were fixed throughout the experiments at \(\theta_{0}=7.5^{\circ}\), \(St=0.25\) and \(k=6\), respectively, to ensure that the foils operate in the high-efficiency, thrust-producing regime (Zurman-Nasution _et al._, 2021; Muscutt _et al._, 2017; Kurt & Moored, 2018). A Reynolds number sweep (\(Re=Uc/\nu\), where \(\nu\) is the kinematic viscosity) was conducted within the range of \(15,000\leq Re\leq 30,000\) by varying the flow velocity. In this range, the propulsive performance was previously reported to be \(Re\)-independent (Senturk & Smits, 2019). A summary of the parameters used in this study is given in Table 1. The forces and moments acting on the foils were measured with a six-axis force sensor (ATI Gamma IP65). The motion was tracked using a rotary, incremental encoder (US Digital E5) attached to the motor shaft (Figure 1B). Each trial was conducted for a total of 100 flapping cycles and repeated five times. The measured forces were filtered using a Butterworth filter with a low-pass cutoff frequency of five times the flapping frequency. The power was calculated as the product of the pitching moment and the angular velocity, the latter derived from the measured angular displacement. The instantaneous and time-averaged performance metrics are the average values from 500 flapping cycles, measured over five trials. To distinguish instantaneous forces from time-averaged results, the latter are denoted by \(\overline{(\cdot)}\). The reported streamwise force (thrust) (\(C_{X}\)) and power (\(C_{P}\)) coefficients, and the efficiency (\(\eta\)), are defined as,
\[C_{X}=\frac{F_{X}}{\frac{1}{2}\rho U^{2}c},\qquad C_{P}=\frac{P}{\frac{1}{2} \rho U^{3}c},\qquad\eta=\frac{C_{X}}{C_{P}} \tag{1}\]
where \(\rho\) is the density of water and \(U\) represents the free-stream flow velocity.
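As a numerical sketch of these definitions, the snippet below computes cycle-averaged \(C_{X}\), \(C_{P}\) and \(\eta\) from measured time series. The density value and the signal names are assumptions for illustration; in practice the force signals would first be low-pass filtered (Butterworth, cutoff at five times the flapping frequency) as described above, which is omitted here.

```python
import numpy as np

RHO = 1000.0  # water density [kg/m^3] (assumed)
C = 0.16      # chord length [m]

def performance(fx, moment, theta, dt, u):
    """Cycle-averaged thrust/power coefficients and efficiency.

    fx     : streamwise force per unit span [N/m]
    moment : pitching moment per unit span [N]
    theta  : measured pitch angle [rad], sampled every dt seconds
    u      : free-stream velocity [m/s]
    """
    omega = np.gradient(theta, dt)  # angular velocity from the encoder signal
    power = moment * omega          # instantaneous input power
    q = 0.5 * RHO * u**2 * C        # dynamic pressure times chord
    cx = np.mean(fx) / q
    cp = np.mean(power) / (q * u)
    return cx, cp, cx / cp          # C_X, C_P, eta = C_X / C_P
```

With a constant force and a linearly increasing angle, the three outputs reduce to simple ratios, which makes the scaling easy to check.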
The force measurements were synchronised with planar Particle Image Velocimetry (PIV) measurements (cameras: _LaVision MX 4MP_, lasers: _Litron Nano PIV_). The field of view captures the entire foil and up to one chord length into the foil's wake. The software _Davis 10_ was used to cross-correlate the acquired particle image pairs (interrogation windows of 24\(\times\)24 pixels with 50% overlap). The flapping cycle was divided into twenty-two phases and twenty-five cycles were acquired per phase. The velocity fields corresponding to each phase were then averaged over the 25 cycles.
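The phase-averaging described above, and the finite-difference evaluation of the out-of-plane vorticity used in Section 3, can be sketched as follows; the array shapes and grid spacing are illustrative assumptions.

```python
import numpy as np

def phase_average(fields):
    """Average phase-locked PIV snapshots.

    fields : array of shape (n_phases, n_cycles, ny, nx), e.g. (22, 25, ny, nx)
    returns: per-phase mean fields of shape (n_phases, ny, nx)
    """
    return fields.mean(axis=1)

def vorticity_z(u, v, dx, dy):
    """Out-of-plane vorticity w_z = dv/dx - du/dy on a uniform grid.
    Arrays are indexed [y, x]; central differences via np.gradient."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
```

For a solid-body rotation (u = -y, v = x) the sketch returns the exact vorticity of 2 everywhere, a convenient sanity check for the sign convention.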
## 3 Results
### Flow-field and force production analysis of foils with different roughness area coverage ratios
Figure 2 compares the out-of-plane vorticity and the instantaneous performance coefficients, \(C_{X}\) and \(C_{P}\), for all the roughness cases considered at \(Re=25,000\). The first column (A,D), the second column (B,E) and the third column (C,F)
| \(Re\) | 15,000 | 20,000 | 25,000 | 30,000 |
| --- | --- | --- | --- | --- |
| \(U\) [m/s] | 0.10 | 0.14 | 0.17 | 0.21 |
| \(f_{0}\) [Hz] | 0.62 | 0.83 | 1.03 | 1.24 |
| \(St\) | 0.25 | 0.25 | 0.25 | 0.25 |

Table 1: Experimental parameters used in the current study
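The sweep in Table 1 is tied together by \(Re=Uc/\nu\) and the fixed reduced frequency \(k=2\pi f_{0}c/U\). The sketch below recomputes the operating points under an assumed kinematic viscosity of \(\nu=10^{-6}\,m^{2}/s\); the results only approximately match the tabulated values, which were set experimentally and depend on the actual water temperature.

```python
import math

NU = 1.0e-6  # kinematic viscosity of water [m^2/s]; assumed value (~20 C)
C = 0.16     # chord length [m]
K = 6.0      # fixed reduced frequency k = 2*pi*f0*c/U

def flow_velocity(re):
    """Free-stream velocity implied by Re = U*c/nu."""
    return re * NU / C

def pitch_frequency(re):
    """Flapping frequency that keeps the reduced frequency at K."""
    return K * flow_velocity(re) / (2.0 * math.pi * C)

for re in (15_000, 20_000, 25_000, 30_000):
    print(f"Re={re}: U~{flow_velocity(re):.2f} m/s, f0~{pitch_frequency(re):.2f} Hz")
```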
present the evolution of the vorticity field around three pitching foils with \(0\%\), \(36\%\) and \(70\%\) surface roughness at \(t/T=0.15\), and \(t/T=0.50\), respectively. Surprisingly, changes in roughness do not lead to any significant alteration of the vorticity fields. Regardless of the roughness coverage, all foils produce a reverse von Kármán street where two counter-rotating vortices per flapping cycle are shed from the trailing edge into the wake, as widely observed in the related literature for smooth foils (Muscutt _et al._, 2017; Kurt & Moored, 2018; Zurman-Nasution _et al._, 2021). Figure 2 G-H presents the evolution of cycle-averaged thrust and power coefficients over one flapping cycle. Similar to the flow fields, the performance coefficients show only minor differences between the smooth foil and the foils with roughness. Although we have only presented the analysis for a single \(Re\), these results hold across the \(Re\) range considered here. In the supplementary material, we present the evolution of the flow field over one flapping cycle at \(Re=15,000\) and \(Re=25,000\) as videos for comparison. Overall, these results from force and flow-field measurements show that incorporating surface roughness does not have a strong influence on the development of the wake. Other parameters, such as the Strouhal number or the kinematics (Schnipper _et al._, 2009), are known to significantly affect the evolution of the vortex structures, which can minimise the adverse effects on performance induced by the roughness elements.
Figure 2: PIV results for \(Re=25,000\). \(t/T=0.15\) (A,D,G) and \(t/T=0.50\) (B,E,H) for the Smooth (A,D), \(36\%\) (B,E) and \(70\%\) (C,F) cases. Instantaneous \(C_{X}\) (H) and instantaneous \(C_{P}\) (I).

Figure 3 introduces the spectral analysis conducted for the \(C_{X}\) signals presented in Figure 2 G-H. The power spectrum of the thrust force at \(Re=25,000\) is shown in (A). The crosses indicate the location of the peak frequency for each foil. In (B), we introduce the dominant frequency ratio in the form of \(f/f_{0}\), where \(f_{0}\) is the prescribed pitching frequency, across the \(Re\) range considered. This analysis shows that the dominant frequency in thrust production corresponds to the pitching frequency \(f_{0}\) for all the \(Re\) values, and contains similar energy density for all the foils. This result, combined with the similarities observed in both the wake and the instantaneous forces, indicates that the performance of the foils is highly dominated by \(f_{0}\), hence by the kinematics. The dominant effects of the frequency and the kinematics observed in our study are similar to the findings by Zurman-Nasution _et al._ (2021), who reported that, compared to the flapping frequency and kinematics, shape-related parameters such as sweep angle have negligible effects on propulsive performance.
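A minimal version of this peak-frequency analysis, assuming a uniformly sampled thrust signal (the windowing and averaging used in practice are omitted):

```python
import numpy as np

def dominant_frequency_ratio(cx, dt, f0):
    """Ratio f/f0 between the peak frequency of the thrust signal and the
    prescribed pitching frequency."""
    cx = np.asarray(cx) - np.mean(cx)        # remove the DC component
    spectrum = np.abs(np.fft.rfft(cx)) ** 2  # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(cx), d=dt)
    peak = np.argmax(spectrum[1:]) + 1       # skip the zero-frequency bin
    return freqs[peak] / f0
```

A value of 1 indicates that the thrust peaks at the pitching frequency, as found above for all foils.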
Figure 4 presents the change in cycle-averaged performance coefficients for foils with different roughness coverages against \(Re\). Starting with \(\overline{C_{X}}\) (Figure 4A), we compare our results with other NACA0012 studies conducted by Mackowski & Williamson (2015) (\(Re=16,600\), \(0.1\leqslant St\leqslant 0.4\)) and Senturk & Smits (2019) (\(500\leqslant Re\leqslant 36,000\), \(0.2\leqslant St\leqslant 0.6\)). The thrust, \(\overline{C_{X}}\), obtained for the smooth foil increases slightly with Reynolds number, a trend similar to previous studies. The thrust values obtained at \(St=0.25\) in the current study fall between the findings by Senturk & Smits (2019) at \(St=0.2\) and \(St=0.4\). The inset enclosed by a blue box shows that \(\overline{C_{X}}\) decreases with the addition of surface roughness across the \(Re\) range. In Figure 4B, our results indicate higher efficiency values than Mackowski & Williamson (2015) and Senturk & Smits (2019), which could be due to differences in the pivot point location (at \(0.08c\) from the leading edge here, versus \(0.25c\) in the previous studies). For each roughness case, \(\overline{C_{P}}\) slightly increases with \(Re\), but regardless of the \(Re\), an increase in roughness causes a decrease in \(\overline{C_{P}}\). The efficiency also decreases as the surface roughness increases, similar to the thrust and power. Although the flow fields show negligible alterations with the change in roughness, the cycle-averaged forces point to a performance reduction as the roughness increases. The thrust decrease observed for the \(36\%\) and \(70\%\) roughness coverages compared to the smooth foil can be related to an increase in the profile drag. To further explore this effect, in the next figure,
we have compared our flapping foil results with static foil measurements carried out using the same foils within the same \(Re\) range.

Figure 3: (A) Fast Fourier Transform (FFT) analysis of the instantaneous \(C_{X}\) at \(Re=25,000\). The cross indicates the location of the peak for each case. The vertical dashed bar denotes \(f/f_{0}=1\). (B) Peak frequency across the \(Re\) values considered. A value of 1 denotes that the thrust force signal peak \(f\) is equal to the input pitching frequency \(f_{0}\).
### Comparison between static and flapping regimes
In this section, we introduce the data collected for static foils and compare it with the pitching foil results to further explore why there is a change in thrust production with a change in roughness coverage. The static data was acquired within the same \(Re\) range and roughness coverages as the flapping cases, for an angle of attack (\(\theta\)) range of \(-4^{\circ}\leqslant\theta\leqslant 20^{\circ}\). To compare both scenarios, we have selected an angle of attack value equal to the average \(\theta\) experienced by the foil during half the pitching cycle (red dashed line in Figure 5A, denoted as \(\theta_{s}\)). Next, we develop a comparison parameter, or _penalty_, that evaluates the change in streamwise force generated by the smooth and rough foils. Given that the static state will produce drag (for all three foils) and the flapping cases produce thrust, we present the penalty in its absolute value to help with the comparison. Since we have found surface roughness to be detrimental to \(\overline{C_{X}}\) for all cases considered, a positive penalty value in the static state indicates an increase in drag due to the roughness elements, while \(Penalty>0\) in the flapping regime means a decrease in thrust caused by the roughness elements. Here, \(Penalty\) is defined as the relative change in thrust for a rough foil compared to the smooth one, \(|C_{X,rough}-C_{X,smooth}|/C_{X,smooth}\).

Figure 4: A) \(\overline{C_{X}}\) obtained in the current study (blue range) and compared with previous studies against \(Re\): Senturk & Smits (2019) (gray) and Mackowski & Williamson (2015) (dark-gray). The data enclosed by the blue box presents an inset of \(\overline{C_{X}}\) data for the \(Re\) range of \(15,000\leqslant Re\leqslant 30,000\). B) \(\overline{C_{P}}\) results (hexagon) and \(\eta\) (cross) for current and previous studies, marked by the same colours used in A.

Figure 5: A) Static \(C_{X}\) versus angle of attack \(\theta\), measured using static foils at \(Re=25,000\). Smooth foil presented in light-blue, \(36\%\) in medium-blue, and \(70\%\) in dark-blue. The red dashed line indicates the \(\theta\) used to compare with the unsteady regime, defined as \(\theta_{s}=4.75^{\circ}\). B) Thrust and drag penalty due to roughness for both the flapping (triangles) and static results (crosses) for the \(36\%\) case (medium blue) and the \(70\%\) case (dark blue).
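The _penalty_ defined above can be written directly in code; taking the denominator in absolute value as well, so that static (drag, negative \(C_{X}\)) and flapping (thrust, positive \(C_{X}\)) cases both yield positive penalties, is our reading of "presented in its absolute value":

```python
def penalty(cx_rough, cx_smooth):
    """Relative change in streamwise force due to roughness, in absolute value
    so that static (drag) and flapping (thrust) cases compare alike."""
    return abs(cx_rough - cx_smooth) / abs(cx_smooth)
```

For instance, a rough static foil with 76% more drag than the smooth one gives a penalty of 0.76, while a rough flapping foil producing 16% less thrust gives 0.16.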
The penalty parameter is presented in Figure 5 for the static foil (crosses) and flapping foil measurements (triangles). The addition of surface roughness increases the drag production in the static state across the \(Re\) range considered. At \(Re=30,000\), it reaches a \(76\%\) drag penalty for the \(36\%\) roughness and a \(43\%\) penalty for the \(70\%\) roughness coverage, compared to the smooth foil. On the other hand, the flapping foils with roughness generate less thrust across the \(Re\) range compared to the smooth foil. At \(Re=30,000\), the thrust decreases by \(35\%\) and \(16\%\) for the \(70\%\) and \(36\%\) roughness coverages, respectively. However, the flapping state appears to be more robust to \(Re\) changes. It reduces the penalty observed for static foils, especially for the \(36\%\) coverage. At \(Re=30,000\), foils with \(36\%\) coverage experience a roughness penalty of \(76\%\) in the static state compared to a \(16\%\) penalty in the flapping state, which could be explained by the dominant effect that \(St\) and the kinematics have on force production.
To further analyse the data presented in Figure 5, we present the out-of-plane vorticity (\(\omega_{Z}\)) in Figure 6. The first row consists of the cycle-averaged unsteady pitching \(\omega_{Z}\), and the positive vorticity regions are enclosed with isolines. In the second and third rows, we present the flow-field data measured at \(\alpha=6^{\circ}\) for a pitching foil and a static foil, respectively. The first three columns correspond to the \(0\%\) (smooth), \(36\%\) and \(70\%\) roughness coverages, respectively. The fourth column introduces a comparison between the different roughness cases with overlapped \(\omega_{Z}\) isolines. The comparison of all three flapping cases suggests that the addition of surface roughness does not introduce major changes in the shedding shear layers. In contrast, in the static state, the foils with \(36\%\) and \(70\%\) roughness have a thicker time-averaged shear layer than the smooth case, similar to the findings by Guo _et al._ (2021). The presence of thicker shear layers for the roughness cases can be the culprit of the \(76\%\) drag penalty shown in Figure 5.
## 4 Conclusions
In this study, we have analysed the influence of surface roughness on the propulsive performance of flapping foils, using force and flow measurements. Three NACA0012 foils with different roughness coverage ratios have been constructed and tested within the Reynolds number range of \(15,000\leq Re\leq 30,000\). We have found that the addition of surface roughness is detrimental to the thrust production and efficiency of a pitching foil. The foils with \(36\%\) and \(70\%\) roughness produce \(16\%\) and \(35\%\) less thrust, respectively, compared to the smooth foil. We have determined that \(Re\) does not play an important role in either the thrust or the efficiency for the \(Re\) range and roughness coverage ratios considered. Although we have seen no significant change in the wake flow, the foils with roughness experience a decrease in thrust and efficiency, which can be explained by an increase in profile drag associated with the roughness elements. We have compared the effects of roughness on static and flapping states, finding
that the former is considerably more sensitive to it. The roughness penalty for 36% roughness coverage is reduced from 76% in the static state to 16% for flapping. The strongest decrease occurs at the highest Re, highlighting that the effect of roughness on flapping systems is very different than on static systems. This shows that the performance of flapping systems is more robust to the changes in surface roughness.
**Declaration of Interests**: The authors report no conflict of interest.

**Acknowledgements**
This research was supported financially by the Office of Naval Research Global Award N62909-18-1-2091, the Engineering and Physical Sciences Research Council (Grant No: EP/R034370/1) and the doctoral training award.
**Data availability statement**
All data supporting this study will be made openly available from the University of Southampton repository upon publication.
**arXiv:2308.01486 | Path Shadowing Monte-Carlo**
Rudy Morel, Stéphane Mallat, Jean-Philippe Bouchaud | 2023-08-03 | [http://arxiv.org/abs/2308.01486v1](http://arxiv.org/abs/2308.01486v1)

# Path Shadowing Monte-Carlo
###### Abstract
We introduce a _Path Shadowing Monte-Carlo_ method, which provides prediction of future paths, given any generative model. At any given date, it averages future quantities over generated price paths whose past history matches, or 'shadows', the actual (observed) history. We test our approach using paths generated from a maximum entropy model of financial prices, based on a recently proposed multi-scale analogue of the standard skewness and kurtosis called 'Scattering Spectra' [1]. This model promotes diversity of generated paths while reproducing the main statistical properties of financial prices, including stylized facts on volatility roughness. Our method yields state-of-the-art predictions for future realized volatility and allows one to determine conditional option smiles for the S&P500 that outperform both the current version of the Path-Dependent Volatility model and the option market itself1.
Footnote 1: This work is supported by the PRAIIRE 3IA Institute of the French ANR-19-P3IA-0001 program and the ENS-CFM models and data science chair.
volatility prediction, option pricing, wavelets
## I Introduction
Modelling future price scenarios is crucial for risk control, for pricing and hedging contingent claims (like options), and, possibly, for detecting arbitrage opportunities. Recently, machine learning auto-regressive models [2, 3, 4] manage to learn from data the distribution \(p(x|x_{\text{past}})\) of log-prices \(x\) conditioned on past history \(x_{\text{past}}\). When trained with a prediction loss, such models generally achieve excellent prediction results. However, their training requires very large amounts of data, which are usually not available for financial prices.
On the other hand, low-parameterized generative models, i.e. models \(p_{\theta}(x)\) of \(p(x)\) with few parameters \(\theta\), have been extensively studied in the financial literature [5, 6, 7, 8, 9]. However, two main challenges come to the fore. First, these models may not reproduce some important statistics of real financial prices due to oversimplified or flawed assumptions, or due to the fact that they are calibrated on external data such as observed option smiles. Second, it may not be straightforward to condition these models on the realized past at a specific date, in other words, obtaining a model of \(p(x|x_{\text{past}})\). Whereas conditioning is eased by considering Markovian models with a small number of factors [5, 9], such a strong assumption is often much too simplistic.
In this paper, we attempt to address both challenges. Our main contribution is to introduce a new method, that we call _Path Shadowing Monte-Carlo_ (PS-MC), which can be used within any generative model of \(p(x)\) to yield a model of \(p(x|x_{\text{past}})\). Our approach for modelling the distribution \(p(x)\), summarized in section II, is to define a minimal set of statistics describing financial prices that should be reproduced by the generating process. Such statistics should be well-estimated on limited data, while specifying 'relevant' properties of the process, in a sense made precise below. This bias-variance tradeoff was addressed in our previous work [1] in the general case of multi-scale processes, where it was shown that a good description of financial prices can be achieved by multi-scale analogues of the classical skewness and kurtosis, called _Scattering Spectra_, which is motivated in section II.
A model based on these statistics captures all important stylized facts such as fat-tail distributions, intermittency, leverage effect and the 'Zumbach effect' [1]. Section III characterizes the average shape of option smiles generated by our Scattering Spectra model and shows that it accounts in particular for the power-law behaviour of the at-the-money skew as a function of maturity, which were recently argued to be a specific feature of rough volatility models [7, 10]. The power-law behaviour of the kurtosis, first reported in [11], is also remarkably well accounted for.
'Path shadowing' is presented in section IV. It consists in softening the conditioning on a given past history \(x_{\text{past}}\). In a nutshell, it amounts to scanning a large generated dataset in search of paths whose history closely 'shadows' the actual history (see Fig. 1 for an illustration). A Path Shadowing Monte-Carlo method then averages the quantity of interest over the future of such matching paths. The term 'shadowing' is freely inspired by the shadowing principle in chaotic dynamical systems [12, 13, 14, 15]. Intuitively, it states that a path which is uniformly close to a true orbit will stay close to (shadow) a true path for all time.

Fig. 1: Path shadowing. Given current past history \(\widetilde{x}_{\text{past}}\) (red), we scan for paths \(x\) (gray) in a generated dataset whose past history satisfies \(x_{\text{past}}\approx\widetilde{x}_{\text{past}}\). Such paths \(x\) are said to _shadow_ \(\widetilde{x}_{\text{past}}\); they provide insights on the future. Predictions are obtained through Monte-Carlo on such shadowing paths.
This method can effectively be seen as a kernel method, with a causal path embedding to reduce the dimensionality of recent past history. Unlike other recent kernel methods, such as signature kernels [16, 17] that rely on a low-parametric model for \(p(x|x_{\text{past}})\), Path Shadowing Monte-Carlo relies solely on a model of \(p(x)\). It thus circumvents the exact conditioning of a generative model to a given past history \(x_{\text{past}}\). Its performance depends directly on the accuracy of this generative model and its ability to produce a variety of paths with correct statistical dependencies. Section IV-C shows that when performed with our maximum entropy Scattering Spectra model of financial prices, PS-MC yields state-of-the-art volatility prediction.
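The core of PS-MC can be sketched as follows. A Gaussian kernel on the Euclidean distance between recent histories stands in for the paper's causal path embedding and matching criterion; the function names, the bandwidth \(h\) and the brute-force scan are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def path_shadowing_mc(gen_returns, past, horizon, h, quantity):
    """Average `quantity` over futures of generated paths whose trailing
    window 'shadows' the observed past.

    gen_returns : (n_paths, T) generated daily log-returns
    past        : (w,) observed recent log-returns
    horizon     : number of future days fed to `quantity`
    h           : kernel bandwidth (smaller = stricter matching)
    quantity    : maps a (horizon,) future segment to a scalar
    """
    w = past.shape[0]
    num = den = 0.0
    for x in gen_returns:
        for t in range(w, x.shape[0] - horizon):
            d2 = np.mean((x[t - w:t] - past) ** 2)  # distance between histories
            wgt = np.exp(-d2 / (2.0 * h**2))        # soft 'shadowing' weight
            num += wgt * quantity(x[t:t + horizon])
            den += wgt
    return num / den
```

With a very small bandwidth only near-exactly matching histories contribute, and the estimator reduces to a plain average over the shadowing paths' futures.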
Section V uses Path Shadowing Monte-Carlo for obtaining _conditional_ option smiles (i.e. option prices at a given date) through Hedged Monte-Carlo with shadowing paths. By construction, such smiles depend only on the log-price process distribution \(p(x)\) and provide a counterpart to smiles obtained from option market data. A 'trading game' then allows us to show that our option smiles correctly anticipate non-trivial future price movements, and compare favourably with state-of-the-art models such as the Path-Dependent Volatility model (PDV) of ref. [9]. Codes for both our generative model and Path Shadowing Monte-Carlo are available at [https://github.com/RudyMorel/shadowing](https://github.com/RudyMorel/shadowing).
## II A Multi-Scale Statistical Model for Financial Prices
Statistical models of financial prices aim at reproducing statistics of the price process only. Price time series exhibit numerous non-Gaussian features, which are difficult to capture within standard low-parametric models, whose number of parameters has been incrementally increased in the literature over the past decades, see e.g. [5, 6, 7, 9, 18]. An alternative route is to define a set of characteristic statistics of (log-)prices and impose that they should be accurately reproduced by the model. We denote as \(\Phi(x)\) such statistics, for example the empirical mean and variance of log-returns \(\Phi(x)=(\left\langle\delta x(t)\right\rangle_{t},\left\langle|\delta x(t)|^{ 2}\right\rangle_{t})\) where, throughout this paper, \(\left\langle\dots\right\rangle_{t}\) denotes empirical averages over time \(t\). Section II-A presents maximum entropy models that allow defining models from a given vector of statistics \(\Phi(x)\). In the simple case of mean and variance, the maximum entropy model coincides with the Gaussian random walk.
The set \(\Phi(x)\) must be chosen carefully and is governed by a bias-variance tradeoff. It should contain enough relevant statistics of prices for the model to be realistic and accurate - reducing the model bias. However, such statistics must be well estimated on the single available historical realization of \(x\) - reducing the model variance. The construction of a set \(\Phi\) that meets these requirements, called the _Scattering Spectra_ (SS), was proposed in [1] in the general case of multi-scale processes, which fortunately includes financial data [6, 19, 20].
We show in section II-B that such statistics correspond to low-moment, multi-scale analogues of the classical skewness and kurtosis of log-returns. We also show that even the most recent low-dimensional parametric models fail to accurately account for these statistics. Such discrepancies turn out to be highly relevant when one wants to predict future realized volatility and option smiles, and highlight the limitations of traditional models, which our approach allows one to overcome.
In [1], we have shown that a Scattering Spectra model properly captures the main properties of financial log-returns, in particular of the S&P500 (the US major stock index). In the following, we show that it also quantitatively reproduces the average behavior of option smiles of different maturities, in particular the maturity-dependent skewness that reflects volatility roughness [10] and the so-called skew-stickiness ratio [21, 22].
### _Maximum Entropy Models_
We denote as \(\widetilde{x}\in\mathbb{R}^{N}\) the observed historical realization of log-prices over \(N\) days. Given a vector of \(d\) statistics \(\Phi(\widetilde{x})\in\mathbb{R}^{d}\) estimated on \(\widetilde{x}\), a maximum entropy model \(p_{\theta}\) with moment constraint \(\mathbb{E}_{p_{\theta}}\{\Phi(x)\}=\Phi(\widetilde{x})\), if it exists, has an exponential probability distribution [23]
\[p_{\theta}(x)=Z_{\theta}^{-1}e^{-\langle\theta,\Phi(x)\rangle}, \tag{1}\]
for certain \(\theta\in\mathbb{R}^{d}\). Estimating the parameters \(\theta\) of model (1) is computationally expensive, in particular when the number of statistics \(d\) is large [24, 25]. To avoid this issue, we consider in this paper microcanonical maximum entropy models which approximate model (1). These models, together with a sampling algorithm, are described in Appendix A.
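As a toy analogue of such microcanonical sampling (the use of gradient descent from white noise is an assumption on our part here), the sketch below builds a sample whose empirical mean and variance match target values; with \(\Phi(x)=(\text{mean},\text{variance})\) the gradient is closed-form, whereas the actual model targets the Scattering Spectra statistics instead.

```python
import numpy as np

def microcanonical_sample(target_mean, target_var, n, n_steps=2000, lr=0.5, seed=0):
    """Gradient descent from white noise until the empirical mean and variance
    of x match the targets (a toy microcanonical model)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(n_steps):
        m, v = x.mean(), x.var()
        # gradient of (m - m*)^2 + (v - v*)^2 with respect to x
        grad = 2.0 * (m - target_mean) / n + 4.0 * (v - target_var) * (x - m) / n
        x = x - lr * grad
    return x
```

Because a uniform shift leaves the variance unchanged and a centred rescaling leaves the mean unchanged, the two constraints decouple and the descent converges geometrically.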
Maximum entropy models depend only on the vector of statistics \(\Phi(x)\). The model accuracy can be improved by enriching the set \(\Phi(x)\). However, we must take into account the problem of estimating \(\Phi(x)\) from the single realization of the process \(\widetilde{x}\). The SS model imposes \(\mathbb{E}_{p_{\theta}}\{\Phi(x)\}=\Phi(\widetilde{x})\), thus for \(p_{\theta}\) to be a good approximation of the true distribution \(p\), one needs \(\Phi(\widetilde{x})\) to be close to the true \(\mathbb{E}_{p}\{\Phi(x)\}\). This amounts to having low-variance statistics \(\Phi\). A good choice of \(\Phi\) is presented in the next section.
### _The Scattering Spectra (SS)_
A standard way of characterizing the price process is through their trend, volatility, skewness and kurtosis. These are obtained from moments of order \(1\), \(2\), \(3\) and \(4\) on log-returns
\[\mathbb{E}\{\delta x(t)\}\,\ \mathbb{E}\{\delta x(t)^{2}\}\,\ \mathbb{E}\{\delta x(t)^{3}\}\,\ \mathbb{E}\{\delta x(t)^{4}\}\]
However, such moments do not characterize the time structure of log-returns, but rather their one-point distribution. One could consider the same moments on multi-scale increments
\[\delta_{\ell}x(t)=x(t)-x(t-\ell) \tag{2}\]
for different lags \(\ell\), but we still obtain a poor description of \(x\). For example, these moments do not pick up time-asymmetry, since changing \(\delta x(t)\) into \(\delta x(-t)\) leaves these moments unchanged. Another disadvantage of multi-scale increments (2) is that they exhibit as many scales \(1\leq\ell\leq N\) as the number of days \(N\), which seems redundant, especially in view of the known scale-invariant properties of \(x\).
The _Scattering Spectra_ introduced in [1] capture the main non-Gaussian properties of financial prices: fat-tailed log-return distributions, sign-asymmetry, time-asymmetry, volatility clustering and volatility roughness. They consist of \(d=\mathcal{O}(\log_{2}^{3}N)\) statistics only, which are low-order moments (order \(1\) and \(2\) only) and can thus be accurately estimated on the historical realization \(\widetilde{x}\) of size \(N\).
We present here the main steps for building such \(\Phi\) and we refer the reader to [1] for more details about the construction.
**Step 1. Wavelet increments.**
Log-price variations have interesting structure at all scales. However, it is not necessary to consider all scales \(\ell\) in (2) to characterize them efficiently. Standard increments \(\delta_{\ell}x(t)\) (2) are obtained by convolution of \(x\) with the filter \(g_{\ell}=\delta_{0}-\delta_{\ell}\). Wavelet increments replace \(g_{\ell}\) by wavelet filters \(\psi_{j}\) obtained by dilation of a regular mother wavelet \(\psi\)
\[W_{j}x(t)=x\star\psi_{j}(t)\ \ \text{where}\ \ \psi_{j}(t)=2^{-j}\psi(2^{-j}t). \tag{3}\]
The mother wavelet \(\psi\) has a zero average \(\int\psi(t)\mathrm{d}t=0\) and its Fourier transform \(\widehat{\psi}(\omega)=\int\psi(t)\,e^{-i\omega t}\,\mathrm{d}t\), which is real, is mostly concentrated at frequencies \(\omega\in[\pi,2\pi]\). All numerical calculations in this paper are performed with a complex Battle-Lemarie wavelet [26, 27]. Fig. 7 shows the real and imaginary parts of \(\psi\) as well as its Fourier transform. We refer the reader to Appendix B for more properties.
Analogous to (2), wavelet increments (3) can be seen as multi-scale increments at scales \(\ell=2^{j}\) with \(j=1,\ldots,J\). However, scales are now defined as bins of frequencies \([2^{-j}\pi,2^{-j+1}\pi]\) corresponding to the supports of wavelet filters \(\psi_{j}\). The largest scale \(2^{J}\) is chosen to be smaller than the size \(N\) of \(\widetilde{x}\). This yields at most \(\log_{2}N\) scales instead of \(N\) lags \(\ell\).
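The wavelet increments (3) are straightforward to compute numerically. The sketch below uses a complex Morlet-type mother wavelet as a stand-in for the Battle-Lemarié wavelet used in the paper; `support` and `omega0` are illustrative choices of ours, not values from the text.

```python
import numpy as np

def mother_wavelet(t, omega0=1.5 * np.pi):
    """Complex Morlet-type mother wavelet (illustrative stand-in for the
    paper's Battle-Lemarie wavelet): zero-mean, band-pass near [pi, 2*pi]."""
    psi = np.exp(-t ** 2 / 2.0) * np.exp(1j * omega0 * t)
    return psi - psi.mean()  # enforce the zero-average condition numerically

def wavelet_increments(x, J, support=8):
    """W_j x(t) = (x * psi_j)(t) with psi_j(t) = 2^{-j} psi(2^{-j} t), j=1..J."""
    N = len(x)
    W = np.zeros((J, N), dtype=complex)
    for j in range(1, J + 1):
        half = support * 2 ** j  # filter support grows with the scale 2^j
        t = np.arange(-half, half + 1, dtype=float)
        psi_j = 2.0 ** (-j) * mother_wavelet(2.0 ** (-j) * t)
        W[j - 1] = np.convolve(x, psi_j, mode="same")
    return W
```

Because the filter has zero average, a constant log-price path yields vanishing increments away from the boundaries, as required for a band-pass decomposition.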
Histograms of generalized increments \(W_{j}x\) can be constrained by order 1 and order 2 moments \(\mathbb{E}\{|W_{j}x(t)|\}\), \(\mathbb{E}\{|W_{j}x(t)|^{2}\}\) which are estimated through empirical averages. The quantity
\[\Phi_{1}(x)[j]=\frac{\left\langle|W_{j}x(t)|\right\rangle_{t}^{2}}{\left\langle |W_{j}x(t)|^{2}\right\rangle_{t}} \tag{4}\]
is a low-moment measure of kurtosis. Compared to its order 4 counterpart, it is less sensitive to large values. The more peaked at zero the distribution, the smaller the value of \(\Phi_{1}(x)\) and the higher the kurtosis [28]. The order \(2\) moment is
\[\Phi_{2}(x)[j]=\left\langle|W_{j}x(t)|^{2}\right\rangle_{t} \tag{5}\]
and quantifies the average volatility at scale \(2^{j}\) on the period.
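Given a \((J, N)\) array holding the wavelet increments \(W_{j}x(t)\) of (3), the low-moment kurtosis (4) and the average volatility (5) reduce to simple empirical averages. A minimal sketch (the function name is ours):

```python
import numpy as np

def phi1_phi2(W):
    """Low-moment kurtosis (4) and average volatility (5) per scale.

    W: complex array of shape (J, N) holding wavelet increments W_j x(t)."""
    absW = np.abs(W)
    m1 = absW.mean(axis=1)         # <|W_j x|>_t
    m2 = (absW ** 2).mean(axis=1)  # <|W_j x|^2>_t
    phi1 = m1 ** 2 / m2            # in (0, 1]; smaller => higher kurtosis
    phi2 = m2
    return phi1, phi2
```

By Cauchy-Schwarz, \(\Phi_{1}\leq 1\), with smaller values indicating a distribution more peaked at zero with heavier tails.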
**Step 2. Time-scale dependencies.**
Multi-scale increments \(W_{j}x(t)\) are indexed by time \(t\) and scale \(2^{j}\). Such map exhibits dependencies across time and scales that are crucial to characterize the distribution of financial prices. For example, volatility clustering is attested by the fact that \(W_{j}x(t)\) has long-range time correlations. Beyond this well-known stylized fact, the authors of [1] have shown that scale dependencies are crucial to fully characterize the non-Gaussian nature of time series. Natural descriptors for such scale dependencies are order \(2\), \(3\) and \(4\) moments
\[\mathbb{E}\{Wx\,Wx^{*}\},\ \mathbb{E}\{Wx\,|Wx|^{2}\},\ \mathbb{E}\{|Wx|^{2}\,|Wx|^{2}\}\]
where the products are taken across times \(t,t^{\prime}\) and scales \(j,j^{\prime}\). In practice, estimating order \(3\) and order \(4\) moments is very difficult because of the variance induced by large events. In order to circumvent this problem, we replace \(|Wx|^{2}\) by \(|Wx|\) and define the following non-linear correlations of wavelet increments
\[\mathbb{E}\{Wx\,Wx^{*}\},\ \mathbb{E}\{Wx\,|Wx|\},\ \mathbb{E}\{|Wx|\,|Wx|\} \tag{6}\]
Owing to the compression properties of wavelets, the first matrix \(\mathbb{E}\{Wx\,Wx^{*}\}\) is quasi-diagonal and its diagonal coefficients are already estimated by (5), see [1].
**Step 3. Low-moment multi-scale skewness and kurtosis.**
Just like for standard skewness and kurtosis that are normalized moments, we normalize the second and third matrices \(\mathbb{E}\{Wx\,|Wx|\}\) and \(\mathbb{E}\{|Wx|\,|Wx|\}\) in (6) by \(\mathbb{E}\{|Wx|^{2}\}\). One can show that the only non-negligible coefficients in the third matrix are obtained for \(t=t^{\prime}\) and \(j\geq j^{\prime}\), and are estimated through
\[\Phi_{3}(x)[j,j^{\prime}]=\frac{\left\langle W_{j}x(t)\,|W_{j^{\prime}}x(t)|\right\rangle_{t}}{\left\langle|W_{j}x(t)|^{2}\right\rangle_{t}^{1/2}\,\left\langle|W_{j^{\prime}}x(t)|^{2}\right\rangle_{t}^{1/2}}. \tag{7}\]
These are analogous to the standard low-moment skewness \(\mathbb{E}\{Y|Y|\}\) of a normalized random variable \(Y\) with \(\mathbb{E}\{Y^{2}\}=1\). Other than sign-asymmetry, these complex coefficients also measure time-asymmetry through their phase. Indeed, if log-returns are time-reversible \(\delta x(-t)\stackrel{{ d}}{{=}}\delta x(t)\) then \(\operatorname{Im}\Phi_{3}(x)=0\). One typical example is the leverage asymmetric correlation.
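The skewness coefficients (7) can be sketched as below, assuming normalization by the square roots of the second moments at each scale; the indexing convention and container type are our own choices:

```python
import numpy as np

def phi3(W):
    """Multi-scale low-moment skewness (7): normalized <W_j x |W_j' x|>_t
    for j >= j'. W: (J, N) wavelet increments. Returns a dict keyed by
    (j, j') with 1-indexed scales."""
    J, _ = W.shape
    m2 = (np.abs(W) ** 2).mean(axis=1)  # <|W_j x|^2>_t per scale
    out = {}
    for j in range(J):
        for jp in range(j + 1):  # only j >= j' (coarser against finer)
            num = (W[j] * np.abs(W[jp])).mean()
            out[(j + 1, jp + 1)] = num / np.sqrt(m2[j] * m2[jp])
    return out
```

As in the text, flipping the sign of the increments flips the sign of these coefficients, and a non-zero imaginary part signals time-asymmetry.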
The fourth matrix \(\mathbb{E}\{|Wx|\,|Wx|\}\) in (6) contains kurtosis information. If \(x\) is Gaussian, then for different scales
Fig. 2: Standard statistics of log-returns in the Scattering Spectra (SS) model (orange) compared to S&P observed data (blue). Top graphs: time series of the S&P and generated by the model. Bottom graphs: (a) Histogram of daily log-returns \(\delta x\). (b) Structure functions \(\left\langle|\delta_{\ell}x(t)|^{q}\right\rangle_{t}\). (c) Leverage correlation \(\left\langle\delta x(t-\tau)|\delta x(t)|^{2}\right\rangle_{t}\) on normalized increments.
\(j\neq j^{\prime}\) the Gaussian processes \(W_{j}x\) and \(W_{j^{\prime}}x\) are decorrelated, thus independent. It follows that \(\mathbb{E}\{|W_{j}x||W_{j^{\prime}}x|\}=\mathbb{E}\{|W_{j}x|\}\mathbb{E}\{|W_{j^ {\prime}}x|\}\) and these coefficients boil down to the low-moment kurtosis (4). For the log-price process \(x\), these coefficients capture long-range non-Gaussian correlation between volatility at different scales \(j,j^{\prime}\) and different times \(t,t^{\prime}\).
However, the matrix \(\mathbb{E}\{|Wx|\,|Wx|\}\) contains too many coefficients to be accurately estimated on a single realization \(\widetilde{x}\). We again rely on the compression properties of wavelets to approximate this matrix by cascading a second wavelet operator \(W\), which yields a quasi-diagonal matrix \(\mathbb{E}\{W|Wx|\,(W|Wx|)^{*}\}\), where we define generalized increments of volatility as
\[W_{j_{2}}|W_{j_{1}}x|(t)=|x\star\psi_{j_{1}}|\star\psi_{j_{2}}(t).\]
The non-negligible diagonal coefficients are estimated through an empirical average which yields for \(j_{1}\leq j_{1}^{\prime}<j_{2}\)
\[\Phi_{4}(x)[j_{1},j_{1}^{\prime},j_{2}]=\frac{\big{\langle}W_{j_{2}}|W_{j_{1}} x|(t)\,W_{j_{2}}|W_{j_{1}^{\prime}}x|(t)\big{\rangle}_{t}}{\langle|W_{j_{1}}x(t)| ^{2}\rangle_{t}^{\frac{1}{2}}}. \tag{8}\]
These are analogous to the classical low-moment kurtosis. These complex coefficients also capture time-asymmetry through their complex phase. If the log-return process \(\delta x\) is time-reversible then \(\mathrm{Im}\,\Phi_{4}(x)=0\).
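A sketch of the cascade behind (8): second-level filters are applied to the volatility envelopes \(|W_{j_1}x|\), and the resulting increments are correlated across pairs \(j_1\leq j_1'\). Note that the normalization below uses the geometric mean of the two second moments, a symmetric variant of the denominator in (8); the filters passed in are placeholders for dilated wavelets, and all names are ours.

```python
import numpy as np

def phi4(W, psi2):
    """Multi-scale low-moment kurtosis in the spirit of (8).

    W    : (J, N) first-level wavelet increments, scales j = 1..J.
    psi2 : dict {j2: filter} of second-level wavelet filters.
    Returns {(j1, j1p, j2): coefficient} for j1 <= j1p < j2."""
    J, N = W.shape
    absW = np.abs(W)
    m2 = (absW ** 2).mean(axis=1)
    out = {}
    for j2, filt in psi2.items():
        # second-level increments of volatility: W_{j2} |W_{j1} x|
        V = [np.convolve(absW[j - 1], filt, mode="same") for j in range(1, J + 1)]
        for j1 in range(1, min(J, j2 - 1) + 1):
            for j1p in range(j1, min(J, j2 - 1) + 1):
                num = (V[j1 - 1] * np.conj(V[j1p - 1])).mean()
                # symmetric normalization (variant of (8))
                out[(j1, j1p, j2)] = num / np.sqrt(m2[j1 - 1] * m2[j1p - 1])
    return out
```

The complex phases of these coefficients carry the time-asymmetry information discussed in the text.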
We therefore define our Scattering Spectra \(\Phi\) as the collection of (i) estimated average volatility (5), (ii) multi-scale skewness (7) and (iii) multi-scale kurtosis (4,8)
\[\Phi(x)=\big{(}\Phi_{1}(x),\Phi_{2}(x),\Phi_{3}(x),\Phi_{4}(x)\big{)}. \tag{9}\]
In total, \(\Phi\) consists of \(\mathcal{O}(\log_{2}^{3}(N))\) order \(1\) and order \(2\) statistics for a trajectory of size \(N\) and can be estimated with low-variance.
Note that \(\Phi\) does not rely explicitly on the one-point distribution of increments \(\delta_{\ell}x(t)\). Numerical experiments have shown that slight discrepancies may appear, in particular in order \(0\) moments \(\mathbb{P}(\delta_{\ell}x(t)>0)\) which explicitly appear in low-moment smile expansions [28]. We thus complement \(\Phi_{3}(x)\) with the moments
\[\mathbb{P}(\delta_{\ell}x(t)>0)\]
for \(\ell=2^{j},j=1,\ldots,J\), that are estimated through empirical averages \(\langle\text{sigmoid}(\delta_{\ell}x(t))\rangle_{t}\) where \(\text{sigmoid}(x)=(1+e^{-x})^{-1}\). This adds very few coefficients to our scattering spectra \(\Phi(x)\).
The Scattering Spectra (9) thus provide an enriched set of statistics that can be used to quantify model error and interpret any discrepancy. As an example, we revisit through this lens the low-parametric Path-Dependent Volatility (PDV) model introduced by Guyon & Lekeufack [9]. Although such a model is more parsimonious in terms of number of parameters, several stylized facts are in fact not accurately reproduced by it; see a more precise discussion in Appendix D, Fig. 11.
Based on the Scattering Spectra \(\Phi\), we have at our disposal a statistical model of financial prices that can be used to generate faithful synthetic time series (see section II-A). For the S&P time series \(\widetilde{x}\) of size \(N=5827\) days, the Scattering Spectra model (SS model) contains \(248\approx N/20\) real coefficients, which is the dimension of \(\Phi(x)\). Log-return trajectories \(\delta x\) generated from the SS model are shown in Fig. 2. Validation of the SS model can be achieved by measuring observables not included in our set \(\Phi(x)\) and checking whether or not they are correctly reproduced. Standard statistics such as fat tails, volatility clustering, leverage effect and structure functions, were indeed shown to be captured by the model [1]. These are reproduced in Fig. 2. While \(\Phi(x)\) is composed of order \(1\) and order \(2\) moments only, the SS model accurately accounts for up to order \(5\) moments, which is quite remarkable. Another way to describe the multi-scale statistical properties of price time series is through maturity dependent option smiles, which we discuss in the next section.
## III The Average Smile as an Alternative Statistical Characterization
In this section we validate the SS model by considering historical option pricing as an alternative, intuitive way to characterize the multi-scale, non-Gaussian statistics of price time series. The _average smile_ is the unconditional option smile obtained by pricing hedged options using all historical snippets of prices of length equal to the maturity of the option [28, 29]. Even if real option smiles must be conditioned on a specific past price path [9] and are therefore almost never equal to the _average smile_, its shape reveals some interesting, non-trivial properties of prices time series, such as volatility 'roughness' (see below).
Option pricing is performed through the Hedged Monte-Carlo method [29], which converts historical probabilities (either real or synthetic) into 'risk-neutral' ones. Options are hedged daily, with zero interest rate, on the 6000 price snippets of length 150 days available from 2000 to 2023, all rescaled such that the initial price is 100. The average implied volatilities \(\sigma(T,K)\) are obtained from option prices \(\mathcal{C}(T,K)\). Fig. 3 compares, for different maturities \(T\), the _average smiles_ using observed S&P data and those generated with the SS model. We see that the model indeed reproduces the overall shape of the smile very well. Intuitively, the level of the average smile, its asymmetry, its concavity and its term structure are captured by \(\Phi_{2}\) (5), \(\Phi_{3}\) (7), and \(\Phi_{1}\) and \(\Phi_{4}\) (4,8)\({}^{2}\). We have also compared the S&P average smiles with the recent Path-Dependent Volatility model of [9], calibrated so as to reproduce the same SS as well as possible - see Appendix D for more details. As a general comment, the PDV model underestimates the kurtosis of the process and correspondingly fails to capture accurately the right wing of the average smile, see Fig. 3c.
Footnote 2: Appendix C shows in more details the parameterization of the model by studying the sensitivity of the smile to the Scattering Spectra statistics \(\Phi(x)\).
We now turn to a more refined analysis of the slope and curvature of these average smiles. We denote as \(\sigma_{\text{ATM}}(T)=\sigma(T,100)\) the at-the-money volatility and
\[\mathcal{M}:=\frac{\ln(\frac{K}{100})}{\sigma_{\mathrm{ATM}}(T)\sqrt{T}}\]
the _rescaled_ log-moneyness. The slope \(\mathcal{S}_{T}\) and curvature \(\kappa_{T}\) of a smile at maturity \(T\) are defined by the order \(2\) expansion around the moneyness \(\mathcal{M}=0\)
\[\sigma(\mathcal{M},T):=\sigma_{\mathrm{ATM}}(T)\bigg{(}1+\mathcal{S}_{T} \mathcal{M}+\kappa_{T}\mathcal{M}^{2}+o(\mathcal{M}^{2})\bigg{)}\]
In the literature, it is customary to define the ATM skew \(\mathrm{Skew}_{T}\) as the slope of the smile as a function of _unscaled_ log-moneyness, i.e. \(\mathrm{Skew}_{T}:=\mathcal{S}_{T}/\sqrt{T}\). For most stochastic volatility models, such skew is found to be regular when \(T\to 0\), whereas rough volatility models predict a singular behavior \(\mathrm{Skew}_{T}\propto T^{H-1/2}\) where \(H\) is the Hurst exponent of volatility, argued to be small, \(H\approx 0.1\)[7, 30, 31].
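Extracting \(\mathcal{S}_{T}\), \(\kappa_{T}\) and \(\mathrm{Skew}_{T}\) from a sampled smile amounts to a quadratic fit around \(\mathcal{M}=0\); a sketch, where a least-squares fit over the whole moneyness grid stands in for a local expansion:

```python
import numpy as np

def smile_slope_curvature(M, sigma, sigma_atm, T):
    """Fit sigma(M, T) ~ sigma_ATM (1 + S_T M + kappa_T M^2) near M = 0
    and return (S_T, kappa_T, Skew_T) with Skew_T = S_T / sqrt(T).

    M     : rescaled log-moneyness grid.
    sigma : implied volatilities on that grid."""
    # quadratic least-squares fit of sigma/sigma_ATM - 1 as a function of M;
    # np.polyfit returns coefficients from highest to lowest degree
    c2, c1, c0 = np.polyfit(M, sigma / sigma_atm - 1.0, deg=2)
    S_T, kappa_T = c1, c2
    return S_T, kappa_T, S_T / np.sqrt(T)
```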
Fig. 3(d) shows the absolute value of the ATM skew of the average smile for different maturities. In [18, 32] it was shown using option market data that the ATM skew exhibits two power-law regimes pertaining to short and long maturities, with a cutoff between the two regimes around 20 days. Interestingly, we also observe this behavior on the average smile of the S&P, which only depends on the price process, with no reference to actual option markets. Both the PDV model and the SS model track rather well such a behavior. The scaling of log-volatility increments characteristic of rough volatility models [7] or multifractal models [6] is in fact encoded in the model through the coefficients \(\mathbb{E}\{|W_{j_{2}}|W_{j_{1}}x|(t)|^{2}\}\) included in \(\Phi_{4}(x)\) (8). These consider instantaneous volatility \(|W_{j_{1}}x|\) at scale \(2^{j_{1}}\) and its increments at scales \(2^{j_{2}}\).
The SS model furthermore captures two more subtle stylized facts of option smiles:
* The ATM curvature \(\kappa_{T}\), related to a low moment kurtosis [28], is well captured and behaves as a slow power-law of \(T\) (first reported in [11]), both for the S&P and within the SS model. The PDV model, on the other hand, strongly underestimates \(\kappa_{T}\) (see Fig. 3(e)).
* The instantaneous change of ATM volatility \(\sigma_{\mathrm{ATM}}(T)\) induced by a change in underlying price can be linearly regressed on \(\delta x\). This defines the skew-stickiness ratio \(R_{T}\) through the following regression [21, 22] \[\delta\sigma_{\mathrm{ATM}}(T)=-R_{T}\mathrm{Skew}_{T}\times\delta x+\epsilon\]
As shown in [22], \(R_{T}\) has a non-trivial dependence on maturity. Fig. 3(f) shows that both the PDV model and the SS model again reproduce quite well such a dependence.
## IV Path Shadowing Monte-Carlo & Volatility Predictions
The _average smile_ exercise of the previous section is interesting insofar as it allows one to test the ability of various models to capture the distribution of price changes over different maturities. However, it fails to inform us about the power of the model to actually predict, at a given date, the distribution of price changes forward in time. Of course, this is what Finance is all about, and we now introduce a framework to do precisely that.
We first assume that the real world is at least approximately stationary, in the sense that it can be approximated by a statistical model with fixed, time-independent parameters. Of course, this can only be true if the model is rich enough to generate intermittent time series that superficially appear non-stationary - such as the ones shown in Fig. 2, with alternating periods of high and low volatility that are actually described by the _same_ model.
If this is the case, then given the past history \(\widetilde{x}_{\text{past}}\) at current time \(t\), a model-free method for predicting the unknown future \(\widetilde{x}_{\text{future}}\) is to look for occurrences similar to \(\widetilde{x}_{\text{past}}\) in the historical realization of log-prices. If such occurrences can be found, what happened thereafter provides some information about the unknown future \(\widetilde{x}_{\text{future}}\) at the current time \(t\).
Finding exact occurrences of course happens with vanishing probability. We therefore introduce an embedding \(h(\widetilde{x}_{\text{past}})\) that reduces the dimensionality of past trajectories and define _shadowing paths_ as paths \(x\) whose past history \(h(x_{\text{past}})\) is close to \(h(\widetilde{x}_{\text{past}})\) in a certain sense. Furthermore, instead of scanning the historical past, we propose in this section to scan a long dataset of paths generated using the model presented in section II.
These shadowing paths are then used as inputs of our proposed _Path Shadowing Monte-Carlo_ (PS-MC) method, which allows us to obtain state-of-the-art predictions for future realized volatility. The method will be extended in the next section V to option pricing.
### _The Path Shadowing Monte-Carlo Method_
First, at a given time \(t\), we separate a log-price path \(x\in\mathbb{R}^{N}\) into its past \(x_{\text{past}}=(x(u),u\leq t)\) and future \(x_{\text{future}}=(x(u),u>t)\)
\[x=(x_{\text{past}},x_{\text{future}})\]
Fig. 3: Average option smiles estimated using S&P price data (a), in our Scattering Spectra (SS) model (b) and in the Path-Dependent Volatility (PDV) model (c). The SS model qualitatively captures the two regimes of the ATM skew as a function of maturity (d), with a cross-over around 20 days [18, 32], as well as the very slow decaying power-law of the ATM kurtosis [11] (e) and the behavior of the skew-stickiness ratio (f). The PDV model, on the other hand, correctly captures skewness effects (d) but fails to capture the amplitude and term structure of the ATM kurtosis (e).
Let \(q(x_{\text{future}})\) be a quantity we want to predict, for example the average realized variance over the next \(T\) days \(q(x_{\text{future}}):=\sum_{u=t+1}^{t+T}|\delta x_{u}|^{2}/T\). In the following, we write \(q(x)=q(x_{\text{future}})\) for simplicity. An optimal prediction of \(q(x)\), in the mean-square sense, as a function of the observed past \(\widetilde{x}_{\text{past}}\) is given by the conditional expectation
\[\mathbb{E}\{q(x)\ |\ x_{\text{past}}=\widetilde{x}_{\text{past}}\} \tag{10}\]
with \(\mathbb{E}\) the expectation under the true distribution \(p(x)\) of log-prices. The goal is to estimate such conditional expectation.
Let us for a moment omit the conditioning on the past. The standard Monte-Carlo method estimates expectations using a finite number of realizations \(x^{1},\ldots,x^{n}\) drawn from \(p(x)\) as
\[\frac{1}{n}\sum_{k=1}^{n}q(x^{k})\]
which converges to \(\mathbb{E}\{q(x)\}\) as \(n\to+\infty\) under independence of \(x^{1},\ldots,x^{n}\) and integrability of \(q(x)\).
In theory, such a method could be applied to estimate (10); however, it would require finding paths \(x^{k}\) such that \(x^{k}_{\text{past}}=\widetilde{x}_{\text{past}}\), which is all but impossible for data of finite size.
To tackle this problem, we relax strict conditioning on \(\widetilde{x}_{\text{past}}\) and consider paths \(x\) whose past \(x_{\text{past}}\) is _close_ to \(\widetilde{x}_{\text{past}}\) in a certain sense. In order to account for possible long-range dependencies, we would like to consider a long past \(\widetilde{x}_{\text{past}}\). However, finding paths \(x_{\text{past}}\) at a given distance from \(\widetilde{x}_{\text{past}}\) becomes exponentially difficult as the size of the path grows - this is the curse of dimensionality. In order to control the dimension, we consider a path embedding \(h(x_{\text{past}})\in\mathbb{R}^{M}\). Given a threshold \(\eta>0\) we define the set of \(\eta\)_-shadowing paths_ as
\[H_{\eta}(\widetilde{x}_{\text{past}})=\{x\in\mathbb{R}^{N}\ |\ \|h(x_{\text{past}})-h( \widetilde{x}_{\text{past}})\|<\eta\} \tag{11}\]
For example, \(h(x)=\delta x\) considers past log-returns, and hence log-price paths up to an additive constant. Our choice of \(h\) is detailed in the next section. The term _shadowing_ is freely inspired by the shadowing principle in chaotic dynamical systems [12, 13, 14, 15]. The idea is that for a certain small \(\eta\), paths in \(H_{\eta}(\widetilde{x}_{\text{past}})\) can be considered as true realizations of the process that can be used to faithfully compute observables.
_Path Shadowing Monte-Carlo_, illustrated in Fig. 1, is a Monte-Carlo method on shadowing paths. It is a predictive method since shadowing paths span over the future. Unlike a standard Monte-Carlo method, not all paths should have the same weight since \(\|h(x^{k}_{\text{past}})-h(\widetilde{x}_{\text{past}})\|\) is not uniform in \(k\). This is to say, certain paths _shadow_ more accurately \(x_{\text{past}}\) than others and should be considered as more _likely_ to be extensions of the observed \(\widetilde{x}_{\text{past}}\). Path Shadowing Monte-Carlo estimates (10) by averaging \(q(x)\) on paths \(x^{1},\ldots,x^{n}\) with weights \(w_{1},\ldots,w_{n}\). This yields the following estimator
\[\frac{1}{n}\sum_{k=1}^{n}w_{k}q(x^{k}), \tag{12}\]
called the Nadaraya-Watson estimator [33, 34]. In the following, we choose Gaussian weights, to wit
\[w_{k}=c\,\exp\left[-\frac{\|h(x^{k}_{\text{past}})-h(\widetilde{x}_{\text{past }})\|^{2}}{2\eta^{2}}\right]\]
with \(c\) such that \(\frac{1}{n}\sum_{k=1}^{n}w_{k}=1\). The set of shadowing paths \(H_{\eta}(\widetilde{x}_{\text{past}})\) (11) can then be seen as the set of all paths that are at most one standard deviation away from \(\widetilde{x}_{\text{past}}\) for the Gaussian kernel.
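Putting (12) and the Gaussian weights together, the PS-MC estimator can be sketched as follows (array shapes and names are our own):

```python
import numpy as np

def ps_mc_estimate(h_tilde, h_paths, q_values, eta):
    """Path Shadowing Monte-Carlo estimator (12) with Gaussian weights.

    h_tilde : (M,)   embedding of the observed past h(x_past).
    h_paths : (n, M) embeddings h(x^k_past) of the candidate paths.
    q_values: (n,)   observable q(x^k) evaluated on each path's future.
    eta     : shadowing bandwidth."""
    d2 = np.sum((h_paths - h_tilde) ** 2, axis=1)  # squared embedding distances
    w = np.exp(-d2 / (2.0 * eta ** 2))
    w *= len(w) / w.sum()  # normalize so that (1/n) sum_k w_k = 1
    return float(np.mean(w * q_values))
```

In the large-\(\eta\) limit all weights become equal and the estimator reduces to a plain Monte-Carlo average, illustrating the bias-variance trade-off discussed below.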
The following theorem proves the convergence of the estimator (12) under standard hypotheses.
**Theorem 1** (Path Shadowing Monte-Carlo Method): _If \(q\) is continuous with \(\mathbb{E}\{|q(x)|\}<+\infty\) and the distribution \(p\) of \(x\) is continuous with \(p(x)>0\) for all \(x\in\mathbb{R}^{N}\), then given \(x^{1},\ldots,x^{n},\ldots\) independent realizations of \(x\), the Path Shadowing Monte-Carlo estimator with \(h(x)=x\) converges almost surely_
\[\frac{1}{n}\sum_{k=1}^{n}w_{k}q(x^{k})\longrightarrow\mathbb{E}\{q(x)\ |\ x_{\text{past}}= \widetilde{x}_{\text{past}}\}\]
_in a suitable joint limit \(n\to+\infty\) and \(\eta\to 0\)._
The proof is in Appendix E. The continuity assumptions as well as the assumption that \(p>0\) are technical and can be softened; note that a \(p_{\theta}\) of the form (1) satisfies them. We refer the reader to [35] for convergence theorems under more generic hypotheses.
Path Shadowing Monte-Carlo can be seen as a kernel method on log-price paths using a Gaussian kernel. It is a fully non-local method in the sense that the collected paths \(x^{k}\) may be far away in the past from \(\widetilde{x}_{\text{past}}\), possibly in a generated dataset of paths, contrary to non-local means [36] that only consider neighborhoods of a patch. Such non-locality ensures in practice the independence condition in theorem 1.
### _Generating Shadowing Paths_
Collecting enough shadowing paths from the historical realization of the S&P is unrealistic. Indeed, financial price paths are noisy: two different observed paths are likely to be distant from one another, signifying a high-entropy underlying process. Thus, on limited data, the set \(H_{\eta}(\widetilde{x}_{\text{past}})\) will contain almost no paths for reasonable values of \(\eta\), which must be small for the method to converge.
We thus propose to scan for paths \(x^{1},\ldots,x^{n}\) in a long generated dataset of log-prices, allowing us to take \(n\gg 1\) and a selective shadowing threshold \(\eta\ll 1\). This however immediately introduces a modelling error, due to the fact that we are estimating (10) where \(\mathbb{E}\) is now the expectation with respect to the model distribution and not the true distribution \(p(x)\).
A good model should generate trajectories that capture the long-range dependencies in order for the shadowing paths to have predictive power on the future of \(\widetilde{x}_{\text{past}}\). It should also generate a variety of realistic scenarios in order to find enough paths in \(H_{\eta}(\widetilde{x}_{\text{past}})\) for small \(\eta\). As discussed in section II, the SS model precisely meets these requirements: it promotes diversity of generated paths through entropy maximization while reproducing the main statistical properties of financial prices. Furthermore, should our generative algorithm occasionally produce unrealistic paths, they would be given a very small weight \(w\) and would not contribute to the estimation of \(q(x)\). Path Shadowing Monte-Carlo is thus robust to outliers in the generated set of paths. Note also that this dataset can be generated once and then scanned again and again for several prediction dates.
A crucial point for PS-MC to give good results is to understand how the path embedding \(h\) affects the notion of path proximity. Such embedding should be chosen adequately. It should pick relevant features of \(x_{\text{past}}\) to predict the quantity of interest \(q(x)\), while remaining low-dimensional such that enough paths can be found in \(H_{\eta}(\widetilde{x}_{\text{past}})\) for small \(\eta\). The naive embedding \(h(x)=\big{(}\delta x(t),-M+1\leq t\leq 0\big{)}\in\mathbb{R}^{M}\) limits the past horizon \(M\), since \(M\) is then also the dimensionality of \(h\).
We propose an embedding \(h\) that again leverages the scale-invariance of \(x\) in the same way our Scattering Spectra framework does, and incorporates the influence of distant past while remaining low-dimensional. It consists of multi-scale increments in the past with a power-law decaying weight
\[h_{\alpha,\beta}(x)=\Big{(}\frac{x(t)-x(t-\ell)}{\ell^{\beta}},\ \ell=\lfloor\alpha^{m}\rfloor,\ m=1,2,\dots\Big{)} \tag{13}\]
for a certain \(\alpha>1\) so that the past is progressively coarse-grained, and \(\beta\geq 0\) so that the far-away past is progressively discounted. For a given \(\widetilde{x}_{\text{past}}\) we choose \(\eta\) to be equal to \(\widehat{\eta}\|h(\widetilde{x}_{\text{past}})\|\), which amounts to normalizing the distance \(\|h(x_{\text{past}})-h(\widetilde{x}_{\text{past}})\|\) by \(\|h(\widetilde{x}_{\text{past}})\|\). Such an \(h\) is a discretization of a continuous embedding that satisfies scaling and dilation equivariance, see Appendix F. In practice we truncate the progression \(\lfloor\alpha^{1}\rfloor,\lfloor\alpha^{2}\rfloor,\dots\) in order for the span of \(h\) to be bounded by \(126\) trading days in the past (corresponding to half a year).
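A sketch of the embedding (13) with the truncation just described. With \(\alpha=1.15\) and a 126-day horizon this yields a 34-dimensional embedding, consistent with the dimension reported in the text; note that repeated small lags are kept, which ascribes larger weight to recent increments.

```python
import numpy as np

def path_embedding(x_past, alpha=1.15, beta=0.9, max_lag=126):
    """Embedding (13): power-law weighted multi-scale increments of the past.

    x_past: 1-D array of log-prices ending at the current date t
            (x_past[-1] = x(t)); must span at least max_lag + 1 days."""
    lags, m = [], 1
    while True:
        l = int(np.floor(alpha ** m))  # lags floor(alpha^m), duplicates kept
        if l > max_lag:                # truncate the span at max_lag days
            break
        lags.append(l)
        m += 1
    return np.array([(x_past[-1] - x_past[-1 - l]) / l ** beta for l in lags])
```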
The main parameters are thus \(\alpha,\beta\) and \(\widehat{\eta}\). Parameter \(\alpha\) determines the dimensionality of the path embedding \(h_{\alpha,\beta}\): small \(\alpha\) yields a high-dimensional embedding. Parameter \(\beta\) (not to be confused with the parameters \(\beta_{0,1,2}\) of the PDV model) governs the relative importance of the distant past compared to the recent past in the selection of shadowing paths. Large \(\beta>0\) means that the recent past bears more weight. Finally, there is a bias-variance trade-off in the choice of \(\widehat{\eta}\). When \(\widehat{\eta}\ll 1\) only the closest path will be used for averaging, leading to a large-variance estimator (12). When \(\widehat{\eta}\gg 1\) all paths are averaged uniformly, including paths whose past \(x_{\text{past}}\) has nothing to do with \(\widetilde{x}_{\text{past}}\), thus deteriorating the bias of our PS-MC estimator.
### _Volatility Prediction_
As a meaningful application of Path Shadowing Monte-Carlo, we consider in this section the prediction of the future daily realized variance over \(T\) days, for several values of \(T\):
\[q_{T}(x)=\frac{252}{T}\sum_{u=t+1}^{t+T}|\delta x_{u}|^{2}.\]
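In code, this annualized target reads (assuming `delta_x_future` holds the daily log-returns after date \(t\); the factor \(252\) annualizes over trading days):

```python
import numpy as np

def realized_variance(delta_x_future, T):
    """Annualized realized variance q_T(x) = (252/T) sum_{u=1}^{T} |dx_u|^2."""
    d = np.asarray(delta_x_future[:T], dtype=float)
    return 252.0 / T * float(np.sum(d ** 2))
```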
We consider all 2112 dates \(t\) from January 2015 to March 2023. For each of them we consider the realized variance over \(T=1,7,25,75,150\) days.
Our PS-MC method (12) uses paths generated from the model presented in section II. We compute the Scattering Spectra statistics (9) on the 3772 observed S&P log-prices from January 2000 to December 2014, such that all our predictions are _out-of-sample_. From such statistics we generate 32 768 trajectories of the same size 3772 (see Fig. 8 for examples), which represents \(n\approx\) 115 million paths \(x^{1},\dots,x^{n}\) of size 126+150 days. For a given \(\widetilde{x}_{\text{past}}\) we scan this dataset and select the 50 000 closest paths in the sense of the distance induced by the embedding (13), parameterized by \(\alpha,\beta\). While this scanning step is computationally demanding, it can be fully parallelized. We then perform a weighted average on those closest paths, parameterized by \(\widehat{\eta}\).
Parameters \(\alpha,\beta,\widehat{\eta}\) are calibrated using our generative model itself, in order to avoid any over-fitting on the limited training data from the S&P. We choose those parameters such that \(q_{T}(x)\) is optimally predicted within the model. More precisely, we take 1100 snippets \(\widetilde{x}_{\text{past}}\) from the generated dataset, for which we have access to \(\widetilde{x}_{\text{future}}\). We obtain an estimate of \(q_{T}\) for these 1100 dates and \(T=7,25,75,150\). We then choose the best \(\alpha,\beta,\widehat{\eta}\) so as to maximize the \(R^{2}\) score between predictions and true values within the SS model itself. This yields the following optimal parameters: \(\alpha=1.15,\beta=0.9,\widehat{\eta}=0.075\) and a path embedding \(h_{\alpha,\beta}\) of dimension 34.
Let us note that these optimal values barely change when predicting realized variance at different maturities. This is because of the scale-invariance of both the model and of the path embedding. Note also that \(\alpha=1.15\) in (13) means that the values \(\ell=1,2,3,4\) appear multiple times. This amounts to ascribing an even larger weight to small time lags.
The prediction quality is measured through the \(R^{2}\) score on future volatility estimates and is shown in Table I for different maturities \(T\). As a simple benchmark we consider the realized variance over the \(T\) previous days as a predictor of \(q_{T}(x)\). As a second, more challenging, benchmark we consider the recent Path-Dependent Volatility (PDV) model introduced in [9], which reads
\[\sqrt{q_{T}(x)}=\beta_{0}+\beta_{1}R_{1,t}+\beta_{2}\sqrt{R_{2,t}}\] \[\text{with}\ \ R_{1,t}=k_{1}\star\delta x(t),\ \ R_{2,t}=k_{2}\star|\delta x|^{2}(t),\]
where \(k_{1}\) and \(k_{2}\) are both linear combinations of exponential kernels, acting on past returns and past absolute returns3. We take the very same kernels as specified in [9] but we determine
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Number of days \(T\) & \(1\) & \(7\) & \(25\) & \(75\) & \(150\) \\ \hline \hline Benchmark & -0.16 & 0.43 & -0.05 & -0.58 & -0.79 \\ \hline PDV (SS regressions) & 0.35 & 0.53 & 0.17 & -0.44 & -0.95 \\ \hline PDV (linear regression) & **0.36** & 0.55 & 0.29 & 0.0 & -0.07 \\ \hline SS Path Shadowing & 0.32 & **0.56** & **0.33** & **0.07** & **0.01** \\ \hline \end{tabular}
\end{table} TABLE I: Prediction of realized daily volatility (\(R^{2}\) score). The PS-MC method based on the Scattering Spectra (SS) model outperforms the recently introduced PDV model at all time-scales \(\geq\) 7 days, and upholds predictive power up to a period of \(\approx\) 150 days. For \(T=1\) day, however, the PDV model performs best. The benchmark is the realized variance on the \(T\) previous days. The PDV model was calibrated using two different methods, see Appendix D.
the linear regression coefficients \(\beta_{0},\beta_{1},\beta_{2}\) in two different ways, see Appendix D. The first one amounts to reproducing as well as possible the Scattering Spectra, as we did in section III to reproduce the average smile. The corresponding values of \(\beta_{0,1,2}\) are given in Table IIIa. Unfortunately, this leads to rather poor results as far as volatility prediction is concerned, especially for large \(T\), see Table I.
In order to favour the PDV model as much as possible, we used a different calibration, aimed at best regressing \(\sqrt{q_{T}}\) for each maturity \(T\) separately, on the same train period as for the SS model, i.e. from January 2000 to December 2014; see Table IIIb for the calibrated values.
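For concreteness, the PDV forecast above can be sketched in a few lines. The kernel parameterization (including its normalization) and all numerical values below are illustrative placeholders, not the calibrated kernels or coefficients of [9]:

```python
import numpy as np

def exp_kernel_mix(taus, weights, n):
    """Mixture of exponential kernels k(u) = sum_j w_j exp(-u/tau_j)/tau_j
    on lags u = 1..n, normalized to sum to one (a simplifying choice,
    not necessarily the normalization used in [9])."""
    u = np.arange(1, n + 1, dtype=float)
    k = sum(w * np.exp(-u / tau) / tau for w, tau in zip(weights, taus))
    return k / k.sum()

def pdv_vol_forecast(returns, betas, taus1, w1, taus2, w2):
    """sqrt(q_T) = beta0 + beta1 * R1 + beta2 * sqrt(R2),
    with R1 = k1 * past returns and R2 = k2 * past squared returns."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    k1 = exp_kernel_mix(taus1, w1, n)
    k2 = exp_kernel_mix(taus2, w2, n)
    past = returns[::-1]  # most recent return first, matching the kernels' lag ordering
    r1 = float(np.dot(k1, past))
    r2 = float(np.dot(k2, past ** 2))
    b0, b1, b2 = betas
    return b0 + b1 * r1 + b2 * np.sqrt(r2)
```

With zero past returns the forecast collapses to \(\beta_{0}\), which is a quick sanity check on the structure of the regression.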
Using the very same shadowing paths for all maturities, our Path Shadowing Monte-Carlo method outperforms both the naive benchmark and the PDV model for all maturities \(T>7\) days, and ties with PDV for \(T=7\) days, see Table I. In particular, our method upholds predictive power up to \(\approx 150\) days, which neither of the two other methods is capable of. This is, we believe, due to the scale-invariance of both the SS model and our choice of path embedding (13). Again, we insist on the fact that the PDV model parameters are refitted for each maturity \(T\), whereas the SS model is calibrated once and for all. Of course, the SS model contains many more parameters than the PDV model, and we have not attempted to modify the shape of the kernels \(k_{1}\) and \(k_{2}\) introduced in [9]; there is clearly room for improvement on that side as well, which could be investigated in future work.
These results vindicate both the generative model presented in section II and the PS-MC method of the present section. In particular, the Scattering Spectra generative model, now based on 182 parameters determined on the period 2000-2014, is _not_ over-fitting the training dataset.
## V Option Pricing & Trading Games
In section III, we have used the Hedged Monte-Carlo method to price options _unconditionally_ within the SS model, i.e. by averaging over all possible price paths of a given length \(T\). This allowed us to obtain _average smiles_ as a function of maturity, which we compared to those obtained using the same procedure but with real S&P trajectories.
Now, at a given date, option prices reflect anticipations of the market, conditioned on present market conditions - in particular the past price trajectory - and any available information about the future, such as earning announcements, dividends, political events, etc. Of course, such events cannot be directly captured by a purely statistical model, however faithful it might be. Still, it is interesting to run the exercise of pricing option smiles that anticipate the future based solely on the past of the underlying price process.
In this section we investigate this question by combining the SS model presented in section II with the Path Shadowing introduced in section IV. Option prices are then obtained by upgrading the Hedged Monte-Carlo method [29] with, as an input, shadowing paths generated by the model. The overall level of the resulting smiles at time \(t\) is nothing but the prediction of the future realized volatility for \(t^{\prime}\in[t,t+T]\), which was already investigated in the previous section. We now extend such prediction to the full shape of the smile. We assess the quality of our smiles by trading a buy-sell signal on options whenever the model smile tells us that the option is under-priced or over-priced compared to another model's smile, or to the option market itself.
### _Path Shadowing Hedged Monte-Carlo_
Hedged Monte-Carlo (HMC) [29] enables one to use time series of prices to compute the option prices. It iteratively determines the optimal price and hedging strategy by minimizing the expected financial risk of a portfolio containing the option to be priced and its hedge, at all times \(t=T-1,T-2,\ldots,0\). The expected risk is computed as an average over paths, which in the present context are the shadowing paths, based on the notion of distance induced by the path embedding \(h\) (13). This defines the Path Shadowing Hedged Monte-Carlo (PS-HMC) that can be used in a versatile way to price any derivative contract. We use the same Gaussian weights (12) and the very same parameters \(\alpha,\beta,\eta\) detailed in section IV-C, that are optimal for volatility prediction within the model itself.
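Concretely, weighting shadowing paths with Gaussian weights amounts to a kernel-weighted average over candidate pasts. The sketch below assumes the embedding `h` of Eq. (13) has already been applied, and `eta` is a placeholder bandwidth standing in for the \(\alpha,\beta,\eta\) parameterization of Eq. (12):

```python
import numpy as np

def gaussian_weights(h_past, h_candidates, eta):
    """Weight each candidate past by exp(-||h(x) - h(x')||^2 / (2 eta^2)),
    then normalize: candidates whose embedded past is close to the observed
    one dominate the average."""
    d2 = np.sum((np.asarray(h_candidates) - np.asarray(h_past)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * eta ** 2))
    return w / w.sum()

def shadowing_prediction(h_past, h_candidates, futures, eta):
    """Predict a future quantity (e.g. realized variance) as the weighted
    average of its value over candidate paths that shadow the observed past."""
    w = gaussian_weights(h_past, h_candidates, eta)
    return float(np.dot(w, np.asarray(futures)))
```

When one candidate matches the observed past exactly and the others are far away in embedding space, the prediction collapses onto that candidate's future, as expected of a nearest-neighbor-type scheme.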
Fig. 4 shows the resulting smiles as a function of rescaled log-moneyness, for different maturities and at 3 typical dates. As one would have hoped, the level, but also the slope and the curvature of those smiles do depend on the chosen date, and more precisely on the actual path trajectory of the price before that date.
### _A Trading Game_
In order to assess the quality of the smiles predicted by the SS model, we set up the following trading game. We trade call options at several dates \(t\) on the option market. We neglect bid-ask spread and consider the option price \(\mathsf{c}^{\text{MKT}}(t,T,K)\) to be the quoted mid-price. We denote \(\sigma^{\text{MKT}}(t,T,K)\) the observed implied volatility computed from \(\mathsf{c}^{\text{MKT}}(t,T,K)\) and \(\sigma^{\text{model}}(t,T,K)\) the implied volatility computed within the model that we decide to trade with.
We test the following trading strategy: buy the corresponding option from the market whenever we deem it under-priced, i.e. \(\sigma^{\text{MKT}}(t,T,K)<\sigma^{\text{model}}(t,T,K)\), or sell it if we deem it over-priced, i.e. \(\sigma^{\text{MKT}}(t,T,K)>\sigma^{\text{model}}(t,T,K)\). We then follow the hedged option until maturity and register the corresponding profit or loss associated with the trade.

Fig. 4: Conditional smiles in a Scattering Spectra (SS) model at 3 different dates: 2018-10-23, 2019-06-14 and 2020-02-26. They are computed through hedged Monte-Carlo on shadowing paths generated using the SS model. Note that the level, the slope and the curvature of those smiles all strongly depend on the chosen date.
The buy-sell signal of such strategy is thus given by
\[\epsilon_{t}=\left\{\begin{array}{ll}+1&\text{if }\sigma^{\text{MKT}}(t,T,K)< \sigma^{\text{model}}(t,T,K),\\ -1&\text{if }\sigma^{\text{MKT}}(t,T,K)>\sigma^{\text{model}}(t,T,K).\end{array}\right.\]
The un-hedged forward \(\text{P}\&\text{L}_{t}(T,K)\) of one transaction is obtained as
\[\text{P}\&\text{L}_{t}(T,K)=v_{t}\epsilon_{t}\bigg{(}(e^{x_{t+T}}-K)_{+}-\mathsf{c}^{\text{MKT}}(t,T,K)\bigg{)}\]
where \(v_{t}\) is the volume of options traded. In order to remove non-stationary effects due to the long-term change in the value of the underlying, we take \(v_{t}=100/S_{t}=100e^{-x_{t}}\), which amounts to trading options on the percentage variation of the underlying rather than on the underlying itself.
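The signal and the un-hedged P&L of one trade can be sketched as follows; the scalar interface and function name are illustrative, not from the paper:

```python
import numpy as np

def trade_pnl(sigma_mkt, sigma_model, call_price_mkt, x_t, x_T, strike):
    """Un-hedged forward P&L of a single call trade.
    eps = +1 (buy) when the market implied vol is below the model vol,
    eps = -1 (sell) otherwise; v_t = 100 / S_t = 100 e^{-x_t} trades a fixed
    percentage of the underlying rather than a fixed number of shares."""
    eps = 1.0 if sigma_mkt < sigma_model else -1.0
    volume = 100.0 * np.exp(-x_t)
    payoff = max(np.exp(x_T) - strike, 0.0)  # call payoff (S_{t+T} - K)_+
    return volume * eps * (payoff - call_price_mkt)
```

For instance, buying an under-priced call at 0.05 that expires with payoff 0.2 (with \(S_{t}=1\)) yields a P&L of \(100\times(0.2-0.05)=15\); the sold (over-priced) case flips the sign.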
To reduce the variance of the strategy, we hedge the option using a simple Black-Scholes delta-hedge with a constant volatility \(0.2\). Such delta-hedge gives zero profit on average but reduces greatly (though not optimally, see [29, 37] for details) the variance of \(\text{P}\&\text{L}_{t}\).
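A minimal sketch of this variance-reduction hedge, assuming zero rates and a daily re-hedging discretization (the function names and the discretization scheme are our own):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call_delta(spot, strike, vol, tau):
    """Black-Scholes call delta (zero rates), used here with a constant
    volatility (e.g. 0.2) purely as a hedge, not as the pricing model."""
    if tau <= 0.0:
        return 1.0 if spot > strike else 0.0
    d1 = (math.log(spot / strike) + 0.5 * vol * vol * tau) / (vol * math.sqrt(tau))
    return norm_cdf(d1)

def hedged_pnl(spots, strike, vol, option_pnl, dt):
    """Subtract the delta-hedge gains along the spot path from the raw
    (long) option P&L; for a sold option the hedge sign would flip."""
    hedge = 0.0
    n = len(spots) - 1
    for i in range(n):
        tau = (n - i) * dt  # remaining time to maturity at step i
        delta = bs_call_delta(spots[i], strike, vol, tau)
        hedge += delta * (spots[i + 1] - spots[i])
    return option_pnl - hedge
```

On a flat spot path the hedge contributes nothing and the hedged P&L equals the raw one, which is a convenient check of the bookkeeping.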
In the following, we will consider the model smile \(\sigma^{\text{model}}\) to be given either by the smile computed in the SS model using PS-HMC (model=SS) or the smile computed in a Path-Dependent Volatility model [9] (model=PDV), both using HMC. The two models (PDV and SS) are calibrated using the same data in the train period i.e. January 2000-December 2014. As in the previous section, the PDV model parameters are furthermore _re-optimized_ for each maturity \(T\) independently as in Table IIIb, whereas the SS model is parameterized once and for all with the Scattering Spectra determined in the train period.
The trading game is then in both cases played out-of-sample, for all 2112 dates \(t\) from January 2015 to May 2023. We choose \(5\) maturities \(T=8,\,25,\,50,\,75,\,150\) and \(9\) strikes at constant rescaled log-moneyness \(\mathcal{M}=-2,-1.5,-1,-0.5,0,0.5,1,1.5,2\).
Detailed \(\text{P}\&\text{Ls}\) over 3 different periods of 3 years each are shown in Fig. 5. Their variance across dates \(t\) is shown in Fig. 12 in Appendix G. For most maturities and strikes, the trading game using the SS model yields positive \(\text{P}\&\text{Ls}\) and clearly outperforms the trading game using the PDV model. In fact, one can directly play the SS model against the PDV model without any reference to the actual option market, fully confirming that the SS model outperforms the PDV model for almost all maturities and strikes, see Appendix G and in particular Fig. 14.
Since the \(\text{P}\&\text{Ls}\) are of the same order of magnitude across different strikes and maturities, we average them over all the maturities and strikes. Table II shows such grand averages and reveals that the trading game using the SS model is significantly more profitable than using the PDV model, with furthermore less variance across different periods. This is confirmed by the aggregated \(\text{P}\&\text{Ls}\) over time, see Fig. 6.
Path Shadowing Monte-Carlo extracts conditional probabilities \(p(x|x_{\text{past}})\) from any generative model of \(p(x)\). Combined with our statistical model of prices, PS-MC provides state-of-the-art volatility predictions. Shadowing paths can also be used to obtain option smiles that depend only on the distribution of the price process. A trading game allowed us to show that the Scattering Spectra model correctly anticipates future price movements and favourably competes with (the simplest version of) the recently introduced Path-Dependent Volatility model. Extending our analysis to other underlying assets, such as single stocks, is of course important to validate our conclusions when price moves are more idiosyncratic.
Beyond prediction, Path Shadowing may be a way of tackling other burning questions in Finance, such as _typicality_: how typical or atypical is a given sequence of price changes? As far as Path Shadowing Monte-Carlo is concerned, one limitation of the method is that it requires scanning a large dataset of generated paths to deliver good performance. This scanning step could be made more efficient. In particular, could one find 'typical' price paths that should be frequently selected for prediction, in order to save intensive scanning efforts? Another highly relevant extension is towards the description of multivariate time series. We hope to address these issues in the near future using the methods introduced in this work.
###### Acknowledgements.
We want to thank C. Aubrun, N. Cuevelle-Magar, S. Crepey, J. Gatheral, F. Guth, J. Guyon, B. Horvath, G. Pages, M. Potters & V. Vargas for many insightful conversations on these topics.
# When is the silting-discreteness inherited?

Takuma Aihara, Takahiro Honma (arXiv:2304.08011, 2023-04-17, http://arxiv.org/abs/2304.08011v1)
###### Abstract.
We explore when the silting-discreteness is inherited. As a result, one obtains that taking idempotent truncations and homological epimorphisms of algebras transmit the silting-discreteness. We also study classification of silting-discrete simply-connected tensor algebras and silting-indiscrete selfinjective Nakayama algebras. This paper contains two appendices; one states that every derived-discrete algebra is silting-discrete, and the other is about triangulated categories whose silting objects are tilting.
Key words and phrases: silting object, silting-discrete, perfect derived category, dg algebra, recollement, selfinjective Nakayama algebra.

2020 _Mathematics Subject Classification_: 16B50, 16G20, 16E45, 18G80.

TA was partly supported by JSPS Grant-in-Aid for Young Scientists 19K14497.
local algebra also conveys the property [AH]. One of the main theorems of this paper (Theorem 2.10) states that a full triangulated subcategory of a silting-discrete triangulated category is also silting-discrete. Moreover, we show that a homological epimorphism of (nonpositive) dg algebras induces the silting-discreteness (Theorem 2.15). These lead us to the following results.
**Theorem 1** (Corollaries 2.11 and 2.22).: _Let \(\Lambda\) be a finite dimensional algebra over an algebraically closed field. Assume that \(\Lambda\) is silting-discrete. Then we have the following._
1. _For an idempotent_ \(e\) _of_ \(\Lambda\)_, the truncation_ \(e\Lambda e\) _is silting-discrete._
2. _If_ \(e\) _is a stratifying idempotent of_ \(\Lambda\)_, then_ \(\Lambda/\Lambda e\Lambda\) _is silting-discrete._
We remark that quotients of algebras always inherit the \(\tau\)-tilting finiteness [DIRRT], but not necessarily the silting-discreteness (Subsection 2.5).
The Bongartz-type condition is also one of the most important subjects in silting theory. We naturally ask whether or not the Bongartz-type condition always holds1. For example, silting-discrete triangulated categories satisfy the Bongartz-type condition and every 2-term presilting object can be completed to a silting object [AM, Ai1]. Note that the tilting version (i.e., is a pretilting object partial tilting?) often fails [R1, LVY]; see Example B.8. Theorem 1 and its proof provide a new Bongartz-type lemma under a certain condition; see Corollary 2.17 for a general setting, which recovers the 2-term version.
Footnote 1: Recently, a negative answer to this question was given [LZ]. (See Subsection B.2.)
**Theorem 2** (Corollary 2.18(2)).: _Let \(e\) be an idempotent of \(\Lambda\). Then a presilting object of the perfect derived category of \(\Lambda\) generating the thick closure of \(e\Lambda\) is partial silting._
As a corollary of Theorem 1, we classify silting-discrete simply-connected tensor algebras; note that the tensor algebra of three nonlocal algebras is never silting-discrete [AH]. Also, we obtain a surprising result, which says that a representation-finite selfinjective algebra is not necessarily silting-discrete.
**Theorem 3** (Proposition 3.1, Theorems 3.4 and 3.6).:
1. _A simply-connected algebra is silting-discrete if and only if it is piecewise hereditary of Dynkin type._
2. _Let_ \(A\) _and_ \(B\) _be finite dimensional (nonlocal) simply-connected algebras over an algebraically closed field_ \(K\)_. Then the following are equivalent:_ 1. \(A\otimes_{K}B\) _is silting-discrete;_ 2. _it is piecewise hereditary of type_ \(D_{4},E_{6}\) _or_ \(E_{8}\)_;_ 3. _it is derived equivalent to a commutative ladder of degree_ \(\leq 4\)_._
3. _The selfinjective Nakayama algebra with_ \(n\) _simple modules of length_ \(r\) _is not silting-discrete if (i)_ \(r=3,4\) _and_ \(n\geq 11\)_; (ii)_ \(r=5,6\) _and_ \(n\geq r+8\)_; or (iii)_ \(r\geq 7\) _and_ \(n\geq 2r+1\)_._
## 2. Main results
Throughout this paper, let \(\mathcal{T}\) be a triangulated category which is Krull-Schmidt, \(K\)-linear for an algebraically closed field \(K\) and Hom-finite. For an object \(T\) of \(\mathcal{T}\), its thick closure is denoted by \(\operatorname{\mathsf{thick}}T\); i.e., it is the smallest thick subcategory of \(\mathcal{T}\) containing \(T\).
Silting objects play a central role in this paper.
**Definition 2.1**.: Let \(T\) be an object of \(\mathcal{T}\).
1. We say that \(T\) is _presilting_ (_pretilting_) if it satisfies \(\operatorname{Hom}_{\mathcal{T}}(T,T[i])=0\) for any \(i>0\) (\(i\neq 0\)).
2. A presilting object \(T\) is said to be _silting_ (_tilting_) provided \(\mathcal{T}=\mathsf{thick}\,T\).
3. A _partial silting_ (_partial tilting_) object is defined to be a direct summand of some silting (tilting) object.
**Notation 2.2**.: We use the following notations.
* Denote by \(\mathsf{silt}\,\mathcal{T}\) the set of isomorphism classes of basic silting objects of \(\mathcal{T}\).
* We always suppose that \(\mathsf{silt}\,\mathcal{T}\) is nonempty.
* For a silting object \(A\) of \(\mathcal{T}\) and \(d>0\), \(\mathsf{d}_{A}\mathsf{-silt}\,\mathcal{T}:=\{T\in\mathsf{silt}\,\mathcal{T} \mid A\geq T\geq A[d-1]\}\).
* Let \(S\) be a presilting object of \(\mathcal{T}\). The subset \(\mathsf{silt}_{S}\,\mathcal{T}\) of \(\mathsf{silt}\,\mathcal{T}\) consists of all silting objects having \(S\) as a direct summand.
As terminology, for presilting objects \(T\) and \(U\) of \(\mathcal{T}\) we use \(T\geq U\) if \(\operatorname{Hom}_{\mathcal{T}}(T,U[i])=0\) for every \(i>0\). This actually gives a partial order on \(\mathsf{silt}\,\mathcal{T}\) [AI, Theorem 2.11].
We define the silting-discreteness of \(\mathcal{T}\), which is the main theme of this paper.
**Definition 2.3**.:
1. We say that \(\mathcal{T}\) is _silting-discrete_ if it has a silting object \(A\) such that for any \(d>0\), the set \(\mathsf{d}_{A}\mathsf{-silt}\,\mathcal{T}\) is finite; see [Ai1, Proposition 3.8] for equivalent conditions of silting-discreteness. (As a convention, a triangulated category without silting object is also called silting-discrete.)
2. For a silting object \(A\) of \(\mathcal{T}\), we call \(\mathcal{T}\)_\(2_{A}\)-silting finite_ if \(2_{A}\mathsf{-silt}\,\mathcal{T}\) is a finite set.
3. A _2-silting finite_ triangulated category is defined to be \(2_{A}\)-silting finite for every silting object \(A\) of \(\mathcal{T}\).
**Remark 2.4**.:
1. The notion of \(2_{A}\)-silting finiteness is nothing but that of \(\tau\)-tilting finiteness for \(\operatorname{End}_{\mathcal{T}}(A)\) [DIJ, Corollary 2.8]. For \(\tau\)-tilting theory, we refer to [AIR].
2. The silting-discreteness is preserved under triangle equivalences, but the \(2_{A}\)-silting finiteness is not always so; it depends on the choice of the silting object \(A\).
Let us recall several results in [Ai1, AM].
**Proposition 2.5**.: _Let \(A\) be a silting object and \(T\) a presilting object of \(\mathcal{T}\)._
1. [Ai1, Proposition 2.16] _If_ \(A\geq T\geq A[1]\)_, then_ \(T\) _is a direct summand of a silting object lying in the interval between_ \(A\) _and_ \(A[1]\)_._
2. [AM, Proposition 2.14] _Assume that_ \(\mathcal{T}\) _is_ \(2_{A}\)_-silting finite. If_ \(A\geq T\geq A[2]\)_, then there exists a silting object_ \(B\) _of_ \(\mathcal{T}\) _satisfying_ \(A\geq B\geq T\geq B[1]\geq A[2]\)_._
3. [AM, Proposition 2.14] _Assume that_ \(\mathcal{T}\) _is silting-discrete. Then there is a silting object_ \(P\) _of_ \(\mathcal{T}\) _with_ \(P\geq T\geq P[1]\)_._
4. [AM, Theorem 2.4] \(\mathcal{T}\) _is silting-discrete if and only if it is 2-silting finite._
In the following, we investigate the silting-discreteness of triangulated categories:
* Full triangulated subcategories inherit the silting-discreteness (2.1);
* Recollements make an influence on silting-discreteness (2.2).
Moreover, we will also consider silting-discrete algebras as these corollaries.
Let \(\Lambda\) be a finite dimensional \(K\)-algebra which is basic and ring-indecomposable, and modules are always finite dimensional and right unless otherwise noted. The perfect derived category of \(\Lambda\) is denoted by \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\).
We say that \(\Lambda\) is _silting-discrete_ if so is \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). Here is an example of silting-discrete algebras which we should first observe; it will be used frequently in the rest of this paper.
**Example 2.6**.: A piecewise hereditary algebra of type \(\mathcal{H}\) is silting-discrete if and only if \(\mathcal{H}\) is of Dynkin type.
Let \(Q\) be a (finite) quiver. For an arrow \(\alpha:i\to j\) of \(Q\), we consider the _formal inverse_ \(\alpha^{-1}:j\to i\) and define the _virtual degree_ by \(\deg(\alpha^{\pm 1})=\pm 1\) (double-sign corresponds). The virtual degree of a walk (a sequence of successive arrows and formal inverses) is defined by the additive rule \(\deg(\alpha\beta)=\deg(\alpha)+\deg(\beta)\). We say that \(Q\) is _gradable_ if every closed walk has virtual degree \(0\). For instance, every tree quiver (i.e., a quiver whose underlying graph is a tree) is gradable. Note that a gradable quiver admits no oriented cycle; but the converse does not necessarily hold.
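Gradability is decidable by a simple graph traversal: propagate integer degrees from a base vertex along arrows (step \(+1\)) and formal inverses (step \(-1\)) and check for consistency; an inconsistency witnesses a closed walk of nonzero virtual degree. A minimal sketch (the vertex/arrow-list encoding of the quiver is our own):

```python
from collections import deque

def is_gradable(vertices, arrows):
    """A quiver is gradable iff every closed walk has virtual degree 0, i.e. iff
    the vertices can be graded so that deg(j) - deg(i) = 1 for every arrow i -> j.
    Walks may traverse arrows forwards (degree +1) and backwards (degree -1)."""
    grade = {}
    for start in vertices:
        if start in grade:
            continue  # new connected component
        grade[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for (i, j) in arrows:
                for (a, b, step) in ((i, j, 1), (j, i, -1)):  # arrow and its formal inverse
                    if a == v:
                        if b not in grade:
                            grade[b] = grade[v] + step
                            queue.append(b)
                        elif grade[b] != grade[v] + step:
                            return False  # closed walk with nonzero virtual degree
    return True
```

For example, the tree quiver \(2\leftarrow 1\to 3\) is gradable, an oriented 3-cycle is not (its cycle has virtual degree \(3\)), and an unoriented square whose two paths have equal virtual degree is gradable even though its underlying graph contains a cycle.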
We observe silting-discrete algebras with radical square zero.
**Proposition 2.7**.: _Let \(\Lambda\) be the radical-square-zero algebra given by a gradable quiver \(Q\). Then \(\Lambda\) is silting-discrete if and only if \(Q\) is Dynkin._
Proof.: It follows from [Y, Theorem 4.4] that \(\Lambda\) is derived equivalent to \(KQ\).
Note that there is a silting-discrete radical-square-zero algebra presented by an ungradable nonDynkin acyclic quiver (Example 2.19).
We say that \(\Lambda\) is _gentle_ if it is presented by a quiver \(Q\) with relation \(I\) such that
1. for any vertex of \(Q\), there exist at most \(2\) incoming and at most \(2\) outgoing arrows;
2. for each arrow \(x:i\to j\) of \(Q\), the number of arrows \(y\) stopping at \(i\) (starting from \(j\)) with \(yx\not\in I\) (\(xy\not\in I\)) is at most \(1\);
3. for each arrow \(x:i\to j\) of \(Q\), the number of arrows \(y\) stopping at \(i\) (starting from \(j\)) with \(yx\in I\) (\(xy\in I\)) is at most \(1\);
4. all relations in \(I\) are paths of length \(2\).
A quiver \(Q\) is called _one-cycle_ provided its underlying graph contains exactly one cycle. A _gentle one-cycle_ algebra is defined to be a gentle algebra given by a one-cycle quiver. We say that a gentle one-cycle algebra _satisfies the clock condition_ if the numbers of clockwise and counter-clockwise oriented relations on the cycle coincide.
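Checking the clock condition thus reduces to a count. A minimal sketch, assuming the length-2 relations lying on the unique cycle are encoded by their orientation (\(+1\) clockwise, \(-1\) counter-clockwise); by Proposition 2.8 below, silting-discreteness of a gentle one-cycle algebra is the failure of this condition:

```python
def satisfies_clock_condition(cycle_relations):
    """cycle_relations: orientation (+1 clockwise, -1 counter-clockwise) of each
    length-2 relation on the cycle. The clock condition holds iff both
    orientations occur equally often."""
    clockwise = sum(1 for s in cycle_relations if s == +1)
    counter = sum(1 for s in cycle_relations if s == -1)
    return clockwise == counter

def is_silting_discrete_gentle_one_cycle(cycle_relations):
    """Per Proposition 2.8: silting-discrete iff the clock condition fails."""
    return not satisfies_clock_condition(cycle_relations)
```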
The following proposition is also useful.
**Proposition 2.8**.: _A gentle one-cycle algebra \(\Lambda\) is silting-discrete if and only if it does not satisfy the clock condition._
Proof.: Assume that \(\Lambda\) satisfies the clock condition. By [AS, Theorem (A)], we see that \(\Lambda\) is piecewise hereditary of type \(\widetilde{A}\), which is not silting-discrete.
Assume that \(\Lambda\) does not satisfy the clock condition. It follows from [V, Theorem] that \(\Lambda\) is derived-discrete, and so it is silting-discrete by [YY, Example 3.9, Corollary 4.2].
Let us observe the silting-discreteness of type \(\widetilde{A}\) with commutative relation.
**Proposition 2.9**.: _Let \(\Lambda\) be the algebra given by the quiver of type \(\widetilde{A_{p,q}}\) with commutative relation; suppose that \(1<p\leq q\). Then \(\Lambda\) is silting-discrete if and only if \(p=2\) (for any \(q\)), or \(p=3\) and \(q=3,4,5\)._
Proof.: Let \(Q\) be a linear quiver \(\bullet\to\bullet\to\cdots\to\bullet\) of type \(A\) with a distinguished vertex \(\bigstar\). Since \(\Lambda\) is the one-point extension of the path algebra \(KQ\) by the indecomposable injective module corresponding to the vertex \(\bigstar\), we see that it is derived equivalent to the one-point extension of \(KQ\) by the indecomposable projective module corresponding to the vertex \(\bigstar\) [BL, Theorem 1], which is the path algebra of a star-shaped quiver (diagram omitted).
Hence, it turns out that \(\Lambda\) is silting-discrete if and only if one of the desired conditions holds.
### Full triangulated subcategories
The following is the first main theorem.
**Theorem 2.10**.: _Let \(\mathcal{U}\) be a full triangulated subcategory of \(\mathcal{T}\). If \(\mathcal{T}\) is silting-discrete, then so is \(\mathcal{U}\)._
Proof.: We show that \(\mathcal{U}\) is \(2\)-silting finite; hence, it is silting-discrete by Proposition 2.5(4). Let \(T\) be a silting object in \(\mathcal{U}\). As \(T\) is presilting in \(\mathcal{T}\), we obtain a silting object \(A\) satisfying \(A\geq T\geq A[1]\) by Proposition 2.5(3). Let \(U\) be a silting object of \(\mathcal{U}\) with \(T\geq U\geq T[1]\). Since there is a triangle \(T_{1}\to T_{0}\to U\to T_{1}[1]\) with \(T_{0},T_{1}\in\operatorname{\mathsf{add}}T\) in \(\mathcal{U}\) [AI, Proposition 2.24], we have inequalities \(A\geq U\geq A[2]\). Proposition 2.5(1) and (2) lead to the fact that \(U\) is a direct summand of a silting object lying in the interval between \(A\) and \(A[2]\). As the interval is a finite set, we deduce that \(\{U\in\operatorname{\mathsf{silt}}\mathcal{U}\ |\ T\geq U\geq T[1]\}\) is also finite.
A typical example of fully faithful functors is the triangle functor \(-\otimes_{e\Lambda e}\,e\Lambda:\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,e \Lambda e)\to\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) for an idempotent \(e\) of \(\Lambda\). This yields a corollary of Theorem 2.10, which says that taking idempotent truncations brings the silting-discreteness from the original.
**Corollary 2.11**.: _Let \(e\) be an idempotent of \(\Lambda\). If \(\Lambda\) is silting-discrete, then so is \(e\Lambda e\)._
### Recollements
In this subsection, we investigate an impact of recollements on silting-discreteness.
Let us begin with the study of silting reductions [IY], which enable us to realize the Verdier quotient of \(\mathcal{T}\) by the thick closure of a presilting object as a certain subfactor category of \(\mathcal{T}\), and which induce isomorphisms of silting posets.
Let \(S\) be a presilting object of \(\mathcal{T}\) and put \(\mathcal{S}:=\operatorname{\mathsf{thick}}S\). The Verdier quotient of \(\mathcal{T}\) by \(\mathcal{S}\) is denoted by \(\mathcal{T}/\mathcal{S}\).
Take a full subcategory \(\mathcal{Z}\) of \(\mathcal{T}\) as follows:
\[\mathcal{Z}:=\{X\in\mathcal{T}\ |\ \operatorname{Hom}_{\mathcal{T}}(X,S[i])=0 =\operatorname{Hom}_{\mathcal{T}}(S,X[i])\text{ for any }i>0\}.\]
We denote by \(\frac{\mathcal{Z}}{[S]}\) the additive quotient of \(\mathcal{Z}\) modulo the ideal \([S]\), which admits a triangle structure [IY, Theorem 3.6].
Then, the silting reduction [IY, Theorem 3.6] shows that there is a triangle equivalence between \(\mathcal{T}/\mathcal{S}\) and \(\frac{\mathcal{Z}}{[S]}\). Since \(\mathcal{T}\) is Hom-finite and Krull-Schmidt, so is \(\mathcal{T}/\mathcal{S}\).
Now, we prepare a lemma for the main theorem of this subsection.
**Lemma 2.12**.: _Keep the notations above. If \(\mathcal{T}\) is silting-discrete, then so is \(\mathcal{T}/\mathcal{S}\)._
Proof.: By the silting reduction [IY, Theorem 3.7 and Corollary 3.8], the natural functor \(\mathcal{T}\to\mathcal{T}/\mathcal{S}\) induces an isomorphism \(\mathsf{silt}_{\mathcal{S}}\,\mathcal{T}\simeq\mathsf{silt}\,\mathcal{T}/ \mathcal{S}\) of posets. Thus, it is not hard to deduce that \(\mathcal{T}/\mathcal{S}\) is silting-discrete.
**Remark 2.13**.: We are assuming that \(\mathsf{silt}\,\mathcal{T}\neq\emptyset\) to apply the silting reduction, while \(\mathsf{silt}\,\mathcal{T}/\mathcal{S}=\emptyset\) may occur; see Subsection B.2.
To realize the goal of this subsection, we need (nonpositive) dg algebras.
Let \(\mathcal{A}\) be a dg \(K\)-algebra and \(\mathsf{per}(\mathcal{A})\) denote the perfect derived category of \(\mathcal{A}\). If \(\mathcal{A}\) is nonpositive with finite dimensional cohomologies, then \(\mathsf{per}(\mathcal{A})\) is Hom-finite and Krull-Schmidt; see [AMY, Proposition 6.12] for example.
We give a remarkable example [Y, Example 5.3], which shows that the \(0\)th cohomology \(H^{0}(\mathcal{A})\) of \(\mathcal{A}\) does not necessarily inherit the silting-discreteness from \(\mathcal{A}\); moreover, the endomorphism algebra of a silting object does not always inherit the property either.
**Example 2.14**.: Let \(\Lambda\) be the algebra presented by the quiver \(Q\)
with relation \(\alpha\beta\gamma=0\). Then, we easily observe that \(\Lambda\) is piecewise hereditary of type \(E_{6}\); so, it is silting-discrete. Take \(T:=S_{1}\oplus S_{2}[1]\oplus S_{3}[2]\oplus S_{4}[3]\oplus S_{5}[2]\oplus S _{6}[1]\), where \(S_{i}\) is the simple module corresponding to the vertex \(i\) of \(Q\); it is in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) because \(\Lambda\) has finite global dimension. We see that \(T\) is a silting object whose dg endomorphism algebra \(\mathcal{A}\) is nonpositive with finite dimensional cohomologies; so, \(\mathsf{per}(\mathcal{A})\) is Krull-Schmidt. By Keller-Rickard's theorem [K, R1], we have a triangle equivalence \(\mathsf{per}(\mathcal{A})\simeq\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\), whence \(\mathsf{per}(\mathcal{A})\) is silting-discrete. However, the \(0\)th cohomology \(H^{0}(\mathcal{A})\) of \(\mathcal{A}\), which is isomorphic to \(\operatorname{End}_{\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)}(T)\), is the radical-square-zero algebra of \(Q^{\mathrm{op}}\), and it is derived equivalent to a hereditary algebra of type \(\widetilde{D_{5}}\). Thus, \(H^{0}(\mathcal{A})\) is not silting-discrete.
This example also says that although a derived preprojective algebra \(\mathcal{A}\) of Dynkin type (it is a nonpositive dg algebra) admits a silting-discrete perfect derived category [AMY, Corollary 8.6], it is still open whether a preprojective algebra of Dynkin type except \(A_{2},D_{2n},E_{7}\) and \(E_{8}\), which appears as the \(0\)th cohomology of \(\mathcal{A}\), is silting-discrete or not.
We aim at our goal of this subsection.
Recall that a _recollement_ \(\mathcal{R}\) consists of triangulated categories \(\mathcal{D},\mathcal{X}\) and \(\mathcal{Y}\), together with triangle functors \(i^{*},i_{*},i_{!},i^{!}\) between \(\mathcal{Y}\) and \(\mathcal{D}\), and \(j_{!},j^{!},j^{*},j_{*}\) between \(\mathcal{X}\) and \(\mathcal{D}\), arranged in the usual recollement diagram (omitted), such that
1. \((i^{*},i_{*}),\ (i_{!},i^{!}),\ (j_{!},j^{!})\) and \((j^{*},j_{*})\) are adjoint pairs;
2. \(i_{*}\ (=i_{!}),\ j_{!}\) and \(j_{*}\) are fully faithful;
3. any object \(D\) of \(\mathcal{D}\) admits two triangles \[i_{!}i^{!}(D)\to D\to j_{*}j^{*}(D)\to i_{!}i^{!}(D)[1]\text{ and }j_{!}j^{!}(D)\to D\to i_{*}i^{*}(D)\to j_{!}j^{!}(D)[1],\] where the morphisms around \(D\) are given by adjunctions.
Omitting the triangle functors, we write such a recollement as \(\mathcal{R}=(\mathcal{Y},\mathcal{D},\mathcal{X})\).
We denote by \(\mathsf{D}(-)\) the (unbounded) derived category, and for an object \(X\) of \(\mathsf{D}(\Lambda)\), \(\mathsf{Tria}(X)\) stands for the smallest full triangulated subcategory of \(\mathsf{D}(\Lambda)\) containing \(X\) which is closed under all direct sums.
From now, we consider either of the following situations.
* Start from a given recollement \((\mathcal{Y},\mathsf{D}(\Lambda),\mathcal{X})\) with \(\mathcal{X}\) compactly generated: Then, by [AHMV, Theorem 4.5]\(\mathcal{X}\) can be written as \(\mathsf{Tria}(S)\) for some presilting object \(S\) of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\).
* Start from a given presilting object \(S\) of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\): Since \(\mathsf{Tria}(S)\) is a smashing subcategory of \(\mathsf{D}(\Lambda)\)[AHMV, Proposition 3.5], there exists a full triangulated subcategory \(\mathcal{Y}\) of \(\mathsf{D}(\Lambda)\) such that \((\mathcal{Y},\mathsf{D}(\Lambda),\mathsf{Tria}(S))\) forms a recollement; actually, \(\mathcal{Y}\) is right perpendicular to \(\mathsf{Tria}(S)\)[NS, Corollary 2.4].
In any case, we have a recollement \((\mathcal{Y},\mathsf{D}(\Lambda),\mathsf{Tria}(S))\) for a presilting object \(S\) of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). Furthermore, thanks to Nicolas-Saorin's result [NS, Section 4], there is a dg \(K\)-algebra \(\mathcal{A}\) and a morphism \(\varphi:\Lambda\to\mathcal{A}\) of dg algebras such that \(i_{!}=-\otimes_{\mathcal{A}}^{\mathsf{L}}\mathcal{A}\) induces a recollement \((\mathsf{D}(\mathcal{A}),\mathsf{D}(\Lambda),\mathsf{Tria}(S))\). Here, \(\varphi\) is embedded into the triangle \(X:=\mathsf{RHom}_{\Lambda}(S,\Lambda)\otimes_{\mathcal{B}}^{\mathsf{L}}S\to\Lambda \xrightarrow{\varphi}\mathcal{A}\to X[1]\) in \(\mathsf{D}(\Lambda^{\mathrm{op}}\otimes_{K}\Lambda)\), where \(\mathcal{B}\) is the dg endomorphism algebra of \(S\). We evidently observe that \(H^{i}(\mathcal{A})\) is finite dimensional for any \(i\). One can also describe \(\mathcal{A}\) as the dg endomorphism algebra of \(\Lambda\) in a dg enhancement of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)/\operatorname{thick}(S)\). Such a morphism \(\varphi\) is called a _homological epimorphism_; see [P].
Now, we state the second main theorem of this paper.
**Theorem 2.15**.: _Keep the notations above. If \(\Lambda\) is silting-discrete, then so is \(\mathsf{per}(\mathcal{A})\)._
A key point is an analogue of the proof of [KY1, Corollary 2.12(a)].
Proof.: The recollement \((\mathsf{D}(\mathcal{A}),\mathsf{D}(\Lambda),\mathsf{Tria}(S))\) constructed before Theorem 2.15 gives rise to an exact sequence \(0\to\mathsf{Tria}(S)\to\mathsf{D}(\Lambda)\xrightarrow{i^{*}}\mathsf{D}( \mathcal{A})\to 0\) of triangulated categories. As \(S\) is presilting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\), we derive from the Thomason-Trobaugh-Yao localization theorem [N, Theorem 2.1] that \(i^{*}\) induces a triangle equivalence \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)/\operatorname{\mathsf{thick}}(S) \simeq\mathsf{per}(\mathcal{A})\), whence \(\mathsf{per}(\mathcal{A})\) is silting-discrete by Lemma 2.12.
**Remark 2.16**.: Theorem 2.15 also holds for a nonpositive dg \(K\)-algebra \(\Lambda\).
The proof of Theorem 2.15 yields the following result as a byproduct, which provides a Bongartz-type lemma for a given presilting object \(S\); if \(\Lambda\geq S\geq\Lambda[1]\), then \(\mathcal{A}\) is nonpositive because \(H^{i}(\mathcal{A})\simeq H^{i+1}(X)\) for \(i\geq 1\), so the result gives a generalization of Proposition 2.5(1).
**Corollary 2.17**.: _Keep the notation as before Theorem 2.15. Assume that \(\mathcal{A}\) is nonpositive. Then \(S\) is partial silting._
Proof.: In the proof of Theorem 2.15, we obtained a triangle equivalence \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)/\operatorname{\mathsf{thick}} \,S\simeq\mathsf{per}(\mathcal{A})\). The silting reduction (Lemma 2.12) yields isomorphisms \(\operatorname{\mathsf{silt}}_{S}(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\, \Lambda))\simeq\operatorname{\mathsf{silt}}(\mathsf{K}^{\mathsf{b}}(\mathsf{ proj}\,\Lambda)/\operatorname{\mathsf{thick}}\,S)\simeq\operatorname{\mathsf{silt}}( \mathsf{per}(\mathcal{A}))\) of posets. Since \(\mathcal{A}\) is nonpositive, it is silting in \(\mathsf{per}(\mathcal{A})\), whence \(\operatorname{\mathsf{silt}}(\mathsf{per}(\mathcal{A}))\) is nonempty; so is \(\operatorname{\mathsf{silt}}_{S}(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\, \Lambda))\). Thus, it turns out that \(S\) is partial silting.
Let \(e\) be an idempotent of \(\Lambda\). Following [1, Proposition 2.10] and its proof, we can construct a nonpositive dg \(K\)-algebra \(\mathcal{A}_{e}\) and a homological epimorphism \(\varphi:\Lambda\to\mathcal{A}_{e}\) of dg algebras; embed the canonical morphism \(\Lambda e\otimes_{e\Lambda e}^{\mathsf{L}}e\Lambda\to\Lambda\) into the triangle
\[\Lambda e\otimes_{e\Lambda e}^{\mathsf{L}}e\Lambda\to\Lambda\xrightarrow{ \varphi^{\prime}}\mathcal{A}^{\prime}\to\Lambda e\otimes_{e\Lambda e}^{ \mathsf{L}}e\Lambda[1],\]
and take the standard truncations \(\mathcal{A}_{e}:=\sigma^{\leq 0}(\mathcal{A}^{\prime})\) and \(\varphi:=\sigma^{\leq 0}(\varphi^{\prime})\). Then, \(\mathcal{A}_{e}\) and \(\varphi\) are the desired dg algebra and homological epimorphism, since we have isomorphisms
\[H^{i}(\mathcal{A}_{e})\simeq\left\{\begin{array}{ll}0&(i>0);\\ \Lambda/\Lambda e\Lambda&(i=0);\\ \operatorname{Ker}(\Lambda e\otimes_{e\Lambda e}e\Lambda\to\Lambda e\Lambda)& (i=-1);\\ \operatorname{Tor}_{-i-1}^{e\Lambda e}(\Lambda e,e\Lambda)&(i<-1).\end{array}\right. \tag{2.17.1}\]
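These isomorphisms can be read off from the long exact cohomology sequence of the triangle above; here is a sketch. Write \(T:=\Lambda e\otimes_{e\Lambda e}^{\mathsf{L}}e\Lambda\), so that \(H^{i}(T)\simeq\operatorname{Tor}_{-i}^{e\Lambda e}(\Lambda e,e\Lambda)\) for \(i\leq 0\) and \(H^{i}(T)=0\) for \(i>0\). The triangle gives the exact sequence

\[\cdots\to H^{i}(T)\to H^{i}(\Lambda)\to H^{i}(\mathcal{A}^{\prime})\to H^{i+1}(T)\to H^{i+1}(\Lambda)\to\cdots.\]

Since \(H^{i}(\Lambda)=0\) for \(i\neq 0\), we get \(H^{i}(\mathcal{A}^{\prime})\simeq H^{i+1}(T)\simeq\operatorname{Tor}_{-i-1}^{e\Lambda e}(\Lambda e,e\Lambda)\) for \(i<-1\); for \(i=-1\), the sequence identifies \(H^{-1}(\mathcal{A}^{\prime})\) with the kernel of the map \(H^{0}(T)=\Lambda e\otimes_{e\Lambda e}e\Lambda\to\Lambda\), whose image is \(\Lambda e\Lambda\); and for \(i=0\), we obtain \(H^{0}(\mathcal{A}^{\prime})\simeq\Lambda/\Lambda e\Lambda\) since \(H^{1}(T)=0\). Finally, the truncation \(\sigma^{\leq 0}\) does not change cohomology in nonpositive degrees.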
Here is a corollary of Theorem 2.15 and Corollary 2.17.
**Corollary 2.18**.: _Let \(e\) be an idempotent of \(\Lambda\). Then we have the following._
1. _If_ \(\Lambda\) _is silting-discrete, then so is_ \(\mathsf{per}(\mathcal{A}_{e})\)_._
2. _A presilting object of_ \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) _generating_ \(\operatorname{\mathsf{thick}}(e\Lambda)\) _is partial silting._
Proof.: As above, we obtain a recollement \((\mathsf{D}(\mathcal{A}_{e}),\mathsf{D}(\Lambda),\operatorname{\mathsf{Tria }}(e\Lambda))\) and know that \(\mathcal{A}_{e}\) is nonpositive. So, the first assertion directly follows from Theorem 2.15. If \(S\) is a presilting object of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) generating \(\operatorname{\mathsf{thick}}(e\Lambda)\), then we have \(\operatorname{\mathsf{Tria}}(S)=\operatorname{\mathsf{Tria}}(e\Lambda)\), whence the last assertion is derived from Corollary 2.17.
In the rest of this subsection, we give an explicit description of \(\mathcal{A}_{e}\) and an example, thanks to [11].
We first construct a nonpositive dg algebra \(\widetilde{\Lambda}\) quasi-isomorphic to \(\Lambda\); see [O, Construction 2.6]. Let \(\Lambda\) be presented by a (finite) quiver \(Q\) with admissible ideal \(I\); write \(Q^{(0)}:=Q\). We make a graded quiver \(Q^{(1)}\) by adding to \(Q^{(0)}\) arrows in degree \(-1\) which correspond to each element of \(I\) and consider a differential which sends new arrows to their corresponding relations; then \(H^{0}(KQ^{(1)})=\Lambda\). Picking out a generating set of
\(H^{-1}(KQ^{(1)})\), one produces a graded quiver \(Q^{(2)}\) by adding to \(Q^{(1)}\) arrows in degree \(-2\) which correspond to each element of the generating set, and considers a differential in the same way. Iterating this, we get a graded quiver \(Q^{(\infty)}:=\bigcup_{i=0}^{\infty}Q^{(i)}\) such that the dg quiver algebra \(\widetilde{\Lambda}:=(KQ^{(\infty)},d)\) is quasi-isomorphic to \(\Lambda\).
Now, we apply Theorem 7.1 of [13] to obtain that \(\mathcal{A}_{e}\) is quasi-isomorphic to \(\widetilde{\Lambda}/\widetilde{\Lambda}\widetilde{e}\widetilde{\Lambda}\), where \(\widetilde{e}\) is the idempotent of \(\widetilde{\Lambda}\) corresponding to \(e\).
**Example 2.19**.: Let \(\Lambda\) be the algebra presented by the quiver
with \(\beta\gamma=0\); it is silting-discrete by Proposition 2.8. Then, \(\widetilde{\Lambda}\) is the dg quiver algebra of
with \(\deg(\delta)=-1\), the other arrows of degree \(0\) and the trivial differential.
Let \(e\) be the primitive idempotent of \(\Lambda\) corresponding to the vertex \(3\). We see that \(\mathcal{A}_{e}\) is the dg quiver algebra of the graded Kronecker quiver with arrows of degree \(0\) and \(-1\), and the trivial differential. This is silting-discrete by Corollary 2.18(1).
Note that the dg quiver algebra of the graded Kronecker quiver with arrows of the same degree \(n\leq 0\) and the trivial differential is not silting-discrete; indeed, it is derived equivalent to the (ordinary) Kronecker algebra by the silting object \(P_{1}[-n]\oplus P_{2}\). Here, \(P_{1}\) and \(P_{2}\) are the indecomposable projective modules corresponding to the vertices \(1\) and \(2\), respectively.
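To make the degree count behind this equivalence explicit, here is a sketch (assuming the convention that \(\operatorname{Hom}(P_{1},P_{2})\) is spanned by the two arrows, placed in degree \(n\)): shifting the source shifts the grading, so

\[\operatorname{Hom}^{k}(P_{1}[-n],P_{2})\simeq\operatorname{Hom}^{k+n}(P_{1},P_{2}),\]

which is two-dimensional for \(k=0\) and zero otherwise, while \(\operatorname{Hom}^{k}(P_{2},P_{1}[-n])=0\) for all \(k\) because there are no paths from \(2\) to \(1\). Hence the dg endomorphism algebra of \(P_{1}[-n]\oplus P_{2}\) has cohomology concentrated in degree \(0\), where it is the ordinary Kronecker algebra; this yields the asserted derived equivalence.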
This observation gives an example of silting-indiscrete algebras.
**Example 2.20**.: Let \(Q\) be the quiver of type \(\widetilde{A_{p,q}}\); label every arrow of the path of length \(p\) from the source to the sink by \(\alpha\), and every arrow of the path of length \(q\) by \(\beta\). Then \(\Lambda:=KQ/(\alpha^{p},\beta^{q})\) is silting-indiscrete.
Proof.: We see that \(\widetilde{\Lambda}\) is given by the graded quiver
with the dashed arrows of degree \(-1\). Taking the sum \(e\) of the primitive idempotents of \(\Lambda\) corresponding to all vertices but \(1\) and \(2\), we obtain that \(\widetilde{\Lambda}/\widetilde{\Lambda}\widetilde{e}\widetilde{\Lambda}\) is the dg quiver algebra of the graded Kronecker quiver with arrows of degree \(-1\), which is silting-indiscrete; hence so is \(\Lambda\) by Corollary 2.18(1).
### Homological epimorphisms of algebras
In this subsection, we restrict the results in Subsection 2.2 to 'ordinary' algebras; i.e., \(\mathcal{A}\) is just an algebra.
Let \(\Gamma\) be a finite dimensional \(K\)-algebra. Recall that a homomorphism \(\varphi:\Lambda\to\Gamma\) of algebras is a _homological epimorphism_ if the canonical morphism \(\Gamma\otimes_{\Lambda}^{\mathsf{L}}\Gamma\to\Gamma\) is an isomorphism in the derived category of \((\Gamma,\Gamma)\)-bimodules; that is, the multiplication map \(\Gamma\otimes_{\Lambda}\Gamma\to\Gamma\) is an isomorphism and \(\operatorname{Tor}_{i}^{\Lambda}(\Gamma,\Gamma)=0\) for all \(i>0\). Note that the former condition \(\Gamma\otimes_{\Lambda}\Gamma\simeq\Gamma\) is satisfied iff \(\varphi\) is an epimorphism in the category of rings; such a \(\varphi\) is called a _ring-epimorphism_ [S, Proposition 1.1]. A surjective homomorphism (i.e., an 'ordinary' epimorphism) is a ring-epimorphism, but the converse is not always true.
As a homological epimorphism \(\Lambda\to\Gamma\) of algebras induces a recollement \((\mathsf{D}(\Gamma),\mathsf{D}(\Lambda),\mathcal{X})\) by [NS, Section 4], we immediately obtain the following result from Theorem 2.15 under an additional assumption.
**Corollary 2.21**.: _Let \(\varphi:\Lambda\to\Gamma\) be a homological epimorphism and assume that \(\mathcal{X}\) is compactly generated; e.g., \(\operatorname{pd}\Gamma_{\Lambda}<\infty\). If \(\Lambda\) is silting-discrete, then so is \(\Gamma\)._
We say that an idempotent \(e\) of \(\Lambda\) (or the ideal \(\Lambda e\Lambda\)) is _stratifying_ if the canonical homomorphism \(\pi:\Lambda\to\Lambda/\Lambda e\Lambda\) is a homological epimorphism; equivalently, the dg algebra \(\mathcal{A}_{e}\) as in Corollary 2.18 is isomorphic to \(\Lambda/\Lambda e\Lambda\). For example, if \(\Lambda\) is hereditary (i.e., \(\Lambda=KQ\) for an acyclic quiver \(Q\)), then any idempotent \(e\) of \(\Lambda\) is stratifying; notice that \(\widetilde{\Lambda}\) as at the end of Subsection 2.2 is nothing but \((KQ,0)=\Lambda\). More generally, if \(\Lambda e\Lambda\) is projective as a right \(\Lambda\)-module, then \(e\) is stratifying. Here is a direct consequence of Corollaries 2.18/2.21.
**Corollary 2.22**.: _Let \(e\) be a stratifying idempotent of \(\Lambda\). If \(\Lambda\) is silting-discrete, then so is \(\Lambda/\Lambda e\Lambda\)._
A typical example of stratifying idempotents is a primitive idempotent corresponding to a source or a sink in the Gabriel quiver of \(\Lambda\). So, we have:
**Example 2.23**.: Let \(e\) be a primitive idempotent corresponding to a source or a sink in the Gabriel quiver of \(\Lambda\). If \(\Lambda\) is silting-discrete, then so is \(\Lambda/\Lambda e\Lambda\). In other words, if a one-point extension algebra of \(\Lambda\) is silting-discrete, then so is \(\Lambda\).
We remark that this example can also be deduced from the fact that \(\Lambda/\Lambda e\Lambda\) is isomorphic to \((1-e)\Lambda(1-e)\) for such an idempotent \(e\).
Note that the converse of Example 2.23 does not necessarily hold; a one-point extension of a path algebra of Dynkin type is often of non-Dynkin type. This also means that silting-discreteness of the two outer terms of a recollement does not always pass to the middle term.
We apply Example 2.23 to tree quiver algebras.
**Example 2.24**.: Let \(\Lambda\) be a tree quiver algebra; i.e., the underlying graph of its Gabriel quiver is a tree. If \(\Lambda\) is silting-discrete, then so is \(\Lambda/\Lambda e\Lambda\) for every idempotent \(e\) of \(\Lambda\).
### Remark on Kronecker algebras
As is well-known, Kronecker algebras are a first and instructive example of \(\tau\)-tilting infinite algebras. When we discuss silting-discreteness, one often needs graded Kronecker algebras, as we already observed in the preceding subsections. We now summarize the situation.
Let \(\mathcal{K}\) be the dg quiver algebra of a graded \(n\)-Kronecker quiver with trivial differential. For \(\lambda:=(n_{i})\in\bigoplus_{i\in\mathbb{Z}}\mathbb{Z}_{\geq 0}\), we write \(\mathcal{K}:=\mathcal{K}(\lambda)\), where \(n_{i}\) stands for the number of arrows of degree \(i\); we will omit the zero parts. Note that \(n=\sum_{i}n_{i}\). We remark that, by applying shifts, \(\mathcal{K}((n_{i})_{i})\) is derived equivalent to \(\mathcal{K}((n_{i+m})_{i})\) for every \(m\in\mathbb{Z}\).
When \(n_{i}\neq 0\) for at most two indices \(i\), we get the following result.
**Proposition 2.25**.: _Let \(i<0\). Then \(\mathcal{K}=\mathcal{K}(n_{0},n_{i})\) is silting-discrete iff \(n_{0},n_{i}\leq 1\)._
Proof.: If \(n_{0}\geq 2\), then \(H^{0}(\mathcal{K})\) is the ordinary (2-)Kronecker algebra; so, it is not \(\tau\)-tilting finite. Therefore, \(\mathcal{K}\) is not silting-discrete. Let \(n_{i}\geq 2\) and \(S:=H^{0}(\mathcal{K})/\operatorname{rad}H^{0}(\mathcal{K})\). Then the Koszul dual \(\mathcal{K}^{!}:=\operatorname{Ext}_{\mathcal{K}}^{*}(S,S)\) of \(\mathcal{K}\) is quasi-isomorphic to \(\mathcal{K}(n^{\prime}_{1-i},n^{\prime}_{1})\), which is derived equivalent to \(\mathcal{K}(n^{\prime\prime}_{0},n^{\prime\prime}_{i})\). Here, \(n^{\prime\prime}_{0}=n^{\prime}_{1-i}=n_{i}\) and \(n^{\prime\prime}_{i}=n^{\prime}_{1}=n_{0}\). Since \(\mathcal{K}\) and \(\mathcal{K}^{!}\) are derived equivalent [SY, Section 5], we obtain that \(\mathcal{K}\) is not silting-discrete.
Let us show the 'if' part; we only handle the case that \(n_{0}=1=n_{i}\). Let \(\Lambda\) be the radical-square-zero algebra given by the quiver
It follows from Proposition 2.8 that \(\Lambda\) is silting-discrete. Let \(e_{v}\) be the primitive idempotent of \(\Lambda\) corresponding to the vertex \(v\). Taking the idempotent \(e:=e_{1}+\cdots+e_{-i}\), we see that \(\mathcal{K}\) is quasi-isomorphic to \(\mathcal{A}_{e}\), whence it is silting-discrete by Corollary 2.18.
**Remark 2.26**.: We know from [LY, Corollary 3.2] that if \(\mathcal{K}\) is derived equivalent to an 'ordinary' algebra, then \(\mathcal{K}=\mathcal{K}(n_{0},n_{-1})\) up to shift. Moreover, by [LY, Corollary 3.10] it is seen that \(\mathcal{K}(n_{0},n_{-1})\) is derived equivalent to the algebra presented by the quiver consisting of vertices \(1\) and \(2\) with \(n_{0}\) arrows from \(1\) to \(2\) (say \(x\)) and \(n_{-1}\) arrows from \(2\) to \(1\) (say \(y\)), and with all relations \(yx=0\). This is just quasi-hereditary with \(2\) simples.
**Remark 2.27**.: We observe that \(\mathcal{K}(1,1)\) for an arbitrary degree is derived equivalent to a graded gentle one-cycle algebra, which behaves like (ordinary) derived-discrete algebras. We hope that graded gentle one-cycle algebras which are not derived equivalent to \(K\widetilde{A}\) are silting-discrete; see [KY2].
### Remark on quotients
We give a remark on the inheritance of silting-discreteness by (idempotent) quotients, comparing with \(\tau\)-tilting finiteness.
First, note that algebra ('ordinary') epimorphisms do not necessarily transmit silting-discreteness; see Proposition 2.9 and Example 2.20.
Next, let us observe that idempotent quotients do not always inherit silting-discreteness, either. Let \(\Gamma\) be the algebra given by the quiver
with relations \(\beta\alpha=\delta\alpha=\gamma\varepsilon=0\) and \(\gamma\beta=yx\). To check that \(\Gamma\) is a piecewise hereditary algebra of type \(E_{7}\), we first show the following claim.
**Claim.** Let \(\Lambda\) be a piecewise hereditary algebra of Dynkin type and \(M\) an indecomposable right \(\Lambda\)-module. Then the one-point extension algebra \(\begin{pmatrix}K&M\\ 0&\Lambda\end{pmatrix}\) is derived equivalent to a hereditary algebra.
Proof.: Let \(A\) be a hereditary algebra of Dynkin type which is derived equivalent to \(\Lambda\). Applying shift functors if needed, we may assume that \(M\) corresponds to an indecomposable \(A\)-module \(N\) under a derived equivalence between \(\Lambda\) and \(A\). Since \(A\) is representation-finite, there is an integer \(\ell\geq 0\) with \(P:=\tau^{\ell}(N)\) projective. Here, \(\tau\) denotes the Auslander-Reiten translation. As \(\tau\) induces a derived autoequivalence of \(A\), we can take a derived equivalence between \(\Lambda\) and \(A\) which sends \(M\) to \(P\). By [BL, Theorem 1], we obtain that \(\begin{pmatrix}K&M\\ 0&\Lambda\end{pmatrix}\) is derived equivalent to \(\begin{pmatrix}K&P\\ 0&A\end{pmatrix}\), which is hereditary.
Let us return to \(\Gamma\). Since the full subquiver on the vertices \(2\), \(3\), \(4\) and \(7\) yields a piecewise hereditary algebra of type \(D_{4}\), we observe by the claim that the one-point extension at the vertex \(5\) is derived equivalent to a hereditary algebra whose Coxeter polynomial is \((x+1)(x^{4}+1)\), which is of type \(D_{5}\). Therefore, the algebra presented by the full subquiver on the vertices \(2\), \(3\), \(4\), \(5\) and \(7\) is piecewise hereditary of type \(D_{5}\). By a similar argument (using the dual of the claim), we see that \(\Gamma\) is derived equivalent to a hereditary algebra with Coxeter polynomial \((x+1)(x^{6}-x^{3}+1)\), which is of type \(E_{7}\). Thus, we conclude that \(\Gamma\) is silting-discrete.
However, it is obtained by Example 2.14 that the idempotent quotient \(\Gamma/\Gamma e_{7}\Gamma\) is not silting-discrete, where \(e_{7}\) denotes the primitive idempotent corresponding to the vertex \(7\).
**Remark 2.28**.: Although Coxeter polynomials are a derived invariant, we cannot judge the silting-discreteness of a given algebra by them alone, in general. Actually, the extended canonical algebra of type \(\langle 2,4,6\rangle\), which is given by the quiver
with relation \(z^{6}=x^{2}-y^{4}\), has the same Coxeter polynomial as a (piecewise) hereditary algebra of type \(D_{12}\) [LP, Proposition 5.9], but it is not silting-discrete. Note that the algebra is never derived equivalent to a hereditary algebra: for a hereditary algebra, if the spectral radius (\(=1\)) is not a root of the Coxeter polynomial, then the algebra is representation-finite; if the spectral radius (\(=1\)) is a root of the Coxeter polynomial, then the algebra is representation-tame; if the spectral radius is strictly greater than \(1\), then the algebra is representation-wild.
## 3. Applications
In this section, we investigate the silting-discreteness of finite dimensional algebras; in particular, we focus on
* simply-connected tensor algebras (3.1), and
* selfinjective Nakayama algebras (3.2).
Let \(\Lambda\) be a finite dimensional \(K\)-algebra which is basic and ring-indecomposable. Let \(\overrightarrow{A_{n}}\) be the quiver \(1\to 2\to\cdots\to n\). For two successive arrows \(\bullet\xrightarrow{\alpha}\bullet\xrightarrow{\beta}\bullet\) in the Gabriel quiver of \(\Lambda\), we draw a dotted arc connecting them if \(\alpha\beta=0\).
### Simply-connected tensor algebras
In this subsection, we completely classify silting-discrete simply-connected tensor algebras. Since we already know that the tensor product of three nonlocal algebras is never silting-discrete [AH, Proposition 4.1], we turn our interest to the classification of silting-discrete simply-connected tensor algebras of two algebras. Here is the first case, where one of the two is \(K\).
**Proposition 3.1**.: _A simply-connected algebra is silting-discrete if and only if it is piecewise hereditary of Dynkin type._
Proof.: The 'if' part is trivial. We show the 'only if' part. Let \(\Lambda\) be a silting-discrete simply-connected algebra. Note that the silting-discreteness implies the \(2_{\Lambda}\)-silting finiteness (the \(\tau\)-tilting finiteness for \(\Lambda\)). We see by [W, Theorem 3.4] that \(\Lambda\) is representation-finite. Similarly, it follows from [ANS, Lemma 3.1] that every reflection of \(\Lambda\) is simply-connected and \(\tau\)-tilting finite, so it is representation-finite. Then, we apply Proposition 3.3 of [ANS] to deduce that \(\Lambda\) is piecewise hereditary of Dynkin type.
_In the rest of this subsection, let \(A\) and \(B\) be triangular algebras which are basic and ring-indecomposable_, and put \(\Lambda:=A\otimes_{K}B\). Here, a _triangular_ algebra is a finite dimensional \(K\)-algebra whose Gabriel quiver is acyclic.
The goal of this subsection is to give a complete classification of silting-discrete simply-connected tensor algebras. We know that \(\Lambda\) is silting-discrete if \(A\) is local and \(B\) is silting-discrete [AH, Theorem 2.1]. Let us give an easy observation.
**Lemma 3.2**.: _If \(\Lambda\) is silting-discrete, then so are both \(A\) and \(B\)._
Proof.: Let \(e\) be a primitive idempotent of \(A\). As \(A\) is triangular, we have an isomorphism \(eAe\simeq K\). Apply Corollary 2.11 to deduce that \(B\simeq(e\otimes 1)\Lambda(e\otimes 1)\) is silting-discrete.
We determine the algebra structure of one of the components when \(\Lambda\) is silting-discrete.
**Lemma 3.3**.: _If \(\Lambda\) is silting-discrete, then at least one of \(A\) and \(B\) has at most 2 nonisomorphic simple modules. In particular, it is isomorphic to \(K\) or \(K\overrightarrow{A_{2}}\)._
Proof.: By Lemma 3.2, it is observed that \(A\) and \(B\) are silting-discrete, and so they have no multiple arrows in their Gabriel quivers [DIRRT, Theorem 5.12(d)]. Note that for every idempotent \(e\) of \(A\), \(eAe\) is a silting-discrete triangular algebra.
Now, assume that \(A\) has at least \(3\) nonisomorphic simple modules. As above, there exists an idempotent \(e\) of \(A\) such that \(eAe\) is isomorphic to one of the following:
From Corollary 2.11, we obtain that \((e\otimes 1)\Lambda(e\otimes 1)=(eAe)\otimes_{K}B\) is silting-discrete. Note that the first two of the three algebras above are derived equivalent to the path algebra \(K\overrightarrow{A_{3}}\). If \(eAe\) is one of the first two, then \((eAe)\otimes_{K}B\) is derived equivalent to \(K\overrightarrow{A_{3}}\otimes_{K}B\), which must be silting-discrete. This yields by [AH, Theorem 4.4] that \(B\) is a Nakayama algebra with radical square zero, whence it is isomorphic to \(K\overrightarrow{A_{r}}/\operatorname{rad}^{2}K\overrightarrow{A_{r}}\) for some \(r>0\), since \(B\) is triangular. We see that \((eAe)\otimes_{K}B\) is derived equivalent to \(K\overrightarrow{A_{3}}\otimes_{K}K\overrightarrow{A_{r}}\), which is also silting-discrete. By [AH, Theorem 4.4] again, we have \(r\leq 2\).
Finally, let us suppose that \(eAe\) is the last of the three algebras above, which is derived equivalent to the algebra \(C\) given by the quiver
with \(\gamma\beta=0\). So, we observe that \((eAe)\otimes_{K}B\) is derived equivalent to \(C\otimes_{K}B\), which is silting-discrete. Since there is an epimorphism \(C\otimes_{K}B\to K\overrightarrow{A_{3}}\otimes_{K}B\), it turns out that \(K\overrightarrow{A_{3}}\otimes_{K}B\) is \(\tau\)-tilting finite [DIRRT]. By an argument similar to the above, we conclude that \(B\) has at most \(2\) nonisomorphic simple modules.
We often call the algebra \(K\overrightarrow{A_{2}}\otimes_{K}K\overrightarrow{A_{r}}\) the _commutative ladder_ of degree \(r\), which is presented by the quiver with relations as follows:
Now, we achieve the goal of this subsection.
**Theorem 3.4**.: _Assume that \(A\) and \(B\) are nonlocal and simply-connected. Then the following are equivalent:_
1. \(\Lambda\) _is silting-discrete;_
2. _It is a piecewise hereditary algebra of type_ \(D_{4},E_{6}\) _or_ \(E_{8}\)_;_
3. _It is derived equivalent to the commutative ladder of degree_ \(\leq 4\)_._
Proof.: The equivalence (2)\(\Leftrightarrow\)(3) is due to [L]. The implication (2)\(\Rightarrow\)(1) is already known. Let us show that (1)\(\Rightarrow\)(3) holds. Applying Lemma 3.3, we may suppose that \(A\simeq K\overrightarrow{A_{2}}\). By Lemma 3.2, one sees that \(B\) is silting-discrete. Therefore, it follows from Proposition 3.1 that \(B\) is piecewise hereditary of Dynkin type \(\Delta\), whence \(\Lambda\) is derived equivalent to \(K\overrightarrow{A_{2}}\otimes_{K}K\overrightarrow{\Delta}\). Since this is silting-discrete, we observe that \(\Delta\) must be of type \(A_{r}\) by [AH, Theorem 3.2]; that is, \(\Lambda\) is derived equivalent to the commutative ladder of degree \(r\). By [AH, Example 3.3], we get \(r\leq 4\).
**Remark 3.5**.: Lemma 3.3 also says that if \(\Lambda\) is silting-discrete, then at least one of \(A\) and \(B\) is automatically simply-connected. Then, we ask whether both are simply-connected or not. In fact, it seems to be unknown whether \(K\overrightarrow{A_{2}}\otimes_{K}C\) is silting-discrete, where \(C\) is the algebra presented by the quiver with relation:
(It is \(\tau\)-tilting finite, thanks to Aoki's QPA program.) This is also one reason why we cannot drop the assumption that \(A\) and \(B\) are simply-connected in Theorem 3.4.
### Selfinjective Nakayama algebras
Ten years ago, the first-named author of this paper showed that any representation-finite symmetric algebra is silting-discrete [Ai1]. Then, we might naturally hope that every representation-finite selfinjective algebra is also silting-discrete. However, we here give the following surprising result, which says that the guess does not necessarily hold.
**Theorem 3.6**.: _Let \(\Lambda\) be a (nonlocal) selfinjective Nakayama algebra; that is, it is presented by the cyclic quiver with \(n\) vertices, and the \(r\)-th radical is zero for some \(n,r>1\). Then \(\Lambda\) is not silting-discrete if (i) \(r=3,4\) and \(n\geq 11\); (ii) \(r=5,6\) and \(n\geq r+8\); or (iii) \(r\geq 7\) and \(n\geq 2r+1\)._
Let \(n,r>1\). We denote by \(N_{n,r}\) the algebra given by the quiver
with \(x^{r}=0\). The primitive idempotent corresponding to the vertex \(i\) of the quiver is denoted by \(e_{i}\). As is well-known, an algebra is a nonlocal selfinjective (symmetric) Nakayama algebra if and only if it is isomorphic to \(N_{n,r}\) for some \(n,r>1\) (\(r\equiv 1\pmod{n}\)).
We give an example of silting-discrete selfinjective Nakayama algebras.
**Proposition 3.7**.: _If \(r=2\) or \(r\equiv 1\pmod{n}\), then \(N_{n,r}\) is silting-discrete._
Proof.: By Proposition 2.8 for \(r=2\) and [Ai1, Theorem 5.6] for \(r\equiv 1\pmod{n}\).
Put \(A(n,r):=K\overrightarrow{A_{n}}/\operatorname{rad}^{r}K\overrightarrow{A_{n}}\). Under a suitable condition, we can get \(A(n,r)\) by an idempotent truncation of a selfinjective Nakayama algebra.
**Lemma 3.8**.: _Let \(1\leq s\leq n\) and put \(e:=e_{1}+\cdots+e_{s}\). If \(s+r\leq n+1\), then \(eN_{n,r}e\) is isomorphic to \(A(s,r)\)._
Proof.: Straightforward.
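A sketch of the straightforward computation: a path between vertices \(i,j\in\{1,\dots,s\}\) in the cyclic quiver either stays inside the segment \(1\to 2\to\cdots\to s\) or wraps around the cycle, and a wrapping path has length at least

\[n-(s-1)\geq r\]

by the assumption \(s+r\leq n+1\), hence vanishes in \(N_{n,r}\) by the relation \(x^{r}=0\). Therefore \(eN_{n,r}e\) is spanned by the paths of length \(<r\) inside the segment, which is precisely \(A(s,r)=K\overrightarrow{A_{s}}/\operatorname{rad}^{r}K\overrightarrow{A_{s}}\).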
Thanks to the list of [HS] (see also [LP]), we obtain a complete classification of the silting-discrete algebras \(A(n,r)\).
**Lemma 3.9**.:
1. _If_ \(A(n,r)\) _is not silting-discrete, then neither is_ \(A(n+1,r)\)_;_
2. \(A(n,r)\) _is silting-discrete if and only if one of the following cases occurs: (i)_ \(r=2\)_; (ii)_ \(r=3,5,6\) _and_ \(n\leq 8\)_; (iii)_ \(r=4\) _and_ \(n\leq 7\)_; (iv)_ \(r\geq 7\) _and_ \(n=r+1\)_._
Now, we show the main theorem of this subsection.
Proof of Theorem 3.6.: Let \(\Lambda:=N_{n,r}\) for \(n,r>1\). We put \(s\) in each case as follows:
1. \(r=3,4\) and \(n\geq 11\rightsquigarrow s=9\) (\(r=3\)) or \(s=8\) (\(r=4\));
2. \(r=5,6\) and \(n\geq r+8\rightsquigarrow s=9\);
3. \(r\geq 7\) and \(n\geq 2r+1\rightsquigarrow s=r+2\).
In all cases, the assumption of Lemma 3.8 is satisfied; hence, \(e\Lambda e\simeq A(s,r)\).
By Lemma 3.9, we see that \(A(s,r)\) is not silting-discrete for these pairs \((s,r)\), whence \(\Lambda\) is not silting-discrete by Corollary 2.11.
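For completeness, both hypotheses (the bound \(s+r\leq n+1\) of Lemma 3.8 and the failure of silting-discreteness recorded in Lemma 3.9) can be verified case by case:

\[\begin{array}{lll}
\text{(i)} & s+r=12\leq n+1, & A(9,3)\text{ and }A(8,4)\text{ are not silting-discrete by Lemma 3.9(2)(ii),(iii);}\\
\text{(ii)} & s+r=r+9\leq n+1, & A(9,r)\text{ is not silting-discrete by Lemma 3.9(2)(ii) since }9>8;\\
\text{(iii)} & s+r=2r+2\leq n+1, & A(r+2,r)\text{ is not silting-discrete by Lemma 3.9(2)(iv) since }r+2>r+1.
\end{array}\]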
## Appendix A Derived-discrete algebras are silting-discrete
The aim of this appendix is to give an approach to checking silting-discreteness via the classification of indecomposable objects. As a corollary, we obtain that any derived-discrete algebra is silting-discrete [YY, Example 3.9, Corollary 4.2]; the case of finite global dimension is due to [BPP]. Here is the main theorem of this appendix.
**Theorem A.1**.: _Assume that for each integer \(d>0\), there exists an upper bound of the dimensions of \(\operatorname{Hom}_{\Lambda}(\oplus_{i\in\mathbb{Z}}P^{i},\Lambda/ \operatorname{rad}\Lambda)\) for all indecomposable perfect complexes \(P\) with length \(d\). Then \(\Lambda\) is silting-discrete._
Proof.: By [HZS, Corollary 9] (see also [ANR, Theorem 1.1]), the assumption implies that there are only finitely many indecomposable presilting complexes of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) with length \(d\), whence \(\mathsf{d}_{\Lambda}\text{-}\mathsf{silt}\,\mathsf{K}^{\mathsf{b}}(\mathsf{ proj}\,\Lambda)\) is finite. Thus, \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) is silting-discrete.
For example, the assumption of Theorem A.1 is satisfied if \(\Lambda\) is a derived-discrete algebra [BM, BGS] (see also [ALPP]). So, we recover the result of Yao-Yang.
**Corollary A.2**.: _Any derived-discrete algebra is silting-discrete. In particular, every Nakayama algebra with radical square zero is silting-discrete._
Proof.: Let \(\Lambda\) be a derived-discrete algebra which is not piecewise hereditary of Dynkin type; in the Dynkin case, the assertion is clear. By [BGS], we see that \(\Lambda\) is derived equivalent to the algebra presented by the quiver with relations for some \(1\leq r\leq n\) and \(m\geq 0\):
It follows from [BM] that each term of an indecomposable perfect complex is a multiplicity-free projective module, which means that \(\Lambda\) satisfies the assumption of Theorem A.1, so it is silting-discrete.
## Appendix B Triangulated categories whose silting objects are tilting
The gap between silting and tilting objects has often drawn attention lately; see [AK, AD] for example. In this appendix, we explore when a triangulated category has only tilting objects; the main result of this appendix was used by the first-named author of this paper in the article [Ai2], but no proof was given there because it is easy. The aim of this appendix is to present a proof and examples of the main result.
We say that \(\mathcal{T}\) is _asotic_ if any silting object in \(\mathcal{T}\) is tilting; the word 'asotic' is an abbreviation of the phrase 'Any Silting Object is Tilting' + the suffix '-ic'.
In this appendix, algebras are always finite dimensional \(K\)-algebras which are basic and ring-indecomposable, unless otherwise noted.
The first example of asotic triangulated categories is a triangulated category with an indecomposable tilting object [AI, Theorem 2.26]. In the following, let us consider the nonlocal case.
We say that \(\mathcal{T}\) is \(\ell\)_-Calabi-Yau_ if there is a bifunctorial isomorphism \(\operatorname{Hom}_{\mathcal{T}}(-,?)\simeq D\operatorname{Hom}_{\mathcal{T}} (?,-[\ell])\), where \(D\) stands for the \(K\)-dual. Here is an easy example of asotic triangulated categories; see [AI, Lemma 2.7 and Example 2.8].
**Example B.1**.: A \(0\)-Calabi-Yau triangulated category is asotic. In particular, the perfect derived category of a symmetric algebra is \(0\)-Calabi-Yau, and so it is asotic.
We also know that a (complete) preprojective algebra of extended Dynkin type admits an asotic perfect derived category [KM, Proposition A.1]; note that the algebra is not finite dimensional and the perfect derived category has symmetry like \(0\)-Calabi-Yau property.
The main result of this appendix gives a slight generalization of Example B.1. Leaving the details for silting mutation to the paper [AI], we use the terminologies \(\mu_{X}^{-}(T)\) and \(\mu_{X}^{+}(T)\) for the left and right mutations of a silting object \(T\) at its direct summand \(X\).
**Theorem B.2**.: _Let \(\Lambda\) be an algebra and \(\mathcal{T}:=\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\)._
1. _The following are equivalent:_ 1. \(\mu_{P}^{-}(\Lambda)\) _is tilting for any indecomposable projective module_ \(P\)_;_ 2. _The top of the indecomposable injective module corresponding to_ \(S\) _is the direct sum of some copies of_ \(S\) _for every simple module_ \(S\)_._
2. _The following are equivalent:_ 1. \(\mu_{P}^{+}(\Lambda)\) _is tilting for any indecomposable projective module_ \(P\)_;_ 2. _The socle of the indecomposable projective module corresponding to_ \(S\) _is the direct sum of some copies of_ \(S\) _for every simple module_ \(S\)_._
Proof.: We show (2); the other can be handled dually. Let \(S\) be a simple module and \(P\) its corresponding indecomposable projective module, which is of the form \(e\Lambda\) for a primitive idempotent \(e\) of \(\Lambda\). Then, we observe that the right mutation \(\mu_{P}^{+}(\Lambda)\) is the direct sum of a complex \(X\) and the stalk complex \(\Lambda/P\) concentrated in degree \(0\), where \(X\) is the \((-1)\)-shift of the projective presentation of \(M:=e\Lambda/e\Lambda(1-e)\Lambda\). Note that the second syzygy \(\Omega^{2}(M)\) of \(M\) is contained in the direct sum of some copies of \(\Lambda/P\) [AI, Lemma 2.25]. Thus, the equivalence (i)\(\Leftrightarrow\)(ii) can be checked easily by an isomorphism \(\operatorname{Hom}_{\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)}(\mu_{P}^ {+}(\Lambda),\mu_{P}^{+}(\Lambda)[-1])\simeq\operatorname{Hom}_{\Lambda}(M, \Lambda/P\oplus\Omega^{2}(M))\).
For example, weakly-symmetric algebras satisfy the condition (ii) as in Theorem B.2(1)(2). Moreover, the weakly-symmetric property of algebras is a derived invariant; see [AD, Proposition 3.1] for example. These yield the following corollary.
**Corollary B.3**.: _Let \(\Lambda\) be a weakly-symmetric algebra and \(\mathcal{T}:=\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). Let \(\mathcal{C}\) be a connected component of the Hasse quiver of \(\mathsf{silt}\,\mathcal{T}\). Then the following hold:_
1. _If_ \(\mathcal{C}\) _has a tilting object, then all members in_ \(\mathcal{C}\) _are tilting._
2. _If_ \(\Lambda\) _is tilting-connected; that is,_ \(\mathcal{C}=\mathsf{silt}\,\mathcal{T}\)_, then_ \(\mathcal{T}\) _is asotic._
We give an example of nonsymmetric algebras whose perfect derived categories are asotic; see [Ai2, AM, AK, AD].
**Example B.4**.: Let \(\Lambda\) be the preprojective algebra of Dynkin type \(D_{2n},E_{7}\) or \(E_{8}\), which is weakly-symmetric but not symmetric. Then \(\mathcal{T}:=\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) is asotic.
Proof.: The strategy is due to [AM]; we give a proof here for the convenience of the reader.
By Theorem B.2, any (irreducible) silting mutation of \(\Lambda\) is a tilting object whose endomorphism algebra is isomorphic to \(\Lambda\). So, for every sequence \(\Lambda=:T_{0},T_{1},\cdots,T_{d}\) of silting objects such that \(T_{i+1}\) is the (left) silting mutation of \(T_{i}\) at an indecomposable direct summand, we see that all \(T_{i}\)'s are tilting. Since the set \(2_{T_{i}}\mathsf{-silt}\,\mathcal{T}\) is finite, we obtain that \(\mathcal{T}\) is silting-discrete; in particular, it is silting-connected. Thus, it turns out that \(\mathcal{T}\) is asotic by Corollary B.3.
We remark that there are weakly-symmetric algebras whose perfect derived categories are not asotic [AD, Section 4]. This says that for such an algebra \(\Lambda\), the Hasse quiver of \(\mathsf{silt}(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda))\) has a connected component consisting of nontilting silting objects. On the other hand, we know from [AD, Proposition 3.6] that for a weakly-symmetric algebra \(\Lambda\), every silting object lying in \(2_{\Lambda}\mathsf{-silt}(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda))\) is tilting.
We also find nonweakly-symmetric algebras with asotic perfect derived categories.
**Proposition B.5**.: _Let \(R\) be a local algebra and \(\Lambda\) a silting-discrete algebra. If \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) is asotic, then so is \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda\otimes_{K}R)\)._
Proof.: The assertion follows from [AH, Theorem 2.1].
### Thick subcategories
As an application, we describe thick subcategories generated by silting objects (i.e., the thick closures of presilting objects) in terms of algebras associated to a given algebra; however, we will assume the Bongartz-type condition (i.e., every presilting object is partial silting). Since any silting-discrete triangulated category satisfies the Bongartz-type condition [AM, Theorem 2.15], we can, for example, choose a silting-discrete symmetric algebra as our algebra.
The following proposition is practical to write out full triangulated subcategories with silting objects (up to triangle equivalence).
**Proposition B.6**.: _Let \(\Lambda\) be an algebra whose perfect derived category is asotic and satisfies the Bongartz-type condition. Then every full triangulated subcategory of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) with a silting object can be realized as \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,e\Gamma e)\), up to triangle equivalence. Here, \(\Gamma\) is an algebra derived equivalent to \(\Lambda\) and \(e\) is an idempotent of \(\Gamma\)._
Proof.: Let \(\mathcal{U}\) be a full triangulated subcategory of \(\mathcal{T}:=\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) with a silting object \(U\). Since \(U\) is presilting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\), the Bongartz-type condition of \(\mathcal{T}\) implies that \(U\) can be completed to a silting object \(T:=U\oplus X\) of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). As \(\mathcal{T}\) is asotic, it is seen that
\(T\) is tilting in \(\mathcal{T}\), so \(U\) is tilting in \(\mathcal{U}\). Hence, \(\Gamma:=\operatorname{End}_{\mathcal{T}}(T)\) is derived equivalent to \(\Lambda\). Take the composition \(e\) of the canonical morphisms \(T\to U\to T\), which is an idempotent of \(\Gamma\) with \(e\Gamma e\simeq\operatorname{End}_{\mathcal{T}}(U)\). Since \(U\) is tilting in \(\mathcal{U}\), it turns out that there are triangle equivalences \(\mathcal{U}\simeq\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\operatorname{End}_{ \mathcal{T}}(U))\simeq\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,e\Gamma e)\).
Let us give an easy example.
**Example B.7**.: Let \(\Lambda:=\Lambda_{0}\) be the multiplicity-free Brauer star algebra with 3 edges; i.e., its Brauer tree is:
As is well-known [R2], there are two derived equivalent algebras to \(\Lambda\); one is \(\Lambda\) itself and the other is the multiplicity-free Brauer line algebra \(\Lambda_{1}\) with 3 edges. Taking idempotent truncations, we get 3 kinds of Brauer tree algebras other than \(\Lambda\) and \(\Lambda_{1}\); the Brauer tree algebras \(\Lambda_{2},\Lambda_{3}\) and \(\Lambda_{4}\) whose Brauer trees are \(G_{2}\), \(G_{3}\) and \(G_{3}\times G_{3}\), respectively:
By Proposition B.6, these give all (nonzero) full triangulated subcategories of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\) with siltings, which are triangle equivalent to \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda_{i})\) for \(i\in\{0,1,2,3,4\}\).
Even if a given full triangulated subcategory has a tilting object, it does not necessarily possess a partial tilting object of the whole category; the following example first appeared in [RS].
**Example B.8**.: Let \(\Lambda\) be the algebra presented by the quiver
with relations \(\alpha\beta=\gamma\delta=\delta\alpha=0\). Then, \(\Lambda\) has global dimension 4, and the simple module \(S\) corresponding to the vertex 1 is a partial tilting module with projective dimension 2; hence it is pretilting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). Let \(\mathcal{U}\) be the thick closure of \(S\) in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). Then, \(\mathcal{U}\) has a tilting object \(S\), but its (pre)tilting objects are never partial tilting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\).
Indeed, the Grothendieck group of \(\mathcal{U}\) has rank 1, so \(\mathsf{silt}\,\mathcal{U}=S[\mathbb{Z}]\). However, we obtain from [LVY, Example 4.4] that for any \(n\in\mathbb{Z}\), \(S[n]\) is not partial tilting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). Thus, \(\mathcal{U}\) has no partial tilting object of \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\). On the other hand, \(S\) is partial silting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\); actually, \(S\oplus\Lambda/P[2]\) is silting in \(\mathsf{K}^{\mathsf{b}}(\mathsf{proj}\,\Lambda)\), where \(P\) stands for the indecomposable projective module corresponding to the vertex 1.
**Remark B.9**.: A crux of Proposition B.6 is that our triangulated category satisfies the Bongartz-type condition, but this is not indispensable; that is, the classification of thick subcategories with siltings can also be carried out by using idempotent truncations of dg algebras. However, we avoided doing that here because it is very difficult to classify all dg algebras derived equivalent to the original one. In this case, the asoticness condition was useful.
### Remark on Bongartz-type condition
Recently, it was pointed out in [LZ] that the Bongartz-type condition for _presiltings_ does not necessarily hold. Let us recall it here.
We consider the algebra \(\Gamma\) presented by the quiver \(2\rightrightarrows 1\rightrightarrows 3\), whose pairs of parallel arrows are labeled \(x\) and \(y\), with \(x^{2}=0=y^{2}\). Let \(S:=P_{2}/(y)\); it has projective dimension \(2\) and is pretilting in \(\mathsf{K^{b}(proj\,\Gamma)}\). Here, \(P_{i}\) stands for the indecomposable projective module corresponding to the vertex \(i\). Now, we find a thick generator \(S\oplus P_{2}\oplus P_{3}\) of \(\mathsf{K^{b}(proj\,\Gamma)}\) whose dg endomorphism algebra \(\Lambda\) is given by the quiver as in Example B.8 with \(\deg(\delta)=2\); by the Keller-Rickard theorem, there is a derived equivalence between \(\Gamma\) and \(\Lambda\) which sends \(S_{\Gamma}\) to \(P_{1\Lambda}\). Applying the silting reduction (Theorem 2.15 and Corollary 2.17) to \(\Lambda\), we get isomorphisms \(\mathsf{silt}_{S}\,\Gamma\simeq\mathsf{silt}_{P_{1}}\,\Lambda\simeq\mathsf{silt}\,\mathcal{A}\) of posets. Here, \(\mathcal{A}\) is the dg quiver algebra given by a quiver on the vertices \(2\) and \(3\) with arrows \(\beta\), \(\gamma\) and \(\varepsilon\), subject to \(\gamma\varepsilon=0=\varepsilon\beta\) and \(\deg(\varepsilon)=1\) (trivial differential). It was proved that \(\mathsf{silt}_{S}\,\Gamma=\emptyset\) in [LZ] and \(\mathsf{silt}\,\mathcal{A}=\emptyset\) in [CJS]; that is, \(S\) is not partial silting.
**Remark B.10**.: As seen above, silting reduction sometimes produces a triangulated category without silting objects. We give a list of triangulated categories without silting objects:
* the singularity category \(\mathsf{D_{sg}(\Lambda)}\) for a finite dimensional algebra \(\Lambda\); in particular, the stable module category \(\underline{\mathsf{mod}\,\Lambda}\) for a selfinjective algebra \(\Lambda\)[CLZZ, AHMW, AI];
* a positively-Calabi-Yau triangulated category [AI];
* the perfect derived category \(\mathsf{per}(\mathcal{A})\) for \(\mathcal{A}\) as above with the same relations but with \(\deg(\beta)+\deg(\varepsilon)=1=\deg(\gamma)+\deg(\varepsilon)\) [CJS, JSW].
## Acknowledgements
The authors would like to express their gratitude to Osamu Iyama and Norihiro Hanihara for useful discussions and for many valuable and helpful comments.
2307.11010 | Empirical Evaluation of a Live Environment for Extract Method Refactoring | Sara Fernandes, Ademar Aguiar, André Restivo | 2023-07-20T16:36:02Z | http://arxiv.org/abs/2307.11010v1

# Empirical Evaluation of a Live Environment for Extract Method Refactoring
###### Abstract
Complex software can be hard to read, adapt, and maintain. Refactoring it can create cleaner and self-explanatory code. Refactoring tools try to guide developers towards better code, with more quality. However, most of them take too long to provide feedback, support, and guidance on how developers should improve their software. To reduce this problem, we explored the concept of Live Refactoring, focusing on visually suggesting and applying refactorings, in real-time. With this in mind, we developed a Live Refactoring Environment that visually identifies, recommends, and applies _Extract Method_ refactorings. To validate it, we conducted an empirical experiment. Early results showed that our approach improved several code quality metrics. Besides, we also concluded that our results were significantly different and better than the ones from refactoring the code manually without further help.
code smells, refactoring, code quality metrics, software visualization, live programming
## I Introduction
Complex and large software tend to be hard to evolve and maintain. Changing it can be one of the most expensive and time-consuming tasks in the development cycle [1]. The presence of underlying problems like code smells can make it even harder. To reduce this problem, we need to refactor the code, making it more readable, adaptable, and maintainable [1, 2, 3].
Several refactoring approaches try to do precisely that [4, 5, 6, 7]. However, most are characterized by their inertia to coding actions and their "on-demand" execution. Thus, they let simple problems that could have been easily corrected by a single refactoring grow into complex issues that can only be reduced by a more elaborate refactoring or set of refactorings. Besides, if the refactoring process is executed outside the IDE, software developers will need more time to regain their "programming mindset".
Several authors focused on shortening the "edit-compile-link-run" loop to reduce the time between coding and the outcomes of those actions and to provide quick assistance in reaching better programming solutions [8, 9, 10, 11]. Most of us recognize this concept as _Live Programming_[8].
Thereby, we focused our research on incorporating liveness in the refactoring process. In our view, _Live Refactoring_ is a relevant concept since it helps inspect code to provide immediate and continuous feedback, support, and guidance to developers on possible refactoring opportunities present in their software [12, 13, 14]. Thus, it reduces the refactoring loop by shortening its three main moments -- the _identification_, _recommendation_, and _application_ of the refactoring candidates. With a _Live Refactoring_ approach, we can create awareness of what, how, and why a block of code needs to be refactored in earlier development stages.
Considering this topic and its advantages, we developed a _Live Refactoring Environment_ that consists of a Java IntelliJ plugin that identifies, suggests, and applies _Extract Method_ refactorings [1], while coding, in real-time.
With it, we aimed to answer the following research questions:
**RQ1** "What is the impact of a live refactoring environment on the developers' awareness of their code quality?"
**RQ2** "What is the impact of a live refactoring environment on the code quality?"
**RQ3** "What is the impact of a live refactoring environment on the total time needed to converge to code with more quality?"
We conducted an empirical experiment with multiple tasks and experimental groups to validate our approach and beliefs on _Live Refactoring_. The results showed that our _Live Refactoring Environment_ helped developers improve their code quality. Also, by comparing the total time needed to achieve good programming solutions between groups, we verified that our approach helped converge to better code, faster than refactoring the code manually.
This paper is organized into five sections. Section II summarizes some related work. Section III describes our _Live Refactoring Environment_, its main components, and its current limitations. Section IV details the empirical experiment we carried out to validate our work. Then, Section V presents and discusses the results obtained from our experiment. Finally, Section VI sums up the main conclusions of our work, and it also lists some improvements to address in the future.
## II Related Work
Martin Fowler and Kent Beck defined a _code smell_ as a surface indicator of a deeper problem in the code, which makes
it hard to read, adapt, and maintain. To reduce a code smell, we need to refactor the code [1].
Palomba [4] detected _Long Method_ smells by textually analyzing the code. Fenske _et al._[7] created a plugin that identifies three types of code clones. Nucci _et al._[15] proposed an approach that used machine learning algorithms to identify _Large Class_ or _Long Method_ smells. Ujihara _et al._[16] created a _Feature Envy_ smell detection tool based on four heuristics: (i) number of methods implemented in a class, excluding abstract methods, (ii) number of incoming or outgoing edges connected between the members of a specific class, (iii) number of classes using the methods or properties of a particular class, and (iv) number of classes whose methods or properties are used by methods of another class. ALAbwaini _et al._[5] used the slicing technique to identify blocks of code that aren't being executed. Murphy-Hill and Black [17] developed an interactive approach to quickly identify code smells like _Feature Envy_ or _Data Clumps_ on the code. Tsantalis and Chatzigeorgiou [6] identified _Extract Method_ refactorings by decomposing methods. Pantiuchina [18] proposed a refactoring tool that identifies complex classes through the Random Forest algorithm.
As described, several approaches already identify different code smells and refactoring opportunities. However, most of them only work in an "on-demand" mode and aren't dynamic and reactive to coding actions.
We believe that including liveness in the refactoring process may benefit developers. It would enable the inspection and exploration of code in real-time. Then, developers would have more control over their software since they would be continuously confronted with feedback, support, and guidance on improving and evolving their code [9]. Therefore, it would help developers implement high-quality systems, faster [2, 3, 8, 9]. On this topic, Grigera _et al._[19] developed a tool for web applications that identifies possible code smells and suggests refactorings to solve them, in real-time. Alizadeh _et al._[20, 21] proposed a recommender that dynamically adapts and suggests several refactorings. Each developer can approve, modify, or reject the suggested refactorings. Then, the approach uses that feedback to update the recommended refactorings. Barbosa [22] used the OPTICS clustering algorithm to identify _Extract Method_ refactorings. Salgado [23] used several heuristics to find refactorings, such as the _Extract Method_ or _Extract Class_.
Like these approaches, our solution also tries to meet the benefits of live refactoring. However, it focuses on providing live support and guidance to developers on improving their code by providing refactoring suggestions and visually recommending them. By visually suggesting each refactoring on the IDE, we believe that we would help developers to quickly converge to better code.
## III Live Refactoring Environment
Development environments and several tools created for them already guide developers on how they should improve their code. However, most provide support and guidance only when developers ask for it [7, 24]. Yamashita and Moonen [25] and Tymchuk [26] stated that developers prefer to receive feedback on their software and their programming actions as soon as possible.
Therefore, we believe faster refactoring guidelines would drive developers to easily and quickly change, adapt, and maintain their code, improving its quality. Thus, our approach consists of a _Live Refactoring Environment_ developed as an IntelliJ plugin for Java software that visually identifies, suggests, and applies refactorings, in real-time. Its workflow is described by Figure 1.
### _Development Environment_
As said, our approach consists of a _Live Refactoring Environment_ developed as a refactoring plugin for IntelliJ IDE. We studied several IDEs and their capabilities. In our case, we needed an environment that would allow us to access and manipulate the code's Abstract Syntax Tree (AST), create visual mechanisms to suggest the identified refactorings, and apply each refactoring automatically to the code. Most IDEs allow doing that, so we focused on popular IDEs that would enable us to develop a plugin for a well-known programming language.
Therefore, we decided to use IntelliJ IDE. IntelliJ has all the APIs needed to implement our approach, and it would allow us to create a plugin for Java software. By analyzing the current literature on refactoring tools, we verified that many of them supported Java code. Besides, we also checked that several large and complex Java software were used to validate the refactoring approaches created by other authors [27, 28].
### _Refactoring Analyzer_
Our environment is composed of an analyzer that aims to identify possible _Extract Method_ opportunities. We believe it is a good refactoring to start the development of our plugin since we can create well-organized methods with higher levels
Fig. 1: Flowchart describing the main behaviours of our solution.
of readability, adaptability, and maintainability. Besides, it is related to standard bad programming practices like _Long Methods_ or _Duplicated Code_[1].
Our approach considers each child of the code's AST and some code quality metrics such as the Halstead Metrics [29] or the cognitive complexity [30]. They help detect blocks of code that are more complex and incohesive than they should be. To measure each metric, we resort to the AST of the code focused on the text editor. In IntelliJ, we have access to the _Program Structure Interface_ (PSI), which provides quick and easy access to the ASTs. Each AST is represented by _PsiTree_ objects, where each class is a _PsiClass_, each method is a _PsiMethod_, and so on.
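The Halstead measures mentioned above are computable from operator and operand counts alone. As a hedged sketch (independent of the plugin's actual PSI-based implementation, whose class names are not given in the paper), the effort value could be derived as follows:

```java
// Sketch of the standard Halstead measures referenced above; illustrative
// only, not the plugin's actual implementation.
public class Halstead {

    // n1/n2: distinct operators/operands; bigN1/bigN2: total operators/operands.
    public static double volume(int n1, int n2, int bigN1, int bigN2) {
        int vocabulary = n1 + n2;                              // n = n1 + n2
        int length = bigN1 + bigN2;                            // N = N1 + N2
        return length * (Math.log(vocabulary) / Math.log(2));  // V = N * log2(n)
    }

    public static double difficulty(int n1, int n2, int bigN2) {
        return (n1 / 2.0) * ((double) bigN2 / n2);             // D = (n1/2) * (N2/n2)
    }

    public static double effort(int n1, int n2, int bigN1, int bigN2) {
        return difficulty(n1, n2, bigN2) * volume(n1, n2, bigN1, bigN2);  // E = D * V
    }
}
```

For instance, a method with 2 distinct operators and 2 distinct operands, each occurring twice, yields a volume of 8 and an effort of 8.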
We defined several thresholds to find an _Extract Method_ candidate. We only analyze long methods with more than 50 lines of code, cyclomatic and cognitive complexity higher than 15, and Halstead Effort higher than 50. We chose to use these values after several trials and after analyzing the candidates produced by each. Then, when a method fulfills all these requirements, we start selecting its fragments that could be extracted to create a new method. This step is based on the approach of Salgado [12, 13, 14, 23] since it was the simplest and fastest to identify this kind of refactoring. First, we select the extractable fragments and combine them into a refactoring candidate. These fragments can be combined into a possible candidate if they have consecutive code placement.
After combining them, our approach searches for specific cases between the first candidates identified. To be an _Extract Method_ candidate, a block of code must have more than three statements, and it mustn't contain more than 80% of the original method's statements. All the thresholds can be configured in a proper configuration menu. We used these thresholds after studying what would be the best values to present reliable and relevant _Extract Method_ opportunities to developers.
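The two filtering stages above (the per-method thresholds and the per-fragment rules) can be sketched as plain predicates. The class and method names below are assumptions made for illustration; the constants are the default thresholds quoted in the text:

```java
// Illustrative filter mirroring the default thresholds described above;
// names are hypothetical, not the plugin's API.
public class CandidateFilter {

    // A method is inspected only if it is long and complex enough.
    public static boolean isMethodToInspect(int linesOfCode, int cyclomatic,
                                            int cognitive, double halsteadEffort) {
        return linesOfCode > 50
            && cyclomatic > 15
            && cognitive > 15
            && halsteadEffort > 50;
    }

    // A fragment qualifies as an Extract Method candidate if it has more than
    // three statements and covers at most 80% of the original method.
    public static boolean isExtractCandidate(int fragmentStatements,
                                             int methodStatements) {
        return fragmentStatements > 3
            && fragmentStatements <= 0.8 * methodStatements;
    }
}
```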
Lastly, once we have the final set of _Extract Method_ candidates, we sort them using an approach based on the methodology proposed by Meananeatra [31]. They evaluate three code quality metrics: the cyclomatic complexity **(CC)**, the number of lines of code **(LOC)**, and the lack of cohesion **(LCOM4)** of the class that includes the candidate. In our approach, we also included cognitive complexity. By default, it first tries to sort the candidates by maximizing the number of statements that should be extracted. Then, supposing there is a tie between candidates, it tries to find the candidates with higher cyclomatic and cognitive complexities. After that, it searches for candidates with a higher lack of cohesion.
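The ordering just described can be sketched with a comparator that maximizes the number of extracted statements and then breaks ties by complexity and lack of cohesion. The `Candidate` record and its field names are assumptions made for this sketch:

```java
import java.util.Comparator;

// Illustrative ranking of Extract Method candidates following the
// tie-breaking sequence described above; field names are assumptions.
public class CandidateRanking {

    public record Candidate(int extractedStatements, int cyclomatic,
                            int cognitive, double lackOfCohesion) {}

    // Largest extraction first; ties broken by combined cyclomatic and
    // cognitive complexity, then by lack of cohesion (all descending).
    public static final Comparator<Candidate> ORDER =
        Comparator.comparingInt(Candidate::extractedStatements).reversed()
            .thenComparing(Comparator.comparingInt(
                (Candidate c) -> c.cyclomatic() + c.cognitive()).reversed())
            .thenComparing(Comparator.comparingDouble(
                Candidate::lackOfCohesion).reversed());
}
```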
### _Refactoring Candidates Visualizer_
Our visualizer is based on the approach of Barbosa [12, 13, 14, 22]. After identifying and sorting the refactoring candidates, we measured their severity. We used a scale from 1 to 10, where 10 represents a refactoring candidate that should be applied immediately. To find the place for each candidate on our scale, we calculate their severity by normalizing their position on the set of refactoring candidates into a value between 1 and 10. We selected a scale from 1 to 10 since our color gradient had ten colors, from light green to dark red. Light green represents a less severe candidate, and red represents code that should be refactored immediately.
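One simple way to realize the normalization just described maps a candidate's rank in the sorted list onto the 1-to-10 color scale. This is a hedged sketch, since the exact formula used by the plugin is not given in the text:

```java
// Illustrative severity normalization: rank 0 is the most severe candidate
// among totalCandidates, and severities land on the 1..10 color scale.
public class Severity {

    public static int severity(int rank, int totalCandidates) {
        if (totalCandidates <= 1) {
            return 10;  // a lone candidate gets the maximum severity
        }
        // 1.0 for the most severe candidate, 0.0 for the least severe one.
        double t = 1.0 - (double) rank / (totalCandidates - 1);
        return (int) Math.round(1 + 9 * t);
    }
}
```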
After measuring the severity of each candidate, they are painted on the left side of the text editor as clickable gutters. Figure 2 presents an example of our visual methodology. There, each gutter is a PNG image of the color mapped by the severity of the candidate. By clicking on one of the gutters, we can access a refactoring menu, where we can select the most convenient refactoring to apply to our code. Since each statement can be part of multiple refactoring candidates, we only display the color of the refactoring from that group with more severity. Then, on the refactoring menu, we list all the overlapping refactorings related to that position in the code.
### _Refactoring Candidates Applier_
After choosing the refactoring we aim to apply, it is implemented automatically in the code. IntelliJ provides a refactoring API that allows us to apply multiple refactorings, including _Extract Method_, by only specifying the elements of the code that should be refactored.
When the refactoring is completely applied to the code, our approach records the code quality metrics of that file before and after the refactoring is performed and saves them in a Firebase database for testing purposes. Then, it starts a new inspection process, using the new code and new metrics to find a new set of refactoring candidates.
### _Liveness Applier_
When the refactoring environment starts, it measures the code quality metrics of the code displayed in the text editor and visually identifies and suggests each refactoring candidate. Then, after applying a refactoring or even after coding, our approach starts inspecting the new code to find new refactoring opportunities.
This happens because of some live mechanisms included in our refactoring environment. These mechanisms are triggers provided by IntelliJ that allow knowing when something changes in the code. These triggers help initiate our approach at the right time when a major change occurs at one of the children of the _PsiTree_ that represents the code (_e.g._ change that occurs in ten or more characters). One of the triggers
Fig. 2: Developed environment visually suggesting refactoring candidates.
initializes the inspection after implementing a refactoring or manually changing the code. Another starts the same process when switching the file focused on the IDE.
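The trigger behavior described above can be sketched with a tiny listener stand-in (the real plugin uses IntelliJ's document/PSI listeners, whose exact wiring is omitted here); the ten-character threshold matches the example given in the text:

```java
// Illustrative stand-in for the live trigger: the inspection restarts only
// when a change touches ten or more characters, or when the focused file
// changes. Names are hypothetical, not IntelliJ API.
public class ChangeTrigger {

    private static final int MIN_CHANGE_SIZE = 10;
    private final Runnable startInspection;

    public ChangeTrigger(Runnable startInspection) {
        this.startInspection = startInspection;
    }

    // Called after a refactoring is applied or the code is edited manually.
    public void onDocumentChanged(int changedCharacters) {
        if (changedCharacters >= MIN_CHANGE_SIZE) {
            startInspection.run();
        }
    }

    // Called when the file focused in the editor switches.
    public void onFileSwitched() {
        startInspection.run();
    }
}
```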
### _Limitations_
One of the limitations of our environment is the small number of refactorings included in it. Until now, our approach only fully supports the _Extract Method_ refactoring. We believe that by keeping a small number of refactorings, we are decreasing the range of code smells that could be mitigated, thereby reducing the opportunities to improve the code quality and its readability, adaptability, and maintainability. We also started including the _Extract Class_, _Extract Variable_, _Move Method_, and _Introduce Parameter Object_ refactorings. However, we have not tested them yet, so we cannot guarantee they are correctly implemented.
Another limitation is the time our environment takes to inspect the code and present each refactoring opportunity. Our approach only takes a few seconds to identify and suggest _Extract Method_ candidates. However, when the dimension of the code increases, the time needed to inspect it also increases. In our opinion, to consider our approach a complete live refactoring environment, it should only take up to one or two seconds to analyze the code and suggest possible refactorings, regardless of the code's size. So, we considered it a limitation that should be addressed as future work.
## IV Empirical Experiment
We designed a controlled experiment to validate our approach and main assumptions. It was divided into multiple refactoring tasks that wouldn't take more than 45 minutes to be executed by different experimental groups.
Each participant had access to the materials needed to carry out the experiment through a Google Form sent by email. These materials comprised the description of each task, source code of the projects they should use, guides explaining the experiment, and the refactoring environment they should use to perform each task. The Google Form also contained several questions to characterize the participants and assess the usability of each approach used in the experiment.
### _Population_
The participants of our experiment were students from programming bachelor classes. Their identity was kept anonymous in all the data collected in our experiment. Figure 3 synthesizes the four experimental groups. The participants of our experiment were divided equally among them. Group A1 used the developed approach without any further change. Group A2 used a tool showing only the most severe refactoring candidate per software iteration. Then, the tool from group A3 listed all the candidates in an HTML file that should be opened outside the IDE. Lastly, group B didn't have access to any refactoring tool. They only had access to a plugin measuring specific metrics for each experiment task.
By comparing the results from each experimental group, we aimed to verify if our refactoring environment helps create better code, faster. Besides, we also aimed to determine which way of presenting the refactoring candidates best helped participants converge to better programming solutions.
### _Programming Tasks_
The participants had to install our refactoring environment to perform our experiment. Then, they should start to execute each refactoring task. These tasks were the same for all experimental groups. Our experiment consisted of three tasks corresponding to three different Java projects with varying difficulty levels. We provided a short description and a UML diagram to help the participants know more about the projects. In these tasks, the participants could apply all the refactorings suggested by their tool or the refactorings they wanted and thought were relevant to the code.
_Task 1:_ This was a warm-up task to help each participant understand the tool they were using. It was focused on a minimal and simple movie rental system that consists of one of the examples for _Extract Method_ presented by Fowler [1].
Footnote 1: Refactoring example - https://www.cs.unc.edu/~stotts/COMP204/refactor/chap1.html
_Task 2:_ This was a task focused on a "Space Invaders" project. The participants had to implement a method with the game cycle (class Board). Its goal was to detect and suggest _Extract Method_ candidates while programming. All the implementation steps were provided to the participants through comments on the code.
Footnote 2: Space Invaders - https://github.com/tailattanzi/java-space-invaders
_Task 3:_ This was a task focused on the JHotDraw project. We selected this project because it is large and complex, with multiple classes with more than 1,000 lines of code, several long methods, and duplicated code. This task aimed at identifying and applying _Extract Method_ refactorings in two different Java files (classes DrawApplication and StandardDrawingView) to verify if the code improves and becomes more readable, adaptable, and maintainable.
Footnote 3: JHotDraw - https://github.com/vrandelshofer/jhotdraw
### _Outcomes and Significance_
Our experiment resulted in four main types of outcomes. All these outcomes helped us answer our research questions using hypothesis tests (Table I).
_Outcome A:_ It contained the characterization of each participant. In a Google Form, we asked several questions that
Fig. 3: Synthesis of the groups that participated in our empirical experiment.
helped us know more about the programming background of each participant.
_Outcome B:_ It contained the evolution of the code quality metrics over each refactoring applied. We assessed several metrics, such as the Halstead metrics [32], or maintainability index [33]. These values were compared between the experimental groups to verify which converged to better results.
_Outcome C:_ It contained the time needed to perform each refactoring applied to the code. Its goal was to evaluate the time efficiency of our approach by comparing the average time measured by experimental group.
_Outcome D:_ It contained the results of a SUS Questionnaire [34] made to the participants.
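The maintainability index mentioned in Outcome B is typically computed from Halstead volume, cyclomatic complexity, and lines of code. The paper does not state which variant its metrics tool implements; the sketch below assumes the classic unnormalized Oman & Hagemeister formula, with illustrative (not the study's) metric values.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic unnormalized Maintainability Index (Oman & Hagemeister variant)."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# A long, complex method scores lower than the shorter method left after extraction.
before = maintainability_index(halstead_volume=2500, cyclomatic_complexity=18, loc=120)
after = maintainability_index(halstead_volume=600, cyclomatic_complexity=5, loc=35)
print(before < after)  # extracting methods should raise the index
```

A successful _Extract Method_ lowers all three inputs for the host method, so the index rises, which is the direction of improvement Table II reports.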
## V Results and Discussion
This section lists and discusses the most relevant data collected from our empirical experiment.
### _Participants Characterization_
Due to time restrictions, our experiment was carried out by only 56 participants, distributed equally across the four experimental groups. As we previously mentioned in Section IV-A, they were bachelor's students from programming classes.
We asked them to evaluate themselves on their programming knowledge and experience using IntelliJ, refactoring, and metrics tools. With their answers, we verified that most participants knew how to program in Java, use different data structures, and use IntelliJ IDE often. However, most of them never used a refactoring or code quality metrics tool. The main differences verified in these results are further addressed in Section V-H2.
### _Code Quality_
Outcome B helped us assess the evolution of some metrics regarding each method affected by the refactorings applied to the code. As seen in Table II, most metrics had a positive evolution in all the groups. And, as expected, they differ between the different experimental groups.
In a deeper analysis, we can verify that most values from groups A1, A2, and A3 don't differ much, as shown in Section V-E. We believe this happens because, independently of the version used, these groups had access to our approach and had help performing each _Extract Method_. Group A1 was the one with the best improvement. We believe this happened because their approach helped them make more informed refactoring decisions when improving their code. Then, as expected, group A3 had the second-best results. However, probably because of the visual presentation of the list of possible refactorings outside the IDE, the participants from this group didn't make the same refactoring decisions as group A1. This suggests that refactoring suggestions placed outside the IDE don't help developers improve their code as much as suggestions placed inside the IDE. Also, as we expected, all the metrics from group B improved less than those from the remaining groups. In fact, the values of cyclomatic complexity and difficulty worsened.
Therefore, we concluded that having a refactoring environment that assists developers in knowing where and how they should refactor their code helps them converge to better code, with more quality.
### _Number of Refactorings applied_
Through Outcome B, we also measured the average number of refactorings applied by experimental group. To do it, we analyzed the JSON file, which saved the evolution of the code quality metrics over each refactoring.
As seen in the first row of Table III, on average, group A1 performed more refactorings, followed by groups A3 and A2, and, lastly, group B. Besides, the discrepancy between the values of groups A1, A2, and A3 isn't large. This may indicate that by applying the best refactoring one at a time, we can reach good code in almost the same number of refactorings as when having multiple refactoring suggestions simultaneously (with or without the suggestions placed inside the IDE). By complementing this analysis with the one from Table II, we concluded that, with a lower number of refactorings, the participants from group A1 were able to improve all quality metrics, and they were the group that improved most of them the most. On the other hand, group B applied about half as many refactorings as the other groups, indicating that some blocks of code may have remained unrefactored. Checking their values in Table II, we can conclude that the lower number of refactorings applied by group B led to smaller improvements in all of the metrics than those of the other groups.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & _Group A1_ & _Group A2_ & _Group A3_ & _Group B_ \\ \hline
**LOC** & 27\% & 22\% & 24\% & 10\% \\
**Cog** & 57\% & 54\% & 34\% & 12\% \\
**CC** & 20\% & 17\% & 9\% & -15\% \\
**Length** & 29\% & 23\% & 15\% & 2\% \\
**Volume** & 41\% & 35\% & 24\% & 6\% \\
**Effort** & 45\% & 38\% & 20\% & 1\% \\
**Difficulty** & 6\% & 4\% & 4\% & -6\% \\
**Maintainability** & 7\% & 6\% & 6\% & 2\% \\ \hline \hline \end{tabular}
\end{table} TABLE II: Average improvement of each metric per method per experimental group.
\begin{table}
\begin{tabular}{c l} \hline \hline & _RQ1_ - Measured from Outcome B \\ \hline
**H0** & A Live Refactoring Environment causes no impact on the developers’ awareness of their code quality. \\
**HA** & A Live Refactoring Environment impacts positively the developers’ awareness of their code quality. \\ \hline
 & _RQ2_ - Measured from Outcome B \\ \hline
**H0** & A Live Refactoring Environment causes no impact on the code quality. \\
**HA** & A Live Refactoring Environment impacts positively the code quality. \\ \hline
 & _RQ3_ - Measured from Outcome C \\ \hline
**H0** & A Live Refactoring Environment causes no impact on the total time needed to converge to code with more quality. \\
**HA** & A Live Refactoring Environment impacts positively the total time needed to converge to code with more quality. \\ \hline \hline \end{tabular}
\end{table} TABLE I: Hypothesis tests considering each research question.
### _Time Efficiency_
Outcome C helped measure the actual efficiency of our refactoring environment.
As can be verified in the second row of Table III, there is a large discrepancy between the average time measured in the experimental groups that used our approach (groups A1, A2, and A3) and that measured in group B. This indicates that, as we expected, a live environment that identifies, suggests, and applies refactorings helps to reduce the refactoring-loop. Also, the time of group A3 is higher than those of groups A1 and A2, which indicates that having refactoring candidates suggested inside the IDE helps developers maintain their train of thought and, therefore, refactor their code faster.
### _Comparison of the Experimental Groups_
To validate our assumption that developers benefit from receiving live refactoring feedback, support, and guidance, we performed several hypothesis tests on the code quality, the time needed to refactor the code, and the total number of refactorings applied. With them, we aimed to verify if the experimental groups were statistically different. Notice that we only performed these tests on tasks 2 and 3. We didn't use the data from task 1 since its effects were practically identical for all the experimental groups. Also, this task was a simple warm-up task and had no value for this purpose.
Table IV presents the p-values of several metrics when comparing the experimental groups on task 2 (**T2**) and task 3 (file 1 - **(T3F1)**, file 2 - **(T3F2)**). As can be seen, most p-values of the code quality metrics compared between groups A1, A2, and A3 with group B were smaller than 0.05. We only found outliers when comparing the cyclomatic and cognitive complexity of task 3 between these groups. Besides, there were cases in which groups A1, A2, and A3 weren't statistically different. However, we expected these results since the approaches used by groups A2 and A3 were based on the one used by group A1. The only change was how many and how the refactoring candidates were suggested to the participants. Even though there were some p-values we didn't expect, the remaining were smaller than 0.05, meaning the groups were statistically different. Since groups A1, A2, and A3 were statistically different from group B when comparing their code quality metrics, we could refute the null hypotheses from research questions RQ1 and RQ2.
Most p-values measured by comparing the time needed to apply a refactoring between all groups were lower than 0.05. The only outliers were found when comparing groups A1, A2, and A3 with each other, in task 2 and group A3 with group B, in the first file of task 3. Since most p-values were smaller than 0.05, we concluded that groups A1, A2, A3, and B were statistically different. Therefore, we were able to refute the null hypothesis from research question RQ3.
The same happens with the average number of refactorings applied to the code. In most cases, the p-values measured when comparing groups A1, A2, and A3 were higher than 0.05. We believe this occurred since they were using similar versions of the same approach. Despite this, the remaining p-values when comparing groups A1, A2, and A3 with group B were smaller than 0.05, which means they were statistically different. These elements complement the data that helped us refute the null hypothesis from research question RQ1.
In most cases, we verified that groups A1, A2, and A3 weren't statistically different. However, it doesn't mean there wasn't an approach better than the other, as we verified in Section V-B.
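The paper does not name the statistical test behind the p-values above. As an illustration of how two small groups can be compared at the 5% level without distributional assumptions, the sketch below runs an exact permutation test on the difference of group means; the per-participant improvement values are hypothetical, not the study's data.

```python
from itertools import combinations
from statistics import mean

def perm_test_p(x, y):
    """Exact two-sided permutation test on the difference of group means."""
    pooled = x + y
    n, observed = len(x), abs(mean(x) - mean(y))
    hits = total = 0
    for idx in combinations(range(len(pooled)), n):
        total += 1
        gx = [pooled[i] for i in idx]
        gy = [pooled[i] for i in range(len(pooled)) if i not in idx]
        if abs(mean(gx) - mean(gy)) >= observed - 1e-12:
            hits += 1
    return hits / total

# Hypothetical per-participant improvements (%); not the study's actual data.
group_a1 = [41, 38, 45, 50, 36, 44, 39]
group_b = [6, 4, 10, 2, 8, 5, 7]
p = perm_test_p(group_a1, group_b)
print(p < 0.05)  # the two groups are statistically different at the 5% level
```

With 7 participants per group this enumerates all C(14, 7) = 3432 relabelings exactly, so no normality assumption is needed, which matters for samples this small.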
### _Usability_
Based on a SUS Questionnaire [34] made to groups A1 and A2, we measured the average Perceived Ease of Use, Perceived Usefulness, and Intention to Use per group. To evaluate them, we used Cronbach's alpha. The literature considers alphas of 0.7 or above acceptable [35].
As can be verified in Table V, the approaches used by groups A1 and A2 had high Perceived Usefulness and Intention to Use, since their Cronbach's alphas were higher than 0.7. However, we can't say the same for their Perceived Ease of Use, because their Cronbach's alphas were lower than 0.7. Despite the values being very close to the standard 0.7, they indicate that the participants don't fully believe that our approach is easy to use. Even so, we found that the participants of group A1 hold a stronger belief than the ones from group A2. Therefore, we can only say that our experiment's participants understood our approach's usefulness and aim to use it in the future.
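Cronbach's alpha is computed as k/(k-1) times (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical Likert answers (not the study's responses):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of per-respondent scores for each questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Hypothetical 5-point Likert answers (3 items x 6 respondents); not the study's data.
items = [
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [5, 5, 2, 4, 4, 4],
]
alpha = cronbach_alpha(items)
print(alpha >= 0.7)  # acceptable internal consistency by the 0.7 rule of thumb
```

When respondents who score one item highly also score the others highly, the total-score variance dominates the summed item variances and alpha approaches 1; uncorrelated items drive it toward 0.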
Participants from groups A1 and A2 also provided their opinion on the visual methodology used in our environment to suggest each refactoring candidate. We focused our questions
\begin{table}
\begin{tabular}{l c c c c} \hline & _Group A1_ & _Group A2_ & _Group A3_ & _Group B_ \\ \hline
**\#Refactorings** & 16 & 13 & 14 & 6 \\ \hline
**Time Efficiency** & 29 & 36 & 48 & 60 \\ \hline \end{tabular}
\end{table} TABLE III: Average number of refactorings applied and time efficiency by experimental group.
on the color scheme, the number of refactorings displayed, and their placement on the IDE.
With their answers, we concluded that the visual methodology used in our environment to identify and suggest the refactoring candidate is appealing, non-intrusive, and user-friendly. Besides, the results also showed that the participants prefer an approach that displays several refactoring candidates at a time since it would help them make better decisions about their code.
### _Answering the Research Questions_
After analyzing and discussing all the results of our empirical experiment, we were finally able to answer our research questions.
**RQ1**_"What is the impact of a live refactoring environment on the developers' awareness of their code quality?"_
In Section V-B, we verified that independently of the version of our approach, it improved all the assessed code quality metrics. Besides, as analyzed in Section V-E, we could reject the null hypothesis of this research question when comparing the quality metrics measured with and without our approach. Also, by comparing the average number of refactorings applied, we could reject the null hypothesis of this research question. By analyzing the first row of Table III, we concluded that when using a Live Refactoring Environment, the participants performed more refactorings, probably because they were more visually aware of possible flaws present in their code. Therefore, we can conclude that detecting code smells in real-time positively impacts the developers' awareness of their code quality.
**RQ2**_"What is the impact of a live refactoring environment on the code quality?"_
The logic behind the approaches used by groups A1, A2, and A3 is the same. Therefore, as seen in Section V-E, their quality results aren't statistically different. That doesn't happen when we compare the results of these groups with those of group B, with which we were able to reject the null hypothesis of this research question. Besides, as seen in Table II, the complete version of our approach (used by group A1) had better results in all the assessed quality metrics. Therefore, we can conclude that a Live Refactoring Environment positively impacts code quality.
**RQ3**_"What is the impact of a live refactoring environment on the total time needed to converge to code with more quality?"_
As said in Section V-E, we could reject the null hypothesis of this research question because the time values measured when comparing groups A1, A2, and A3 were statistically different from the ones from group B. Besides, with the analysis of the second row of Table III, we also concluded that the time needed to perform a refactoring using any version of our approach was shorter than the time measured when the participants didn't use any refactoring tool. Moreover, group A1 was the one that applied each refactoring the fastest. Furthermore, as stated above, with our approach, the participants could reach better code, with more quality. Therefore, we can conclude that a Live Refactoring Environment positively impacts the time needed to converge to a good programming solution with quality.
With all the evidence summarized above, we believe that besides answering positively to our research questions, we were also able to validate our hypothesis.
### _Threats to Validity_
We found several threats to the validity of our empirical experiment, related to applying the suggested refactorings and to the software and participants themselves.
#### V-H1 Internal Threats
In our case, the internal validity is related to the cause-effect relationship between applying refactorings and their impact on the code quality.
**Number of refactorings applied:** Since this is a novel approach, we couldn't estimate the correct number of refactorings that should be recommended and implemented to raise awareness of possible code flaws and improve code quality.
**Order of refactorings:** The order in which we refactor the code inevitably influences the results obtained from it. This problem can be mitigated by having enough examples of the results obtained from refactoring code, verifying their influence on software, and future refactoring suggestions.
**Code quality metrics:** The analysis of code quality metrics can only provide us with an approximation of the attributes they measure, not the exact code quality. Nevertheless, the proven results from active research on this topic give us confidence in their reliability in assessing code quality.
#### V-H2 External Threats
In our case, external validity is related to how the results and conclusions drawn from our approach can be applied to different scenarios.
**Experiment Participants:** Our approach was only tested by bachelor students, not proficient software developers. Besides, some participants have an average knowledge of how to program with Java or use different data structures on IntelliJ IDE. We believe these cases could have weakened the final results mainly on the code quality and the time needed to implement a refactoring. We can easily solve these problems by providing our environment to developers from the software industry to test it. We can also provide a tutorial focused on the main aspects of Java and its data structures and how to use the IntelliJ IDE correctly.
**Social pressure:** Participants could have felt pressured to perform the tasks correctly on time. We believe this is quite common in an empirical experiment with humans, which can be mitigated by pre-establishing a minimum duration for the experiment and not a maximum time. Besides, we can divide the tasks into different checkpoints. Even if the participants cannot finish the tasks on time, we would be able to collect data from each checkpoint completed successfully.
\begin{table}
\begin{tabular}{l c c} \hline \hline & _Group A1_ & _Group A2_ \\ \hline
**Perceived Ease of Use** & 0.60 & 0.55 \\
**Perceived Usefulness** & 0.84 & 0.76 \\
**Intention to Use** & 0.99 & 0.99 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Cronbach’s alphas measured for Groups A1 and A2.
**Selected projects:** We couldn't analyze a vast number of Java software systems, and we can't attest that our approach works correctly with all available Java projects. This issue is mainly due to time constraints and not necessarily to flaws in our approach or the experiment design. In the best-case scenario, we could analyze all the projects that make sense to be used with our environment.
## VI Conclusions
Complex code tends to have lower readability and be hard to refactor [1]. Most refactoring approaches have the common disadvantage of not being reactive and live enough to code actions. Thus, the refactoring assistance may occur in later development stages, increasing the time and effort needed to create code with quality.
Focusing on this problem, we studied how we could identify, recommend, and apply different refactorings as soon as possible to reduce the time and effort needed to converge to better programming solutions. Therefore, we proposed a _Live Refactoring Environment_ that aimed to impact the refactoring-loop positively. It assesses several code quality metrics to detect and sort possible _Extract Method_ refactorings, and visually suggests and applies them to the code, in real-time.
We conducted an empirical experiment to answer our research questions and validate our hypothesis. Despite only focusing on testing the _Extract Method_, the data collected showed us that our refactoring environment was able to raise awareness of possible code flaws. We also verified that, with it, developers could converge to better code, with more quality, faster than when refactoring the code manually. Lastly, we also concluded that the visual methodology used to suggest the refactoring opportunities was well accepted by the participants. Therefore, we believe our approach is different from the existing ones because it has a strong live and visual component that helps developers know which block of code should be refactored in a user-friendly and intuitive way.
As future work, we aim to validate the other refactorings we started including in our approach. We also hope to identify and test new refactorings that would help mitigate important code smells like _Feature Envy_ or _Shotgun Surgery_[1]. We also aim to improve our live and visual mechanisms to make our tool even faster and more appealing than it already is. Then, we aim to reproduce the empirical experiment with at least 120 participants and compare our results with the ones from other refactoring tools.
## Acknowledgments
This work is financed by National Funds through the Portuguese funding agency, FCT - Fundacao para a Ciencia e a Tecnologia, within project 2020.05161.BD.
2304.07322 | X. Hernandez | 2023-04-14T18:00:09Z | http://arxiv.org/abs/2304.07322v3

# Internal kinematics of _Gaia_ DR3 wide binaries: anomalous behaviour in the low acceleration regime
###### Abstract
The _Gaia_ eDR3 catalogue has recently been used to construct samples of nearby wide binaries to study the internal kinematics of these objects using relative velocities of the two component stars, \(\Delta V\), total binary masses, \(m_{B}\), and separations, \(s\). For \(s\gtrsim 0.01\) pc, these binaries probe the low acceleration \(a\lesssim 2a_{0}\) regime over which the gravitational anomalies usually attributed to dark matter are observed in the flat rotation curves of spiral galaxies, where \(a_{0}\approx 1.2\times 10^{-10}\) m s\({}^{-2}\) is the acceleration scale of MOND. Such experiments test the degree of generality of these anomalies, by exploring the same acceleration regime using independent astronomical systems of vastly smaller mass and size. A signal above Newtonian expectations has been observed when \(a\lesssim 2a_{0}\), alternatively interpreted as evidence of a modification in the relevant fundamental physics, or as being due to kinematic contaminants affecting the experiment; the presence of undetected stellar components, unbound encounters and spurious projection effects. Here I take advantage of the enhanced DR3 _Gaia_ catalogue to perform a more rigorous and detailed study of the internal kinematics of wide binaries than what has previously been possible. Having internally determined accurate _Gaia_ stellar masses and estimates of binary probabilities for each star using spectroscopic information, together with a larger sample of radial velocities, allows for a significant improvement in the analysis of wide binaries and careful exclusion of possible kinematic contaminants. Resulting \(\Delta V\) scalings accurately tracing Newtonian expectations for the high acceleration regime, but markedly inconsistent with these expectations in the low acceleration one, are obtained. A non-Newtonian low acceleration phenomenology is thus confirmed.
keywords: gravitation -- stars: kinematics and dynamics -- binaries: general
## 1 Introduction
The presence of gravitational anomalies in the low acceleration regime of \(a<a_{0}\approx 1.2\times 10^{-10}\) m s\({}^{-2}\) at galactic scales has been alternatively interpreted as evidence for a dominant dark matter component of unknown origin and as yet lacking any direct confirmation, or as indicating a change of regime in the structure of gravity, generically termed Modified Gravity (e.g. Milgrom, 1983), or even a validity limit for Newton's second law, Modified Inertia proposals, e.g. Milgrom (1994), Milgrom (2022). Empirically, to first order such gravitationally anomalous regime is characterised by flat rotation curves at an amplitude satisfying the baryonic Tully-Fisher relation, \(V_{TF}=(GMa_{0})^{1/4}=0.35(M/M_{\odot})^{1/4}\) km s\({}^{-1}\), where \(M\) refers to the total baryonic mass of a galaxy, McGaugh et al. (2000), Lelli et al. (2017).
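The quoted Tully-Fisher amplitude can be verified numerically; the sketch below evaluates \(V_{TF}=(GMa_{0})^{1/4}\) for one solar mass using standard SI constants.

```python
# Numerical check of V_TF = (G M a0)^(1/4) for M = 1 solar mass (SI constants).
G = 6.674e-11      # m^3 kg^-1 s^-2, Newton's constant
a0 = 1.2e-10       # m s^-2, MOND acceleration scale
M_sun = 1.989e30   # kg

def v_tf_kms(mass_solar):
    return (G * mass_solar * M_sun * a0) ** 0.25 / 1e3

print(round(v_tf_kms(1.0), 3))  # 0.355 km/s, matching the quoted 0.35 (M/Msun)^(1/4) km/s
```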
Deciding between the alternative interpretations could benefit from exploring the low acceleration regime in different astronomical contexts, to obtain evidence as to the generality, or lack thereof, of the gravitational anomalies present in the outskirts of spiral galaxies. Steps in this direction have been taken by studies probing the possible presence of Tully-Fisher phenomenology in pressure supported galactic systems by e.g. Jimenez et al. (2013), Durazo et al. (2018) and Chae et al. (2020a), who find asymptotic velocity dispersion amplitudes also scaling with the fourth root of total baryonic masses. Extensions towards dwarf galaxies have also been explored, e.g. McGaugh et al. (2021). In going to smaller pressure supported systems, e.g. Scarpa et al. (2003), Hernandez et al. (2012b) and Hernandez & Lara-D I (2020) show asymptotically flat velocity dispersion profiles of Galactic Globular Clusters also presenting the same empirical scalings with total baryonic mass of the baryonic
Tully-Fisher relation, notwithstanding the results of Claydon et al. (2017), who present an explanation for the observed Globular Cluster phenomenology within a standard gravitational framework.
As first identified in Hernandez et al. (2012a), large samples of wide binaries with internal separations larger than about 0.01 pc offer a probe of the low acceleration \(a\lesssim 2a_{0}\) regime at mass and length scales many orders of magnitude below those of spiral galaxies, and even below the ones mentioned above, where any dark matter contribution would be negligible. Even at the widest separations considered here, of 0.06 pc, the local dark matter density inferred under a Newtonian framework of \(0.01M_{\odot}\) pc\({}^{-3}\) (e.g. Read, 2014), would only imply about \(1\times 10^{-5}M_{\odot}\) of dark matter within the wide binary orbit, and hence only a contribution of order one part in \(10^{5}\) of the total mass of the system, with individual stellar masses not far from \(0.8M_{\odot}\).
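The enclosed dark matter figure quoted above follows from a uniform sphere at the local density; a quick arithmetic check:

```python
import math

rho_dm = 0.01   # Msun pc^-3, local dark matter density (Read 2014)
r = 0.06        # pc, the widest separations considered

m_dm = rho_dm * (4.0 / 3.0) * math.pi * r**3  # Msun enclosed within radius r
print(f"{m_dm:.1e}")  # about 9e-6 Msun, roughly one part in 1e5 of a ~1.6 Msun binary
```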
The internal kinematics of wide binary samples as a test of gravity in the low acceleration regime have more recently been considered by: Scarpa et al. (2017), Banik and Zhao (2018), Pittordis and Sutherland (2018), Hernandez et al. (2019a), Banik (2019), Pittordis and Sutherland (2019), Acedo (2020), Hernandez et al. (2022), Manchanda et al. (2023) and Pittordis and Sutherland (2023). The presence of anomalous internal velocities in wide binaries is now well established, with various interpretations having been proposed. In Hernandez et al. (2022) we used _Gaia_ eDR3 to show that on reaching the low acceleration regime of sufficiently separated binaries, a qualitative regime change appears where the binned rms projected relative velocity for wide binary populations ceases to drop along Keplerian expectations, and settles to values consistent with the baryonic Tully-Fisher scaling of spiral galaxies, to argue in favour of a change in regime for the physics describing the problem.
On the other hand, Clarke (2020) simulating wide binary populations and Pittordis et al. (2023) using _Gaia_ eDR3 data, show that introducing a hypothetical population of hidden tertiaries, cases where one or both components of an observed binary harbour an undetected stellar companion, results in a kinematic contaminant which if chosen judiciously can explain the observations within a Newtonian framework. This population of hidden tertiaries hence becomes a prediction of Newtonian gravity, which fortunately has recently been shown to lie within the reach of independent confirmation through dedicated follow-up studies using a variety of readily available techniques, Manchanda et al. (2023).
In this paper I explore from an empirical approach, as an extension of our previous study of Hernandez et al. (2022), the kinematics of Solar Neighbourhood wide binaries and the mass-velocity scalings these present, taking advantage for the first time in this context of the recent _Gaia_ DR3 catalogue. This latest data release benefits from direct mass estimates using spectroscopic information and the _Gaia_ work package FLAME for a sub-set of stellar sources, an internally assessed binary probability for each star, the CLASSPROB_DSC_COMBMOD_BINARYSTAR parameter, an inference using spectral, photometric and astrometric information, henceforth \(B_{P}\), as well as about \(4.7\times\) more stars containing velocities along the line of sight. All of the above improvements on _Gaia_ eDR3 permit a more accurate probing of the problem, with an enlarged sample of stars having radial velocities, useful to exclude unbound flyby events, and more accurate elimination of kinematic contaminants, e.g. by imposing cuts in the \(B_{P}\) parameter of a sample, as well as allowing for an accurate calibration of the luminosity-mass scalings used in all the studies mentioned above to infer individual stellar masses.
The structure of the paper is as follows: Section 2 describes the sample selection, driven by the philosophy of defining a small but very high quality sample where the consistency or otherwise of wide binary internal kinematics with Newtonian expectations might be assessed. Section 3 shows the calibration of a simple mass-luminosity relation to the high quality spectroscopically determined _Gaia_ DR3 individual masses, which is then used throughout. Section 4 presents results for two samples having different distance cuts so as to obtain a measure of distance dependent effects. Section 5 gives a comparison to recent independent work, and section 6 a final discussion.
## 2 Sample selection
The sample selection used is a modified version of the _Gaia_ one used by El-Badry and Rix (2018) who present and test a wide binary catalogue with a distance cut of \(D<200\)pc. Accounting for projection effects, simulations modelling reasonable distributions of ellipticities and undetected companions to estimate completion factors, these authors report a level of contamination of below 0.2%. Based on this same approach Tian et al. (2020) produced a lower quality but more extensive catalogue with a distance cut of \(D<4\)kpc containing 800 000 binary candidates.
Figure 1: Sky plot of 95,899 binary pairs with \((D/\mathrm{pc})<333\), \((S/N)_{\varpi}>100\) and \((S/N)_{G}>5\) in each star, where all binary candidates having stars in common have already been discarded, grey dots. The 1352 binary pairs within \((D/\mathrm{pc})<125\) and further satisfying \(RUWE<1.2\), \(B_{P}<0.2\), \(\Delta V_{LOS}<4km/s\) and \(R_{P}\),\(G\),\(B_{P}\) and proper motion signal-to-noise values \(>20\) are shown as black dots. Notice no conspicuous groupings or local clusters remain after the aggressive de-grouping procedure used.
I begin with a _Gaia_ search returning all stars within 333pc with accurate parallaxes and G magnitudes having a signal-to-noise ratio of \((S/N)_{\varpi}>100\) and a signal-to-noise ratio in the _Gaia_ G band of \((S/N)_{G}>5\). I then search within a projected 0.5pc radius on the plane of the sky about each star for a potential binary companion. Any resulting pair is then accepted as a binary candidate, provided the difference in distance along the line of sight between both stars, \(\Delta D\), is smaller than twice the projected separation between both stars, \(2s\), at a \(3\sigma\) level, i.e., that \(\Delta D-3\sigma_{\Delta D}<2s\). A 192 HEALPix scheme is used, to limit the number of lost binaries where component stars lie in adjacent HEALPix.
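Reading the acceptance condition as "\(\Delta D\) smaller than \(2s\) at the \(3\sigma\) level", the cut can be sketched as follows; the function name and the numbers in the example are illustrative, not from the paper.

```python
def accept_pair(delta_d, sigma_delta_d, s, nsigma=3.0):
    """Accept a candidate pair if the line-of-sight separation Delta D is
    smaller than twice the projected separation s at the nsigma level."""
    return delta_d - nsigma * sigma_delta_d < 2.0 * s

# Hypothetical pair: s = 0.02 pc projected, Delta D = 0.05 +/- 0.01 pc.
print(accept_pair(0.05, 0.01, 0.02))  # 0.05 - 0.03 = 0.02 < 0.04, so accepted
```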
As noticed in El-Badry & Rix (2018), the fixed _Gaia_ resolution implies that as the distance cut of the sample increases, a growing fraction of close binaries will be lost. Hence, the sample will have a distance-dependent completion factor. This is not a concern, as the objective is not to ensure valid candidates are all included with a fixed probability, but that unsound binary candidates are excluded. I modify the original El-Badry & Rix (2018) selection criteria, to remove the condition that the relative velocity between the two components of a binary system should be within Newtonian expectations, as it is precisely the validity of this assumption that is being probed.
The next step is to aggressively eliminate any binary candidates which might be under the gravitational influence of a third star, or indeed be part of a tertiary system. I search for any stars which are members of more than one candidate binary, and remove all such binary candidates. Thus, I do not try to decide which binary a given star flagged as a member of two or more binary candidate systems belongs to, but eliminate all candidate binaries which contain individual stars forming part of more than one such system. Given the parallax signal-to-noise ratio of the sample, for a binary system at the average distance of \(\approx 100\) pc of the final binaries used, by construction, no other _Gaia_ sources remain within 1 pc along the line of sight at the \(1\sigma\) level. For binary systems in the critical range of internal separations of \(\approx 0.03\) pc (see section 4), this implies an isolation factor along the line of sight of about \(30\times\) the internal binary separation. On the plane of the sky, the initial search criteria ensure an isolation factor \(5/3\times\) larger than along the line of sight at a \(1\sigma\) level, ensuring the final selected binaries are indeed free from kinematic contamination from other _Gaia_ sources, to a very large degree. Of the close to 10 million binary candidates originally identified, the above de-grouping algorithm leaves only 97,251 binary pairs.
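The de-grouping step just described, removing every candidate pair that shares a star with any other pair, can be sketched as follows (function name and pair representation are illustrative, not from the paper):

```python
from collections import Counter

def degroup(pairs):
    """Keep only candidate binaries whose two stars each appear in
    exactly one candidate pair; any star shared between pairs removes
    all pairs containing it. `pairs` is a list of (id, id) tuples."""
    counts = Counter(star for pair in pairs for star in pair)
    return [p for p in pairs if counts[p[0]] == 1 and counts[p[1]] == 1]

# Star "b" appears in two candidate pairs, so both pairs are discarded:
degroup([("a", "b"), ("b", "c"), ("d", "e")])  # -> [("d", "e")]
```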
Next, I apply a series of data quality cuts to each of the stars of each candidate binary system, and remove any such system where either of the constituent stars fails a cut. First, \(R_{P}\), \(G\), and \(B_{P}\) magnitude signal-to-noise cuts of \(>20\) are applied, which, given the strong correlations between \((S/N)_{\varpi}\) and magnitude signal-to-noise levels, actually exclude very few binary candidates, but ensure no suspicious sources are being considered. As discussed in the introduction, a primary concern is the possible kinematic contamination from hidden tertiaries, which is addressed through a variety of quality cuts. The first is the use of the _Gaia_ internal binary probability field, \(B_{P}\), which provides a first order estimate of the probability that each individual _Gaia_ source is actually a binary star, which would then make any of our binary candidate systems a hidden tertiary. I impose a \(B_{P}<0.2\) quality cut. Also, it has been shown, e.g. by Belokurov et al. (2022), that the probability of a _Gaia_ source being an unresolved binary is a strong function of the \(RUWE\) quality index of the source; those authors identify a threshold of \(RUWE<1.4\) at distances of 1 kpc below which hidden tertiaries can be reliably excluded using _Gaia_ DR2 data. Since the DR3 data used reflect a 34 month timeline, rather than the 22 month one of DR2, and since our 200 pc distance cut-off is much smaller than the 1 kpc which Belokurov et al. (2022) consider, imposing a stringent \(RUWE<1.2\) limit will guarantee a sample relatively free of hidden tertiaries. Again, the two quality cuts mentioned above are applied to
Figure 2: Left(a): CMD for the stars in the 95,899 binaries mentioned in Fig. 1, shown as grey dots. The stars for the 1352 binary pairs shown as black dots in Fig. 1 appear here as black dots. Right(b): CMD for the stars in the 688 binary pairs within the colour magnitude region selected to eliminate photometric binaries as member stars of final selected binaries, and minimise the inclusion of hidden tertiaries in the kinematic samples.
all stars, with any candidate binary system where either of its stars misses any of the cuts, being excluded.
Then, I require all stars to have a reported radial velocity measurement in the DR3 catalogue. This cut serves three purposes: first, as the probability of having a radial velocity measurement in the _Gaia_ catalogue drops rapidly as the single star solution degrades, requiring radial velocities serves as a further quality control against hidden tertiaries. Second, having radial velocities allows a full spherical geometric correction when deriving the relative velocity between both components (Smart, 1968). This correction is only relevant for the few nearby and very wide binaries; indeed, El-Badry (2019) shows that ignoring this correction will result in spuriously elevated relative velocities for wide binaries, but only for internal separations above about 0.1 pc. Finally, having radial velocities allows a filter on projection and unbound flyby systems, which we address by requiring that all our final binary candidate systems have individual stars showing a radial velocity difference below 4 km s\({}^{-1}\).
The battery of quality and hidden tertiary cuts described above culls the original 97,251 binary candidates down to a small and highly curated sample, which for a distance cut of \(D<125\) pc leaves only 1,352 binary pairs. These are shown in a sky plot in Fig. 1 as black dots, where grey dots show the original 97,251 binary candidates. The thorough de-grouping strategy described already removes all known nearby clusters and groups from the original binary candidate list; the strict series of exclusion cuts that then follow leaves a small sample of highly isolated wide binaries not showing any evidence of unwanted groups or clusters.
As can be seen in Fig. 1, explicitly excluding the low Galactic latitudes is unnecessary given the aggressive de-grouping strategy adopted, as is the specific removal of local known groups. Indeed, the de-grouping implemented leaves no local over-densities which might correlate with either known local groups or with the Galactic plane. The use of _Gaia_ data restricted to distances below 125 pc with very high quality parallaxes makes such cuts unnecessary; within that distance the data show no overcrowding. For the high quality \(D<125\) pc sample from which conclusions are drawn, the final mean signal-to-noise value for the parallaxes of the stars is 855.4. Even at the outer limit of this sample, this implies a distance uncertainty of only 0.15 pc, which ensures crowding against background sources is not an issue. A second safeguard against the inclusion of spurious binary pairs in overcrowded regions comes from the relative radial velocity cut introduced, which efficiently eliminates most projection effects. Indeed, Fig. 1 was included as a check of this point: the sky plot of binaries remaining after keeping only cases where both components have a well determined radial velocity, with a relative value in this quantity of \(<4\) km s\({}^{-1}\), does not show any conspicuous over-densities, not even along the Galactic plane, which does remain evident in the binary candidates before the use of the relative radial velocity cut.
Finally, we take advantage of a CMD selection strategy to further reduce the probability of hidden tertiary systems remaining in the high quality wide binary samples used. The left panel of Fig. 2 shows a CMD of all the stars in the original 97,251 candidate binaries as grey dots. The stars in the 1,352 sample appear as black dots, already showing a much better defined main sequence than the original sample. A small number of photometric binaries remain above this well-defined main sequence.
We now impose a further cut defined on the CMD, following again the conclusions of Belokurov et al. (2020), who show that unresolved binaries can be further avoided by remaining within a region of the CMD below the turn-off and excluding the less massive and hence dimmer lower tail of the main sequence. We hence exclude all binary candidates containing one or two stars lying outside of a region having a G magnitude width of 0.6 magnitudes about the line connecting points (0.7, 4.7) and (2.2, 9.7) in the (colour, magnitude) plane shown. This last cut leaves only 688 binary pairs, shown in the right panel of Fig. 2.
The above CMD selection now implies an absolute magnitude limit of 9.7, although only a handful of stars are actually above an absolute magnitude of 9. Indeed, those few stars are amongst the dimmest and, with the exception of only one, are excluded by the final signal-to-noise cuts in the kinematic plot. Thus, an absolute magnitude cut of 9 is representative of the final sample. This then becomes a distance-dependent apparent magnitude cut, which for the mean distance of the final high-quality cut of 90.25 pc becomes 18.77. For the outer limit of this sample, the corresponding value is 19.5.
When comparing the present final high-quality sample to the final sample in our previous study, only 40% of the stars in Hernandez et al. (2022) appear in our present sample. This is due to the significant increase, by a factor of 4.7, in _Gaia_ sources having reported radial velocities in going from eDR3 to DR3. Keeping a sample of comparable size to the one of our previous study now allows stricter quality cuts. In Hernandez et al. (2022), no binary probability cut as internally determined by _Gaia_ was applied, as the \(B_{p}\) parameter was not available in eDR3. As discussed above, the present sample includes a \(B_{p}<0.2\) cut; the final sample mean value of this internally determined _Gaia_ binarity probability per source, which uses astrometric, photometric and spectroscopic information in ways complementary to the RUWE filter and the radial velocity determinations, is \(<B_{P}>=0.07\).
To summarise the strategy applied towards the removal of hidden tertiaries: the first cut is through the use of a limit in the _Gaia_ DR3 internally determined binary probability parameter of each source, the \(B_{p}<0.2\) constraint described above. The second is through the use of the \(RUWE<1.2\) constraint, which ensures sources where the single star photometric solution is poor will not be included. Then, keeping only binaries where radial velocities are reported again ensures that a high quality spectroscopic single star solution resulted for the star in question, something easily degraded by a hidden tertiary. Finally, following Belokurov et al. (2020), a careful CMD selection strategy is applied. This allows not only the exclusion of clear photometric binaries, but also limits consideration to regions of the CMD where errors and photometric blending are minimised. Indeed, through careful statistical re-sampling of mock observations of simulated hidden tertiaries, 'observed' assuming actual _Gaia_ sensitivity parameters, the above authors estimate only a 5% hidden tertiary contamination, down to Jupiter scales, when taking a \(RUWE<1.4\) cut plus the restriction to a well defined region of the CMD, out to 1 kpc, using DR2. Necessarily, the same CMD criterion as applied here, in conjunction with a \(RUWE<1.2\) cut and distances of \(<125\) pc using the longer baseline DR3 catalogue, will be highly clear of hidden tertiaries.
Hence, it is astrometry, photometry and spectroscopy, through four different and independent combinations of constraints, that are used to eliminate hidden tertiaries. All cuts are applied sequentially to both components of each candidate binary, with all such candidates where even one of the components fails a cut being removed.
Before presenting the relative velocity vs. internal separation and relative velocity vs. mass scalings of our final samples, the following section details the mass estimates used.
## 3 Addressing mass estimate biases
Determining if the internal kinematics of a sample of wide binaries is consistent or not with Newtonian gravity, clearly requires estimating the masses of each star in each of the binaries being studied (e.g. Banik & Zhao, 2018, Pittordis & Sutherland, 2023). In the context of large _Gaia_ samples of Solar Neighbourhood wide binaries, this mass estimate has so far been approached in terms of simple magnitude-mass scalings, such as:
\[\left(\frac{M}{M_{\odot}}\right)=10^{0.0725(4.76-M_{G})}, \tag{1}\]
expected to be reasonably accurate and unbiased for the old low mass stars of the Solar Neighbourhood comprising the relevant wide binaries e.g. Pittordis & Sutherland (2019), Hernandez et al. (2022). However, the availability of an internally determined _Gaia_ FLAME work package mass estimate making use of spectroscopic information in DR3, allows for a much more accurate update in the masses of the binaries analysed. Unfortunately, the more demanding observations required for _Gaia_ spectroscopic mass determinations imply that this parameter is available only for a fraction of even nearby stars. If we restrict ourselves to wide binaries where both components have a _Gaia_ DR3 mass determination, we would have to eliminate all candidates where no such information is available, and also those cases where _Gaia_ DR3 masses are available for only one of the two members of a binary. Therefore, supplementing internally determined spectroscopic _Gaia_ DR3 masses with estimates from eq.(1), in cases where the former is not available, is desirable.
The first test of this scheme is to check for the presence of any biases or inconsistencies when comparing the results of eq.(1) to _Gaia_ DR3 masses. I begin with a sample as described in the previous section, taking a distance cut of \(D<135\) pc, and changing only the \(RUWE\) and \(B_{P}\) cuts to \(<2.0\) and \(<0.4\). This relaxation of the quality controls, only at this point, serves to increase the sample so as to obtain a more complete comparison of the two mass estimates mentioned above. This sample yields 3,696 binary systems.
The left panel in Fig. 3 shows the individual stars of the sample just described which have _Gaia_ DR3 masses, in a plot comparing this quantity to the result of eq.(1) for these same stars, in the mass range relevant for the wide binaries of interest. The grey dots show stars which are members of a binary system where the other star does not have a _Gaia_ DR3 mass determination, and the black dots cases where the other component of the binary system also has a _Gaia_ DR3 mass available. As can be seen, a significant fraction of binaries do not have _Gaia_ DR3 masses for both components, indeed, in the sample shown only 24% of the stars have _Gaia_ DR3 masses available for both components. Thus, given the small available high quality samples, it is important to supplement _Gaia_ DR3 mass determinations with a reliable estimate inferring masses from _Gaia_ DR3 quantities available for a larger fraction of stars. Unfortunately, we see
Figure 3: Left(a): Inferred stellar masses using eq.(1) compared to internally provided spectroscopic _Gaia_ DR3 masses. A small systematic offset is clearly apparent in the low mass range. Right(b): Inferred stellar masses using eq.(1) compared to internally provided spectroscopic _Gaia_ DR3 masses, after a \(0.05M_{\odot}\) correction factor was added to the result of eq.(1), for masses obtained initially below \(0.7M_{\odot}\). No substantial systematic offset remains at this point in the mass range shown, where the stars comprising the binaries in the kinematic samples used are found. In both panels grey dots indicate cases where the star in question is part of a binary where the other star has no _Gaia_ DR3 mass reported, while the black dots indicate cases where the star shown is part of a binary where the other star also has a reported _Gaia_ DR3 mass.
from the left panel in Fig. 3 that the masses inferred using equation (1) are not an unbiased estimate of the _Gaia_ DR3 masses. For masses below about \(0.7M_{\odot}\) a small systematic appears such that the inferences from eq.(1) are on average 10% smaller than the more accurate _Gaia_ DR3 ones which use more detailed spectroscopic information.
For masses \(>0.7M_{\odot}\), mass estimates from eq.(1) and _Gaia_ DR3 agree much better and show almost no systematic biases. However, stars with masses in the range where both inferences diverge form a large fraction of the constituent members of the relevant binaries, e.g. Hernandez et al. (2022) report a mean binary mass of \(m_{B}=1.6M_{\odot}\) for their _Gaia_ eDR3 sample, and in the study by Pittordis & Sutherland (2019), inferred mass distributions for individual stars peak for \(0.4M_{\odot}<m_{\star}<0.5M_{\odot}\). Indeed, attempting to test particular MOND models requires masses determined to close to 10% accuracy (Banik & Zhao, 2018), and crucially, no systematic biases in this quantity, making a non-biased mass estimate for all stars involved an important requirement in the context of wide binary gravity tests.
To this end we perform a simple correction on eq.(1), adding \(0.05M_{\odot}\) to the mass of all stars which eq.(1) returns as being below \(0.7M_{\odot}\). The result of this adjustment is shown in the right panel of Figure 3, where it is clear that both mass estimates are now in much better agreement and no substantial systematic bias remains. A more detailed adjustment is of course possible, but will only yield much smaller corrections, which in any case will eventually be rendered unnecessary as the fraction of accurate _Gaia_ spectroscopically determined masses grows with time. For the remainder of this paper, I shall use _Gaia_ DR3 masses when available, supplemented by the use of eq.(1), with the inclusion of the correction described, in cases where no _Gaia_ DR3 masses exist for any particular star. This is in effect equivalent to the more detailed higher order mass-magnitude scalings used by other authors, e.g. Chae (2023).
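The mass scheme just described, eq.(1) plus the \(0.05M_{\odot}\) low-mass offset used whenever no _Gaia_ DR3 spectroscopic mass is available, can be sketched as follows (a minimal illustration; `mass_estimate` is a hypothetical name):

```python
def mass_estimate(M_G):
    """Stellar mass in solar units from absolute G magnitude via eq.(1),
    with the +0.05 Msun correction applied whenever eq.(1) returns a
    value below 0.7 Msun (the supplementary estimate used when no
    Gaia DR3 spectroscopic mass exists)."""
    m = 10.0 ** (0.0725 * (4.76 - M_G))
    return m + 0.05 if m < 0.7 else m

mass_estimate(4.76)  # -> 1.0 (no correction applied above 0.7 Msun)
```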
## 4 Results
Starting from the distance cut described at the end of section 2, \(D<125\) pc leaves 688 binary pairs after the CMD cut criteria presented there. For each of these binaries, parallaxes, positions and proper motion _Gaia_ DR3 parameters are used to evaluate the relative velocities on the plane of the sky in R.A. and Dec., \((\Delta V)_{RA}\) and \((\Delta V)_{Dec}\), including spherical geometric effects. We consider only velocity differences on the plane of the sky for the kinematic tests performed because these are more robust against contamination from the presence of hidden tertiaries than velocities along the line of sight. Velocities on the plane of the sky are inferred through proper motions, while along the line of sight velocities are derived through the Doppler effect. This is important since a hidden tertiary of short period could result in an added velocity component which, if along the line of sight, will directly result in an equal kinematic contaminant on the radial velocity, regardless of the amplitude of the stellar oscillation. However, on the plane of the sky, the same velocity contaminant, for a very tight hidden tertiary having a small internal semi-major axis, will not result in any kinematic contamination if the resulting oscillation is of small spatial amplitude. Distortions of small spatial amplitude (even those of high velocity) will to a large degree be averaged out over the 34 month timeline of the DR3 catalogue and produce negligible effects on plane of the sky velocities, whenever the inner orbital periods are significantly shorter than the integration period of the catalogue.
Also, restricting the relative velocity analysis to the plane of the sky (as was also done and clearly stated in
Figure 4: Left(a): Kinematic plot showing 1D \(\Delta V\) values on the plane of the sky, dots, for binaries in the \((D/\mathrm{pc})<125\) sample as a function of the internal separation of each binary in pc, \(s\), R.A. and Dec values appearing separately for each binary. The points with error bars give the binned rms values for this same quantity, with the horizontal bars giving the bin size and the vertical ones \(1\sigma\) confidence intervals on the quantity being plotted, dashed and dot-dashed bins for R.A. and Dec. observations, respectively. The solid line gives the prediction for the binned rms 1D \(\Delta V\) values on the plane of the sky for solar neighbourhood wide binaries from Jiang & Tremaine (2010). Right(b): Same as the left panel, but for the \(125<(D/\mathrm{pc})<170\) sample, see text and Table 1 for all other signal-to-noise, RUWE, \(B_{p}\) and CMD diagram selection cuts, which were identically applied to both distance samples shown in this figure.
Hernandez et al., 2022), makes this study completely robust to general relativistic gravitational redshift effects distorting relative velocities along the line of sight (e.g. El-Badry, 2022), the opposite of what was mistakenly claimed by Loeb (2022). Our kinematic \(\Delta V\) determinations include only relative motions on the plane of the sky as inferred through _Gaia_ proper motions.
Once the \((\Delta V)_{RA}\) and \((\Delta V)_{Dec}\) parameters are obtained for the binaries in question, two final quality cuts are introduced. The first is the exclusion of low signal-to-noise cases: I keep only binaries where both \(\Delta V\) measurements satisfy \((\Delta V/\sigma_{\Delta V})>1.5\). Lastly, I evaluate the final average binary probability _Gaia_ parameter for the sample, \(<B_{P}>\), and remove half of that value, expressed as a percentage, of the highest \(\Delta V\) systems, uniformly distributed along the separation, \(s\), interval considered. The logic for this last cut is that if, say, \(<B_{P}>=0.07\), it is possible that 7% of our remaining systems harbour a hidden tertiary. Some of those will have negligible effects on the resulting \(\Delta V\) measurements of the binaries, when the hidden tertiary velocity perturbation lies primarily along the line of sight, but in some cases this kinematic contamination will occur substantially on the plane of the sky and hence distort the intended test. There is no guarantee that these last cases will be those showing the largest \(\Delta V\) values across our sample, but as a measure to ensure the least possible effect of any degree of contamination from hidden tertiaries, the fastest 3% of cases per separation bin are eliminated across the entire \(s\) range covered.
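A sketch of this final per-bin trimming of the fastest systems follows (illustrative only: the binning in \(s\) is assumed logarithmic here, and the 3% fraction is taken from the text):

```python
import numpy as np

def trim_fastest(s, dV, n_bins=8, frac=0.03):
    """Return a boolean mask removing the fastest `frac` of Delta-V
    values within each separation bin, as a guard against residual
    hidden-tertiary contamination. Bin edges are logarithmic in s
    (an assumption for this sketch)."""
    edges = np.logspace(np.log10(s.min()), np.log10(s.max()), n_bins + 1)
    keep = np.ones(s.size, dtype=bool)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (s >= lo) & (s <= hi)
        if in_bin.sum() == 0:
            continue
        cut = np.quantile(dV[in_bin], 1.0 - frac)  # 97th percentile of the bin
        keep &= ~in_bin | (dV <= cut)
    return keep
```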
The two final quality cuts mentioned above leave a small and very pure sample of 450 highly isolated wide binaries; a scatter plot of the \((\Delta V)_{RA}\) and \((\Delta V)_{Dec}\) values obtained is presented in the left panel of Fig. 4, as a function of the internal separation on the plane of the sky for each binary. The final average distance to these stars is 90.25 pc, with the selection criteria used resulting in very high average signal-to-noise values for relative \(\Delta V\), stellar proper motion and parallax of 15.71, 3442 and 855.4 respectively, and average \(RUWE\) and \(B_{P}\) values of 1.01 and 0.07 respectively, as detailed in Table 1.
The points with error bars give the binned rms values and corresponding \(1\sigma\) confidence intervals for R.A. and Dec. measurements, dashed and dot-dashed cases respectively, where the horizontal lines are just the bin sizes and the figures above the bins give the number of data points per bin. This can be compared to detailed Newtonian expectations for 1D relative velocities between components of wide binaries in the Solar Neighbourhood through the work of Jiang & Tremaine (2010), henceforth JT10. These authors simulate populations of 50,000 wide binaries in the Solar Neighbourhood, evolved over a 10 Gyr period under the influence of Galactic tides and encounters with field stars and molecular clouds, under the assumptions of Newtonian dynamics, constant total binary masses of \(m_{B}=2M_{\odot}\) and expected distributions of ellipticities and isotropic orientations with respect to a simulated observational line of sight. The final rms value of the 1D \(\Delta V\) measurements was then reported, and is shown by the solid line in the left panel of Fig. 4. In the \(s\) range given we see essentially a Keplerian \(\Delta V\propto s^{-1/2}\) scaling, as we remain well within the \(\approx 1.7\) pc Jacobi radius of the problem.
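The binned rms 1D \(\Delta V\) values compared against the JT10 prediction can be computed as in the following sketch (bin edges and names are illustrative, not the author's code):

```python
import numpy as np

def binned_rms(s, dV, edges):
    """rms of 1D Delta-V per separation bin: the quantity compared
    against the Jiang & Tremaine (2010) Newtonian prediction, which
    within the Newtonian regime scales as rms Delta-V ∝ s^{-1/2}."""
    rms = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (s >= lo) & (s < hi)
        rms.append(np.sqrt(np.mean(dV[sel] ** 2)) if sel.any() else np.nan)
    return np.array(rms)
```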
As explained above, an important aspect of the wide binary test is to consider the effect of a distribution of projection effects and a distribution of ellipticities, even within the Newtonian region. I have not performed any dynamical modelling, neither within a Newtonian model nor within any modified gravity one; the main intent is only to test the validity of a full Newtonian model. To this end, the consequences of projection effects and of a distribution of ellipticities
Figure 5: Left(a): log-log plot of 1D \(\Delta V\) values on the plane of the sky, dots, for binaries in the \((s/\mathrm{pc})<0.01\), \((D/\mathrm{pc})<125\) sample as a function of total binary mass, R.A. and Dec values appearing separately for each binary. Dots with error bars give the mean values of said quantity per bin, together with \(1\sigma\) confidence intervals on these means. The solid line shows a linear fit to the binned means having a slope of \(\alpha=0.46\pm 0.13\) and giving a correlation parameter of \(r=0.87\), consistent with Newtonian expectations in this high acceleration region. Right(b): Same as the left panel, but for the \((s/\mathrm{pc})<0.01\), \(125<(D/\mathrm{pc})<170\) sample. The solid line shows a linear fit to the binned means having a slope of \(\alpha=0.089\pm 0.21\) and giving a correlation parameter of \(r=0.21\), barely consistent with Newtonian expectations in this high acceleration region at a \(2\sigma\) level, and indicating some combination of error dominated \(\Delta V\) determinations and the presence of kinematic contaminants in this more distant sample.
under a Newtonian scheme were taken into account through a detailed comparison to the results of JT10. In that paper the authors simulate large populations of wide binaries, and provide detailed predictions for the 1D projected rms values of the resulting relative velocities as a function of projected separations. It is for this reason that those are precisely the quantities calculated here, as it is only these quantities which can be compared in detail to the predictions of JT10. A further important point to note is that JT10 do not present initial instantaneous predictions for the rms value of binned 1D relative velocities as a function of projected separations, but rather those quantities after 10 Gyr of dynamical evolution in the tidal field of the Solar Neighbourhood, and under the dynamical influence of gravitational interactions with field stars, for the wide binary populations treated. This is important, as this evolution alters the final distributions of ellipticities away from the initial assumed ones: one should not compare against birth distributions of ellipticities, \(e\), but against the expected present day ones. To this effect, note that JT10 assume an initial uniform distribution of \(e^{2}\) values between 0 and 1, which is known as a thermal distribution of ellipticities and is justified in terms of thermodynamical expectations in the sampling of the energy distribution of initial wide binary parameters. However, recent direct inferences of \(e\) distributions by Hwang et al. (2022) using _Gaia_ data of local wide binaries show that the present-day distribution of ellipticities becomes superthermal in going to the \(s>5\times 10^{-3}\) pc region, precisely the region of relevance to the wide binary test.
We see that in the separation range \(s<0.01\) pc, results closely follow the predictions of JT10, confirming their Newtonian expectations for the rms values of 1D \(\Delta V\). Our inferred values lie slightly below the Newtonian predictions, consistent with the final sample \(<m_{B}>=1.56M_{\odot}\) being a factor of 0.78 below the assumed \(m_{B}=2M_{\odot}\) of JT10; our results should hence lie below their prediction by a factor of very close to \(0.78^{1/2}=0.88\), as observed.
The situation is qualitatively different in the \(s>0.01\) pc region, where the rms _Gaia_ DR3 inferred \(\Delta V\) values cease to follow the Newtonian \(\Delta V\propto s^{-1/2}\) scaling and settle to a constant value with increasing separation. It is interesting that this asymptotic value of slightly below 0.4 km s\({}^{-1}\) closely agrees with the spiral galaxy baryonic Tully-Fisher velocity scaling of \(V_{TF}=(GMa_{0})^{1/4}=0.35(M/M_{\odot})^{1/4}\) km s\({}^{-1}\), which when evaluated at a total baryonic mass of \(1.56M_{\odot}\) yields 0.39 km s\({}^{-1}\).
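The two characteristic numbers quoted, the Tully-Fisher asymptotic velocity and the separation at which Newtonian accelerations drop to \(a_{0}\), can be checked with a short calculation (assuming the conventional \(a_{0}=1.2\times 10^{-10}\) m s\(^{-2}\); constants in SI units):

```python
import numpy as np

G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
A0 = 1.2e-10      # MOND acceleration scale, m s^-2 (assumed value)
PC = 3.086e16     # m

def v_tf_kms(m_solar):
    """Baryonic Tully-Fisher velocity (G M a0)^{1/4}, in km/s."""
    return (G * m_solar * M_SUN * A0) ** 0.25 / 1e3

def mond_radius_pc(m_solar):
    """Separation where the Newtonian acceleration GM/r^2 equals a0, in pc."""
    return np.sqrt(G * m_solar * M_SUN / A0) / PC

v_tf_kms(1.56)       # ~0.4 km/s, close to the asymptotic value seen in Fig. 4
mond_radius_pc(1.0)  # ~0.034 pc
```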
The modified gravity expectation for a change in regime for wide binaries at close to 0.035 pc was based on the assumption of \(1M_{\odot}\) stars forming the binaries in question; given the \(<m_{B}>=1.56M_{\odot}\) of the final sample, the same acceleration threshold appears a factor of \((1.56/2)^{1/2}=0.88\) inwards, i.e. at 0.031 pc. Comparing to the 0.01 pc separation where we see the transition would leave a factor of 3.1 to account for, between a form factor due to the difference in symmetry conditions between circular equilibrium orbits and the highly dynamical problem of populations of two orbiting bodies with a distribution of effective eccentricities, and the appearance of gravitational anomalies a little before the \(a<a_{0}\) threshold. Indeed, such is the case in spiral galaxy dynamics: e.g. at the Solar Radius \(a\approx a_{0}\), and the internal dark matter fraction required under Newtonian gravity is already about 0.5. At the Solar circle radius of about 8.2 kpc, the Newtonian baryonic rotation curve of the Galaxy is about 185 km s\({}^{-1}\), but the actually observed rotation velocity is about 230 km s\({}^{-1}\). Thus, gravity is about \(1.5\times\) stronger than the Newtonian expectation with only baryons, implying
Figure 6: Left(a): log-log plot of 1D \(\Delta V\) values on the plane of the sky, dots, for binaries in the \((s/\mathrm{pc})>0.01\), \((D/\mathrm{pc})<125\) sample as a function of total binary mass, R.A. and Dec values appearing separately for each binary. Dots with error bars give the mean values of said quantity per bin, together with \(1\sigma\) confidence intervals on these means. The solid line shows a linear fit to the binned means having a slope of \(\alpha=0.263\pm 0.32\) and giving a correlation parameter of \(r=0.50\), consistent in this low acceleration region with the 1/4 of Tully-Fisher kinematics at a \(1\sigma\) level, and also with the 1/2 of Newtonian expectations, at a \(2\sigma\) level. Right(b): Same as the left panel, but for the \((s/\mathrm{pc})>0.01\), \(125<(D/\mathrm{pc})<170\) sample. The solid line shows a linear fit to the binned means having a slope of \(\alpha=0.086\pm 0.49\) and giving a correlation parameter of \(r=0.12\), consistent with either Newtonian or Tully-Fisher expectations in this low acceleration region, but at a low significance level, given the large confidence intervals, probably indicating some combination of error dominated \(\Delta V\) determinations and the presence of kinematic contaminants in this more distant sample.
a dark matter to baryon ratio of about 0.5; see e.g. Zhu et al. (2022).
A finer test geared towards understanding the physics behind the \(\Delta V\) vs. \(s\) scalings discussed above comes from plotting the \(\Delta V\) observations against the total binary mass of each system, \(m_{B}\). For the sample being discussed, this appears in the left panel of Fig. 5, including only binaries within the internal separation interval \(s<0.01\) pc, the region consistent with a Newtonian \(\Delta V\propto s^{-1/2}\) scaling, shown in the left panel of Fig. 4. The dots with error bars now give the average \(\Delta V\) values for binned binaries, with R.A. and Dec. treated as independent data points. The horizontal lines give the width of the bins, while the vertical ones indicate the \(1\sigma\) confidence intervals on \(<\Delta V>\) and the numbers above show the number of data points per bin. A linear fit to the binned \(<\Delta V>\) values gives a scaling of \(<\Delta V>\propto m_{B}^{0.46\pm 0.13}\) with a correlation parameter of \(r=0.87\), perfectly consistent with Newtonian expectations of \(<\Delta V>\propto m_{B}^{0.5}\).
The corresponding plot for the \(s>0.01\) pc region of the left panel in Fig. 4 is given in the left panel of Fig. 6. This time the linear fit gives \(<\Delta V>\propto m_{B}^{0.26\pm 0.32}\). This scaling, though still consistent with Newtonian expectations at a \(1\sigma\) level, is in closer agreement with a galactic baryonic Tully-Fisher scaling of \(<\Delta V>\propto m_{B}^{0.25}\). The narrow mass interval available and the small number of points limit the precision of this last test, in spite of which a strong reduction in the best fit value of the power-law index \(\alpha\) is apparent on crossing the \(s=0.01\) pc binary separation threshold, which divides the region consistent with Newtonian expectations in the previous \(\Delta V\) vs. \(s\) plot from the flat relative velocity regime appearing for \(s>0.01\) pc in that plot. Notice that not all points considered in the fits in Figs. 5 and 6 appear in the plots, given the \(\Delta V\) range displayed.
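For concreteness, the power-law indices and correlation parameters quoted in this section come from linear least-squares fits in log-log space to the binned \(<\Delta V>\) values. A minimal sketch of that procedure, using synthetic data and standard-library Python only (the function and variable names are illustrative, not taken from the actual analysis pipeline), is:

```python
import math

def fit_power_law(m_B, dV):
    """Least-squares fit of dV = A * m_B**alpha in log-log space.

    Returns (alpha, r): the power-law index and the Pearson
    correlation coefficient between log(m_B) and log(dV).
    """
    x = [math.log10(m) for m in m_B]
    y = [math.log10(v) for v in dV]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    alpha = sxy / sxx                  # slope in log-log space = power-law index
    r = sxy / math.sqrt(sxx * syy)     # Pearson correlation of the logged data
    return alpha, r

# Synthetic binned means following the Newtonian dV ~ m_B**0.5 scaling
m_B = [1.0, 1.3, 1.6, 2.0, 2.5]
dV = [0.5 * m ** 0.5 for m in m_B]
alpha, r = fit_power_law(m_B, dV)      # alpha = 0.5 and r = 1.0 exactly
```

With noisy binned means, as in the actual data, \(\alpha\) and \(r\) move away from these exact values, which is what the confidence intervals in Table 2 quantify.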
All parameters of the fits discussed appear in Table 2, including the percentage of binaries used where both masses were supplied directly by _Gaia_ DR3, and the percentage where at least one mass was provided in such a way. For the \(s<0.01\) pc region, these two values are 40% and 61% respectively, with the corresponding values for the \(s>0.01\) pc region being 41% and 63%. We see no statistically significant difference between these two sets of numbers, showing no bias in the availability of _Gaia_ DR3 mass determinations, and hence no bias in the availability of high quality data, across the two regions being compared.
Despite the extensive pruning of the sample and all the various independent checks introduced to limit the presence of any kinematic contaminants, it is of course possible that some remain. To gauge the possibility of this, we explore the distance dependence of our results, given that all errors and kinematic contaminants necessarily grow in importance as the mean distance of the binaries considered increases. With increasing distance the fixed \((S/N)_{\varpi}>100\) constraint used implies a larger confidence interval along the line of sight and hence a greater probability of including projection interlopers, while on average stars will appear dimmer, leading to an unavoidable increase in the error intervals for the inferred proper motions and hence a noisier \(\Delta V\) sample. Further, any hidden tertiary induced wobble will more easily hide below more uncertain stellar parameters.
I test for the sensitivity of the results to distance by repeating the experiment described previously, leaving all quality and kinematic contaminant exclusion cuts the same, but increasing this time the distance cut-off of the sample, to produce a second binary sample independent of the previous one. This time the distance cut is \(125<D<170\) pc. The \(\Delta V\) vs. \(s\) plot of this new sample is shown in the right panel of Fig. 4, while the \(s<0.01\) pc \(\Delta V\) vs. \(m_{B}\) plot appears in the right panel of Fig. 5, the corresponding \(s>0.01\) pc \(\Delta V\) vs. \(m_{B}\) scaling is shown in the right panel of Fig. 6. From the sample parameters given in Table 1 we see that the mean distance of the sample has increased by 64%, resulting in important decreases in the final average signal-to-noise values for binary \(\Delta V\), stellar proper motion and parallax of 52.3%, 47.8% and 44.5%, respectively.
Such important decreases in the quality of the sample inevitably lead to noisier \(\Delta V\) data where all trends will be less clear, independent of the probable increased appearance of kinematic contaminants. The right panel of Fig. 4 no longer shows such a close correspondence to the Newtonian expectations as seen in the left panel; rms values for \(\Delta V\) are now above the JT10 line for almost all the interval probed. For \(s>0.01\) pc we still see the suggestion of a flat region, although it is now much less well defined and the dispersion between bins is more prominent.
In comparing the right and left panels of Figs. 5 and
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Distance & Initial No. & No. after & No. clearing & \(<D>\) & \(<S/N>_{\Delta V}\) & \(<S/N>_{pm}\) & \(<S/N>_{\varpi}\) & \(<RUWE>\) & \(<B_{P}>\) \\ selection & of binaries & CMD cut & \((\Delta V/\sigma_{\Delta V})>1.5\), & & & & & & \\ & & & \(<\)top 3\%\(\Delta V\) & & & & & & \\ \hline \(D<125\) & 1352 & 688 & 450 & 90.25 & 15.7 & 3442 & 855.4 & 1.01 & 0.07 \\ \(125<D<170\) & 1562 & 914 & 450 & 147.6 & 7.5 & 1798 & 474.9 & 1.01 & 0.06 \\ \hline \end{tabular} For the two distance cuts described in the text, \((D/\rm pc)<125\) and \(125<(D/\rm pc)<170\), the first three entries of the table give the initial number of \((S/N)_{\varpi}>100\), \(RUWE<1.2\), \(B_{P}<0.2\) binaries after de-grouping, the remaining binaries after application of the CMD filtering procedure, and the final number of binaries in the kinematic plots after removal of \((\Delta V/\sigma_{\Delta V})<1.5\) cases and the exclusion of the 3% fastest binaries per bin. The following two entries show the final average distance in pc and \(\Delta V\) signal-to-noise ratio, averaged over R.A. and Dec., for the binaries in the final samples. The last four entries give the average signal-to-noise values in proper motion (averaged over R.A. and Dec.) and parallax, and the final mean \(RUWE\) and \(B_{P}\) values for all the individual stars included in the kinematic plots of Fig. 4.
\end{table}
Table 1: Parameters for the two distance selection cuts described.
6 we see that the \(\Delta V\) vs. \(m_{B}\) scalings have disappeared, with \(<\Delta V>\) values showing no clear scaling with \(m_{B}\). Indeed, the results of linear fits are both consistent with no dependence of \(\Delta V\) on \(m_{B}\), yielding \(\alpha=0.089\pm 0.21\) and \(\alpha=0.086\pm 0.49\), with very low statistical correlations of \(r=0.21\) and \(r=0.12\) for the \(s<0.01\) and the \(s>0.01\) regions, respectively. Although these power-law scalings remain consistent with the Newtonian value of 0.5 at a \(2\sigma\) level, the poor correlation obtained makes this final result more suggestive of the presence of kinematic contamination, particularly when compared to the much more relevant and precise results obtained for these scalings using the \(D<125\) pc sample. From Table 2 we notice also a slight increase in \(<m_{B}>\) for the more distant sample, showing that stars of smaller masses begin to drop from the sample at the fixed minimum \((S/N)_{\varpi}>100\).
Thus, a combination of noisier data and a greater probability of kinematic contamination renders wide binary samples with mean distances even 64% larger than that of the \(D<125\) pc sample inadequate for the purpose of inferring the details of the scalings of their internal kinematics.
## 5 Comparison to recent independent studies
In this final section I compare the results obtained to two recent studies that also explore _Gaia_ wide binaries in the context of testing gravity in the low acceleration regime. The first is the work of Pittordis & Sutherland (2023), who examine velocities on the plane of the sky for a much larger _Gaia_ sample than the one treated here. These authors present their results in terms of distributions of the \(\tilde{v}\) parameter for various \(s\) cuts, where:
\[\tilde{v}=\frac{\Delta V}{\sqrt{Gm_{B}/s}}, \tag{2}\]
a measure first introduced in Banik & Zhao (2018). Clearly, \(\tilde{v}=1\) for a Newtonian binary having a circular orbit parallel to the plane of the sky. Projection effects will lead to a distribution of \(\tilde{v}\) values below 1 for an observed population, while the presence of a distribution of ellipticities will broaden the distribution, which will have a sharp upper cut-off at the escape velocity of \(\tilde{v}=\sqrt{2}\). I now calculate the \(\tilde{v}\) values for all the 450 wide binaries appearing in the left panel of Fig. 4. The resulting sample of \(\tilde{v}\) values was then divided into two sub-samples according to the separation of the binaries, distinguishing between \(s<0.01\) pc and \(s>0.01\) pc. These two distributions are shown in Fig. 7 (red lines), with the left panel giving results for \(s<0.01\) pc, and the right panel for \(s>0.01\) pc. The blue histograms, taken from Banik & Zhao (2022), give predictions for a Newtonian model and for a MOND model including the external field effect of the standard AQUAL or QUMOND proposals. The numbers on the y axis give the binned occupancy values for the wide binaries treated in the right panel, while in the left panel the data have been re-scaled by a factor of 0.66 so as to allow a uniform comparison.
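As an illustration of Eq. (2), the following sketch evaluates \(\tilde{v}\) in practical units (\(\Delta V\) in km s\(^{-1}\), \(m_{B}\) in \(M_{\odot}\), \(s\) in pc). The numerical constant \(GM_{\odot}/{\rm pc}\approx 4.3\times 10^{-3}\,({\rm km/s})^{2}\) is my own unit conversion, not a value from the paper:

```python
import math

# G * M_sun / (1 pc) expressed in (km/s)^2 -- conversion assumed here,
# not a value taken from the paper.
GM_SUN_OVER_PC = 4.301e-3

def v_tilde(delta_V, m_B, s):
    """Eq. (2): v~ = dV / sqrt(G * m_B / s).

    delta_V : plane-of-sky relative velocity [km/s]
    m_B     : total binary mass [M_sun]
    s       : projected separation [pc]
    """
    v_circ = math.sqrt(GM_SUN_OVER_PC * m_B / s)
    return delta_V / v_circ

# A circular Newtonian orbit on the plane of the sky has v~ = 1 by construction:
m_B, s = 1.56, 0.01                          # sample-mean mass, threshold separation
v_c = math.sqrt(GM_SUN_OVER_PC * m_B / s)    # circular velocity scale, ~0.8 km/s
assert abs(v_tilde(v_c, m_B, s) - 1.0) < 1e-12
```

At the sample-mean mass and the \(s=0.01\) pc threshold the circular velocity scale is of order 0.8 km s\(^{-1}\), so observed \(\Delta V\) values of a few tenths of km s\(^{-1}\) correspond to \(\tilde{v}\) below unity.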
For context on the external field effect (EFE) of MOND, recall that the theory is inherently non-linear, and includes an explicit acceleration scale, \(a_{0}\), with the character of dynamics depending sensitively on the ratio of the acceleration present to \(a_{0}\). This implies that a small bound system immersed within a larger mass distribution will behave in ways which are determined both by its internal acceleration, and by the local acceleration felt as part of the larger system, beyond the inclusion of tides (see e.g. Milgrom, 1986). As the details of exactly how this EFE applies are dependent on the exact particular MOND proposal, all of which share identical predictions regarding the low acceleration behaviour of orbits about static isolated systems, e.g. Milgrom (2011) or Milgrom (2022), it is common to find predictions for behaviour which is sensitive to the EFE presented for particular well studied proposals, such as AQUAL or QUMOND, and also for the extreme limiting case of no EFE, e.g. Pittordis & Sutherland (2018).
Figure 7: Left (a): Distribution of \(\tilde{v}\) values for the wide binaries in the left panel of Fig. 4, for the \(s<0.01\) pc Newtonian region, red histogram. The blue histogram gives Newtonian predictions for this distribution from Banik & Zhao (2022). Right (b): Distribution of \(\tilde{v}\) values for the wide binaries in the left panel of Fig. 4, for the \(s>0.01\) pc low acceleration region, red histogram. The blue histogram gives predictions for the distribution of \(\tilde{v}\) values under an AQUAL MOND model and the green one the corresponding predictions for a MOND without external field effect model, both from Banik & Zhao (2022).
Looking at the left panel of Fig. 7 we see that the obtained distribution of \(\tilde{v}\) values is broadly consistent with the Newtonian expectations: the peak appears well below \(\tilde{v}=1\), and no systems appear at values above \(\tilde{v}=1.5\). The differences between the two curves are due to the small numbers of binaries available after the cuts imposed (Poisson noise confidence intervals on the red curve, which have not been added to avoid cluttering the figures, make both histograms consistent), and to any offset between the assumed ellipticity distribution of the Newtonian models constructed by Banik & Zhao (2022) and the actual present-day \(e\) distribution. The same can be said of the red and blue histograms in the right panel of Fig. 7, which compare our results for the low acceleration \(s>0.01\) pc region and the AQUAL prediction of this quantity, also from Banik & Zhao (2022). Compared to the Newtonian \(s<0.01\) pc distribution, the \(s>0.01\) pc one shows a slight broadening, and also a slight displacement towards larger values of the \(\tilde{v}\) parameter, with a maximum still well below \(\tilde{v}=1\). Thus, our results are strongly suggestive of an AQUAL phenomenology in the low acceleration \(s>0.01\) pc region.
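The Poisson confidence intervals mentioned above are simply \(\sigma=\sqrt{N}\) per histogram bin, the Gaussian approximation to Poisson counting noise, adequate for the occupancies involved. A trivial sketch, with illustrative bin counts:

```python
import math

def poisson_error_bars(counts):
    """1-sigma Poisson confidence interval per histogram bin,
    in the Gaussian approximation sigma = sqrt(N)."""
    return [math.sqrt(n) for n in counts]

counts = [4, 16, 25, 9]            # illustrative bin occupancies, not the actual data
errs = poisson_error_bars(counts)  # [2.0, 4.0, 5.0, 3.0]
```

With bins holding only tens of binaries, these \(\sqrt{N}\) intervals are a sizeable fraction of the counts themselves, which is why the red and blue histograms remain statistically consistent.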
The green histogram in this last figure, also from Banik & Zhao (2022), shows the expectations of a MOND without external field effect model, corresponding to the deep MOND limit of an isolated system, e.g. the models known to accurately reproduce the rotation curves of isolated spiral galaxies, e.g. Lelli et al. (2017). With respect to the blue histograms, this last prediction shows a very distinct triangular shape, significantly shifted to the right, with a maximum appearing at values of \(\tilde{v}\) larger than 1, a region where the observed distribution presents almost no points. This last distribution is clearly inconsistent, both qualitatively and quantitatively with the observed one.
The red histograms shown in Fig. 7 can now be compared to those presented recently by Pittordis & Sutherland (2023), their Fig. 9. We see that results from these authors are quite similar to mine in the \(s>0.01\) pc right panel of Fig. 7, in the \(\tilde{v}<2\) region, for all the large samples these authors present, of about 2000 binaries each, all in various sections of the low acceleration \(s>0.01\) pc regime. However, their distributions extend to values as high as \(\tilde{v}=7\). This feature shows the extensive presence of kinematic contaminants in the large sample of Pittordis & Sutherland (2023), and alerts to their presence also within the sensitive \(\tilde{v}<2\) region which has to be used to discriminate between different gravity models. These authors are aware of this problem, and attempt to model their results through the inclusion of a fraction of hidden tertiaries and flyby contaminants, rather than adopting the strategy followed here of attempting the removal of these contaminants, which necessarily results in a much smaller sample. Thus, a statistical approach is taken where the fractions of hidden tertiaries and of flybys are allowed to vary so that optimal values for these parameters are obtained. The final results show poor chi-squared values in the hundreds, for both Newtonian and MOND models, with Newtonian models generally having slightly better fits.
Hence, the above authors obtain results which favour standard Newtonian gravity over both MOND variants discussed above. However, having missed entirely the Newtonian region, as the smallest \(s\) value they consider is of \(s=0.025\)pc, they have no firm Newtonian point against which to calibrate or test the robustness of the scheme used. It is quite possible that had they included a Newtonian calibration point, a regime transition at \(s=0.01\)pc would have been apparent. The inclusion of a firm calibration Newtonian region in Pittordis & Sutherland (2023) would also allow a clearer separation of the degeneracies inherent to a problem where the fraction of hidden tertiaries, the fraction of unbound flybys and any changes in the structure of gravity appear intermixed.
Returning to the right panel of Fig. 7, the clear inconsistency of the MOND without external field effect green histogram with the data presented here shows that the analogy with a flat rotation curve galactic region, suggested by the low acceleration behaviour in Fig. 4, is limited. The flat behaviour shown by the rms values of the \(\Delta V\) populations in that figure is hence the result of the details of the distributions. While the deviation from Newtonian behaviour is evident, the precise manner of this deviation has to be deduced from more than just a single moment of the distribution obtained. The \(\tilde{v}\) distributions shown in Fig. 7 make it obvious that a MOND variant including an external field effect is preferred by the data.
That the hidden tertiary cleaning strategy implemented here is highly successful, albeit leaving only a very small final sample, can now be confirmed through three independent checks. First, the Newtonian prediction of JT10 is very accurately traced in the \(s<0.01\) pc region of the left panel of Fig. 4, in spite of the fact that the rms statistic being used in this comparison is highly sensitive to outliers. Second, the mass-velocity power-law scaling resulting for the above mentioned sample is \(0.46\pm 0.13\), in excellent agreement with Newtonian expectations, despite the small dynamical range in mass available. Lastly, the \(\tilde{v}\) distributions for all regimes of the final binaries used show none of the extended tail at values higher than 2.6, out to 6 or even 7, which is characteristic of the results of the Pittordis & Sutherland group, who perform no cleaning of hidden tertiaries or flyby contaminants. In the present sample only two binaries appear with \(\tilde{v}\) values above 2, and none above 2.6. If any meaningful population of hidden tertiaries remained, none of the above three conditions would be met.
Figure 8: The figure shows the same data appearing in the left panel of Fig. 4, but showing this time the mean values at each bin, rather than the rms values of Fig. 4. For comparison, the straight line is the same as in Fig. 4.
As explained in section 3, it is only through rms 1D values that the results can be compared against an existing Newtonian prediction including projection effects, a distribution of initial ellipticities, and crucially, 10 Gyr of evolution self-consistently included in the work of JT10, who present final 1D rms results. For this reason no 2D velocities are calculated. Note that no other wide binary analysis in the context of tests of gravity presently found in the literature has included any consideration of the effects of dynamical evolution in the galactic environment, tidal fields and interaction with field stars, a non-negligible effect on the loosely-bound wide binaries being studied.
The use of the 1D results of JT10 also allows a careful comparison to the high acceleration Newtonian region, as shown in the left panel of Fig. 4. This acts as a consistency check on the study as a whole. The highly accurate agreement between present results for \(s<0.01\) pc and the predictions of JT10 validates the procedure used here and shows that the final sample is highly free from kinematic contaminants of any type. This comparison is only possible in the variables currently shown. Still, the use of the rms statistic is not entirely intuitive and led to the mistaken interpretation, in previous work by my group, of the flattening in this presentation of the data as evidence for a MOND without external field effect model. The results shown in the right panel of Fig. 7 show this last to have been a mistaken interpretation.
In order to better explore the character of the non-Newtonian \(s>0.01\) pc region, I now repeat the left panel of Fig. 4, but showing not the rms values but the means at each of the same binning intervals of the previous figure. This is given in Fig. 8, where the line shows again the Newtonian expectations of JT10, for comparison. As it is the mean rather than the rms values at each bin which are being plotted, the Newtonian region now falls slightly below the rms Newtonian expectations, but reassuringly, for \(s<0.01\) pc, the scaling still follows the \(\Delta V\propto s^{-1/2}\) of Newtonian expectations. On going to the low acceleration \(s>0.01\) pc region, just as in Fig. 4, a clear departure from Newtonian behaviour becomes apparent, although this time the mean \(\Delta V\) values no longer remain flat, but trace the Newtonian scaling law from above, with a fixed fractional boost over the extrapolation of the Newtonian region.
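The qualitative difference between the rms presentation of Fig. 4 and the mean presentation of Fig. 8 comes down to the two statistics weighing the \(\Delta V\) distribution differently: since \({\rm rms}^{2}={\rm mean}^{2}+{\rm variance}\), a broadening of the distribution raises the rms even at fixed mean. A minimal sketch of the two per-bin statistics (synthetic numbers, not the actual sample):

```python
import math

def bin_stats(values):
    """Mean and rms of the |dV| values falling in one separation bin."""
    n = len(values)
    mean = sum(values) / n
    rms = math.sqrt(sum(v * v for v in values) / n)
    return mean, rms

# Illustrative bin: the spread boosts the rms above the mean,
# since rms**2 = mean**2 + variance.
sample = [0.2, 0.4, 0.6, 1.2]
mean, rms = bin_stats(sample)   # mean = 0.6, rms ~ 0.707
assert rms >= mean
```

This is why a flat run of rms values and a mean that parallels the Newtonian scaling from above can describe the very same binned data.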
This last figure can be compared to the recent results of Chae (2023), which appeared while the present paper was being refereed. This author takes an approach similar to that of Pittordis & Sutherland (2023) discussed above, in that a large wide binary sample is maintained by not attempting a cleaning of hidden tertiaries; but in contrast to Pittordis & Sutherland (2023), Chae (2023) does efficiently remove flyby contamination through the use of an isolation criterion for the binaries being considered, taken from El-Badry et al. (2021). This results in a much cleaner sample, evident from the lack of a high-\(\tilde{v}\) tail, as is also the case in my present study. The treatment of hidden tertiaries in the above two studies is similar, in that the hidden tertiary fraction is treated as a free parameter to be determined statistically. However, the wide binary sample considered by Chae (2023) does extend into the Newtonian regime, reaching down to the same \(s=0.001\) pc lower value as I use here. This allows a crucial calibration in the Newtonian region, where an excellent agreement between the data and Newtonian expectations permits an accurate calibration of the hidden tertiary fraction, as well as a consistency check of the whole procedure.
Chae (2023) reports wide binary data accurately tracing Newtonian expectations out to \(s=0.01\)pc, after which a clear and highly significant departure appears, with the data becoming consistent with an AQUAL MOND description of the data, precisely as shown in my present study. Chae (2023) then presents a comparison to the first version of my present paper as it appeared on the arXiv on submission to the journal, by plotting in his Fig. 31 his data in a \(\Delta V\) vs. \(s\) scatter plot, with binned values showing mean values at each bin. This last figure is qualitatively and quantitatively consistent with the result shown in Fig. 8 here. It is very reassuring of the underlying robustness of the results obtained here that a completely independent and highly complementary (in terms of the treatment of hidden tertiaries) study should reach conclusions consistent with those presented here.
We note finally that Chae (2023) intended his Fig. 31 as a comparison to Fig. 4 in the first draft of the present study, but failed to note that in Fig. 4 of my study, for reasons already detailed, what is being plotted is the rms value at each bin and not the mean. For this reason Chae (2023) mentions an accurate agreement with the results of the first draft of the present study in terms of an accurate tracing of the Newtonian expectations out to \(s=0.01\) pc, followed by a clear and highly significant departure upwards in velocity for \(s>0.01\) pc. However, Chae (2023) also notes a qualitative difference with my earlier results, as his mean binned \(\Delta V\) values do not remain flat but rather show a parallel tracing of the Newtonian scaling, a small factor above it. This is indeed the exact behaviour shown in Fig. 8; the flat binned behaviour is found here when plotting the rms values, not the means. Hence, agreement with Chae (2023) is complete.
## 6 Discussion
The results shown in the left panels of Figs. 4 and 5 for the \(<D/pc>=90.25\) sample clearly display a gravitational anomaly in the \(s>0.01\) pc region which can not be attributed merely to the presence of noise due to random uncertainties in the _Gaia_ data used; the mean relative velocity signal-to-noise ratio of this sample is \(<S/N>_{\Delta V}=15.7\). If random noise were determining the signal recovered, the clear Newtonian signal seen immediately before \(s=0.01\) pc would be lost, and would not show the accurate tracing of the JT10 prediction right up to the appearance of the flat \(\Delta V\) regime. Indeed, in the much noisier \(<S/N>_{\Delta V}=7.5\), \(<D/pc>=147.6\) sample, we see a gradual departure upwards of the recovered
\((\Delta V)_{rms}\) values which never closely trace the JT10 prediction.
Therefore, understanding the \(s>0.01\) pc scalings inferred here within a Newtonian framework requires the assumption of carefully crafted kinematic contaminants, so as to accurately reproduce the close accordance with the scalings obtained. Pittordis & Sutherland (2023), analysing only the low acceleration \(s>0.01\) pc region, have shown that the observed \(\Delta V\) vs. \(s\) signal can be reproduced by assuming a suitable combination of unbound flybys and hidden tertiaries as kinematic contaminants of local wide binary samples. The above authors show that an optimal fit to the non-Newtonian signal is achieved by assuming a close to 20% flyby contamination (unbound flybys as a fraction of true bound binaries) in the largest separation intervals, see their Tables 4-6. From the left panel of Fig. 4 we see that this would have to apply to binaries with internal separations below 0.06 pc, at flyby encounter velocities of close to 0.4 km s\({}^{-1}\).
Given the mean interstellar separation of 1 pc in the Solar Neighbourhood, and stars with a Gaussian velocity distribution with a \(\sigma_{V}\) close to 40 km s\({}^{-1}\), random encounter relative velocities will also show a Gaussian \(\Delta V\) distribution, with \(\sigma_{\Delta V}=\sqrt{2}\times 40\approx 60\) km s\({}^{-1}\). Hence, both in terms of spatial separations and relative velocities, the required flybys as kinematic contaminants are inconsistent with random encounters in the field. One has to assume a strong degree of correlation invoking initial conditions, essentially a common origin for the stellar pairs making up the unbound flyby population. At current separations below 0.06 pc and current relative velocities of \(\approx 0.4\) km s\({}^{-1}\), separation doubling times will be below \(1.5\times 10^{5}\) yr, a very small fraction (\(1.5\times 10^{-5}\)) of the typical 10 Gyr ages of the small mass Solar Neighbourhood stars in question. It is therefore far from obvious that a fully self-consistent Solar Neighbourhood dynamical model can be constructed where close to 10% of the wide binaries presented are unbound flybys, at the observed separations of \(s<0.06\) pc and relative velocities of \(\approx 0.4\) km s\({}^{-1}\).
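The separation-doubling timescale quoted above follows from simple kinematics, \(t=s/\Delta V\). A quick numerical check of the figures used in the text (the unit-conversion constants are my own, to the usual precision):

```python
PC_KM = 3.086e13   # kilometres per parsec (assumed conversion constant)
YR_S = 3.156e7     # seconds per year (assumed conversion constant)

def doubling_time_yr(s_pc, dV_kms):
    """Time for an unbound pair at separation s (pc) to double that
    separation at a constant relative speed dV (km/s), in years."""
    return s_pc * PC_KM / dV_kms / YR_S

t = doubling_time_yr(0.06, 0.4)   # ~1.5e5 yr, as quoted in the text
frac = t / 1.0e10                 # ~1.5e-5 of a typical 10 Gyr stellar age
```

The result, about \(1.5\times 10^{5}\) yr, is indeed the tiny fraction of a 10 Gyr stellar age invoked in the argument above.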
Regarding hidden tertiaries as kinematic contaminants of wide binary samples (as originally highlighted theoretically by Banik & Zhao 2018 and later explicitly shown by Clarke 2020), Pittordis & Sutherland (2023) have shown that a very high 100% hidden tertiary percentage is required: for every bound binary formed from two single stars, one bound binary containing a hidden tertiary is needed (in addition to the amount of flybys mentioned above) to reproduce the high velocity tail of the observed \(\Delta V\) binary distribution.
Assuming such a high fraction of hidden tertiaries in our current sample is inconsistent with the estimates of Penoyre (2020) and Belokurov et al. (2020), who estimate a hidden tertiary contamination (including even hot or outer Jupiters) below 5% for a \(RUWE<1.4\) cut, \(D<1\)kpc, using the CMD cleaning which they propose and which we follow, in their case for the 22 month _Gaia_ DR2 sample. Invoking a 50% hidden tertiary contamination in our much cleaner 34 month _Gaia_ DR3 (\(RUWE<1.2\), \(<RUWE>=1.01\), \(D<125\) pc, \(<D>=90.3\) pc) sample, further restricted by the imposed internal _Gaia_ binary probability filter of \(B_{P}<0.2\) resulting in a final \(<B_{P}>=0.07\) for all the stellar sources included, from which the fastest 3% \(\Delta V\) values across all bins have been removed, is highly unlikely.
If the phenomenology of the \(s>0.01\) pc region is attributed to hidden tertiaries, the excellent agreement with Newtonian predictions even immediately below \(s=0.01\) pc would require the abrupt termination of this hypothetical contamination, a scenario which is not supported by the available empirical evidence, which shows no relevant trend of the frequency of tertiary systems with wide binary separation in the regimes probed to date, e.g. Tokovinin et al. (2002), Tokovinin et al. (2010).
Finally, even if a suitable combination of kinematic contaminants can be put together to reproduce the \(\Delta V\) vs. \(s\) scalings seen in the left panel of Fig. 4, any such arrangement would also have to satisfy the \(\Delta V\) vs. \(m_{B}\) scalings present in the data, as shown in Figs. 5 and 6. Thus, it is possible that we could in fact be detecting the low acceleration validity limit of standard gravity, at a regime where the inclusion of dark matter would appear contrived. Indeed, such is the conclusion independently reached by Chae (2023), whose complementary wide binary results closely match what has been presented here.
Following the pioneering approach of MOND (Milgrom 1983), within a classical framework, and Bekenstein (2004) in terms of covariant extensions to GR, a multitude of modified gravity and modified inertia proposals now exist which do not require proposing a hypothetical dark matter component to explain galactic rotation curves, e.g. Moffat & Toth (2008), Zhao & Famaey (2010), Capozziello & De Laurentis (2011), Verlinde (2016), Barrientos & Mendoza (2018), Hernandez et al. (2019b) or Skordis & Zlosnik (2021), to mention but a few. Unfortunately, the available predictions of most of these, particularly covariant extensions of GR, are limited to spherically symmetric and static metrics. Calculating modified gravity/modified inertia orbits of binary stars within the global potential of the Milky Way is something which has only been done for a very limited set of specific MOND variants (e.g. the particular QUMOND option explored by Banik & Zhao 2018, or within the superfluid dark matter proposal which yield much the same expectations as
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & \(D<125\) & & \(125<D<170\) & \\ & \(s<0.01\) & \(s>0.01\) & \(s<0.01\) & \(s>0.01\) \\ \hline \(N\) & 349 & 87 & 321 & 113 \\ \(\alpha\) & \(0.46\pm 0.13\) & \(0.26\pm 0.32\) & \(0.089\pm 0.21\) & \(0.086\pm 0.49\) \\ \(r\) & 0.87 & 0.50 & 0.21 & 0.12 \\ \(<m_{B}>\) & 1.56 & 1.56 & 1.59 & 1.59 \\ \(\%GM_{2}\) & 40 & 41 & 47 & 46 \\ \(\%GM_{1}\) & 61 & 63 & 67 & 64 \\ \hline \end{tabular} The table gives the number of binaries in each of the four samples used for the fits shown in Figs. 5 and 6, together with the fitted total \(<\Delta V>\) vs. binary mass power law index \(\alpha\), the \(1\sigma\) confidence interval on this quantity, the Pearson statistical correlation between \(<\Delta V>\) and total binned binary mass, \(r\), the average total binary mass in \(M_{\odot}\), and the percentages of binaries having both, \(\%GM_{2}\), and at least one, \(\%GM_{1}\), stellar masses from internally supplied _Gaia_ data.
\end{table}
Table 2: Parameters for the mass-velocity power-law fits.
QUMOND, as shown by Berezhiani & Khoury 2015). Hence, the results presented here can not presently be compared to the majority of existing modified gravity/modified inertia proposals, with some exceptions as presented in Pittordis & Sutherland (2018) and in section 5 here.
Note finally that the many orders of magnitude in mass, velocity and scale which separate \(a<a_{0}\) local wide binaries from the \(a<a_{0}\) galactic regime make these interesting binaries a crucial subject for further study, one which could yield surprises in terms of unknown kinematic contaminants yet to be considered under a Newtonian framework, or important constraints on modified gravity scenarios, helping to eventually find a complete extended theory.
## Acknowledgements
I acknowledge useful conversations with Charalambos Pittordis, Kyu-Hyun Chae, Indranil Banik, Hong-Sheng Zhao and Ricardo Cortes on all aspects of this work. I also acknowledge the critical input of an anonymous referee as important towards clearing up a mistaken interpretation of my results as presented in the original version of this study. All data retrieval, processing, statistical analysis and presentation was performed using software developed jointly with Stephen Cookson. Xavier Hernandez acknowledges financial assistance from UNAM DGAPA grant IN106220 and CONACYT. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data Availability
All data used in this work will be shared on reasonable request to the author.
|
2306.11339 | Masking Augmentation for Supervised Learning | Pre-training using random masking has emerged as a novel trend in training
techniques. However, supervised learning faces a challenge in adopting masking
augmentations, primarily due to unstable training. In this paper, we propose a
novel way to involve masking augmentations dubbed Masked Sub-model (MaskSub).
MaskSub consists of the main-model and sub-model; while the former enjoys
conventional training recipes, the latter leverages the benefit of strong
masking augmentations in training. MaskSub addresses the challenge by
mitigating adverse effects through a relaxed loss function similar to a
self-distillation loss. Our analysis shows that MaskSub improves performance,
with the training loss converging even faster than regular training, which
suggests our method facilitates training. We further validate MaskSub across
diverse training recipes and models, including DeiT-III, MAE fine-tuning, CLIP
fine-tuning, ResNet, and Swin Transformer. Our results show that MaskSub
consistently provides significant performance gains across all the cases.
MaskSub provides a practical and effective solution for introducing additional
regularization under various training recipes. Code available at
https://github.com/naver-ai/augsub | Byeongho Heo, Taekyung Kim, Sangdoo Yun, Dongyoon Han | 2023-06-20T07:17:38Z | http://arxiv.org/abs/2306.11339v2 | # Augmenting Sub-model to Improve Main Model
###### Abstract
Image classification has improved with the development of training techniques. However, these techniques often require careful parameter tuning to balance the strength of regularization, limiting their potential benefits. In this paper, we propose a novel way to use regularization called Augmenting Sub-model (AugSub). AugSub consists of two models: the main model and the sub-model. While the main model employs conventional training recipes, the sub-model leverages the benefit of additional regularization. AugSub achieves this by mitigating adverse effects through a relaxed loss function similar to a self-distillation loss. We demonstrate the effectiveness of AugSub with three drop techniques: dropout, drop-path, and random masking. Our analysis shows that every AugSub variant improves performance, with the training loss converging even faster than in regular training. Among the three, AugMask is identified as the most practical method due to its performance and cost efficiency. We further validate AugMask across diverse training recipes, including DeiT-III, ResNet, MAE fine-tuning, and Swin Transformer. The results show that AugMask consistently provides significant performance gains. AugSub provides a practical and effective solution for introducing additional regularization under various training recipes. Code is available at [https://github.com/naver-ai/augsub](https://github.com/naver-ai/augsub).
## 1 Introduction
As deep neural networks scale up, addressing issues such as overfitting and improving generalization performance becomes increasingly important. To this end, various data augmentation and regularization techniques have been developed. Starting from traditional techniques like weight decay and dropout [1], modern approaches such as image-mixing augmentations (e.g., Mixup [2] and CutMix [3]), mixtures of data augmentations (e.g., RandAugment [4] and AutoAugment [5]), and drop-based techniques (e.g., Drop-path [6; 7] and RandomErase [8]) have been widely used to improve performance.
These regularizations and augmentations usually improve generalization performance, but they also make training harder and hinder deep models from converging to a low training loss. In other words, training techniques can cause underfitting on the training data and degrade performance. Thus, practitioners and researchers have empirically found appropriate combinations and intensities of these techniques [9; 10; 11], a concept referred to as "training recipes". The importance of such training recipes becomes even more significant in the Vision Transformer (ViT) [12] architecture. The recipes of DeiT [10] and DeiT-III [11] are considered de facto standards for training ViTs.
A major role of training recipes [9; 11] is to find optimal hyperparameters for training techniques, so modifying the recipes may lead to unstable training or degraded performance. This makes it challenging for users to increase the intensity of regularization or introduce a new training technique.
Our research goal is to achieve further performance improvements with additional regularization while maintaining the stability of existing training recipes. To this end, we introduce a training framework using a "sub-model" alongside the main model. The main model uses standard training recipes [9; 11]. On the other hand, the sub-model utilizes additional regularization. We name our method Augmenting Sub-model (AugSub). In the example of Figure 1, the desired additional
regularization is random masking (as done in MAE [13]). As in Figure 1 (b), applying the random masking to the main model may lead to degraded performance. In contrast, as in Figure 1 (c), AugSub utilizes the sub-model for random masking, and the sub-model receives a training signal from the main model, similar to self-distillation [14; 15; 16]. While the random masking technique amplifies the difficulty of the training process, this is counterbalanced by the self-distillation loss, since the outputs of the main model are a relaxed and easier objective than the ground-truth label.
In summary, AugSub applies additional regularization separately from the main model, utilizing a relaxed loss form. As a result, AugSub enables additional regularization without disrupting the convergence of the original training loss. We construct AugSub utilizing three in-network drop-based techniques: dropout [1], drop-path [6; 7], and input masking [13; 17]. Corresponding to each regularization strategy, we denote these variants AugDrop, AugPath, and AugMask.
We extensively validate the performance of the three AugSub methods. First, we analyze AugSub using 100-epoch training on ImageNet [18]. Without AugSub, the loss convergence speed and the corresponding accuracy are significantly degraded when additional regularization is applied. Conversely, AugSub successfully mitigates the potentially harmful effects of additional regularization, leading to a training process that is even more efficient than the standard procedure. Among the three variants of AugSub, AugMask notably exhibits a significant enhancement in performance. Thus, we expand the experiments to regular training on ImageNet [18], focusing on AugMask. AugMask is applied to various supervised learning cases, including DeiT-III [11], ResNet-RSB [9], MAE finetuning [13], and Swin Transformer [19]. AugMask demonstrates remarkable performance improvement in all benchmarks. We argue that AugMask can be regarded as a significant advancement as a novel way to utilize regularization for visual recognition.
## 2 Related Work
**Training recipe** has been considered an important ingredient in building a high-performance network. He et al. [20] demonstrate that the training recipe significantly influences network performance. RSB [9] is a representative and high-performance recipe for ResNet. With the emergence of ViT [12], the training recipe for ViT has gained the attention of the field. DeiT [10] shows that ViT can be trained to strong performance with only ImageNet-1k [18]. DeiT-III [11] is an improved version of DeiT, which applies findings from RSB to DeiT instead of distillation from a CNN teacher. It is challenging to introduce stronger or additional regularization into existing training recipes. To address this issue, we propose the AugSub approach, which utilizes a sub-model.
Co-sub [21] introduces a similar concept to ours, utilizing sub-models. However, the objective of the sub-model differs significantly: while AugSub aims to stabilize training through additional regularization, Co-sub aims to train the sub-models in a collaborative manner [22]. We regard AugSub as a more generalized framework since Co-sub only considers the drop-path method to employ sub-models, whereas AugSub can cover a variety of drop-based techniques.
**Self-distillation** utilizes supervision from the network itself instead of using a teacher. ONE [14] uses a multi-branch ensemble to build a superior output for the network and distills the ensemble outputs as
Figure 1: **Overview of our Augmenting Sub-model (AugSub).** (a) original supervised training; (b) a conventional drop-based technique (random masking) applied to the main model, which degrades performance; (c) our proposed AugSub training, which separates the drop-based technique from the main model using the sub-model and employs a relaxed loss based on self-distillation. It achieves significant improvements over the state-of-the-art ViT training recipe [11].
supervision for each branch. Some studies [15; 16] utilize the early-exit network for self-distillation. Those studies improve performance by using a full network as a teacher and an early-exit network as a student. MaskedKD [23] utilizes masking to reduce computation for knowledge distillation. From a self-distillation perspective, AugSub presents a new insight to construct the student model (i.e., sub-model) from the teacher model (i.e., main model) utilizing drop-based techniques.
**Self-supervised learning** shares components with AugSub. Previous works on contrastive learning incorporate two models with a self-distillation loss [24; 25]. Wang et al. [26] introduce a double tower with weak and strong augmentation for each model. In masked image models, supervised MAE [27] introduces additional supervised learning tasks to the MAE framework and accelerates MAE. Those studies partially share the fundamental concept with AugSub and motivated our work. However, the proposed AugSub is significantly different from self-supervised learning approaches.
## 3 Method
We propose our method Augmenting Sub-model (AugSub) with formulation and pseudo-code in Section 3.1. Next, we introduce three variants of AugSub: AugDrop, AugPath, and AugMask in Section 3.2. Section 3.3 presents analyses of AugSub with loss convergence, accuracy, and gradient.
### Augmenting Sub-model (AugSub)
The cross-entropy loss with the softmax \(\sigma(\mathbf{z})_{i}=e^{z_{i}}/\sum_{j}e^{z_{j}}\) for images \(\mathbf{x}_{i}\) and one-hot labels \(\mathbf{y}_{i}\) \((i\in\{1,2,...,N\})\) in a mini-batch of size \(N\) is
\[-\frac{1}{N}\sum_{i}^{N}\mathbf{y}_{i}\mathrm{log}\left(\sigma(f_{\theta}( \mathbf{x}_{i}|p_{\mathrm{drop}}=0))\right), \tag{1}\]
where \(f_{\theta}\) represents the network used for training and \(p_{\mathrm{drop}}\) denotes the drop probability of the network. Since the drop probability can be easily changed, we denote it as a condition of the network function. Based on the value of \(p_{\mathrm{drop}}\), certain network features are dropped with probability \(p_{\mathrm{drop}}\). Note that we set the default drop probability to zero for convenience. Then, the loss for drop-based regularization with probability \(p\in[0,1]\) is
\[-\frac{1}{N}\sum_{i}^{N}\mathbf{y}_{i}\mathrm{log}\left(\sigma(f_{\theta}( \mathbf{x}_{i}|p_{\mathrm{drop}}=p))\right). \tag{2}\]
Typically, a network with drop-based regularization is trained with equation 2. However, we conjecture that training using equation 2 with a high probability \(p\) may interfere with loss convergence and induce instability in training. To ensure training stability, we utilize the model output of equation 1, \(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=0)\), as guidance for the drop-based regularization \(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=p)\) instead of \(\mathbf{y}_{i}\). In other words, equation 2 is changed to
\[-\frac{1}{N}\sum_{i}^{N}\sigma(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=0) )\log\left(\sigma(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=p))\right). \tag{3}\]
In our Augmenting Sub-model (AugSub), the average of equation 1 and equation 3 is used as a loss function for the network. We designate \(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=0)\) as the main model and \(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=p)\) as the sub-model. This naming convention is employed because a network with dropped features appears to be a subset of the entire network. In equation 3, the main model output \(f_{\theta}(\mathbf{x}_{i}|p_{\mathrm{drop}}=0)\) is used with stop-gradient. Thus, the sub-model is trained to mimic the main model, but the gradient for the main model is independent of the sub-model. This can be interpreted as self-distillation, where knowledge is transferred from the main model to the sub-model. Also, AugSub can easily be expanded to binary cross-entropy loss by replacing the softmax function with the sigmoid function.
Algorithm 1 describes PyTorch-style pseudo-code for training with AugSub. The drop probability is passed as a network input. The gradients are calculated on the average of the losses from the main model and the sub-model. Note that AugSub does not use additional data augmentation, optimizer steps, or network parameters for the sub-model. We will demonstrate the significant performance benefits of this simple training technique.
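Algorithm 1 itself is not reproduced in this extraction; the following is a minimal NumPy sketch of the objective described above (equations 1-3), not the authors' released code. `model(x, p_drop=...)` is a schematic stand-in for the shared-weight network, and the stop-gradient on the main-model output is represented here by simply treating the soft target as a constant:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(targets, logits):
    # batch mean of -sum_c targets_c * log softmax(logits)_c, the shared form of eq. (1)-(3)
    log_p = np.log(softmax(logits) + 1e-12)
    return float(-np.mean(np.sum(targets * log_p, axis=1)))

def augsub_loss(model, x, y_onehot, p=0.5):
    """AugSub objective: average of the supervised loss (eq. 1) and the
    relaxed self-distillation loss of the sub-model (eq. 3)."""
    logits_main = model(x, p_drop=0.0)   # main model: no drop
    logits_sub = model(x, p_drop=p)      # sub-model: drop probability p
    loss_main = cross_entropy(y_onehot, logits_main)    # eq. (1)
    # stop-gradient: the soft target below is a constant for the sub-model loss
    soft_targets = softmax(logits_main)
    loss_sub = cross_entropy(soft_targets, logits_sub)  # eq. (3)
    return 0.5 * (loss_main + loss_sub)
```

In an actual PyTorch implementation, the soft target would be wrapped in `detach()` and both loss terms would backpropagate through the shared parameters.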
Since the sub-model mimics the main model, it automatically controls the difficulty. If the main model produces output closely aligned with the ground-truth label, the sub-model loss aims to attain an accurate classification output under drop-based regularization. Conversely, if the main model fails to converge, the sub-model loss becomes considerably easier than matching the ground-truth label. In summary, AugSub prioritizes the learning process, ensuring that drop-based regularization is exclusively applied to images that produce successful output in the standard setting. We assert that this prioritized loss mechanism of AugSub enables the network to preserve its convergence speed and learning stability while maintaining the benefits of drop-based regularization.
### Drop-based Sub-model Regularizations
We select three drop-based techniques for AugSub: dropout [1], drop-path [6], and random masking [13]. All three drop certain intermediate features of the network. Drop-based techniques can easily adjust their strength by controlling the drop probability and are widely used as essential regularizations in training recipes [9; 10; 11; 19]. In this section, we explain each drop-based technique and our implementation.
**AugDrop.** Dropout [1] is a fundamental activation drop method. It drops feature elements with a fixed probability. Since dropout is independent of the feature structure, every element of the feature has an independent drop probability \(p\). Although dropout is not preferred in recent training recipes [10; 11], it is effective in the sub-model framework. For AugDrop, dropout is applied to every self-attention and MLP block, following the well-known implementation [28].
**AugPath.** Drop-path [6; 7] randomly drops the entire feature of a network block with probability \(p\). When a layer is dropped, the signal proceeds to the next layer through the residual path only, acting as if the dropped layer does not exist for the given input. Drop-path is commonly used to adjust regularization strength [11]. For AugPath, we keep the drop-path setting of the training recipe in the main model and increase the drop-path probability by a fixed amount, +0.1 or +0.2, in the sub-model.
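As a point of reference for how drop-path acts at the block level, here is a minimal NumPy sketch (an illustration under common stochastic-depth conventions, not the recipe's actual code); `block_fn` stands in for the block's residual branch, and the drop decision is drawn once per call rather than per sample for brevity:

```python
import numpy as np

def drop_path(x, block_fn, p, rng, training=True):
    """Stochastic depth: with probability p, skip the residual branch so the
    block reduces to the identity; otherwise rescale to preserve expectation."""
    if not training or p == 0.0:
        return x + block_fn(x)
    if rng.random() < p:
        return x                             # dropped: residual path only
    return x + block_fn(x) / (1.0 - p)       # kept: inverse-probability rescale
```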
**AugMask.** Random masking is an augmentation technique for BERT-like self-supervised learning [17; 13]. It drops input patches at a fixed ratio and uses the remaining patches as the network input. Despite the success of random masking in self-supervised learning, it is often deemed too rigorous for supervised learning. Thus, we employ it in AugSub, which is designed to mitigate the intensity of regularization. We implement AugMask using MAE-style token dropping [13], allowing us to inherit the computational cost reduction from skipping network computation in the masked region.
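The MAE-style token drop can be sketched as follows (a hypothetical helper for illustration, not the released implementation): only the surviving tokens are returned, which is exactly what lets the sub-model skip computation on the masked patches:

```python
import numpy as np

def random_mask_tokens(tokens, mask_ratio, rng):
    """tokens: (batch, num_patches, dim). Keep a random (1 - mask_ratio)
    subset of patches per sample and return only those, MAE-style."""
    n, num_patches, _ = tokens.shape
    len_keep = int(num_patches * (1.0 - mask_ratio))
    noise = rng.random((n, num_patches))                # per-sample random scores
    ids_keep = np.argsort(noise, axis=1)[:, :len_keep]  # lowest scores survive
    batch_idx = np.arange(n)[:, None]
    return tokens[batch_idx, ids_keep]                  # (batch, len_keep, dim)
```

With a 50% mask ratio, the sub-model's forward pass processes half the tokens, which is the source of the reduced training cost reported for AugMask later.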
### Analysis
We analyze the impact of AugSub by training ViT-B for 100 epochs on ImageNet-1k [18]. Three variants of AugSub are applied: AugDrop, AugPath, and AugMask. Based on DeiT-III [11], we shorten training to 100 epochs and use an image resolution of \(224\times 224\). We compare three settings:
Figure 2: **Training metric analysis. We use 50%-random masking to compare three training settings: original (equation 1), masking (equation 2), and AugMask. We visualize (a) accuracy on the validation set; (b) train loss without drop (masking); (c) train loss with drop (masking).**
original, the drop-based technique, and AugSub. The original setting uses equation 1 as the training loss, and no drop-based technique is used. For the drop-based techniques, the network is trained with equation 2; note that this is the common practice for using drop-based techniques. We compare those two settings with AugSub. For analysis, we measure equation 1 ('train loss - original') and equation 2 ('train loss - drop') regardless of the loss used for training; this shows how each loss changes with the training setting.
Figure 2 shows the loss and accuracy trends for the 50% masking case. When random masking is applied to training (green), the loss with masking (Figure 2(c)) converges better than the original (blue). However, it significantly degrades the original train loss (Figure 2(b)), resulting in degraded accuracy (Figure 2(a)). Regularization beyond the balance point often harms the original train loss, which decreases accuracy. As shown in Figures 2(b) and 2(c), AugMask improves the convergence of both losses, which yields a significant improvement in accuracy.
Figure 3 explains the effect of the AugSub loss (equation 3) in terms of gradient magnitude for the 50% masking case. The gradient magnitude from the main model (AugMask-Main) is similar to that of the other training settings. In contrast, gradients from the sub-model (AugMask-Sub) have a small magnitude at the early stage. As learning progresses, the gradients from the sub-model increase. This shows that AugSub trains the network as intended: during the early stage of training, the gradients from the main model lead the training; as the main model progresses, the sub-model adaptively increases its gradient magnitude and produces a reasonable amount of gradients by the end of training.
Table 1 shows the results of the other drop-based regularizations with multiple drop ratios. 'Single model' represents training with the drop loss function of equation 2, where the additional drop-based technique is directly applied to the main model. 'Augmenting Sub-model (AugSub)' shows the performance when the drop-based regularization is applied through AugSub. As in Figure 2, 'Train loss (original)' refers to equation 1 and 'Train loss (drop)' to the loss with drop as in equation 2. The results demonstrate that AugSub improves training in all three drop-based regularization cases. In all cases, AugSub improves both the original and the drop loss convergence, which translates into superior accuracy compared to original training.
## 4 Experiments
We validate the effectiveness of Augmenting Sub-model (AugSub) by applying it to diverse training scenarios. We claim AugSub is an easy plug-in solution for various training recipes. Thus, we strictly follow the original training recipe, including optimizer parameters, learning rate and weight-decay, and regularization parameters. The only difference between baseline and AugSub is the drop-based technique for the sub-model. We consider AugMask our representative method among the three
| Method | Drop ratio | Acc. (single) | Train loss, orig. (single) | Train loss, drop (single) | Acc. (AugSub) | Train loss, orig. (AugSub) | Train loss, drop (AugSub) |
|---|---|---|---|---|---|---|---|
| Original | - | 77.4 | 6.42 | - | - | - | - |
| Dropout [1] | 0.1 | 76.1 | 6.60 | 6.87 | 79.1 | 5.88 | 6.32 |
| Dropout [1] | 0.2 | 74.1 | 6.95 | 7.34 | 79.1 | 5.82 | 6.57 |
| Dropout [1] | 0.3 | 71.6 | 8.34 | 7.79 | 79.1 | 5.84 | 6.90 |
| Drop-path [6] | 0.1 | 77.4 | 6.42 | 6.42 | 78.4 | 6.11 | 6.11 |
| Drop-path [6] | 0.2 | 74.9 | 6.74 | 7.19 | 78.7 | 5.91 | 6.48 |
| Drop-path [6] | 0.3 | 71.6 | 7.31 | 8.04 | 78.8 | 5.87 | 7.02 |
| Masking [13] | 25% | 76.3 | 6.60 | 6.96 | 79.0 | 5.89 | 6.38 |
| Masking [13] | 50% | 73.8 | 7.02 | 7.77 | **79.4** | 5.81 | 6.89 |
| Masking [13] | 75% | 67.3 | 8.08 | 9.27 | 79.2 | 5.84 | 8.15 |

Table 1: **Analysis on drop regularization with/without AugSub.** The table shows ViT-B performance after 100 epochs of training with drop regularization. 'Single' columns train with the drop loss of equation 2 directly; 'AugSub' columns apply the drop through the sub-model. Note that the training-loss scale \(10^{-3}\) is omitted for simplicity. The table presents the average values over three separate runs; the standard deviations are reported in Table A.1.
Figure 3: **Gradient magnitude.** The gradient norm is averaged over all parameters for each epoch.
variants of AugSub, owing to its cost efficiency and impressive performance. We mainly report results with AugMask at a fixed masking ratio of 50% across all experiments.
### Training from scratch
The training recipe for ViTs is a key factor enabling ViT to surpass CNNs; thus, the ViT training recipe is a significant and active research topic. We use a state-of-the-art ViT recipe, DeiT-III [11], as the baseline. Integrating additional techniques into DeiT-III is a significant challenge, and improvements over DeiT-III can be considered a new state of the art in ViT training.
We measure the performance of all three variants of AugSub in a 400-epoch training run with DeiT-III. We use Dropout (0.2), Drop-path (base + 0.1), and Masking (50%) for AugDrop, AugPath, and AugMask, respectively. Table 2 shows the results. All three variants of AugSub outperform the baseline. Among the three methods, AugMask shows the best performance. AugMask also has the lowest computational cost due to the MAE [13]-style computation reduction. Thus, we conclude that AugMask (50%) is the best choice in practice for other training recipes.
We expand the experiments with AugMask (50%). Various sizes of ViTs are trained with AugMask (50%) for 400 and 800 epochs. The results are shown in Table 3. AugMask significantly improves performance in all settings. In 400-epoch training, AugMask improves DeiT-III by substantial margins and even outperforms 800-epoch training for all models except ViT-S/16. AugMask also demonstrates superior performance in 800-epoch training. The impact of AugMask is impressively sustained even for larger models like ViT-L/16 and ViT-H/14. It is worth noting that ViT-H + AugMask (400 epochs) outperforms ViT-H/14 (800 epochs) with a significant +0.5pp gain at half the training length. Thus, AugMask is an effective way to improve ViT training.
### Finetuning
Following the emergence of self-supervised learning in the ImageNet [18] dataset, the significance of finetuning has notably increased. Generally, self-supervised learning, such as MAE [13] and BEiT [17; 29], does not use supervised labels at pretraining, which makes AugSub inapplicable for pretraining. However, most methods utilize supervised finetuning steps after pretraining to demonstrate their performance. Thus, we apply our AugMask (50%) to the finetuning stage. Note that we strictly follow original finetuning recipes and apply AugMask (50%) based on it. All finetuning is conducted using officially released pretrained weights.
We utilize three finetuning recipes: MAE [13], BEiT v2 [29], and Finetune CLIP [30]. MAE [13] is a representative method of masked image modeling (MIM). Since our random masking is motivated by MAE, AugMask is seamlessly integrated into the MAE finetuning process. BEiT v2 [29] uses pretrained CLIP for MIM and achieves superior performance compared to MAE. Following the masking strategy of BEiT v2, which uses a mask-token, we adjust AugMask to mask with the mask-token from the pretrained
| Architecture | Baseline | AugDrop | AugPath | AugMask |
|---|---|---|---|---|
| ViT-S/16 | 80.4 | 80.6 (+0.2) | 80.8 (+0.4) | **81.1 (+0.7)** |
| ViT-B/16 | 83.5 | 83.8 (+0.3) | 83.8 (+0.3) | **84.1 (+0.6)** |
| Computational costs | ×1.0 | ×2.0 | ×2.0 | ×1.5 |

Table 2: **Comparison of three variants of AugSub.** We use 400-epoch training with DeiT-III [11] to compare the performance of three drop-based regularizations integrated with AugSub: AugDrop, AugPath, and AugMask. The table presents the average values over three runs; the standard deviations are reported in Table A.2.
| Architecture | # params | FLOPs | DeiT-III (400 ep) | + AugMask (400 ep) | DeiT-III (800 ep) | + AugMask (800 ep) |
|---|---|---|---|---|---|---|
| ViT-S/16 | 22.0 M | 4.6 G | 80.4 | **81.1 (+0.7)** | 81.4 | **81.7 (+0.3)** |
| ViT-B/16 | 86.6 M | 17.5 G | 83.5 | **84.1 (+0.6)** | 83.8 | **84.2 (+0.4)** |
| ViT-L/16 | 304.4 M | 61.6 G | 84.5 | **85.2 (+0.7)** | 84.9 | **85.3 (+0.4)** |
| ViT-H/14 | 632.1 M | 167.4 G | 85.1 | **85.7 (+0.6)** | 85.2 | **85.7 (+0.5)** |

Table 3: **Training from scratch with ViT using DeiT-III.** AugMask (50%) is applied to the ViT training recipe [11] on ImageNet-1k. Note that the training parameters are identical to the original ones.
weights. Finetune CLIP [30] is a finetuning recipe for CLIP [31] pretrained weights. AugMask is applied to finetuning CLIP without change, in the same way as MAE finetuning.
Table 4 shows the finetuning results. AugMask improves the performance of all finetuning recipes, including large-scale ViT models. This is notable, as it shows substantial improvement with a short finetuning phase of fewer than 100 epochs compared to the pretraining period of 1600 epochs. In MAE finetuning, AugMask improves accuracy by 0.2-0.3pp across all model sizes. AugMask is also effective on BEiT v2, which utilizes Relative Positional Embeddings (RPE) [19; 17] and a mask-token masking strategy. Even in CLIP finetuning, AugMask achieves substantial improvement. For finetuning CLIP, we report performance at the last epoch rather than selecting the best performance from early epochs; the best performance of finetuning CLIP with AugMask is the same as that of the baseline.
### Hierarchical architecture
We extend the experiments to architectures with hierarchical spatial dimensions: ResNet [32] and Swin Transformer [19]. Unlike ViT, which maintains the spatial token length across all layers, these networks change the spatial size of features in intermediate layers, requiring a change in the masking strategy. We apply AugMask (50%) to ResNet and Swin Transformer. We simply fill masked regions with zero pixels for ResNets and replace masked regions with mask-tokens for Swin Transformer. This maintains the spatial structure and enables the spatial size reduction of hierarchical architectures. Following previous literature [33], we use random masking with a patch size of \(32\times 32\). Note that the computation reduction of AugMask is not applicable in this case due to the change in the masking strategy; thus, AugMask costs double the training budget. For ResNet, we use a high-performance training recipe [9] with 300 epochs. The recipe of the original paper [19] is used for the Swin Transformer training. We strictly follow the training recipes and apply AugMask without recipe tuning.
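The zero-fill variant for hierarchical backbones can be sketched as below (an assumed helper for illustration, not the released code; it presumes the patch size divides the image size evenly):

```python
import numpy as np

def zero_fill_mask(images, patch=32, mask_ratio=0.5, rng=None):
    """images: (batch, channels, H, W). Zero out a random mask_ratio of
    patch x patch regions per image while preserving the spatial layout."""
    rng = np.random.default_rng() if rng is None else rng
    n, _, h, w = images.shape
    gh, gw = h // patch, w // patch
    num_mask = int(gh * gw * mask_ratio)
    out = images.copy()
    for i in range(n):
        for f in rng.permutation(gh * gw)[:num_mask]:  # grid cells to mask
            r, q = divmod(int(f), gw)
            out[i, :, r * patch:(r + 1) * patch, q * patch:(q + 1) * patch] = 0.0
    return out
```

Because the full spatial grid is kept, the masked forward pass costs as much as the dense one, which matches the doubled training budget noted above.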
Results are shown in Table 5. AugMask achieves impressive performance gains even on ResNet and Swin Transformer. ResNet is a convolutional neural network using Batch Normalization [34] and is thus substantially different from ViT. Swin Transformer uses a different training recipe from the conventional ViT recipe [11], so the improvement on Swin shows that AugMask can be used with different from-scratch training recipes without tuning. In summary, the effectiveness of AugMask is not limited to ViT architectures and extends to hierarchical architectures.
| Pretraining | Finetuning recipe | Finetuning epochs | Architecture | Baseline | + AugMask |
|---|---|---|---|---|---|
| MAE [13] (1600 epochs) | MAE finetuning [13] (+ AugMask) | 100 | ViT-B/16 | 83.6 | **83.9 (+0.3)** |
| | | 50 | ViT-L/16 | 85.9 | **86.1 (+0.2)** |
| | | 50 | ViT-H/14 | 86.9 | **87.2 (+0.3)** |
| BEiT v2 [29] (1600 epochs) | BEiT v2 finetuning [29] (+ AugMask) | 100 | ViT-B/16 | 85.5 | **85.6 (+0.1)** |
| | | 50 | ViT-L/16 | 87.3 | **87.4 (+0.1)** |
| CLIP [31] | Finetuning CLIP [30] (+ AugMask) | 50 | ViT-B/16 | 84.8 | **85.2 (+0.4)** |
| | | 30 | ViT-L/14 | 87.5 | **87.8 (+0.3)** |

Table 4: **Impact on ImageNet-1k finetuning.** We report the finetuned performance of MAE [13], BEiT v2 [29], and CLIP finetuning [30] with AugMask (50%). Official pretrained weights are used.
| Training recipe | Epochs | Architecture | # params | FLOPs | Baseline | + AugMask |
|---|---|---|---|---|---|---|
| ResNet-RSB A2 [9] | 300 | ResNet50 [32] | 25.6 M | 4.1 G | 79.7 | **80.0 (+0.3)** |
| | 300 | ResNet101 [32] | 44.5 M | 7.9 G | 81.4 | **82.1 (+0.7)** |
| | 300 | ResNet152 [32] | 60.2 M | 11.6 G | 81.8 | **82.8 (+1.0)** |
| Swin Transformer [19] | 300 | Swin-T [19] | 28.3 M | 4.5 G | 81.3 | **81.4 (+0.1)** |
| | 300 | Swin-S [19] | 49.6 M | 8.7 G | 83.0 | **83.4 (+0.4)** |
| | 300 | Swin-B [19] | 87.9 M | 15.4 G | 83.5 | **83.9 (+0.4)** |

Table 5: **ImageNet-1k with hierarchical architectures.** We show the performance of ResNet [32] and Swin Transformer [19] trained from scratch with AugMask (50%).
### Training budget
We have shown that AugMask effectively improves the performance of various architectures. However, AugMask requires additional computation for the sub-model, which increases training costs. Thus, we analyze AugMask with respect to its training costs to determine whether it is an effective solution within a limited training budget. We compare AugMask against training recipes extended to \(\times 1.5\) epochs to align the training budgets. The training budget is quantified as the number of GPU days required when a single NVIDIA V100 GPU is used for training. Table 6 shows the results. In DeiT-III [11] training, AugMask outperforms the baseline with the \(\times 1.5\)-epoch setting; thus, for ViT, AugMask is a better way to spend the computation budget than longer training. MAE finetuning with \(\times 1.5\) epochs even degrades performance compared to the baseline. For ResNet, we compare 300-epoch AugMask with the 600-epoch training recipe RSB [9] A1. AugMask outperforms the 600-epoch training recipes on ResNet101 and ResNet152. Consequently, the results show that AugMask is an effective way to improve training, even accounting for the computational cost of the sub-model.
### Robustness
We evaluate the impact of AugMask on various robustness benchmarks. We use the models trained for 800 epochs from Table 3. Table 7 shows the results. ViT models trained with AugMask demonstrate superior performance on all robustness metrics. AugMask outperforms the baseline on natural adversarial examples [35] (IN-A), objects in different styles and textures (IN-R [36]), controls in rotation,
| Setting | Architecture | Training recipe | + AugMask | Epochs | GPU days | Accuracy |
|---|---|---|---|---|---|---|
| Training | ViT-S/16 | DeiT-III [11] | - | 600 | 22 | 80.7 |
| | | DeiT-III [11] | ✓ | 400 | 22 | **81.2 (+0.5)** |
| | | DeiT-III [11] | - | 1200 | 45 | 81.6 |
| | | DeiT-III [11] | ✓ | 800 | 44 | **81.7 (+0.1)** |
| | ViT-B/16 | DeiT-III [11] | - | 600 | 26 | 83.7 |
| | | DeiT-III [11] | ✓ | 400 | 25 | **84.1 (+0.4)** |
| | | DeiT-III [11] | - | 1200 | 52 | 83.8 |
| | | DeiT-III [11] | ✓ | 800 | 50 | **84.2 (+0.4)** |
| Finetuning | ViT-B/16 | MAE finetune [13] | - | 150 | 9 | 83.5 |
| | | MAE finetune [13] | ✓ | 100 | 9 | **83.9 (+0.4)** |
| | ViT-L/16 | MAE finetune [13] | - | 75 | 14 | 85.5 |
| | | MAE finetune [13] | ✓ | 50 | 14 | **86.1 (+0.6)** |
| Training | ResNet50 | RSB [9] A1 | - | 600 | 22 | 80.4 |
| | | RSB [9] A2 | ✓ | 300 | 14 | **80.0 (-0.4)** |
| | ResNet101 | RSB [9] A1 | - | 600 | 24 | 81.5 |
| | | RSB [9] A2 | ✓ | 300 | 20 | **82.1 (+0.6)** |
| | ResNet152 | RSB [9] A1 | - | 600 | 32 | 82.0 |
| | | RSB [9] A2 | ✓ | 300 | 29 | **82.8 (+0.8)** |

Table 6: **Comparison in the same training budget.** All training has been conducted with 8 NVIDIA V100 GPUs. GPU days refer to the number of days required for training when using a single V100 GPU.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Model & +AugMask & IN-1k & IN-V2 & IN-Real & IN-A & IN-R & ObjNet & SI-size & SI-loc & SI-rot \\ \hline \multirow{2}{*}{ViT-S} & - & 81.4 & 70.1 & 87.0 & 23.4 & 46.4 & 32.6 & 55.0 & 39.8 & 37.8 \\ & ✓ & **81.7** & **71.0** & **87.4** & **26.9** & **47.2** & **33.5** & **56.7** & **42.5** & **39.9** \\ \hline \multirow{2}{*}{ViT-B} & - & 83.8 & 73.4 & 88.2 & 36.8 & 54.1 & 35.7 & 58.0 & 42.7 & 41.5 \\ & ✓ & **84.2** & **74.0** & **88.6** & **41.9** & **54.4** & **37.2** & **59.0** & **44.8** & **43.3** \\ \hline \multirow{2}{*}{ViT-L} & - & 84.9 & 74.8 & 88.8 & 45.3 & 57.4 & 38.8 & 59.8 & 46.5 & 45.0 \\ & ✓ & **85.3** & **75.8** & **89.2** & **51.1** & **58.5** & **40.0** & **60.2** & **46.8** & **45.9** \\ \hline \multirow{2}{*}{ViT-H} & - & 85.2 & 75.7 & 89.2 & 51.9 & 58.8 & 40.1 & 61.9 & 49.0 & 46.8 \\ & ✓ & **85.7** & **76.5** & **89.6** & **58.3** & **59.9** & **41.7** & **62.4** & **50.1** & **48.4** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Robustness benchmark. Table shows the robustness benchmark for ViT pretrained with/without AugMask. In all metrics, higher scores indicate better results.**
background, and viewpoints (ObjNet [37]), and SI-scores [38] (SI-size, SI-loc, and SI-rot). The results demonstrate that the improvement of AugMask is not limited to ImageNet validation and has been verified across various robustness metrics.
### Downstream tasks
Improvements in pretraining can boost the performance of downstream tasks [39]. Thus, we also verify the downstream performance of AugMask using the 800-epoch pretrained weights from Table 3.
**Semantic segmentation.** Using the segmentation recipe of BEiT v2 [29], we train UPerNet [40] with a ViT backbone on the ADE-20k [41] dataset. Table 8 shows the results in two settings: single-scale and multi-scale evaluation. On both evaluations, the backbone pretrained with AugMask demonstrates superior performance for ViT-B and ViT-L.
**Object detection and instance segmentation.** We utilize Cascade Mask R-CNN [42] with ViT backbones [43] for MS COCO [44], which conducts object detection and instance segmentation simultaneously. ViTDet [43] is used as the training recipe for this experiment. Table 9 shows the results. The metric \(AP^{box}\) quantifies the performance in object detection, while \(AP^{mask}\) quantifies the performance in instance segmentation. On both measures, the backbone pretrained with AugMask outperforms the DeiT-III backbone.
**Transfer learning.** We measure transfer learning performance on small-scale datasets. We use the CIFAR-10 [45], CIFAR-100 [45], Oxford Flowers-102 [46], Stanford Cars [47], and iNaturalist [48] datasets. We use the AdamW training recipe [11]. We also evaluate performance when AugMask (50%) is applied to the finetuning process. Table 10 shows the results. The backbone pretrained with AugMask consistently outperforms the DeiT-III backbone across all cases. Moreover, when AugMask is applied to the finetuning process, it further boosts performance in most cases, except on CIFAR.
## 5 Conclusion
In this work, we have presented a new method for applying additional regularization across various training recipes. Our method, Augmented Sub-model (AugSub), is designed to leverage drop-based regularization within a sub-model that is separated from the main training path and trained with a relaxed loss function. Our extensive analysis reveals that AugSub effectively mitigates the adverse effects of additional regularization while accelerating convergence, yielding superior performance. We verify AugSub on various training recipes and diverse architectures. Notably, AugMask, a specific implementation of AugSub for random masking, demonstrates significant performance improvements across diverse scenarios. We believe AugSub is a substantial advancement in training recipes and contributes to building novel regularization strategies.
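As a toy illustration of the mechanism summarized above, the sketch below emulates an AugMask-style sub-model step in plain Python: a random subset of tokens is dropped, and the sub-model is penalized with a relaxed self-distillation loss toward the main model's soft output rather than the hard label. The function names and the stand-in `logits_fn` are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def augsub_relaxed_loss(tokens, logits_fn, drop_ratio=0.5):
    """Toy AugMask step (illustrative assumption): drop a random token
    subset, then compare the sub-model's prediction against the main
    model's soft output with a KL divergence (the "relaxed" loss),
    instead of the hard ground-truth label."""
    n_keep = max(1, int(len(tokens) * (1 - drop_ratio)))
    keep = sorted(random.sample(range(len(tokens)), n_keep))
    p_main = softmax(logits_fn(tokens))                    # full model
    p_sub = softmax(logits_fn([tokens[i] for i in keep]))  # sub-model
    return sum(p * (math.log(p) - math.log(q)) for p, q in zip(p_main, p_sub))
```

In an actual training loop the main model would be updated with the standard supervised loss while the sub-model branch adds this relaxed term.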
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Model} & Pretraining & Finetuning & \multirow{2}{*}{CIFAR-10} & \multirow{2}{*}{CIFAR-100} & \multirow{2}{*}{Flowers} & \multirow{2}{*}{Cars} & iNat-18 & iNat-19 \\ & + AugMask & & & & & & \\ \hline \multirow{3}{*}{ViT-S/16} & - & - & 98.8 & 90.0 & 94.5 & 80.9 & 70.1 & 76.7 \\ & ✓ & - & **98.9** & **90.6** & 95.2 & 81.2 & 70.8 & 77.0 \\ & ✓ & ✓ & 98.8 & 89.9 & **98.3** & **92.2** & **71.2** & **77.1** \\ \hline \multirow{3}{*}{ViT-B/16} & - & - & 99.1 & 91.7 & 97.5 & 90.0 & 73.2 & 78.5 \\ & ✓ & - & **99.2** & **91.9** & 97.7 & 90.2 & 73.6 & 78.8 \\ \cline{1-1} & ✓ & ✓ & 98.8 & 89.6 & **98.7** & **92.8** & **73.9** & **79.1** \\ \hline \hline \end{tabular}
\end{table}
Table 10: **Transfer learning to small-scale datasets.** The table shows transfer learning performance with/without AugMask. We measure the performance when AugMask is applied to pretraining and finetuning. The table presents the average values over three separate runs; the standard deviations are reported in Table A.3.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Single-scale mIoU} & \multicolumn{2}{c}{Multi-scale mIoU} \\ \cline{2-5} & DeiT-III [11] & + AugMask & DeiT-III [11] & + AugMask \\ \hline ViT-B & 48.8 & **49.4** (+0.6) & 49.7 & **50.2** (+0.5) \\ ViT-L & 51.7 & **52.2** (+0.5) & 52.3 & **52.7** (+0.4) \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Semantic segmentation on ADE-20k.** UPerNet with a ViT backbone is trained with the BEiT v2 [29] segmentation recipe.
\begin{table}
\begin{tabular}{l c c} \hline \hline & \(AP^{box}\) & \(AP^{mask}\) \\ \hline DeiT-III [11] & 50.7 & 43.6 \\ +AugMask & **50.9** (+0.2) & **43.9** (+0.3) \\ \hline \hline \end{tabular}
\end{table}
Table 9: **Detection and instance segmentation on MS COCO.** Cascade Mask R-CNN with ViT-B is used.
**Limitation.** Our research on AugSub was constrained by limited computational resources, preventing us from fully exploring its potential. Given that AugSub enables additional regularization that was not applicable in previous training recipes, we believe there is substantial room for further improvement. Additionally, our study does not cover the extension of AugSub to tasks beyond visual recognition, highlighting a promising direction for future research.
**Broader Impact.** This work does not present any foreseeable negative societal consequences.
# Revisiting the Complexity of and Algorithms for the Graph Traversal Edit Distance and Its Variants

Yutong Qiu, Yihang Shen, Carl Kingsford. arXiv:2305.10577v2, 2023-05-17. [http://arxiv.org/abs/2305.10577v2](http://arxiv.org/abs/2305.10577v2)
###### Abstract
The graph traversal edit distance (GTED) is an elegant distance measure defined as the minimum edit distance between strings reconstructed from Eulerian trails in two edge-labeled graphs. GTED can be used to infer evolutionary relationships between species by comparing de Bruijn graphs directly without the computationally costly and error-prone process of genome assembly. Ebrahimpour Boroojeny et al. (2018) suggest two ILP formulations for GTED and claim that GTED is polynomially solvable because the linear programming relaxation of one of the ILPs always yields optimal integer solutions. The result that GTED is polynomially solvable contradicts the complexity results of existing string-to-graph matching problems.

We resolve this conflict in complexity results by proving that GTED is NP-complete and showing that the ILPs proposed by Ebrahimpour Boroojeny et al. do not solve GTED but instead solve for a lower bound of GTED and are not solvable in polynomial time. In addition, we provide the first two correct ILP formulations of GTED and evaluate their empirical efficiency. These results provide solid algorithmic foundations for comparing genome graphs and point to the direction of approximation heuristics.
The source code to reproduce experimental results is available at
[https://github.com/Kingsford-Group/gtednewilp/](https://github.com/Kingsford-Group/gtednewilp/).
## 1 Introduction
Graph traversal edit distance (GTED) [1] is an elegant measure of the similarity between the strings represented by edge-labeled Eulerian graphs. For example, given two de Bruijn assembly graphs [2], computing GTED between them measures the similarity between two genomes without the computationally intensive and possibly error-prone process of assembling the genomes. Using an approximation of GTED between assembly graphs of Hepatitis B viruses, Ebrahimpour Boroojeny et al. [1] group the viruses into clusters consistent with their taxonomy. This can be extended to inferring phylogenetic relationships in metagenomic communities or comparing heterogeneous disease samples such as cancer. There are several other methods to compute a similarity measure between strings encoded by two assembly graphs [3, 4, 5]. GTED has the advantage that it does not require prior knowledge of the type of the genome graph or the complete sequences of the input genomes. The input to the GTED problem is two unidirectional, edge-labeled Eulerian graphs, which are defined as follows:
**Definition 1** (Unidirectional, edge-labeled Eulerian Graph).: _A unidirectional, edge-labeled Eulerian graph is a connected directed graph \(G=(V,E,\ell,\Sigma)\), with node set \(V\), edge multi-set \(E\), constant-size alphabet \(\Sigma\), and single-character edge labels \(\ell:E\rightarrow\Sigma\), such that \(G\) contains an Eulerian trail that traverses every edge \(e\in E\) exactly once. The unidirectional condition means that all edges between the same pair of nodes are in the same direction._
Such graphs arise in genome assembly problems (e.g., de Bruijn subgraphs). Computing GTED is the problem of computing the minimum edit distance between the two most similar strings represented by Eulerian trails in each input graph.
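As a quick illustration of the Eulerian-trail requirement in Definition 1, the following sketch (an illustrative simplification, not code from this paper) checks the standard degree condition for the existence of an Eulerian trail in a directed multigraph; connectivity is assumed to be verified separately.

```python
from collections import defaultdict

def has_eulerian_trail(edges):
    """Degree test for an Eulerian trail in a connected directed multigraph.

    `edges` is a list of (u, v) pairs (a multi-set of directed edges).
    An Eulerian trail exists iff at most one node has out-degree minus
    in-degree equal to 1 (the start), at most one has in-degree minus
    out-degree equal to 1 (the end), and every other node is balanced."""
    deg = defaultdict(int)  # out-degree minus in-degree per node
    for u, v in edges:
        deg[u] += 1
        deg[v] -= 1
    starts = sum(1 for d in deg.values() if d == 1)
    ends = sum(1 for d in deg.values() if d == -1)
    balanced = all(d in (-1, 0, 1) for d in deg.values())
    return balanced and starts <= 1 and ends <= 1
```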
**Problem 1** (Graph Traversal Edit Distance (GTED) [1]).: _Given two unidirectional, edge-labeled Eulerian graphs \(G_{1}\) and \(G_{2}\), compute_
\[\mathrm{GTED}(G_{1},G_{2})\triangleq\min_{\begin{subarray}{c}t_{1}\in\text{ trails}(G_{1})\\ t_{2}\in\text{trails}(G_{2})\end{subarray}}\text{edit}(\text{str}(t_{1}), \text{str}(t_{2})). \tag{1}\]
_Here, trails\((G)\) is the collection of all Eulerian trails in graph \(G\), \(\text{str}(t)\) is a string constructed by concatenating labels on the Eulerian trail \(t=(e_{0},e_{1},\ldots,e_{n})\), and edit\((s_{1},s_{2})\) is the edit distance between strings \(s_{1}\) and \(s_{2}\)._
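Problem 1 can be made concrete by exhaustive search on toy inputs: enumerate all Eulerian trails of each graph, spell their strings, and take the minimum pairwise edit distance. The sketch below is exponential-time and intended only to illustrate the definition, not as a practical algorithm (GTED is NP-complete, as shown in Section 2).

```python
def edit(s1, s2):
    """Standard Levenshtein edit-distance DP."""
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (s1[i - 1] != s2[j - 1]))
    return dp[m][n]

def trail_strings(edges):
    """All strings spelled by Eulerian trails of a tiny graph, where
    `edges` is a list of (u, v, label) triples. Exhaustive backtracking."""
    out = set()

    def extend(node, used, s):
        if len(used) == len(edges):
            out.add(s)
            return
        for i, (u, v, c) in enumerate(edges):
            if i not in used and u == node:
                extend(v, used | {i}, s + c)

    for start in {u for u, _, _ in edges}:
        extend(start, frozenset(), "")
    return out

def gted_bruteforce(edges1, edges2):
    """GTED by brute force: minimum edit distance over all trail pairs."""
    return min(edit(a, b)
               for a in trail_strings(edges1)
               for b in trail_strings(edges2))
```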
Ebrahimpour Boroojeny et al. [1] claim that GTED is polynomially solvable by proposing an integer linear programming (ILP) formulation of GTED and arguing that the constraints of the ILP make it polynomially solvable. This result, however, conflicts with several complexity results on string-to-graph matching problems. Kupferman and Vardi [6] show that it is NP-complete to determine if a string exactly matches an Eulerian tour in an edge-labeled Eulerian graph. Additionally, Jain et al. [7] show that it is NP-complete to compute an edit distance between a string and strings represented by a labeled graph if edit operations are allowed on the graph. On the other hand, polynomial-time algorithms exist to solve string-to-string alignment [8] and string-to-graph alignment [7] when edit operations on graphs are not allowed.
We resolve the conflict among the results on the complexity of graph comparisons by revisiting the complexity of, and the proposed solutions to, GTED. We prove that computing GTED is NP-complete by reducing from the Hamiltonian Path problem, bringing it into agreement with the other related complexity results. Further, we show with a counterexample that the optimal solution of the ILP formulation proposed by Ebrahimpour Boroojeny et al. [1] does not solve GTED.
We give two ILP formulations for GTED. The first ILP has an exponential number of constraints and can be solved with iterative subtour elimination [9, 10]. The second ILP has a polynomial number of constraints and shares the high-level idea of the global ordering approach [10] for solving the Traveling Salesman problem [11].
In Qiu and Kingsford [12], Flow-GTED (FGTED), a variant of GTED is proposed to compare two sets of strings instead of two strings encoded by graphs. FGTED is equal to the edit distance between the most similar sets of strings spelled by the decomposition of flows between a pair of predetermined source and sink nodes. The similarity between the sets of strings reconstructed from the flow decomposition is measured by the Earth Mover's Edit Distance [12; 13]. FGTED is used to compare pan-genomes, where both the frequency and content of strings are essential to represent the population of organisms. Qiu and Kingsford [12] reduce FGTED to GTED, and via the claimed polynomial-time algorithm of GTED, argue that FGTED is also polynomially solvable. We show that this claim is false by proving that FGTED is also NP-complete.
While the optimal solution to the ILP proposed in Ebrahimpour Boroojeny et al. [1] does not solve GTED, it does compute a lower bound to GTED. We characterize the cases when GTED is equal to this lower bound. In addition, we point out that solving this ILP formulation finds a minimum-cost matching between closed-trail decompositions in the input graphs, which may be used to compute the similarity between repeats in the genomes. Ebrahimpour Boroojeny et al. [1] claim their proposed ILP formulation is solvable in polynomial time by arguing that the constraint matrix of the linear relaxation of the ILP is always totally unimodular. We show that this claim is false by proving that the constraint matrix is not always totally unimodular and showing that there exist optimal fractional solutions to its linear relaxation.
We evaluate the efficiency of solving ILP formulations for GTED and its lower bound on simulated genomic strings and show that it is impractical to compute GTED on larger genomes.
In summary, we revisit two important problems in genome graph comparisons: Graph Traversal Edit Distance (GTED) and its variant FGTED. We show that both GTED and FGTED are NP-complete, and provide the first correct ILP formulations for GTED. We also show that the ILP formulation proposed by [1] is a lower bound to GTED. We evaluate the efficiency of the ILPs for GTED and its lower bound on genomic sequences. These
results provide solid algorithmic foundations for continued algorithmic innovation on the task of comparing genome graphs and point to the direction of approximation heuristics.
## 2 GTED and FGTED are NP-complete
### Conflicting results on computational complexity of GTED and string-to-graph matching
The natural decision versions of all of the computational problems described above and below are clearly in NP. Under the assumption that \(\text{P}\neq\text{NP}\), the results on the computational complexity of GTED and string-to-graph matching claimed in Ebrahimpour Boroojeny et al. [1] and Kupferman and Vardi [6], respectively, cannot both be true.
Kupferman and Vardi [6] show that the problem of determining if an input string can be spelled by concatenating edge labels in an Eulerian trail in an input graph is NP-complete. We call this problem Eulerian Trail Equaling Word. We show in Theorem 1 that we can reduce ETEW to GTED, and therefore if GTED is polynomially solvable, then ETEW is polynomially solvable. The complete proof is in Appendix A.1.
**Problem 2** (Eulerian Trail Equaling Word [6]).: _Given a string \(s\in\Sigma^{*}\), an edge-labeled Eulerian graph \(G\), find an Eulerian trail \(t\) of \(G\) such that \(\text{str}(t)=s\)._
**Theorem 1**.: _If \(\text{GTED}\in\text{P}\) then \(\text{ETEW}\in\text{P}\)._
Proof sketch.: We first convert an input instance \(\langle s,G\rangle\) to ETEW into an input instance \(\langle G_{1},G_{2}\rangle\) to GTED by (a) creating graph \(G_{1}\) that only contains edges that reconstruct string \(s\) and (b) modifying \(G\) into \(G_{2}\) by extending the anti-parallel edges so that \(G_{2}\) is unidirectional. We show that if \(\text{GTED}(G_{1},G_{2})=0\), there must be an Eulerian trail in \(G\) that spells \(s\), and if \(\text{GTED}(G_{1},G_{2})>0\), \(G\) must not contain an Eulerian trail that spells \(s\).
Hence, an (assumed) polynomial-time algorithm for GTED solves ETEW in polynomial time, contradicting Theorem 6 of Kupferman and Vardi [6], which establishes the NP-completeness of ETEW (under \(\mathrm{P}\neq\mathrm{NP}\)).
### Reduction from Hamiltonian Path to GTED and FGTED
We resolve the contradiction by showing that GTED is NP-complete. The details of the proof are in Appendix A.2.
**Theorem 2**.: GTED _is NP-complete._
Proof sketch.: We reduce from the Hamiltonian Path problem, which asks whether a directed, simple graph \(G\) contains a path that visits every vertex exactly once. Here, simple means no self-loops or parallel edges. The reduction is almost identical to that presented in Kupferman and Vardi [6]; until noted otherwise, the argument differs only in the technicalities introduced to force unidirectionality (and in one other minor change described later).
Let \(\langle G=(V,E)\rangle\) be an instance of Hamiltonian Path, with \(n=|V|\) vertices. We first create the Eulerian closure of \(G\), which is defined as \(G^{\prime}=(V^{\prime},E^{\prime})\) where
\[V^{\prime}=\{v^{in},v^{out}:v\in V\}\cup\{w\}. \tag{2}\]
Here, each vertex in \(V\) is split into \(v^{in}\) and \(v^{out}\), and \(w\) is a newly added vertex. \(E^{\prime}\) is the union of the following sets of edges and their labels:
* \(E_{1}=\{(v^{in},v^{out}):v\in V\}\), labeled \(\mathtt{a}\),
* \(E_{2}=\{(u^{out},v^{in}):(u,v)\in E\}\), labeled \(\mathtt{b}\),
* \(E_{3}=\{(v^{out},v^{in}):v\in V\}\), labeled \(\mathtt{c}\),
* \(E_{4}=\{(v^{in},u^{out}):(u,v)\in E\}\), labeled \(\mathtt{c}\),
* \(E_{5}=\{(u^{in},w):u\in V\}\), labeled \(\mathtt{c}\),
* \(E_{6}=\{(w,u^{in}):u\in V\}\), labeled \(\mathtt{b}\).
\(G^{\prime}\) is an Eulerian graph by construction but contains anti-parallel edges. We further create \(G^{\prime\prime}\) from \(G^{\prime}\) by adding dummy nodes so that each pair of antiparallel edges is split into two parallel, length-2 paths with labels x#, where x is the original label.
We also create a graph \(C\) that has the same number of edges as \(G^{\prime\prime}\) and spells out a string
\[q=\texttt{a#}(\texttt{b#a#})^{n-1}(\texttt{c#})^{2n-1}(\texttt{c#b#})^{|E|+1}. \tag{3}\]
We then argue that \(G\) has a Hamiltonian path if and only if \(G^{\prime\prime}\) spells out the string \(q\), which uses the same line of arguments and graph traversals as in Kupferman and Vardi [6]. We then show that \(\text{GTED}(G^{\prime\prime},C)=0\) if and only if \(G^{\prime\prime}\) spells \(q\).
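The string \(q\) of Eq. (3) can be generated mechanically from \(n=|V|\) and \(m=|E|\) of the Hamiltonian Path instance; a minimal sketch:

```python
def reduction_string(n, m):
    """Build the string q of Eq. (3) from n = |V| and m = |E|.

    q is spelled by the cycle graph C and, per the reduction, matches an
    Eulerian trail of G'' iff G has a Hamiltonian path."""
    return "a#" + "b#a#" * (n - 1) + "c#" * (2 * n - 1) + "c#b#" * (m + 1)
```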
Following a similar argument, we show that FGTED is also NP-complete, and its proof is in Appendix A.3.
**Theorem 3**.: FGTED _is NP-complete._
## 3 Revisiting the correctness of the proposed ILP solutions to GTED
In this section, we revisit two proposed ILP solutions to GTED by Ebrahimpour Boroojeny et al. [1] and show that the optimal solution to these ILP is not always equal to GTED.
### Alignment graph
The previously proposed ILP formulations for GTED are based on the alignment graph constructed from input graphs. The high-level concept of an alignment graph is similar to the dynamic programming matrix for the string-to-string alignment problem [8].
**Definition 2** (Alignment graph).: _Let \(G_{1},\ G_{2}\) be two unidirectional, edge-labeled Eulerian graphs. The alignment graph \(\mathcal{A}(G_{1},G_{2})=(V,E,\delta)\) is a directed graph that has vertex set \(V=V_{1}\times V_{2}\) and edge multi-set \(E\) that equals the union of the following:_
**Vertical edges**: \([(u_{1},u_{2}),(v_{1},u_{2})]\) _for_ \((u_{1},v_{1})\in E_{1}\) _and_ \(u_{2}\in V_{2}\)_,_
**Horizontal edges**: \([(u_{1},u_{2}),(u_{1},v_{2})]\) _for_ \(u_{1}\in V_{1}\) _and_ \((u_{2},v_{2})\in E_{2}\)_,_
**Diagonal edges**: \([(u_{1},u_{2}),(v_{1},v_{2})]\) _for_ \((u_{1},v_{1})\in E_{1}\) _and_ \((u_{2},v_{2})\in E_{2}\)_._
_Each edge is associated with a cost by the cost function \(\delta:E\rightarrow\mathbb{R}\)._
Each diagonal edge \(e=[(u_{1},v_{1}),(u_{2},v_{2})]\) in an alignment graph can be projected to \((u_{1},v_{1})\) and \((u_{2},v_{2})\) in \(G_{1}\) and \(G_{2}\), respectively. Similarly, each vertical edge can be projected to one edge in \(G_{1}\), and each horizontal edge can be projected to one edge in \(G_{2}\).
We define the edge projection function \(\pi_{i}\) that projects an edge from the alignment graph to an edge in the input graph \(G_{i}\). We also define the path projection function \(\Pi_{i}\) that projects a trail in the alignment graph to a trail in the input graph \(G_{i}\). For example, let a trail in the alignment graph be \(p=(e_{1},e_{2},\ldots,e_{m})\), and \(\Pi_{i}(p)=(\pi_{i}(e_{1}),\pi_{i}(e_{2}),\ldots,\pi_{i}(e_{m}))\) is a trail in \(G_{i}\).
An example of an alignment graph is shown in Figure 1(b). The horizontal edges correspond to gaps in strings represented by \(G_{1}\), vertical edges correspond to gaps in strings
Figure 1: (a) An example of two edge labeled Eulerian graphs \(G_{1}\) (top) and \(G_{2}\) (bottom). (b) The alignment graph \(\mathcal{A}(G_{1},G_{2})\). The cycle with red edges is the path corresponding to \(\mathrm{GTED}(G_{1},G_{2})\). Red solid edges are matches with cost \(0\) and red dashed-line edge is mismatch with cost \(1\).
represented by \(G_{2}\), and diagonal edges correspond to the matching between edge labels from the two graphs. In the rest of this paper, we assume that the costs for horizontal and vertical edges are \(1\), and the costs for the diagonal edges are \(1\) if the diagonal edge represents a mismatch and \(0\) if it is a match. The cost function \(\delta\) can be defined to capture the cost of matching between edge labels or inserting gaps. This definition of alignment graph is also a generalization of the alignment graph used in string-to-graph alignment [7].
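Definition 2 translates directly into code. The sketch below (illustrative, with the unit gap costs and 0/1 diagonal costs assumed above) enumerates the three edge classes of \(\mathcal{A}(G_{1},G_{2})\):

```python
from itertools import product

def alignment_graph(edges1, nodes1, edges2, nodes2):
    """Edges of the alignment graph A(G1, G2) per Definition 2.

    edges_i: lists of (u, v, label) triples; nodes_i: node lists.
    Returns (edge, cost) pairs, where an edge is a pair of alignment
    nodes ((u1, u2), (v1, v2))."""
    E = []
    for (u1, v1, c1), u2 in product(edges1, nodes2):   # vertical: gap in G2
        E.append((((u1, u2), (v1, u2)), 1))
    for u1, (u2, v2, c2) in product(nodes1, edges2):   # horizontal: gap in G1
        E.append((((u1, u2), (u1, v2)), 1))
    for (u1, v1, c1), (u2, v2, c2) in product(edges1, edges2):  # diagonal
        E.append((((u1, u2), (v1, v2)), 0 if c1 == c2 else 1))
    return E
```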
### The first previously proposed ILP for GTED
Lemma 1 in Ebrahimpour Boroojeny et al. [1] provides a model for computing GTED by finding the minimum-cost trail in the alignment graph. We reiterate it here for completeness.
**Lemma 1** ([1]).: _For any two edge-labeled Eulerian graphs \(G_{1}\) and \(G_{2}\),_
\[\mathrm{GTED}(G_{1},G_{2})=\mathrm{minimize}_{c} \delta(c)\] \[\mathrm{subject\ to} c\text{ is a trail in }\mathcal{A}(G_{1},G_{2}), \tag{4}\] \[\Pi_{i}(c)\text{ is an Eulerian trail in }G_{i}\text{ for }i=1,2,\]
_where \(\delta(c)\) is the total edge cost of \(c\), and \(\Pi_{i}(c)\) is the projection from \(c\) to \(G_{i}\)._
An example of such a minimum-cost trail is shown in Figure 1(b). Ebrahimpour Boroojeny et al. [1] provide the following ILP formulation and claim that it is a direct translation
of Lemma 1:
\[\underset{x\in\mathbb{N}^{|E|}}{\text{minimize}}\quad\sum_{e\in E}x_{e}\delta(e) \tag{5}\]
\[\text{subject to}\quad Ax=0 \tag{6}\]
\[\sum_{e\in E}x_{e}I_{i}(e,f)=1\quad\text{for $i=1,2$ and for all $f\in E_{i}$} \tag{7}\]
\[A_{ue}=\begin{cases}-1&\text{if $e=(u,v)\in E$ for some vertex $v\in V$}\\ 1&\text{if $e=(v,u)\in E$ for some vertex $v\in V$}\\ 0&\text{otherwise}\end{cases} \tag{8}\]
Here, \(E\) is the edge set of \(\mathcal{A}(G_{1},G_{2})\). \(A\) is the negative incidence matrix of size \(|V|\times|E|\), and \(I_{i}(e,f)\) is an indicator function that is 1 if edge \(e\) in \(E\) projects to edge \(f\) in the input graph \(G_{i}\) (and 0 otherwise). We define the domain of each \(x_{e}\) to include all non-negative integers. However, due to constraints (7), the values of \(x_{e}\) are limited to either 0 or 1. We describe this ILP formulation with the assumption that both input graphs have closed Eulerian trails, which means that each node has equal numbers of incoming and outgoing edges. We discuss the cases when input graphs contain open Eulerian trails in Section 4.
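The negative incidence matrix \(A\) of Eq. (8) encodes the flow-conservation constraint (6): any union of closed trails, viewed as a 0/1 edge-indicator vector \(x\), satisfies \(Ax=0\). A minimal sketch:

```python
def incidence_matrix(nodes, edges):
    """Negative incidence matrix A of Eq. (8): A[u][e] is -1 if edge e
    leaves node u, +1 if it enters u, and 0 otherwise (rows: nodes,
    columns: edges). Using -=/+= handles self-loops (net 0)."""
    idx = {v: i for i, v in enumerate(nodes)}
    A = [[0] * len(edges) for _ in nodes]
    for j, (u, v) in enumerate(edges):
        A[idx[u]][j] -= 1
        A[idx[v]][j] += 1
    return A
```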
While the ILP in (5)-(8) allows the solutions to select disjoint cycles in the alignment graph, the projection of edges in these disjoint cycles does not correspond to a single string represented by either of the input graphs. We show that the ILP in (5)-(8) does not solve GTED by giving an example where the objective value of the optimal solution to the ILP in (5)-(8) is not equal to GTED.
Construct two input graphs as shown in Figure 2(a). Specifically, \(G_{1}\) spells circular permutations of TTTGAA and \(G_{2}\) spells circular permutations of TTTAGA. It is clear that GTED\((G_{1},G_{2})=2\) under Levenshtein edit distance. On the other hand, as shown in Figure 2(a), an optimal solution in \(\mathcal{A}(G_{1},G_{2})\) contains two disjoint cycles with nonzero \(x_{e}\) values that have a total edge cost equal to 0. This solution is a feasible solution to the ILP in (5)-(8). It is also an optimal solution because the objective value is zero, which is
the lower bound on the ILP in (5)-(8). This optimal objective value, however, is smaller than GTED\((G_{1},G_{2})\). Therefore, the ILP in (5)-(8) does not solve GTED since it allows the solution to be a set of disjoint components.
### The second previously proposed ILP formulation of GTED
We describe the second proposed ILP formulation of GTED by Ebrahimpour Boroojeny et al. [1]. Following Ebrahimpour Boroojeny et al. [1], we use simplices, a notion from geometry, to generalize the notion of an edge to higher dimensions. A \(k\)-simplex is a \(k\)-dimensional polytope which is the convex hull of its \(k+1\) vertices. For example, a 1-simplex is an undirected edge, and a 2-simplex is a triangle. We use the orientation of a simplex, which is given by the ordering of the vertex set of a simplex up to an even permutation, to generalize the notion of the edge direction [14, p. 26]. We use square brackets \([\cdot]\) to denote an oriented simplex. For example, \([v_{0},v_{1}]\) denotes a 1-simplex with orientation \(v_{0}\to v_{1}\), which is a directed edge from \(v_{0}\) to \(v_{1}\), and \([v_{0},v_{1},v_{2}]\) denotes a 2-simplex with orientation corresponding to the vertex ordering \(v_{0}\to v_{1}\to v_{2}\to v_{0}\). Each \(k\)-simplex has two possible
Figure 2: (a) The subgraph in the alignment graph induced by an optimal solution to the ILP in (5)-(8) and the ILP in (11)-(12) with input graphs on the left and top. The red and blue edges in the alignment graph are edges matching labels in red and blue font, respectively, and are part of the optimal solution to the ILP in (5)-(8). The cost of the red and blue edges are zero. (b) The subgraph induced by \(x^{init}\) with \(s_{1}=u_{1}\) and \(s_{2}=v_{1}\) according to the ILP in (11)-(12). The rest of the edges in the alignment graph are omitted for simplicity.
unique orientations, and we use the signed coefficient to connect their forms together, e.g. \([v_{0},v_{1}]=-[v_{1},v_{0}]\).
For each pair of graphs \(G_{1}\) and \(G_{2}\) and their alignment graph \(\mathcal{A}(G_{1},G_{2})\), we define an oriented 2-simplex set \(T(G_{1},G_{2})\) which is the union of:
* \([(u_{1},u_{2}),(v_{1},u_{2}),(v_{1},v_{2})]\) for all \((u_{1},v_{1})\in E_{1}\) and \((u_{2},v_{2})\in E_{2}\), or
* \([(u_{1},u_{2}),(u_{1},v_{2}),(v_{1},v_{2})]\) for all \((u_{1},v_{1})\in E_{1}\) and \((u_{2},v_{2})\in E_{2}\).
We use the boundary operator [14, p. 28], denoted by \(\partial\), to map an oriented \(k\)-simplex to a sum of oriented \((k-1)\)-simplices with signed coefficients.
\[\partial[v_{0},v_{1},\ldots,v_{k}]=\sum_{i=0}^{k}(-1)^{i}[v_{0},\ldots,\hat{v _{i}},\ldots,v_{k}], \tag{9}\]
where \(\hat{v_{i}}\) denotes the vertex \(v_{i}\) is to be deleted. Intuitively, the boundary operator maps the oriented \(k\)-simplex to a sum of oriented \((k-1)\)-simplices such that their vertices are in the \(k\)-simplex and their orientations are consistent with the orientation of the \(k\)-simplex. For example, when \(k=2\), we have:
\[\partial[v_{0},v_{1},v_{2}]=[v_{1},v_{2}]-[v_{0},v_{2}]+[v_{0},v_{1}]=[v_{1}, v_{2}]+[v_{2},v_{0}]+[v_{0},v_{1}]. \tag{10}\]
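The boundary operator of Eq. (9) is straightforward to implement for an oriented simplex given as a vertex tuple; a minimal sketch returning signed faces:

```python
def boundary(simplex):
    """Boundary of an oriented k-simplex per Eq. (9): a list of
    (sign, face) pairs, where face i omits vertex i and carries
    the signed coefficient (-1)**i."""
    return [((-1) ** i, simplex[:i] + simplex[i + 1:])
            for i in range(len(simplex))]
```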
We reiterate the second ILP formulation proposed in Ebrahimpour Boroojeny et al. [1]. Given an alignment graph \(\mathcal{A}(G_{1},G_{2})=(V,E,\delta)\) and the oriented 2-simplex set \(T(G_{1},G_{2})\)
Figure 3: (a) A graph that contains an unoriented 2-simplex with three unoriented 1-simplices. (b), (c) The same graph with two different ways of orienting the simplices and the corresponding boundary matrices.
\[\begin{split}\underset{x\in\mathbb{N}^{|E|},y\in\mathbb{Z} |^{T(G_{1},G_{2})|}}{\text{minimize}}&\sum_{e\in E}x_{e}\delta(e) \\ \text{subject to}& x=x^{init}+[\partial]y\end{split} \tag{11}\]
Entries in \(x\) and \(y\) correspond to 1-simplices and 2-simplices in \(E\) and \(T(G_{1},G_{2})\), respectively. \([\partial]\) is a \(|E|\times|T(G_{1},G_{2})|\) boundary matrix where each entry \([\partial]_{i,j}\) is the signed coefficient of the oriented 1-simplex (the directed edge) in \(E\) corresponding to \(x_{i}\) in the boundary of the oriented 2-simplex in \(T(G_{1},G_{2})\) corresponding to \(y_{j}\). The index \(i,j\) for each 1-simplex or 2-simplex is assigned based on an arbitrary ordering of the 1-simplices in \(E\) or the 2-simplices in \(T(G_{1},G_{2})\). An example of the boundary matrix is shown in Figure 3. \(\delta(e)\) is the cost of each edge. \(x^{init}\in\mathbb{R}^{|E|}\) is a vector where each entry corresponds to a 1-simplex in \(E\) with \(|E_{1}|+|E_{2}|\) nonzero entries that represent one Eulerian trail in each input graph. \(x^{init}\) is a feasible solution to the ILP. Let \(s_{1}\) be the source of the Eulerian trail in \(G_{1}\), and \(s_{2}\) be the sink of the Eulerian trail in \(G_{2}\). Each entry in \(x^{init}\) is defined by
\[x_{e}^{init}=\begin{cases}1&\text{if }e=[(u_{1},s_{2}),(v_{1},s_{2})]\text{ or }e=[(s_{1},u_{2}),(s_{1},v_{2})],\\ 0&\text{otherwise}.\end{cases} \tag{12}\]
If the Eulerian trail is closed in \(G_{i}\), \(s_{i}\) can be any vertex in \(V_{i}\). An example of \(x^{init}\) is shown in Figure 2(b).
We provide a complete proof in Section B of the Appendix that the ILP in (5)-(8) is equivalent to the ILP in (11)-(12). Therefore, the example we provided in Section 3.2 is also an optimal solution to the ILP in (11)-(12) but not a solution to GTED. Thus, the ILP in (11)-(12) does not always solve GTED.
## 4 New ILP solutions to GTED
To ensure that our new ILP formulations are applicable to input graphs regardless of whether they contain an open or closed Eulerian trail, we add a source node \(s\) and a sink
node \(t\) to the alignment graph. Figure 4 illustrates three possible cases of input graphs.
1. If only one of the input graphs has closed Eulerian trails, without loss of generality, let \(G_{1}\) be the input graph with open Eulerian trails. Let \(a_{1}\) and \(b_{1}\) be the start and end nodes of the Eulerian trail, which have odd degrees. Add edges \([s,(a_{1},v_{2})]\) and \([(b_{1},v_{2}),t]\) to \(E\) for all nodes \(v_{2}\in V_{2}\) (Figure 4(a)).
2. If both input graphs have closed Eulerian trails, let \(a_{1}\) and \(a_{2}\) be two arbitrary nodes in \(G_{1}\) and \(G_{2}\), respectively. Add edges \([s,(a_{1},v_{2})]\), \([s,(v_{1},a_{2})]\), \([(a_{1},v_{2}),t]\) and \([(v_{1},a_{2}),t]\) for all nodes \(v_{1}\in V_{1}\) and \(v_{2}\in V_{2}\) to \(E\) (Figure 4(b)).
3. If both input graphs have open Eulerian trails, add edges \([s,(a_{1},a_{2})]\) and \([(b_{1},b_{2}),t]\), where \(a_{i}\) and \(b_{i}\) are the start and end nodes of the Eulerian trails in \(G_{i}\), respectively (Figure 4(c)).
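The three cases above can be summarized in a small helper that returns the source/sink edges to add, given the open-trail endpoints of each input graph (or `None` when the graph only has closed trails). This is an illustrative sketch, not code from the paper; for a closed-trail graph it picks the smallest node as the arbitrary anchor.

```python
def add_source_sink_edges(V1, V2, ends1, ends2):
    """Return the set of alignment edges added for source "s" and sink "t".

    ends_i is (a_i, b_i), the endpoints of an open Eulerian trail in G_i,
    or None when G_i only has closed Eulerian trails.
    """
    E = set()
    if ends1 and ends2:                                   # case 3: both open
        (a1, b1), (a2, b2) = ends1, ends2
        E.add(("s", (a1, a2))); E.add(((b1, b2), "t"))
    elif not ends1 and not ends2:                         # case 2: both closed
        a1, a2 = sorted(V1)[0], sorted(V2)[0]             # arbitrary anchors
        for v2 in V2: E.add(("s", (a1, v2))); E.add(((a1, v2), "t"))
        for v1 in V1: E.add(("s", (v1, a2))); E.add(((v1, a2), "t"))
    elif ends1:                                           # case 1: only G1 open
        a1, b1 = ends1
        for v2 in V2: E.add(("s", (a1, v2))); E.add(((b1, v2), "t"))
    else:                                                 # case 1, roles swapped
        a2, b2 = ends2
        for v1 in V1: E.add(("s", (v1, a2))); E.add(((v1, b2), "t"))
    return E
```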
According to Lemma 1, we can solve GTED\((G_{1},G_{2})\) by finding a trail in \(\mathcal{A}(G_{1},G_{2})\) that satisfies the projection requirements. This is equivalent to finding an \(s\)-\(t\) trail in \(\mathcal{A}(G_{1},G_{2})\) that satisfies the constraints:
\[\sum_{\substack{(u,v)\in E\\ u\neq s,\ v\neq t}}x_{uv}I_{i}((u,v),f)=1\quad\text{for all }f\in G_{i},\ i\in\{1,2\}, \tag{13}\]
Figure 4: Modified alignment graphs based on input types. (a) \(G_{1}\) has open Eulerian trails while \(G_{2}\) has closed Eulerian trails. (b) Both \(G_{1}\) and \(G_{2}\) have closed Eulerian trails. (c) Both \(G_{1}\) and \(G_{2}\) have open Eulerian trails. Solid red and blue nodes are the source and sink nodes of the graphs with open Eulerian trails. “s” and “t” are the added source and sink nodes. Colored edges are added alignment edges directing from and to source and sink nodes, respectively.
where \(I_{i}(e,f)=1\) if the alignment edge \(e\) projects to \(f\) in \(G_{i}\), and \(0\) otherwise. An optimal solution to GTED in the alignment graph must start at the source node and end at the sink node because these nodes are connected to all possible starts and ends of Eulerian trails in the input graphs.
Since a trail in \(\mathcal{A}(G_{1},G_{2})\) is a flow network, we use the following flow constraints to enforce the equality between the number of in- and out-edges for each node in the alignment graph except the source and sink nodes.
\[\sum_{(s,u)\in E}x_{su} =1 \tag{14}\] \[\sum_{(v,t)\in E}x_{vt} =1\] (15) \[\sum_{(u,v)\in E}x_{uv} =\sum_{(v,w)\in E}x_{vw}\quad\text{for all }v\in V\setminus\{s,t\} \tag{16}\]
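As a sanity check, constraints (14)-(16) can be verified directly on a candidate edge selection \(x\), represented as a map from edges to their selected counts. A minimal sketch, assuming the source and sink are named "s" and "t":

```python
from collections import defaultdict

def satisfies_flow_constraints(x):
    """Check constraints (14)-(16) for an edge selection x: {(u, v): count}.

    Requires one unit out of s (14), one unit into t (15), and flow
    conservation at every other node (16).
    """
    inflow, outflow = defaultdict(int), defaultdict(int)
    for (u, v), c in x.items():
        outflow[u] += c
        inflow[v] += c
    nodes = set(inflow) | set(outflow)
    return (outflow["s"] == 1 and inflow["t"] == 1 and
            all(inflow[v] == outflow[v] for v in nodes - {"s", "t"}))
```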
Constraints (13) and (16) are equivalent to constraints (7) and (6), respectively. Therefore, we rewrite the ILP in (5)-(8) in terms of the modified alignment graph.
\[\underset{x\in\mathbb{N}^{|E|}}{\text{minimize}} \sum_{e\in E}x_{e}\delta(e)\] (lower bound ILP) subject to constraints (13)-(16).
As we show in Section 3.2, constraints (13)-(16) do not guarantee that the ILP solution is one trail in \(\mathcal{A}(G_{1},G_{2})\): they allow several disjoint covering trails to be selected in the solution, so the ILP fails to model GTED correctly. We show in Section 5 that the solution to this ILP is a lower bound on GTED.
According to Lemma 1 in Dias et al. [10], a subgraph of a directed graph \(G\) with source node \(s\) and sink node \(t\) is an \(s\)-\(t\) trail if and only if it is a flow network and every strongly connected component (SCC) of the subgraph has at least one edge outgoing from it. Thus, in order to formulate an ILP for the GTED problem, it is necessary to devise constraints that prevent disjoint SCCs from being selected in the alignment graph. In the following, we describe two approaches for achieving this.
### Enforcing one trail in the alignment graph via constraint generation
Section 3.2 of Dias et al. [10] proposes a method to design linear constraints for eliminating disjoint SCCs, which can be directly adapted to our problem. Let \(\mathcal{C}\) be the collection of all strongly connected subgraphs of the alignment graph \(\mathcal{A}(G_{1},G_{2})\). We use the following constraint to enforce that the selected edges form one \(s\)-\(t\) trail in the alignment graph:
\[\text{If}\ \sum_{(u,v)\in E(C)}x_{uv}=|E(C)|,\,\text{then}\ \sum_{(u,v)\in \varepsilon^{+}(C)}x_{uv}\geq 1\quad\text{for all}\ C\in\mathcal{C}, \tag{17}\]
where \(E(C)\) is the set of edges in the strongly connected subgraph \(C\) and \(\varepsilon^{+}(C)\) is the set of edges \((u,v)\) such that \(u\) belongs to \(C\) and \(v\) does not belong to \(C\). \(\sum_{(u,v)\in E(C)}x_{uv}=|E(C)|\) indicates that \(C\) is in the subgraph of \(\mathcal{A}(G_{1},G_{2})\) constructed by all edges \((u,v)\) with positive \(x_{uv}\), and \(\sum_{(u,v)\in\varepsilon^{+}(C)}x_{uv}\geq 1\) guarantees that there exists an out-going edge of \(C\) that is in the subgraph.
We use the same technique as Dias et al. [10] to linearize the "if-then" condition in (17) by introducing a new variable \(\beta\) for each strongly connected component:
\[\sum_{(u,v)\in E(C)}x_{uv}\geq|E(C)|\beta_{C}\quad\text{for all} \ C\in\mathcal{C} \tag{18}\] \[\sum_{(u,v)\in E(C)}x_{uv}-|E(C)|+1-|E(C)|\beta_{C}\leq 0\quad \text{for all}\ C\in\mathcal{C}\] (19) \[\sum_{(u,v)\in\varepsilon^{+}(C)}x_{uv}\geq\beta_{C}\quad\text{ for all}\ C\in\mathcal{C}\] (20) \[\beta_{C}\in\{0,1\}\quad\text{for all}\ C\in\mathcal{C} \tag{21}\]
To summarize, given any pair of unidirectional, edge-labeled Eulerian graphs \(G_{1}\) and \(G_{2}\) and their alignment graph \(\mathcal{A}(G_{1},G_{2})=(V,E,\delta)\), GTED\((G_{1},G_{2})\) is equal to the optimal
solution of the following ILP formulation:
\[\underset{x\in\{0,1\}^{|E|}}{\text{minimize}} \sum_{e\in E}x_{e}\delta(e)\] (exponential ILP) subject to constraints (13)-(16) and constraints (18)-(21).
This ILP has an exponential number of constraints, as there is a set of constraints for every strongly connected subgraph of the alignment graph. To solve this ILP more efficiently, we can use a procedure similar to the iterative constraint generation procedure in Dias et al. [10]. Initially, solve the ILP with only constraints (13)-(16). Create a subgraph, \(G^{\prime}\), induced by the edges with positive \(x_{uv}\). For each disjoint SCC in \(G^{\prime}\) that does not contain the sink node, add constraints (18)-(21) for the edges in the SCC and solve the new ILP. Iterate until no disjoint SCCs are found in the solution. The pseudo-code of this procedure is in Appendix C.
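The per-iteration check, finding the disjoint SCCs of \(G^{\prime}\) for which new constraints must be generated, can be sketched as follows. It uses Kosaraju's algorithm to compute the SCCs of the support graph and keeps those that have no outgoing selected edge and do not contain the sink. The function name and graph representation are illustrative, not from the paper.

```python
def violating_sccs(selected_edges, sink="t"):
    """Return SCCs of the support graph that have no outgoing selected edge
    and do not contain the sink: the components for which constraints
    (18)-(21) would be generated in the next iteration."""
    adj, radj, nodes = {}, {}, set()
    for u, v in selected_edges:
        adj.setdefault(u, []).append(v)
        radj.setdefault(v, []).append(u)
        nodes |= {u, v}
    order, seen = [], set()
    def dfs1(u):                      # first pass: record finish order
        seen.add(u)
        for w in adj.get(u, []):
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)
    comp, seen2 = {}, set()
    def dfs2(u, c):                   # second pass: label components
        seen2.add(u)
        comp[u] = c
        for w in radj.get(u, []):
            if w not in seen2:
                dfs2(w, c)
    for u in reversed(order):
        if u not in seen2:
            dfs2(u, u)
    sccs = {}
    for u in nodes:
        sccs.setdefault(comp[u], set()).add(u)
    bad = []
    for members in sccs.values():
        if sink in members:
            continue
        if any(u in members and v not in members for u, v in selected_edges):
            continue                  # has an outgoing selected edge
        if len(members) > 1 or any(u == v for u, v in selected_edges if u in members):
            bad.append(members)       # a genuine (non-trivial) cycle
    return bad
```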
### A compact ILP for GTED with a polynomial number of constraints
In the worst case, the number of iterations needed to solve (exponential ILP) via constraint generation is exponential. As an alternative, we introduce a compact ILP with only a polynomial number of constraints. The intuition behind this ILP is that we can impose a partially increasing ordering on the edges so that the selected edges form an \(s\)-\(t\) trail in the alignment graph. This idea is similar to the Miller-Tucker-Zemlin ILP formulation of the Traveling Salesman Problem (TSP) [11].
We add variables \(d_{uv}\) that are constrained to provide a partial ordering of the edges in the \(s\)-\(t\) trail, and we set \(d_{uv}\) to zero for edges that are not selected in the trail. Intuitively, there must exist an ordering of the edges in an \(s\)-\(t\) trail such that for each pair of consecutive edges \((u,v)\) and \((v,w)\), the difference between their order variables \(d_{uv}\) and \(d_{vw}\) is 1. Therefore, for each node \(v\) that is neither the source nor the sink, if we sum the order variables of the incoming edges and of the outgoing edges separately, the difference between the two sums equals the number of selected incoming/outgoing edges. Lastly, the order variable for the edge starting at the source is \(1\), and the order variable for the edge ending at the sink is the number of selected edges. This gives the following ordering constraints:
\[\text{If }x_{uv}=0,\text{ then }d_{uv} =0\quad\text{for all }(u,v)\in E \tag{22}\] \[\sum_{(v,w)\in E}d_{vw}-\sum_{(u,v)\in E}d_{uv} =\sum_{(v,w)\in E}x_{vw}\quad\text{for all }v\in V\setminus\{s,t\}\] (23) \[\sum_{(s,u)\in E}d_{su} =1\] (24) \[\sum_{(v,t)\in E}d_{vt} =\sum_{(u,v)\in E}x_{uv} \tag{25}\]
We enforce that all variables \(x_{e}\in\{0,1\}\) and \(d_{e}\in\mathbb{N}\) for all \(e\in E\).
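To see why constraints (23)-(25) hold along any \(s\)-\(t\) trail, one can assign each edge its 1-based position in the trail as its order variable and check the constraints directly. A minimal sketch (function name and representation are illustrative):

```python
def order_variables(trail):
    """Compute the order variables d_uv along an s-t trail (a list of edges)
    and assert constraints (23)-(25).  Each edge gets its 1-based position
    in the trail, matching the intuition that consecutive edges differ by 1.
    """
    d, x = {}, {}
    for pos, e in enumerate(trail, start=1):
        d[e] = d.get(e, 0) + pos      # merged duplicate edges sum positions
        x[e] = x.get(e, 0) + 1
    nodes = {u for u, _ in trail} | {v for _, v in trail}
    for v in nodes - {"s", "t"}:
        out_d = sum(dv for (a, b), dv in d.items() if a == v)
        in_d = sum(dv for (a, b), dv in d.items() if b == v)
        out_x = sum(c for (a, b), c in x.items() if a == v)
        assert out_d - in_d == out_x                                  # (23)
    assert sum(dv for (a, b), dv in d.items() if a == "s") == 1       # (24)
    assert sum(dv for (a, b), dv in d.items() if b == "t") == sum(x.values())  # (25)
    return d
```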
The "if-then" statement in Equation (22) can be linearized by introducing an additional binary variable \(y_{uv}\) for each edge [10, 15]:
\[-x_{uv}-|E|y_{uv} \leq-1 \tag{26}\] \[d_{uv}-|E|(1-y_{uv}) \leq 0\] (27) \[y_{uv} \in\{0,1\}. \tag{28}\]
Here, \(y_{uv}\) is an indicator of whether \(x_{uv}=0\). The coefficient \(|E|\) is the number of edges in the alignment graph and also an upper bound on the ordering variables. When \(y_{uv}=1\), \(d_{uv}\leq 0\), which together with \(d_{uv}\in\mathbb{N}\) forces \(d_{uv}=0\), and \(y_{uv}\) imposes no constraint on \(x_{uv}\). When \(y_{uv}=0\), \(x_{uv}\geq 1\), and \(y_{uv}\) imposes no constraint on \(d_{uv}\). As we show in Lemma 3 (Appendix D), these constraints prevent finding disjoint components, thus guaranteeing the correctness of the ILP.
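The behavior of the linearization (26)-(28) can be checked case by case for a single edge: an unselected edge (\(x_{uv}=0\)) forces \(y_{uv}=1\) and hence \(d_{uv}=0\), while a selected edge allows \(y_{uv}=0\) and leaves \(d_{uv}\) free up to the bound. A small sketch with illustrative names:

```python
def linearization_feasible(x_uv, d_uv, y_uv, E_size):
    """Check one edge against constraints (26)-(28):
    y_uv = 1 forces d_uv = 0, and y_uv = 0 forces x_uv >= 1."""
    return (-x_uv - E_size * y_uv <= -1 and          # (26)
            d_uv - E_size * (1 - y_uv) <= 0 and      # (27)
            y_uv in (0, 1))                          # (28)
```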
Due to Lemma 1 and Lemma 3, given input graphs \(G_{1}\) and \(G_{2}\) and the alignment graph
\(\mathcal{A}(G_{1},G_{2})\), GTED\((G_{1},G_{2})\) is equal to the optimal objective of
\[\underset{x\in\{0,1\}^{|E|}}{\text{minimize}} \sum_{e\in E}x_{e}\delta(e)\] subject to constraints (13)-(16), (compact ILP) constraints (23)-(25) and constraints (26)-(28).
## 5 A lower bound on GTED
While (lower bound ILP) and the ILP in (11)-(12) do not solve GTED, their optimal objective values are lower bounds on GTED. These ILPs solve an interesting variant of GTED (Appendix E), which is a local similarity measure between two genome graphs. We call this variant Closed-trail Cover Traversal Edit Distance (CCTED).
Let the variables in an optimal solution to (lower bound ILP) be \(x^{opt}\) and the optimal objective value be \(c^{opt}\). Since the constraints of (lower bound ILP) are a subset of those of (exponential ILP), and the two ILPs have the same objective function, \(c^{opt}\leq\text{GTED}(G_{1},G_{2})\) for any pair of graphs.
Moreover, when the solution to (lower bound ILP) forms only one connected component, the optimal value of (lower bound ILP) is equal to GTED.
**Theorem 4**.: _Let \(\mathcal{A}^{\prime}(G_{1},G_{2})\) be the subgraph of \(\mathcal{A}(G_{1},G_{2})\) induced by edges \((u,v)\in E\) with \(x^{opt}_{uv}=1\) in the optimal solution to (lower bound ILP). There exists \(\mathcal{A}^{\prime}(G_{1},G_{2})\) that has exactly one connected component if and only if \(c^{opt}=\text{GTED}(G_{1},G_{2})\)._
Proof.: We first show that if \(c^{opt}=\text{GTED}(G_{1},G_{2})\), then there exists \(\mathcal{A}^{\prime}(G_{1},G_{2})\) that has one connected component. A feasible solution to (exponential ILP) is always a feasible solution to (lower bound ILP), and since \(c^{opt}=\text{GTED}(G_{1},G_{2})\), an optimal solution to (exponential ILP) is also an optimal solution to (lower bound ILP), which can induce a subgraph in the alignment graph that only contains one connected component.
Conversely, if \(x^{opt}\) induces a subgraph in the alignment graph with only one connected component, it satisfies constraints (18)-(21) and is therefore feasible for the ILP for GTED (exponential ILP). Since \(c^{opt}\leq\text{GTED}(G_{1},G_{2})\), this solution must also be optimal for \(\text{GTED}(G_{1},G_{2})\).
In practice, we may approximate GTED by the solution to (lower bound ILP). As we show in Section 6, the time needed to solve (lower bound ILP) is much less than the time needed to solve GTED. However, in adversarial cases, \(c^{opt}\) can be zero while GTED is arbitrarily large. We can determine whether \(c^{opt}\) is a strict lower bound on GTED or exactly equal to GTED by checking whether the subgraph induced by the solution to (lower bound ILP) has multiple connected components.
### Characterizations of the ILP in (11)-(12)
Ebrahimpour Boroojeny et al. [1] propose the ILP in (11)-(12), and we show that the \(x\) variables in this ILP have the same feasible region as the \(x\) variables in (lower bound ILP). However, Ebrahimpour Boroojeny et al. [1] argue that the linear programming relaxation of the ILP in (11)-(12) always yields integral optimal solutions, and that the ILP in (11)-(12) can therefore be solved in polynomial time. We provide a counterexample where the LP relaxation of the ILP in (11)-(12) yields an optimal solution with fractional variable values. Additionally, we show that the constraint matrix of the LP relaxation of the ILP in (11)-(12) is not totally unimodular for most non-trivial input graphs. The details of the proofs and the counterexample are in Section F of the Appendix.
## 6 Empirical evaluation of the ILP formulations for GTED and its lower bound
### Implementation of the ILP formulations
We implement the algorithms and ILP formulations for (exponential ILP), (compact ILP) and (lower bound ILP). In practice, the multi-set of edges of each input graph may contain
many duplicates of edges that have the same start and end vertices due to repeats in the strings. We reduce the number of variables and constraints in the implemented ILPs by merging edges that share the same start and end nodes and recording the multiplicity of each edge. Each \(x\) variable is then no longer binary but a non-negative integer that satisfies a modified version of projection constraint (13):
\[\sum_{\substack{(u,v)\in E\\ u\neq s,\ v\neq t}}x_{uv}I_{i}((u,v),f)=M_{i}(f)\quad\text{for all }f\in G_{i},\ i\in\{1,2\}, \tag{29}\]
where \(M_{i}(f)\) is the multiplicity of edge \(f\) in \(G_{i}\). Let \(C\) be a strongly connected component in the subgraph induced by positive \(x_{uv}\); now \(\sum_{(u,v)\in E(C)}x_{uv}\) is no longer upper bounded by \(|E(C)|\). Therefore, constraint (19) is changed to
\[\sum_{(u,v)\in E(C)}x_{uv}-|E(C)|+1-W(C)\beta_{C}\leq 0\quad\text{for all }C\in\mathcal{C}, \tag{30}\]
\[W(C)=\sum_{(u,v)\in E(C)}\max\left(\sum_{f\in G_{1}}M_{1}(f)I_{1}((u,v),f), \sum_{f\in G_{2}}M_{2}(f)I_{2}((u,v),f)\right),\]
where \(W(C)\) is the maximum total multiplicities of edges in the strongly connected subgraph in each input graph that is projected from \(C\).
Likewise, constraint (27), which sets the upper bound on the ordering variables, also needs to be modified: the ordering variable \(d_{uv}\) for each edge no longer represents the order of a single edge but the sum of the orders of the selected copies of \((u,v)\), which is at most \(|E|^{2}\). Therefore, constraint (27) is changed to
\[d_{uv}-|E|^{2}(1-y_{uv})\leq 0. \tag{31}\]
The rest of the constraints remain unchanged.
We ran all our experiments on a server with 48 cores (96 threads) of Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz and 378 GB of memory. The system was running Ubuntu 18.04 with Linux kernel 4.15.0. We solve all the ILP formulations and their linear relaxations using
the Gurobi solver [16] with 32 threads.
### GTED on simulated TCR sequences
We construct 20 de Bruijn graphs with \(k=4\) using 150-character sequences extracted from the V genes from the IMGT database [17]. We solve (exponential ILP) and (lower bound ILP), as well as the linear relaxations of (compact ILP) and (lower bound ILP), on all 190 pairs of graphs. We do not show results for solving (compact ILP) for GTED on this set of graphs, as its running time exceeds 30 minutes on most pairs of graphs.
To compare the time to solve the ILP formulations when GTED is equal to the optimal objective of (lower bound ILP), we only include 168 out of 190 pairs where GTED is equal to the lower bound (GTED is slightly higher than the lower bound in the remaining 22 pairs). On average, it takes 26 seconds wall-clock time to solve (lower bound ILP), and 71 seconds to solve (exponential ILP) using the iterative algorithm. On average, it takes 9 seconds to solve the LP relaxation of (compact ILP) and 1 second to solve the LP relaxation of (lower bound ILP). The time to construct the alignment graph for all pairs is less than 0.2 seconds. The distribution of wall-clock running time is shown in Figure 5(a). The time to solve (exponential ILP) and (lower bound ILP) is generally positively correlated with the GTED values (Figure 5(b)). On average, it takes 7 iterations for the iterative algorithm to find the optimal solution that induces one strongly connected subgraph (Figure 5(c)).
In summary, it is fastest to compute the lower bound of GTED. Computing GTED exactly by solving the proposed ILPs on genome graphs of size 150 is already time-consuming. When the sizes of the genome graphs are fixed, the time to solve for GTED and its lower bound increases as the GTED between the two genome graphs increases. Even when GTED is equal to its lower bound, the subgraph induced by some optimal solutions of (lower bound ILP) contains more than one strongly connected component. Therefore, in order to reconstruct the strings from each input graph that have the smallest edit distance, we generally need to obtain the optimal solution to the ILP for GTED. In all cases, the time to solve (exponential ILP) is less than the time to solve (compact ILP).
### GTED on difficult cases
Repeats, such as segmental duplications and translocations [18, 19] in the genomes increase the complexity of genome comparisons. We simulate such structures with a class of graphs that contain \(n\) simple cycles of which \(n-1\) peripheral cycles are attached to the \(n\)-th central cycle at either a node or a set of edges (Figure 6(a)). The input graphs in Figure 2 belong to this class of graphs that contain 2 cycles. This class of graphs simulates the complex structural variants in disease genomes or the differences between genomes of different species.
We generate pairs of 3-cycle graphs with varying sizes and randomly assign letters from {A,T,C,G} to edges. We compute the lower bound of GTED and GTED using (lower bound ILP) and (compact ILP), respectively. We denote the lower bound of GTED computed by solving (lower bound ILP) as GTED\({}_{l}\). We group the generated 3-cycle graph pairs based on the value of \((\mathrm{GTED}-\mathrm{GTED}_{l})\) and select 20 pairs of graphs randomly for each \((\mathrm{GTED}-\mathrm{GTED}_{l})\) value ranging from 1 to 5. The maximum number of edges in all selected graphs is 32.
We show the difficulty of computing GTED using the iterative algorithm on the 100
Figure 5: (a) The distribution of wall-clock running time for constructing alignment graphs, solving the ILP formulations for GTED and its lower bound, and solving their linear relaxations, on a log scale. (b) The relationship between GTED and the time to solve (lower bound ILP) and (exponential ILP) iteratively. (c) The distribution of the number of iterations needed to solve (exponential ILP). The box plots in each panel show the median (middle line), the first and third quartiles (upper and lower boundaries of the box), the range of data within 1.5 inter-quartile ranges of Q1 and Q3 (whiskers), and the outlier data points.
selected pairs of 3-cycle graphs. We terminate the ILP solver after 20 minutes. As shown in Figure 6, as the difference between GTED and GTED\({}_{l}\) increases, the wall-clock time to solve (exponential ILP) for GTED increases faster than the time to solve (compact ILP) for GTED. For pairs of graphs with \((\mathrm{GTED-GTED}_{l})=5\), on average it takes more than 15 minutes and more than 500 iterations to solve (exponential ILP). On the other hand, it takes an average of 5 seconds to solve (compact ILP) for GTED and no more than 1 second to solve for the lower bound. The average time to solve each ILP is shown in Table S1.
In summary, on the class of 3-cycle graphs introduced above, the difficulty of solving GTED via the iterative algorithm increases rapidly as the gap between GTED and GTED\({}_{l}\) grows. Although (exponential ILP) is solved more quickly than (compact ILP) when the sequences are long and GTED is equal to GTED\({}_{l}\) (Section 6.2), (compact ILP) may be more efficient when the graphs contain overlapping cycles such that the gap between GTED and GTED\({}_{l}\) is larger.
Figure 6: (a) An example of a 3-cycle graph. Cycle 1 and 2 are attached to cycle 3. (b) The distribution of wall-clock time to solve the compact ILP and the iterative exponential ILP on 100 pairs of 3-cycle graphs.
## 7 Conclusion
We point out the contradictions in the results on the complexity of labeled graph comparison problems and resolve them by showing that GTED, contrary to the results in Ebrahimpour Boroojeny et al. [1], is NP-complete. On one hand, this makes GTED a less attractive measure for comparing graphs, since it is unlikely that there is an efficient algorithm to compute it. On the other hand, this result better explains the difficulty of finding a truly efficient algorithm for computing GTED exactly. In addition, we show that the previously proposed ILP for GTED [1] does not solve GTED, and we give two new ILP formulations of GTED.
While the previously proposed ILP of GTED does not solve GTED, it solves for a lower bound of GTED, and we show that this lower bound can be interpreted as a more "local" measure, CCTED, of the distance between labeled graphs. Further, we characterize the LP relaxation of the ILP in (11)-(12) and show that, contrary to the results in Ebrahimpour Boroojeny et al. [1], the LP in (11)-(12) does not always yield optimal integer solutions.
As shown previously [1, 12], it takes more than 4 hours to solve (lower bound ILP) with a multi-threaded LP solver for graphs that represent viral genomes containing \(\approx 3000\) bases. Likewise, we show that computing GTED using either (exponential ILP) or (compact ILP) is already slow on small genomes, especially on pairs of simulated genomes that differ by segmental duplications and translocations. The empirical results show that it is currently impractical to solve GTED or its lower bound directly using this approach for bacterial- or eukaryotic-sized genomes on modern hardware. These results should increase the theoretical interest in GTED along the directions of heuristics or approximation algorithms, as justified by the NP-hardness of computing GTED.
## Acknowledgements
The authors would like to thank the members of the Kingsford Group for their helpful comments throughout this project, in particular Guillaume Marcais. This work was supported
in part by the US National Science Foundation [DBI-1937540, III-2232121], the US National Institutes of Health [R01HG012470] and by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.
Conflict of Interest: C.K. is a co-founder of Ocean Genomics, Inc.
---

arXiv:2301.02484v1, submitted 2023-01-06. Link: http://arxiv.org/abs/2301.02484v1
Authors: Huibing Wang, Mingze Yao, Guangqi Jiang, Zetian Mi, Xianping Fu

# Graph-Collaborated Auto-Encoder Hashing for Multi-view Binary Clustering
###### Abstract
Unsupervised hashing methods have attracted widespread attention with the explosive growth of large-scale data, as they can greatly reduce storage and computation by learning compact binary codes. Existing unsupervised hashing methods attempt to exploit the valuable information in samples but fail to take the local geometric structure of unlabeled samples into consideration. Moreover, hashing based on auto-encoders aims to minimize the reconstruction loss between the input data and binary codes, ignoring the potential consistency and complementarity of data from multiple sources. To address the above issues, we propose a hashing algorithm based on auto-encoders for multi-view binary clustering, which dynamically learns affinity graphs with low-rank constraints and adopts collaborative learning between auto-encoders and affinity graphs to learn a unified binary code, called Graph-Collaborated Auto-Encoder Hashing for Multi-view Binary Clustering (GCAE). Specifically, we propose a multi-view affinity graph learning model with a low-rank constraint, which can mine the underlying geometric information from multi-view data. Then, we design an encoder-decoder paradigm to collaborate the multiple affinity graphs, which can learn a unified binary code effectively. Notably, we impose decorrelation and code-balance constraints on the binary codes to reduce quantization errors. Finally, we utilize an alternating iterative optimization scheme to obtain the multi-view clustering results. Extensive experimental results on \(5\) public datasets are provided to reveal the effectiveness of the algorithm and its superior performance over other state-of-the-art alternatives.
Graph-collaborated, Auto-encoder, Multi-view clustering, Binary code
## I Introduction
With the development of information digitization [1, 2, 3] and computer technology, researchers have proposed a large number of feature extraction methods to extract features from multiple views of the same sample [4, 5, 6, 7]. For example, an image can be represented by different features extracted with multiple descriptors, i.e., LBP [8], Gabor [9], HOG [10] and SIFT [11]. However, multi-view data extracted by different feature descriptors are large-scale and heterogeneous, which calls for reliable mining methods to explore the discriminative information from multiple views. In order to process large-scale data effectively, most existing research adopts hashing methods due to their fast running speed and low storage cost. Specifically, hashing methods encode large-scale data as a set of compact binary codes in a low-dimensional Hamming space. Therefore, existing hashing algorithms have been widely applied to various large-scale visual tasks, such as cross-modal retrieval [12], object re-identification [13], image detection [14] and multi-view learning [15, 16, 17, 18], etc.
Considering the effectiveness of binary codes for various vision tasks with large-scale data, several methods have been proposed to explore more discriminative binary code representations. Over the past few decades, several supervised hashing methods have been proposed, such as Supervised Discrete Hashing (SDH) [19], Strongly Constrained Discrete Hashing (SCDH) [20] and Fast Discriminative Discrete Hashing (FDDH) [21]. Note that while these approaches have achieved great performance, most of them depend heavily on manual labels, which are time-consuming to obtain, making these methods less effective for processing large-scale unlabeled data. Therefore, several unsupervised hashing methods have been proposed to deal with unlabeled data. A typical unsupervised hashing method is Locality-Sensitive Hashing (LSH) [22], which adopts random projections to generate discrete binary codes. Based on LSH, Spectral Hashing (SH) [23], Discrete Graph Hashing (DGH) [24] and Scalable Graph Hashing (SGH) [25] have been proposed to explore similarity information from large-scale data. Even though the above methods can effectively learn compact binary codes in an unsupervised manner, most existing hashing methods utilize data from a single source. For multi-view data, these hashing methods struggle to uncover the multi-view information holistically and ignore the consistent and complementary information in different views.
Compared with data from a single source, multi-view data, extracted from the same samples, usually contain more compatible and complementary information hidden in different views. Therefore, multi-view clustering methods have been proposed to explore the latent structure of different views and integrate complementary information from multi-view data. Kumar et al. [26] introduced a co-regularized model that completes spectral clustering with a centroid-based algorithm and a pairwise algorithm, which can mine the underlying structure of the original data. Zhan et al. [27] proposed a graph-learning method with a rank constraint to integrate different graphs into a global graph for multi-view clustering tasks. Wang et al. [28] proposed a multi-graph Laplacian-regularized LRR model, which imposes a low-rank constraint on each graph separately to achieve agreeable results.
Besides, Wang et al. [29] performed reinforcement learning on the graph of each view and the unified graph of all views by considering the weights of different views. Xiao et al. [30] proposed a graph-based multi-view clustering framework with knowledge elements, which can combine knowledge and language for clustering. Moreover, Shi et al. [31] proposed a common joint graph learning strategy, which utilizes a non-negative constraint to fully explore the structure information in multi-view data. This strategy aims to obtain cluster results directly and avoid post-processing. The above methods mostly measure the distance between features in Euclidean space, so they still incur high computational cost and low efficiency when processing large-scale data.
Some researchers have proposed multi-view hashing methods that learn compact binary codes and utilize the efficient XOR operation [32], which can improve the speed and accuracy of the algorithm. Jin et al. [33] proposed a binary function clustering scheme that captures function semantics via semantic hashing to quickly cluster highly similar samples. Tian et al. [34] provided a variant of the LRR [35] model to recover the latent structure of the original data, which can effectively learn similarity graphs for binary code learning. Wang et al. [36] utilized the \(l_{2,1}\)-norm to learn compact binary codes, which can improve the robustness of the model. Shen et al. [37] constructed a novel semantic-rebased model, which adopts a sparse graph setting and rebases the similarity graph. Notably, most related hashing works focus on retrieval tasks and ignore the complementary information and underlying cluster structure of multi-view data. Recently, several hashing algorithms have been proposed to solve large-scale image clustering problems. Wang et al. [38] provided a cluster-wise unsupervised hashing framework, which projects the multi-view original data into a latent low-dimensional space to learn cluster centroids for searching. Zhang et al. [39] explored a highly-economized algorithm for image clustering, which jointly learns binary representations and binary cluster results. Even though the above methods can process large-scale data effectively and achieve great performance, most of them rely heavily on affinity graphs built directly from the original data and fail to mine local structures. Meanwhile, some studies simplify the optimization problem by relaxing the binary constraints, which may cause quantization errors. Therefore, it is essential to compose an effective graph collaboration framework that explores the local geometric information from multiple views and retains suitable binary constraints.
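For context, the efficiency of binary codes mentioned above comes from the fact that the Hamming distance between two codes is a single XOR followed by a population count. A minimal sketch, with codes stored as Python integers:

```python
def hamming_distance(code_a, code_b):
    """Hamming distance between two binary codes stored as ints.

    XOR marks the differing bits; counting the set bits gives the distance.
    This constant-factor trick is what makes binary-code comparison fast
    at scale.
    """
    return bin(code_a ^ code_b).count("1")
```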
To address the above limitations, this paper proposes a novel method, termed Graph-Collaborated Auto-Encoder Hashing for Multi-view Binary Clustering (GCAE). GCAE constructs auto-encoders to learn binary codes for processing multi-view data, emphasizing collaborative learning between affinity graphs and auto-encoders to learn unified binary codes for multi-view clustering. Firstly, GCAE constructs affinity graphs from each view by imposing a low-rank constraint on the original data, which can preserve essential information and the latent structure of the multi-view data. Secondly, to effectively explore the compatible and complementary information in multi-view data, GCAE adopts auto-encoders to collaborate with the multiple affinity graphs, which aim to learn unified binary codes for clustering while preserving the discrete binary constraint. Subsequently, GCAE utilizes a matrix factorization strategy to directly obtain cluster results without post-processing, which avoids error accumulation. Finally, an alternating iterative optimization strategy is adopted to update each variable of the objective function. The whole model of GCAE is shown in Fig. 1. The major contributions of the proposed method are summarized as follows:
* We propose Graph-Collaborated Auto-Encoder Hashing for Multi-view Binary Clustering (GCAE), which utilizes affinity graphs and auto-encoders collaboratively to learn compact binary codes for multi-view clustering.
* In particular, GCAE imposes the low-rank constraint on graphs to mine essential information effectively and utilizes auto-encoders to collaborate with multiple graphs for learning unified binary codes, which can explore complementary information from multi-view data and guide the learning of binary codes. Besides, our proposed GCAE directly obtains cluster results to avoid the accumulation of errors caused by post-processing.
The remainder of the paper is outlined as follows. Section 2 introduces the related work. Section 3 presents the proposed GCAE model and the optimization process. Extensive experiments including complexity analysis and convergence analysis are conducted to verify our proposed model in Section 4. Finally, Section 5 concludes this paper.
## II Related Work
In this section, we briefly review the related studies about graph-based multi-view clustering and multi-view hashing methods with graphs.
Graph-based multi-view clustering methods mostly aim to integrate information from multiple views and compute similarity graphs under the Euclidean distance for clustering. For example, Nie et al. [40] proposed a framework based on standard spectral learning, which learns weights for multiple graphs automatically without introducing additional parameters. Hou et al. [41] presented an automatic method to learn a common similarity graph that characterizes the structures across different views and tunes the balance weights. However, the above methods require an additional clustering step that obtains the final clusters with K-means [42]. To avoid the impact of such post-processing on the cluster results, Wang et al. [29] proposed a model that produces clusters directly without post-processing and constructs each view graph and the fusion graph simultaneously. Besides, Zhang et al. [43] utilized the Hadamard product to integrate multiple graphs into a global graph, which can recover the graph structure effectively. Shi et al. [44] proposed a unified framework for jointly learning multiple similarity graphs and the spectral embedding, which obtains cluster results within a single framework. Although the above methods achieve great clustering results, they build graphs directly from the original data, which may make the learned similarity graphs inaccurate and ignore the underlying cluster structure of multi-view data. Meanwhile, generating graphs from real-valued features incurs high computational cost on large-scale multi-view data. Some researchers therefore explore hashing methods that use binary codes rather than real-valued features, and multi-view hashing methods have become widely used to integrate large-scale data.
Existing multi-view hashing methods with graphs can be roughly divided into supervised and unsupervised methods according to their usage of semantic information (e.g., class labels). For supervised methods, Jin et al. [45] constructed a semantic graph by jointly taking the semantic representation and the local similarity structure into consideration. Guan et al. [46] proposed a supervised learning model to construct similarity graphs that capture the intrinsic manifold structure from semantic supervision. Although these supervised methods achieve great performance, manually labeling large-scale data is impractical. Therefore, Liu et al. [24] utilized anchor graphs to capture the latent structure inherent in a given massive dataset and computed hash codes in Hamming space. Xiang et al. [47] proposed a novel hashing method that adopts a quantization regularization term to reduce the distortion error when constructing similarity graphs. Besides, Fang et al. [48] jointly learned the intra-modal similarity graph, reconstructed the similarity graph by symmetric nonnegative matrix factorization, and then utilized the binary code inner product to learn binary codes. The above approaches have obtained good results, but relaxing the discrete constraint on binary codes causes quantization errors. Meanwhile, for multi-view clustering tasks, existing multi-view hashing methods have not effectively mined the underlying cluster structure across different views.
## III Graph-Collaborated Auto-Encoder Hashing
In this paper, boldface lowercase letters and boldface uppercase letters denote vectors and matrices, respectively, e.g., \(\mathbf{x}\) and \(\mathbf{X}\). For any matrix \(\mathbf{X}\), \(\mathbf{x}_{ij}\) denotes the \((i,j)\)-th element of \(\mathbf{X}\). Besides, some important mathematical notations are listed in Table I. For a given multi-view dataset \(\mathbf{X}=\{\mathbf{X}^{1},\mathbf{X}^{2},...,\mathbf{X}^{v}\}\), \(\mathbf{X}^{v}\in\mathbb{R}^{N\times d^{v}}\) is the \(v\)-th view data, where \(d^{v}\) denotes the feature dimension and \(N\) is the number of samples.
We propose a novel graph-collaborated auto-encoder hashing model for clustering. Specifically, we jointly learn the affinity graphs of each view and construct auto-encoders for unified hash code learning, which minimizes the reconstruction loss between the affinity graphs and the hash codes. Besides, our method utilizes a matrix factorization strategy to obtain the cluster results directly.
Fig. 1: The whole model of Graph-Collaborated Auto-Encoder Hashing for Multi-view Binary Clustering (GCAE), which learns affinity graphs by low-rank constraint to integrate specific information from multiple views and adopts auto-encoder to generate unified binary code for clustering.
\begin{table}
\begin{tabular}{l|c|c} \hline Scalar & \(N\) & \(N\in\mathbb{R}\) \\ \hline Vector & \(\mathbf{a}\) & \(\mathbf{a}\in\mathbb{R}^{N}\) \\ & \(\mathbf{a}_{i}\) & the \(i\)-th row \\ & \(\mathbf{a}_{i}^{v}\) & the \(i\)-th row of the \(v\)-th view \\ & \(\mathbf{1}\) & the all-1 vector \\ & \(\mathbf{0}\) & the all-0 vector \\ & \(||\mathbf{a}||_{2}\) & the \(l_{2}\) norm, \(\sqrt{\sum_{i=1}^{N}\mathbf{a}_{i}^{2}}\) \\ \hline Matrix & \(\mathbf{A}^{v}\) & \(\mathbf{A}^{v}\in\mathbb{R}^{N\times d^{v}}\) \\ & \(\mathbf{A}_{i,j}\) & the \((i,j)\)-th element \\ & \(\mathbf{I}\) & the identity matrix \\ & \(tr(\mathbf{A})\) & the trace of matrix \(\mathbf{A}\) \\ & \(\mathbf{A}^{T}\) & the transpose of matrix \(\mathbf{A}\) \\ & \(\sigma\) & \(\sigma=diag(\mathbf{S})\), the singular values of \(\mathbf{A}\) \\ & \(||\mathbf{A}||_{F}\) & the Frobenius norm, \(\sqrt{\sum_{i,j}\mathbf{A}_{i,j}^{2}}=||\sigma||_{2}\) \\ & \(||\mathbf{A}||_{*}\) & the nuclear norm, \(tr(\sqrt{\mathbf{A}^{T}\mathbf{A}})\) \\ & \(\phi(\mathbf{A}^{v})\) & the kernelized representation of \(\mathbf{A}^{v}\) \\ \hline \end{tabular}
\end{table} TABLE I: The Descriptions of Some Important Formula Symbols.
### _Multi-view Affinity Graph Learning_
For a given data matrix \(\mathbf{X}=\{\mathbf{X}^{1},\mathbf{X}^{2},...,\mathbf{X}^{v}\}\), \(\mathbf{X}^{v}\) denotes the \(v\)-th view data matrix. Different from existing methods that directly utilize the original multi-view data to generate multiple graphs, GCAE first adopts a nonlinear kernel mapping for each view, which unifies the dimensions of the multi-view data so that the underlying geometric structure can be learned. Inspired by [32], we employ the nonlinear RBF kernel mapping for each view as follows:
\[\phi(\mathbf{X}^{v})=[\exp(-\left\|\mathbf{x}_{1}^{v}-\mathbf{a}_{1}^{v} \right\|^{2}/\eta),...,\exp(-\left\|\mathbf{x}_{N}^{v}-\mathbf{a}_{t}^{v} \right\|^{2}/\eta)]^{T} \tag{1}\]
where \(\eta\) is the kernel width, \(\phi(\mathbf{X}^{v})\in\mathbb{R}^{N\times t}\) indicates the \(t\)-dimensional nonlinear mapping from the \(v\)-th view. Besides, to ensure that the original data structure is not destroyed after nonlinear projection, we randomly select \(t\) anchor samples \(\mathbf{a}_{t}^{v}\) from the \(v\)-th view.
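As a concrete illustration of Eq. 1, the anchor-based RBF mapping can be sketched in a few lines of NumPy (a minimal sketch; the function name, anchor sampling, and parameter values are our illustrative choices, not the authors' code):

```python
import numpy as np

def rbf_anchor_mapping(X, n_anchors=4, eta=1.0, seed=None):
    """Map one view X (N, d) to t-dimensional RBF features against t anchors (Eq. 1).

    Anchors are drawn uniformly at random from the view itself; `eta` is the
    kernel width. Returns an (N, t) matrix with entries in (0, 1]."""
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(X.shape[0], size=n_anchors, replace=False)]
    # squared Euclidean distance between every sample and every anchor
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / eta)

X = np.random.default_rng(0).standard_normal((20, 5))
Phi = rbf_anchor_mapping(X, n_anchors=4, eta=2.0, seed=1)
print(Phi.shape)  # (20, 4)
```

Each row of the output is the \(t\)-dimensional feature \(\phi(\mathbf{x}_{i}^{v})\), so the full matrix plays the role of \(\phi(\mathbf{X}^{v})\in\mathbb{R}^{N\times t}\).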
After \(\phi(\mathbf{X}^{v})\) is obtained from Eq. 1, the affinity graph \(\mathbf{Z}^{v}\) is constructed from each view of \(\phi(\mathbf{X}^{v})\). On this basis, we propose a new variant of the LRR [49] model that makes the learned affinity graph retain the important information of the original data rather than noise and redundant information. We then give the following model:
\[\min_{\mathbf{Z}^{v}}\sum_{v=1}^{M}\left\|\phi(\mathbf{X}^{v})-\mathbf{Z}^{v} \phi(\mathbf{X}^{v})\right\|_{F}^{2}+\left\|\mathbf{Z}^{v}\right\|_{*} \tag{2}\]
where \(\mathbf{Z}^{v}\in\mathbb{R}^{N\times N}\) represents the learned affinity graph for \(v\)-th view data. \(\left\|\mathbf{Z}^{v}\right\|_{*}\) indicates the nuclear norm which calculates the sum of singular values for low-rank constraint. Minimizing the objective function in Eq. 2 aims to preserve important information while exploring the low-rank structure from the original data.
### _Graph-collaborated Auto-Encoders Hashing for Clustering_
To encourage collaborative learning of the affinity graphs from different views, we propose an auto-encoders hashing by multi-graphs model, which can effectively mine consistent and complementary information and generate binary codes for clustering. Different from existing hashing methods, the auto-encoders hashing model preserves both the discrete constraint and the uncorrelated constraint on the binary bits.
\[\begin{split}&\min_{\mathbf{W}^{v},\mathbf{B},\mathbf{Z}^{v}} \sum_{v=1}^{M}(\left\|\mathbf{W}^{v}\mathbf{Z}^{v}-\mathbf{B}\right\|_{F}^{2} +\left\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\right\|_{F}^{2})\\ & s.t.\quad\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I},\mathbf{B}\in \{-1,+1\}^{b\times N},\mathbf{BB}^{T}=N\mathbf{I}\end{split} \tag{3}\]
Eq. 3 aims to minimize the difference between the low-rank affinity graph mappings and the discrete hash codes, where \(b\) denotes the number of binary bits. The decorrelation and code balance constraints \(\mathbf{B}\in\{-1,+1\}^{b\times N},\mathbf{BB}^{T}=N\mathbf{I}\) are imposed to generate mutually independent binary codes and reduce quantization errors. \(\mathbf{W}^{v}\) denotes the projection matrix. For simplicity, we use the transpose of \(\mathbf{W}^{v}\) to replace the inverse mapping matrix of the auto-encoder and add the constraint \(\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I}\) on the projection matrix. Meanwhile, the regularization term \(\left\|\mathbf{W}^{v}\right\|_{F}^{2}\) is constant since \(\left\|\mathbf{W}^{v}\right\|_{F}^{2}=tr(\mathbf{W}^{v}\mathbf{W}^{vT})=tr(\mathbf{I})=const\). Finally, we construct a matrix factorization model on the generated unified binary codes, which avoids the suboptimal results caused by two-step clustering methods. Altogether, Eq. 3 can be reformulated as follows:
\[\begin{split}&\min_{\mathbf{Z}^{v},\mathbf{W}^{v},\mathbf{B},\mathbf{Q},\mathbf{H}}\sum_{v=1}^{M}(\left\|\mathbf{W}^{v}\mathbf{Z}^{v}-\mathbf{B}\right\|_{F}^{2}+\left\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\right\|_{F}^{2})\\ &\hskip 14.226378pt+\lambda\left\|\mathbf{B}-\mathbf{Q}\mathbf{H}\right\|_{F}^{2}\\ & s.t.\quad\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I},\mathbf{B}\in\{-1,\ +1\}^{b\times N},\mathbf{BB}^{T}=N\mathbf{I},\\ &\mathbf{Q}^{T}\mathbf{1}=\mathbf{0},\mathbf{Q}\in\{-1,\ +1\}^{b\times c},\mathbf{H}\in\{0,\ 1\}^{c\times N},\sum_{i=1}^{c}\mathbf{h}_{is}=1\end{split} \tag{4}\]
where \(\lambda\) is the regularization parameter and \(c\) is the number of clusters. \(\mathbf{Q}\) and \(\mathbf{H}\) represent the cluster centroids and the cluster indicator matrix, respectively. Meanwhile, to generate efficient binary codes and maximize the information carried by each bit for clustering, we add a balance constraint on the cluster centroids \(\mathbf{Q}\), which produces efficient codes and adapts our model to the binary clustering task.
### _Overall Objective Function of GCAE_
To make the affinity graphs and the auto-encoders collaborate in learning unified binary codes for clustering, we combine the above graph-collaborated auto-encoders hashing and the low-rank affinity graph learning, both of which are crucial to multi-view clustering. This consideration is fulfilled as follows:
\[\begin{split}&\min\mathcal{L}(\mathbf{Z}^{v},\mathbf{W}^{v},\mathbf{B},\mathbf{Q},\mathbf{H},\mathbf{p}^{v})\\ &=\underbrace{\sum_{v=1}^{M}\left\|\phi(\mathbf{X}^{v})-\mathbf{Z}^{v}\phi(\mathbf{X}^{v})\right\|_{F}^{2}+\left\|\mathbf{Z}^{v}\right\|_{*}}_{\text{Multi-view Affinity Graph Learning}}\\ &\quad+\underbrace{\sum_{v=1}^{M}(\mathbf{p}^{v})^{k}(\left\|\mathbf{W}^{v}\mathbf{Z}^{v}-\mathbf{B}\right\|_{F}^{2}+\left\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\right\|_{F}^{2})+\lambda\left\|\mathbf{B}-\mathbf{Q}\mathbf{H}\right\|_{F}^{2}}_{\text{Auto-Encoders Hashing by Multi-Graphs for Clustering}}\\ & s.t.\quad\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I},\mathbf{B}\in\{-1,\ +1\}^{b\times N},\mathbf{BB}^{T}=N\mathbf{I},\\ &\mathbf{Q}^{T}\mathbf{1}=\mathbf{0},\mathbf{Q}\in\{-1,\ +1\}^{b\times c},\mathbf{H}\in\{0,\ 1\}^{c\times N},\\ &\sum_{v=1}^{M}\mathbf{p}^{v}=1,\mathbf{p}^{v}>0,\sum_{i=1}^{c}\mathbf{h}_{is}=1\end{split} \tag{5}\]
where \(\mathbf{p}^{v}\) indicates the normalized weighting coefficient, which balances the affinity graphs of each view according to their contributions. Besides, we add the constraint \(\mathbf{p}^{v}>0\) to keep the weights positive. In summary, we unify multi-view affinity graph learning and auto-encoders hashing by multi-graphs in a single framework. The proposed GCAE model learns a low-rank affinity graph from each view, which effectively preserves the important information of the original data and improves the quality of the affinity graphs. The model then applies a binary matrix factorization to the unified binary codes, which generates the cluster results in a one-step model. By jointly optimizing the above equation, we obtain the graph-collaborated binary representation and the cluster results. We propose a novel optimization algorithm for the objective function in Eq. 5 in the following section.
### _Optimization Process for GCAE_
In this section, we describe the optimization process for GCAE in detail. Eq. 5 is a nonconvex optimization problem whose global optimum cannot be obtained directly. We develop an auxiliary matrices strategy that separates the problem into two sub-problems: low-rank affinity graph learning and auto-encoder hashing for the optimized cluster results. Based on this strategy, we introduce two auxiliary matrices to relax the nuclear norm and control the rank of the affinity graphs. The proposed auto-encoders then use the affinity graphs of each view to generate the binary codes for clustering. Finally, we develop an alternating iterative strategy, which updates each variable while fixing the others.
The proposed auxiliary matrices strategy introduces the multiplication of two auxiliary matrices \(\mathbf{F}^{v}\) and \(\mathbf{G}^{v}\) (i.e., \(\mathbf{Z}^{v}=\mathbf{F}^{v}\mathbf{G}^{vT}\)) to replace the low-rank affinity graph \(\mathbf{Z}^{v}\), and then problem Eq. 2 can be converted as:
\[\min_{\mathbf{F}^{v},\mathbf{G}^{v}}\left\|\phi(\mathbf{X}^{v})-\mathbf{F}^{v }\mathbf{G}^{vT}\phi(\mathbf{X}^{v})\right\|_{F}^{2} \tag{6}\]
where \(\mathbf{F}^{v}\in\mathbb{R}^{N\times r}\) and \(\mathbf{G}^{v}\in\mathbb{R}^{N\times r}\). \(r\) is a parameter that we use to relax nuclear norm and control the rank of \(\mathbf{Z}^{v}\). And this strategy is based on the fact that \(rank(\mathbf{Z}^{v})=rank(\mathbf{F}^{v}\mathbf{G}^{vT})\leq\min(rank( \mathbf{F}^{v}),rank(\mathbf{G}^{v}))\leq r\) (generally \(r\ll N\)). The updating process of \(\mathbf{F}^{v}\) and \(\mathbf{G}^{v}\) are shown as follows:
**Updating**\(\mathbf{F}^{v}\): Based on the operational rules of matrix trace, Eq. 6 can be unfolded as:
\[\begin{split} O(\mathbf{F}^{v})=\\ tr((\phi(\mathbf{X}^{v})-\mathbf{F}^{v}\mathbf{G}^{vT}\phi( \mathbf{X}^{v}))(\phi(\mathbf{X}^{v})-\mathbf{F}^{v}\mathbf{G}^{vT}\phi( \mathbf{X}^{v}))^{T})\end{split} \tag{7}\]
then we solve for \(\mathbf{F}^{v}\) by setting the derivative \(\frac{\partial O(\mathbf{F}^{v})}{\partial\mathbf{F}^{v}}=\mathbf{0}\):
\[\begin{split}\frac{\partial O(\mathbf{F}^{v})}{\partial\mathbf{F} ^{v}}=& 2\mathbf{F}^{v}\mathbf{G}^{vT}\phi(\mathbf{X}^{v})\phi( \mathbf{X}^{v})^{T}\mathbf{G}^{v}\\ &-2\phi(\mathbf{X}^{v})\phi(\mathbf{X}^{v})^{T}\mathbf{G}^{v}= \mathbf{0}\end{split} \tag{8}\]
Obviously, the closed-form solution of \(\mathbf{F}^{v}\) is:
\[\mathbf{F}^{v}=\phi(\mathbf{X}^{v})\phi(\mathbf{X}^{v})^{T}\mathbf{G}^{v}( \mathbf{G}^{vT}\phi(\mathbf{X}^{v})\phi(\mathbf{X}^{v})^{T}\mathbf{G}^{v})^{ -1} \tag{9}\]
**Updating**\(\mathbf{G}^{v}\): With \(\mathbf{F}^{v}\) fixed, we solve the optimization problem of \(\mathbf{G}^{v}\) in a similar way. The solution of \(\mathbf{G}^{v}\) is easily obtained as:
\[\mathbf{G}^{v}=\mathbf{F}^{v}(\mathbf{F}^{vT}\mathbf{F}^{v})^{-1} \tag{10}\]
Based on the above solutions of \(\mathbf{F}^{v}\) and \(\mathbf{G}^{v}\), we summarize the iterative method in Algorithm 1. Notably, to avoid singular matrix inversions, we add a small smoothing term during optimization. That is, \(\mathbf{F}^{v}=\phi(\mathbf{X}^{v})\phi(\mathbf{X}^{v})^{T}\mathbf{G}^{v}(\mathbf{G}^{vT}\phi(\mathbf{X}^{v})\phi(\mathbf{X}^{v})^{T}\mathbf{G}^{v}+\theta\mathbf{I})^{-1}\) and \(\mathbf{G}^{v}=\mathbf{F}^{v}(\mathbf{F}^{vT}\mathbf{F}^{v}+\theta\mathbf{I})^{-1}\), where \(\theta\) is set in the range of \([10^{-6},10^{-4}]\) in this paper.
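The alternating \(\mathbf{F}^{v}\)/\(\mathbf{G}^{v}\) updates, including the smoothing term \(\theta\mathbf{I}\), can be sketched in NumPy as follows (a minimal sketch of the Algorithm 1 loop; function name, iteration count, and seed are our choices):

```python
import numpy as np

def low_rank_graph(Phi, r=4, theta=1e-5, n_iter=10, seed=None):
    """Alternate the closed-form F/G updates (a sketch of Algorithm 1).

    Phi: (N, t) kernelized view. Returns Z = F @ G.T, whose rank is at most r.
    `theta` is the small smoothing term that keeps the inverses well posed."""
    rng = np.random.default_rng(seed)
    N = Phi.shape[0]
    K = Phi @ Phi.T                      # phi(X) phi(X)^T, shared by both updates
    G = rng.standard_normal((N, r))
    for _ in range(n_iter):
        KG = K @ G
        F = KG @ np.linalg.inv(G.T @ KG + theta * np.eye(r))   # Eq. 9 with theta*I
        G = F @ np.linalg.inv(F.T @ F + theta * np.eye(r))     # Eq. 10 with theta*I
    return F @ G.T

Phi = np.random.default_rng(0).standard_normal((30, 8))
Z = low_rank_graph(Phi, r=4, seed=1)
print(Z.shape)  # (30, 30), rank at most 4
```

Because \(\mathbf{Z}^{v}=\mathbf{F}^{v}\mathbf{G}^{vT}\) with \(r\) columns, the rank bound \(rank(\mathbf{Z}^{v})\leq r\) holds by construction.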
After the above low-rank affinity graph learning with the auxiliary matrices strategy, we rewrite the auto-encoder hashing objective for clustering as Eq. 11. To obtain its optimal solution, we propose an iterative optimization method, which effectively maintains the discrete constraints on the binary codes and yields more efficient binary codes for clustering.
\[\begin{split}&\min_{\mathbf{Z}^{v},\mathbf{W}^{v},\mathbf{B},\mathbf{Q},\mathbf{H},\mathbf{p}^{v}}\sum_{v=1}^{M}(\|\mathbf{F}^{v}\mathbf{G}^{vT}-\mathbf{Z}^{v}\|_{F}^{2}+(\mathbf{p}^{v})^{k}(\|\mathbf{W}^{v}\mathbf{Z}^{v}-\mathbf{B}\|_{F}^{2}\\ &\qquad\qquad\qquad+\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\|_{F}^{2}))+\lambda\left\|\mathbf{B}-\mathbf{Q}\mathbf{H}\right\|_{F}^{2}\\ s.t.&\quad\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I},\mathbf{B}\in\{-1,\;+1\}^{b\times N},\mathbf{B}\mathbf{B}^{T}=N\mathbf{I},\\ &\mathbf{Q}^{T}\mathbf{1}=\mathbf{0},\mathbf{Q}\in\{-1,\;+1\}^{b\times c},\mathbf{H}\in\{0,\;1\}^{c\times N},\\ &\sum_{v=1}^{M}\mathbf{p}^{v}=1,\mathbf{p}^{v}>0,\sum_{i=1}^{c}\mathbf{h}_{is}=1\end{split} \tag{11}\]
**Updating**\(\mathbf{Z}^{v}\): By fixing all variables but \(\mathbf{Z}^{v}\), problem (11) reduces to:
\[\begin{split}\min_{\mathbf{Z}}&\sum_{v=1}^{M}(\| \mathbf{F}^{v}\mathbf{G}^{vT}-\mathbf{Z}^{v}\|_{F}^{2}\\ &\quad+(\mathbf{p}^{v})^{k}(\|\mathbf{W}^{v}\mathbf{Z}^{v}- \mathbf{B}\|_{F}^{2}+\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\|_{F}^{2})) \end{split} \tag{12}\]
To minimize the above equation, we convert its Frobenius norms into the matrix trace form, which is convenient for differentiation. We then take the derivative with respect to \(\mathbf{Z}^{v}\) as follows:
\[\begin{split}\mathcal{L}_{\mathbf{Z}^{v}}&=\sum_{v=1}^{M}tr(\mathbf{Z}^{v}\mathbf{Z}^{vT}-2\mathbf{F}^{v}\mathbf{G}^{vT}\mathbf{Z}^{vT}\\ &\quad+(\mathbf{p}^{v})^{k}(\mathbf{Z}^{vT}\mathbf{W}^{vT}\mathbf{W}^{v}\mathbf{Z}^{v}-4\mathbf{Z}^{vT}\mathbf{W}^{vT}\mathbf{B}+\mathbf{Z}^{v}\mathbf{Z}^{vT}))+const\end{split} \tag{13}\]

Setting the derivative \(\frac{\partial\mathcal{L}_{\mathbf{Z}^{v}}}{\partial\mathbf{Z}^{v}}=\mathbf{0}\) yields the closed-form solution for each view:

\[\mathbf{Z}^{v}=\left((1+(\mathbf{p}^{v})^{k})\mathbf{I}+(\mathbf{p}^{v})^{k}\mathbf{W}^{vT}\mathbf{W}^{v}\right)^{-1}\left(\mathbf{F}^{v}\mathbf{G}^{vT}+2(\mathbf{p}^{v})^{k}\mathbf{W}^{vT}\mathbf{B}\right) \tag{14}\]
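Setting the gradient of the per-view objective in Eq. 12 to zero gives a linear system in \(\mathbf{Z}^{v}\). A minimal NumPy sketch of this closed-form update (our own derivation and naming; shapes follow the paper's \(\mathbf{W}^{v}\in\mathbb{R}^{b\times N}\)):

```python
import numpy as np

def update_Z(FGt, W, B, p, k=2):
    """Closed-form Z update obtained by setting the gradient of Eq. 12 to zero
    (our derivation). FGt: (N, N) low-rank graph F G^T, W: (b, N), B: (b, N)."""
    pk = p ** k
    A = (1.0 + pk) * np.eye(FGt.shape[0]) + pk * (W.T @ W)
    return np.linalg.solve(A, FGt + 2.0 * pk * (W.T @ B))

rng = np.random.default_rng(0)
N, b = 12, 4
FGt = rng.standard_normal((N, N))
Qf, _ = np.linalg.qr(rng.standard_normal((N, b)))
W = Qf.T                                 # row-orthonormal, so W @ W.T = I
B = np.where(rng.standard_normal((b, N)) >= 0, 1.0, -1.0)
Z = update_Z(FGt, W, B, p=0.5)
# the solution zeroes the gradient of the per-view objective in Eq. 12
grad = 2 * (Z - FGt) + 2 * 0.5**2 * (W.T @ (W @ Z - B) + Z - W.T @ B)
print(float(np.abs(grad).max()) < 1e-8)  # True
```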
**Updating**\(\mathbf{W}^{v}\): In this stage, the other variables are fixed. Taking the constraint \(\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I}\) into consideration, the loss function related to \(\mathbf{W}^{v}\) in Eq. 11 can be rewritten as:
\[\begin{split}\mathcal{L}_{\mathbf{W}^{v}}&=\min_{ \mathbf{W}^{v}}\sum_{v=1}^{M}(\|\mathbf{W}^{v}\mathbf{Z}^{v}-\mathbf{B}\|_{F}^{ 2}+\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\|_{F}^{2})\\ &=\max tr(\mathbf{W}^{v}\mathbf{Z}^{v}\mathbf{B}^{T})\\ s.t.&\mathbf{W}^{v}\mathbf{W}^{vT}=\mathbf{I}\end{split} \tag{15}\]
where the condition \(\mathbf{B}\mathbf{B}^{T}=N\mathbf{I}\) is also used during the optimization process. Specifically, we first convert Eq. 15 into the matrix trace form and then apply the SVD algorithm [50] to solve the optimization problem:
\[\mathbf{W}^{v}=\mathbf{D}\mathbf{S}^{T} \tag{16}\]
where \(\mathbf{S}\) and \(\mathbf{D}\) are the left and right singular vectors of the compact Singular Value Decomposition (SVD) of \(\mathbf{Z}^{v}\mathbf{B}^{T}\).
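In NumPy's SVD convention, with the compact decomposition \(\mathbf{Z}^{v}\mathbf{B}^{T}=\mathbf{U}\Sigma\mathbf{V}^{T}\), the row-orthonormal maximizer of \(tr(\mathbf{W}^{v}\mathbf{Z}^{v}\mathbf{B}^{T})\) is \(\mathbf{V}\mathbf{U}^{T}\); a hedged sketch (our naming, not the authors' code):

```python
import numpy as np

def update_W(Z, B):
    """Solve max tr(W Z B^T) s.t. W W^T = I (Eq. 15) via a compact SVD.

    Z: (N, N), B: (b, N); returns the row-orthonormal W of shape (b, N)."""
    U, _, Vt = np.linalg.svd(Z @ B.T, full_matrices=False)  # U: (N, b), Vt: (b, b)
    return Vt.T @ U.T

rng = np.random.default_rng(1)
N, b = 10, 3
Z = rng.standard_normal((N, N))
B = np.where(rng.standard_normal((b, N)) >= 0, 1.0, -1.0)
W = update_W(Z, B)
print(np.allclose(W @ W.T, np.eye(b)))  # True: the orthogonality constraint holds
```

This is the classic orthogonal-Procrustes construction: substituting \(\mathbf{W}=\mathbf{V}\mathbf{U}^{T}\) makes the objective equal the sum of singular values, the largest achievable value.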
**Updating**\(\mathbf{B}\): Common gradient methods are not suitable for solving the discrete binary codes \(\mathbf{B}\). We rewrite Eq. 11 with respect to \(\mathbf{B}\) as follows:
\[\begin{split}&\max_{\mathbf{B}}tr(\mathbf{B}^{T}(2\sum_{v=1}^{M} \left(\mathbf{p}^{v}\right)^{k}\mathbf{W}^{v}\mathbf{Z}^{v}+\lambda\mathbf{Q} \mathbf{H}))\\ s.t.&\mathbf{B}\in\{-1,\;+1\}^{b\times N},\mathbf{B} \mathbf{B}^{T}=N\mathbf{I}\end{split} \tag{17}\]
We preserve the constraints on \(\mathbf{B}\), which generate compact and effective binary codes. Thus, the optimal binary codes \(\mathbf{B}\) can be obtained as follows, where \(sgn(\cdot)\) denotes the sign function:
\[\mathbf{B}=sgn(\sum_{v=1}^{M}\left(2(\mathbf{p}^{v})^{k}\mathbf{W}^{v} \mathbf{Z}^{v}\right)+\lambda\mathbf{Q}\mathbf{H}) \tag{18}\]
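The sign-based update of Eq. 18 can be sketched as follows (a minimal NumPy illustration; the tie-breaking of exact zeros is our own convention):

```python
import numpy as np

def update_B(W_list, Z_list, p_list, Q, H, lam=1.0, k=2):
    """Sign-based B update of Eq. 18 (a sketch). W_v: (b, N), Z_v: (N, N),
    Q: (b, c) centroids, H: (c, N) one-hot indicators."""
    acc = lam * (Q @ H)
    for W, Z, p in zip(W_list, Z_list, p_list):
        acc = acc + 2.0 * (p ** k) * (W @ Z)
    B = np.sign(acc)
    B[B == 0] = 1                        # resolve exact ties into {-1, +1}
    return B

rng = np.random.default_rng(2)
N, b, c = 8, 4, 3
Zs = [rng.standard_normal((N, N)) for _ in range(2)]
Ws = [rng.standard_normal((b, N)) for _ in range(2)]
Q = np.where(rng.standard_normal((b, c)) >= 0, 1.0, -1.0)
H = np.zeros((c, N)); H[rng.integers(0, c, N), np.arange(N)] = 1.0
B = update_B(Ws, Zs, [0.6, 0.4], Q, H)
print(B.shape)  # (4, 8), every entry in {-1, +1}
```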
**Updating**\(\mathbf{Q}\) and \(\mathbf{H}\): In this part, we iteratively optimize the binary clustering model by matrix factorization in Hamming space. Removing the irrelevant terms, the problem can be rewritten as:
\[\begin{split}&\min_{\mathbf{Q},\mathbf{H}}\left\|\mathbf{B}- \mathbf{Q}\mathbf{H}\right\|_{F}^{2}\\ & s.t.\quad\mathbf{Q}^{T}\mathbf{1}=\mathbf{0},\mathbf{Q}\in\{-1, \;+1\}^{b\times c},\mathbf{H}\in\{0,\;1\}^{c\times N},\\ &\sum_{i=1}^{c}\mathbf{h}_{is}=1,\quad\mathbf{B}\in\{-1,\;+1\}^{ b\times N}\end{split} \tag{19}\]
We simply reformulate Eq. 19 to:
\[\begin{split}&\min_{\mathbf{Q},\mathbf{H}}\left\|\mathbf{B}- \mathbf{Q}\mathbf{H}\right\|_{F}^{2}+\rho\Big{\|}\mathbf{Q}^{T}\mathbf{1} \Big{\|}^{2}\\ & s.t.\quad\mathbf{Q}\in\{-1,\;+1\}^{b\times c},\mathbf{H}\in\{0, \;1\}^{c\times N},\sum_{i=1}^{c}\mathbf{h}_{is}=1\end{split} \tag{20}\]
which is equivalent to Eq. 19 for a sufficiently large \(\rho\). The proposed matrix factorization strategy iteratively optimizes the cluster centroids and indicators as follows.
**Updating**\(\mathbf{Q}\): Due to the discrete constraint on \(\mathbf{Q}\), we utilize the discrete proximal linearized minimization (DPLM) [51] method, which can effectively obtain high-quality binary solutions. With \(\mathbf{H}\) fixed, the optimal \(\mathbf{Q}\) is obtained as follows:
\[\begin{split}\min\mathcal{L}_{\mathbf{Q}}&=\left\|\mathbf{B}-\mathbf{Q}\mathbf{H}\right\|_{F}^{2}+\rho\Big{\|}\mathbf{Q}^{T}\mathbf{1}\Big{\|}^{2}\\ &=-2tr(\mathbf{B}^{T}\mathbf{Q}\mathbf{H})+\rho\Big{\|}\mathbf{Q}^{T}\mathbf{1}\Big{\|}^{2}+const\\ & s.t.\quad\mathbf{Q}\in\{-1,\;+1\}^{b\times c}\end{split} \tag{21}\]
According to the DPLM, in the \(t+1\)-th iteration \(\mathbf{Q}\) can be updated as:
\[\mathbf{Q}^{t+1}=sgn(\mathbf{Q}^{t}-\frac{1}{\mu}\nabla\mathcal{L}_{\mathbf{Q }^{t}}) \tag{22}\]
where \(\nabla\mathcal{L}_{\mathbf{Q}^{t}}\) represents the gradient of \(\mathcal{L}_{\mathbf{Q}}\) at \(\mathbf{Q}^{t}\) and \(\mu\) is the step-size parameter.
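One DPLM iteration of Eq. 22 can be sketched as a sign-projected gradient step (a NumPy illustration under our own gradient derivation of \(\mathcal{L}_{\mathbf{Q}}\); step size and tie handling are our choices):

```python
import numpy as np

def dplm_step(Q, B, H, rho=1.0, mu=10.0):
    """One DPLM iteration for Eq. 22 (a sketch): a sign-projected gradient step on
    L_Q = ||B - Q H||_F^2 + rho * ||Q^T 1||^2."""
    ones = np.ones((Q.shape[0], 1))
    grad = -2.0 * (B @ H.T) + 2.0 * Q @ (H @ H.T) + 2.0 * rho * ones @ (ones.T @ Q)
    Qn = np.sign(Q - grad / mu)
    Qn[Qn == 0] = 1                      # keep all entries in {-1, +1}
    return Qn

rng = np.random.default_rng(3)
b, c, N = 4, 3, 10
B = np.where(rng.standard_normal((b, N)) >= 0, 1.0, -1.0)
H = np.zeros((c, N)); H[rng.integers(0, c, N), np.arange(N)] = 1.0
Q0 = np.where(rng.standard_normal((b, c)) >= 0, 1.0, -1.0)
Q1 = dplm_step(Q0, B, H)
print(Q1.shape)  # (4, 3)
```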
**Updating**\(\mathbf{H}\): We utilize a vector-based method to optimize the indicator matrix \(\mathbf{H}\); the solution for \(\mathbf{h}_{i,j}\) is easily obtained by
\[\mathbf{h}_{i,j}^{t+1}=\left\{\begin{array}{ll}1,&j=\arg\min_{s}D(\mathbf{ b}_{i},\mathbf{q}_{s}^{t+1})\\ 0,&otherwise\end{array}\right. \tag{23}\]
where \(D(\mathbf{b}_{i},\mathbf{q}_{s}^{t+1})\) is the distance between the \(i\)-th binary code \(\mathbf{b}_{i}\) and the \(s\)-th cluster centroid \(\mathbf{q}_{s}\) in Hamming space. Notably, Eq. 23 operates on binary codes rather than real values, so the Hamming distance can be adopted, which saves time compared to the Euclidean distance.
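Because the codes are in \(\{-1,+1\}\), the Hamming distance is \((b-\langle\mathbf{b}_{i},\mathbf{q}_{s}\rangle)/2\), so the assignment of Eq. 23 reduces to a matrix product and an argmax (a minimal sketch; naming is ours):

```python
import numpy as np

def update_H(B, Q):
    """Nearest-centroid assignment of Eq. 23 (a sketch). For {-1,+1} codes the
    Hamming distance is (b - <b_i, q_s>) / 2, so minimizing it is the same as
    maximizing the inner product. B: (b, N), Q: (b, c) -> H: (c, N)."""
    j = np.argmax(Q.T @ B, axis=0)       # nearest centroid index per sample
    H = np.zeros((Q.shape[1], B.shape[1]))
    H[j, np.arange(B.shape[1])] = 1.0    # one-hot indicator columns
    return H

rng = np.random.default_rng(4)
b, c, N = 6, 3, 12
B = np.where(rng.standard_normal((b, N)) >= 0, 1.0, -1.0)
Q = np.where(rng.standard_normal((b, c)) >= 0, 1.0, -1.0)
H = update_H(B, Q)
print(H.sum(axis=0))  # every column selects exactly one centroid
```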
**Updating**\(\mathbf{p}^{v}\): For simplicity, let \(\mathrm{a}^{v}=\|\mathbf{W}^{v}\mathbf{Z}^{v}-\mathbf{B}\|_{F}^{2}+\|\mathbf{Z}^{v}-\mathbf{W}^{vT}\mathbf{B}\|_{F}^{2}\). Based on the attributes of the different views, the weighting coefficients \(\mathbf{p}^{v}\) are obtained from the following equivalent problem:
\[\min_{\mathbf{p}^{v}}\sum_{v=1}^{M}\left(\mathbf{p}^{v}\right)^{k}\!\mathrm{a} ^{v}\qquad s.t.\quad\sum_{v=1}^{M}\mathbf{p}^{v}=1,\!\mathbf{p}^{v}>0 \tag{24}\]
Due to the constraint on \(\mathbf{p}^{v}\), we can solve this problem by the Lagrange multiplier method. By setting the Lagrange multiplier \(\Gamma\), Eq. 24 can be rewritten as:
\[\min\mathcal{L}(\mathbf{p}^{v},\Gamma)=\sum_{v=1}^{M}\left(\mathbf{p}^{v} \right)^{k}\!\mathrm{a}^{v}-\Gamma(\sum_{v=1}^{M}\mathbf{p}^{v}-1) \tag{25}\]
We then calculate the partial derivatives of \(\mathcal{L}(\mathbf{p}^{v},\Gamma)\) with respect to \(\mathbf{p}^{v}\) and \(\Gamma\):
\[\left\{\begin{array}{ll}\frac{\partial\mathcal{L}}{\partial\mathbf{p}^{v}}&=k(\mathbf{p}^{v})^{k-1}\mathrm{a}^{v}-\Gamma\\ \frac{\partial\mathcal{L}}{\partial\Gamma}&=-(\sum_{v=1}^{M}\mathbf{p}^{v}-1)\end{array}\right. \tag{26}\]
Setting the partial derivatives to zero, we get:
\[\mathbf{p}^{v}=\frac{(\mathrm{a}^{v})^{\frac{1}{1-k}}}{\sum\limits_{v=1}^{M}( \mathrm{a}^{v})^{\frac{1}{1-k}}} \tag{27}\]
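The closed-form weights of Eq. 27 are a one-liner; a small sketch that also illustrates the behavior (views with smaller loss \(\mathrm{a}^{v}\) get larger weight when \(k>1\)):

```python
import numpy as np

def update_p(a, k=2):
    """Closed-form view weights of Eq. 27 (a sketch): p_v ∝ a_v^{1/(1-k)}, k > 1,
    where a_v is the reconstruction loss of view v."""
    a = np.asarray(a, dtype=float)
    w = a ** (1.0 / (1.0 - k))
    return w / w.sum()

p = update_p([1.0, 4.0], k=2)
print(p)  # [0.8 0.2]: the view with the smaller loss receives the larger weight
```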
We have presented the whole optimization process for Eq. 11 and summarize it in Algorithm 2, which iteratively updates the variables until convergence.
## IV Experiments
In this section, we describe the datasets and comparison methods used to verify the effectiveness of the proposed GCAE on clustering tasks. We evaluate the performance of GCAE against several hashing methods and multi-view clustering methods on \(5\) widely used datasets. Moreover, we analyze the parameter sensitivity of our model, which affects the stability of the results. We also summarize the running times on all datasets and evaluate the convergence of our model. All experiments are conducted with Matlab 2020a on a Windows PC with an Intel 2.8-GHz CPU and 64 GB of RAM.
### _Experimental settings_
In this part, we describe the utilized datasets and the comparison methods in detail. We also introduce some evaluation metrics which aim to verify the effectiveness of GCAE.
#### Iv-A1 Datasets
To evaluate the clustering performance of the proposed model and comparison models, five widely-accepted multi-view datasets are selected. The details of the selected datasets are as follows:
**Caltech101**1 contains 9144 samples of 101 object categories. Six publicly available features are utilized as multiple views: a 48-dimensional Gabor feature, a 928-dimensional LBP feature, a 512-dimensional GIST feature, a 254-dimensional CENTRIST feature, 40-dimensional wavelet moments (WM), and a 1984-dimensional HOG feature.
**Caltech256**2 contains 30607 images described by 4 kinds of features. It includes 256 classes with more than 80 samples per class. Each image is represented by a 729-dimensional color histogram, a 1024-dimensional GIST feature, a 1152-dimensional HOG feature, and a 1440-dimensional convolutional network feature, i.e., four different types of representation.
Footnote 1: [http://www.vision.caltech.edu/ImageDatasets/Caltech101/](http://www.vision.caltech.edu/ImageDatasets/Caltech101/)
**Cifar-10**3 is composed of 60000 tiny color images in 10 classes. Specifically, we represent the features by
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Datasets & Metrics & SH & DSH & SP & ITQ & SGH & RSSH & RFDH & HSIC & BMVC & GCAE \\ \hline \multirow{8}{*}{100leaves} & ACC & 0.4713 & 0.4494 & 0.4750 & 0.4569 & 0.5088 & 0.3631 & 0.4513 & 0.6563 & 0.4981 & **0.8888** \\ & NMI & 0.7214 & 0.7320 & 0.7240 & 0.7405 & 0.7579 & 0.6203 & 0.7161 & 0.8245 & 0.7291 & **0.9426** \\ & Purity & 0.4950 & 0.4988 & 0.5138 & 0.4944 & 0.5338 & 0.3888 & 0.4838 & 0.6788 & 0.5331 & **0.9031** \\ & F-score & 0.3471 & 0.3219 & 0.3312 & 0.3324 & 0.3996 & 0.2209 & 0.3234 & 0.5431 & 0.3057 & **0.8366** \\ & Precision & 0.3184 & 0.2653 & 0.2942 & 0.2528 & 0.3626 & 0.2094 & 0.2708 & 0.5128 & 0.2547 & **0.8166** \\ & ARI & 0.3404 & 0.3141 & 0.3240 & 0.3240 & 0.3933 & 0.2131 & 0.3158 & 0.5385 & 0.2978 & **0.8350** \\ \hline \multirow{8}{*}{Caltech-101} & ACC & 0.1747 & 0.1610 & 0.2077 & 0.2427 & 0.2256 & 0.2860 & 0.2196 & 0.2429 & 0.2930 & **0.3005** \\ & NMI & 0.3252 & 0.3600 & 0.4034 & 0.4404 & 0.4349 & 0.4846 & 0.4424 & 0.4451 & **0.4900** & 0.4711 \\ & Purity & 0.3089 & 0.3439 & 0.3878 & 0.4231 & 0.4051 & 0.4729 & 0.4228 & 0.4125 & **0.4907** & 0.4421 \\ & F-score & 0.1486 & 0.1250 & 0.1738 & 0.2311 & 0.2089 & 0.2533 & 0.2081 & 0.2055 & 0.2465 & **0.3023** \\ & Precision & 0.2604 & 0.1975 & 0.2897 & 0.3544 & 0.3085 & 0.4071 & 0.3419 & 0.3430 & 0.4147 & **0.4229** \\ & ARI & 0.1347 & 0.1091 & 0.1596 & 0.2167 & 0.1935 & 0.2406 & 0.1943 & 0.1919 & 0.2336 & **0.2880** \\ \hline \multirow{8}{*}{Cifar-10} & ACC & 0.1708 & 0.2189 & 0.2275 & 0.2245 & 0.2205 & 0.1960 & 0.2252 & 0.2153 & 0.2350 & **0.2498** \\ & NMI & 0.0282 & 0.0938 & 0.0979 & 0.0987 & 0.0995 & 0.0659 & 0.1011 & 0.0920 & 0.1016 & **0.1020** \\ & Purity & 0.1727 & 0.22271 & 0.2285 & 0.2297 & 0.2218 & 0.2046 & 0.2333 & 0.2236 & 0.2368 & **0.2549** \\ & F-score & 0.1137 & 0.1462 & 0.1342 & 0.1266 & 0.1458 & 0.1316 & 0.1560 & 0.1441 & 0.1590 & **0.1597** \\ & Precision & 0.1129 & 0.1520 & 0.1495 & 0.1544 & 0.1408 & 0.1284 & 0.1457 & 
0.1420 & 0.1533 & **0.1563** \\ & ARI & 0.0145 & 0.0624 & 0.0584 & 0.0610 & 0.0600 & 0.0325 & 0.0582 & 0.0476 & 0.0618 & **0.0642** \\ \hline \multirow{8}{*}{SUNRGBD} & ACC & 0.1164 & 0.1577 & 0.1925 & 0.1895 & 0.1823 & 0.1613 & 0.1738 & 0.1616 & 0.1379 & **0.2423** \\ & NMI & 0.1319 & 0.2198 & 0.2180 & 0.2108 & 0.2172 & 0.1980 & 0.2032 & 0.2202 & 0.1545 & **0.2207** \\ \cline{1-1} & Purity & 0.2441 & 0.3271 & 0.3418 & 0.3366 & 0.3433 & 0.3279 & 0.3332 & **0.3524** & 0.2803 & 0.3435 \\ \cline{1-1} & F-score & 0.0650 & 0.1033 & 0.1223 & 0.1274 & 0.1184 & 0.0976 & 0.1145 & 0.1059 & 0.0822 & **0.1541** \\ \cline{1-1} & Precision & 0.1193 & 0.1861 & 0.1722 & 0.1802 & 0.1822 & 0.1869 & 0.1901 & **0.2013** & 0.1550 & 0.1909 \\ \cline{1-1} & ARI & 0.0309 & 0.0699 & 0.0895 & 0.0951 & 0.0859 & 0.0661 & 0.0823 & 0.0744 & 0.0496 & **0.1076** \\ \hline \multirow{8}{*}{Caltech256} & ACC & 0.0622 & 0.0782 & 0.0856 & 0.0865 & 0.0826 & 0.0937 & 0.0783 & 0.0678 & 0.0932 & **0.1071** \\ \cline{1-1} & NMI & 0.2557 & 0.2866 & 0.2947 & 0.2717 & 0.2969 & 0.3058 & 0.2855 & 0.2350 & **0.3185** & 0.2871 \\ \cline{1-1} & Purity & 0.1096 & 0.1255 & 0.1399 & 0.1386 & 0.1363 & 0.1512 & 0.1278 & 0.1064 & **0.1534** & 0.1415 \\ \cline{1-1} & F-score & 0.0465 & 0.0644 & 0.0635 & 0.0975 & 0.0623 & 0.0805 & 0.0610 & 0.0402 & 0.0745 & **0.1145** \\ \cline{1-1} & Precision & 0.0527 & 0.0681 & 0.0658 & 0.0950 & 0.0649 & 0.0932 & 0.0644 & 0.0305 & 0.0811 & **0.1037** \\ \cline{1-1} & ARI &
\\ \hline \hline \end{tabular}
\end{table}
DSD feature with 220 dimensions, HOG feature with 512 dimensions and GIST feature with 768 dimensions. In addition, a subset of this dataset containing 10000 samples is used in the experiments.
**100leaves4[52]** contains 1600 samples drawn from 100 plant species (16 per species) in the UCI repository, with three views, i.e., Shape, Texture and Margin features.
Footnote 4: [https://archive.ics.uci.edu/ml/dataset](https://archive.ics.uci.edu/ml/dataset)
**SUNRGBD5** contains 10335 indoor scene images captured by 3D cameras, covering 45 classes. Similar to [53], we utilize two 4096-dimensional views whose features are extracted by different convolutional networks.
Footnote 5: [http://rgbd.cs.princeton.edu/](http://rgbd.cs.princeton.edu/)
#### IV-A2 Comparing Algorithms and Evaluation Metrics
In our experiments, we evaluate the performance of GCAE by comparing it with several state-of-the-art multi-view and hash algorithms. Specifically, we adopt seven single-view hash methods and two multi-view hash clustering algorithms: SH [23], DSH [54], SP [55], ITQ [56], SGH [25], RSSH [34], RFDH [36], HSIC [39], and BMVC [32]. For the single-view hash methods, we apply the K-means algorithm to the generated binary codes for clustering and report the best result among the views. Besides, we adopt eight real-valued multi-view clustering algorithms for comparison: K-means [42], SC [57], Co-regularize [26], AMGL [40], Mul-NMF [58], MLAN [59], mPAC [60], and GMC [7]. We use the source code released on the authors' public homepages. Notably, we set the length of the binary code to 128 bits in our experiments, which effectively preserves critical information.
To comprehensively evaluate our method, we adopt six widely used evaluation metrics, i.e., clustering accuracy (ACC), Normalized Mutual Information (NMI), Purity, F-score, Precision and ARI [61, 7]. For all algorithms, higher metric values indicate better performance.
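For reference, ACC and Purity (the two metrics without a standard library one-liner) can be sketched as below; the toy labels are only illustrative, and NMI and ARI are available in scikit-learn as `normalized_mutual_info_score` and `adjusted_rand_score`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one map from cluster labels to classes (Hungarian matching)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    cont = np.zeros((n, n), dtype=np.int64)     # contingency matrix
    np.add.at(cont, (y_true, y_pred), 1)
    rows, cols = linear_sum_assignment(-cont)   # maximize matched counts
    return cont[rows, cols].sum() / len(y_true)

def purity(y_true, y_pred):
    """Purity: each predicted cluster is credited with its majority true class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return sum(np.bincount(y_true[y_pred == c]).max()
               for c in np.unique(y_pred)) / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])   # same partition, permuted cluster ids
print(clustering_accuracy(y_true, y_pred), purity(y_true, y_pred))  # 1.0 1.0
```

Both metrics are invariant to the arbitrary numbering of clusters, which is why the permuted toy labels still score 1.0.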
### _Experimental Results and Analysis_
In this section, we conduct experiments on five multi-view datasets to show the superiority of the proposed GCAE. The detailed clustering results are shown in Tables II and III, where bold values indicate the best performance on each dataset. Besides, this section presents a parameter sensitivity analysis, which reflects the clustering results under different parameter settings. Finally, we provide the complexity analysis and convergence analysis of the proposed model to verify the stability of GCAE.
#### IV-B1 Comparison with Hash Methods
In this section, we conduct experiments with hash methods on five multi-view datasets to verify the performance of the proposed model. We compare GCAE with seven single-view hash methods and two multi-view hash clustering methods. For the single-view hash methods, we obtain sample labels by applying the K-means algorithm to the generated binary codes. The results on the different datasets are reported in Table II.
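A minimal numpy-only sketch of this labeling step is given below; random 0/1 codes stand in for the learned 128-bit codes, and the small Lloyd's-algorithm routine stands in for the K-means implementation used in the experiments.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm, used here to assign cluster labels to binary codes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance from every sample to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(300, 128)).astype(float)  # 128-bit codes as a 0/1 matrix
labels = kmeans(B, k=10)                               # one cluster label per sample
print(labels.shape)
```

On {0,1}-valued codes, squared Euclidean distance is proportional to the Hamming distance, so this is the natural choice of metric for clustering binary codes.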
We summarize the clustering performance of the compared methods on all multimedia datasets in Table II, with the best results highlighted in bold. As shown in Table II, GCAE obtains the best clustering accuracy among the hash methods on all five multi-view datasets. Specifically, the proposed GCAE evidently outperforms the other state-of-the-art methods on the 100leaves and Cifar-10 datasets. For the 100leaves dataset, GCAE improves upon the second-best method by around \(23.2\%\), \(11.8\%\), and \(22.4\%\) in terms of ACC, NMI, and Purity, respectively. Moreover, for the Cifar-10 dataset, the improvements over the second-best method are \(1.5\%\), \(0.4\%\), and \(1.81\%\) in terms of ACC, NMI, and Purity.
The experiments demonstrate that the multi-view hash clustering methods achieve better results than the single-view methods, which shows that multi-view methods can exploit complementary information across views. For the single-view methods, we perform clustering on each view and report the best result in the table above. Generally speaking, multi-view methods take the complementary and consistent information among multi-view data into consideration; therefore, they achieve better performance, whereas single-view methods cannot obtain satisfactory performance in most situations. The results in Table II also verify that low-rank affinity graph construction is a vital part of the proposed GCAE, as it effectively generates binary codes that retain essential information.
Overall, the clustering performance of the proposed model outperforms the other hash methods in most situations. This indicates that GCAE can effectively preserve essential information through its auto-encoder structure while keeping the discrete constraint on the binary codes. Besides, GCAE is able to integrate comprehensive information from multi-view data.
#### IV-B2 Multi-view Methods Experimental Results and Analysis
In this section, we present the detailed clustering comparison with multi-view clustering methods in Table III. As shown in this table, we compare GCAE with eight real-valued clustering methods and two hash clustering methods. Moreover, for the single-view methods, i.e., K-means and SC, we concatenate all views into one view for clustering.
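This single-view baseline preprocessing amounts to stacking the per-view feature matrices along the feature axis, as in the sketch below (random arrays stand in for real features; the view dimensions follow the Cifar-10 description above).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                  # toy sample count
# Stand-ins for the Cifar-10 views mentioned above: DSD (220-d), HOG (512-d), GIST (768-d).
views = [rng.normal(size=(n, d)) for d in (220, 512, 768)]
X = np.concatenate(views, axis=1)        # one concatenated "single view"
print(X.shape)                           # (100, 1500)
```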
As shown in Table III, GCAE clearly obtains the best clustering accuracy on all multi-view datasets. Specifically, the proposed GCAE model outperforms all other methods on ACC, NMI, Purity, F-score, Precision, and ARI in most situations. For the 100leaves dataset, the improvements over the second-best method are \(1.9\%\), \(1.4\%\) and \(14.1\%\) in terms of ACC, NMI, and Purity, respectively. Moreover, for the Cifar-10 dataset, the improvements over the second-best method in the same metrics are \(1.5\%\), \(0.4\%\), and \(1.81\%\).
We conducted experiments comparing real-valued multi-view clustering methods with hash methods, aiming to verify the stability of clustering in Hamming space. It is clear that hash methods obtain satisfactory performance in most situations. This is because real-valued methods use the Euclidean distance to measure the similarity between samples, which suffers from low efficiency and high computational complexity, whereas hash methods learn binary codes and obtain cluster results in Hamming space, which improves computational efficiency. The adoption of the affinity graph is more conducive to preserving the vital information of the original data; using hash methods alone mainly reduces the computational complexity and does not, by itself, significantly improve the evaluation metrics.
It is worth noting that the proposed GCAE obtains the best clustering accuracy and satisfactory performance on the other evaluation metrics for all five multi-view datasets. This is because GCAE explores the consistent and complementary information in the multi-view data structure by learning low-rank affinity graphs, which yields a high-quality collaborative representation and eliminates redundant and noisy information in the original real-valued features. Besides, GCAE also significantly reduces the computational complexity of the algorithm.
### _Parameter Sensitivity and Complexity Analysis_
In this section, we investigate how the fluctuation of several parameters influences the experiments. To evaluate the parameter sensitivity of the proposed GCAE, we select 100leaves, Caltech-101, Cifar-10, and SUNRGBD as benchmark datasets. GCAE has six parameters to be tuned, i.e., \(\lambda,k,t,\theta,b\) and \(r\), which represent the regularization parameter for binary clustering, the power of the normalized weighting coefficient, the dimension of the nonlinear kernel, the small smoothing item, the number of binary bits, and the rank of the affinity graphs, respectively. We first conduct experiments on the Caltech-101, Cifar-10, and SUNRGBD datasets to show how varying \(\lambda,k,t\) and \(\theta\) influences the results. Specifically, we compute the clustering accuracy values and display them as line charts, from which the influence of the changing parameters can be observed intuitively. Fig. 2 shows the different trends on the three benchmark datasets. In Fig. 2(a), we can observe that the different \(\lambda\) values affect
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Datasets & Metrics & k-means & SC & Co-re-p & Co-re-c & AMGL & Mul-NMF & MLAN & mPAC & GMC & HSIC & BMVC & GCAE \\ \hline \multirow{8}{*}{100leaves} & ACC & 0.6200 & 0.4894 & 0.7253 & 0.7939 & 0.7631 & 0.8694 & 0.7356 & 0.8238 & 0.4338 & 0.6563 & 0.4981 & **0.8888** \\ & NMI & 0.8284 & 0.7649 & 0.8835 & 0.9257 & 0.9065 & 0.9288 & 0.8848 & 0.9292 & 0.7014 & 0.8245 & 0.7291 & **0.9426** \\ & Purity & 0.6550 & 0.5575 & 0.7565 & 0.8272 & 0.8063 & 0.8981 & 0.7625 & 0.8506 & 0.5350 & 0.6788 & 0.5331 & **0.9031** \\ & F-score & 0.5233 & 0.2144 & 0.6595 & 0.7558 & 0.4513 & 0.8236 & 0.6583 & 0.5042 & 0.3145 & 0.5431 & 0.3057 & **0.8366** \\ & Precision & 0.4689 & 0.1307 & 0.6107 & 0.7011 & 0.3086 & 0.7832 & 0.6082 & 0.3521 & 0.3744 & 0.5128 & 0.2547 & **0.8166** \\ & ARI & 0.5183 & 0.2021 & 0.6560 & 0.7533 & 0.4437 & 0.8219 & 0.6548 & 0.4974 & 0.3070 & 0.5385 & 0.2978 & **0.8350** \\ \hline \multirow{8}{*}{Caltech-101} & ACC & 0.1331 & 0.1751 & 0.2611 & 0.2587 & 0.1476 & 0.1908 & 0.2274 & 0.1950 & 0.2672 & 0.2429 & 0.2940 & **0.3005** \\ & NMI & 0.3078 & 0.3207 & 0.4752 & 0.4912 & 0.3757 & 0.3519 & 0.4564 & 0.3446 & 0.4408 & 0.4451 & **0.4900** & 0.4711 \\ & Purity & 0.2907 & 0.3107 & 0.4622 & 0.4664 & 0.1681 & 0.3184 & 0.4401 & 0.3012 & 0.3497 & 0.4125 & **0.4907** & 0.4421 \\ & F-score & 0.0985 & 0.1362 & 0.2202 & 0.2226 & 0.0338 & 0.0470 & 0.1930 & 0.0496 & 0.2658 & 0.2055 & 0.2465 & **0.3023** \\ & Precision & 0.1351 & 0.1440 & 0.3954 & 0.3999 & 0.0175 & 0.0248 & 0.3185 & 0.0261 & 0.2337 & 0.3430 & 0.4147 & **0.4229** \\ & ARI & 0.0796 & 0.1126 & 0.2078 & 0.2103 & 0.0155 & -0.0068 & 0.1790 & -0.0042 & 0.2475 & 0.1919 & 0.2336 & **0.2880** \\ \hline \multirow{8}{*}{Cifar-10} & ACC & 0.2036 & 0.1712 & 0.2169 & 0.2084 & 0.2232 & 0.1217 & 0.2304 & 0.1038 & 0.2312 & 0.2153 & 0.2350 & **0.2498** \\ & NMI & 0.0891 & 0.0773 & 0.0944 & 0.0903 & 0.0845 & 0.2044 & 0.0919 & 0.0066 & 0.1014 & 0.0920 & 0.1016 & **0.1020** \\ & 
Purity & 0.2055 & 0.1751 & 0.2214 & 0.2180 & 0.2276 & 0.1235 & 0.2522 & 1.0403 & 0.2462 & 0.2236 & 0.2368 & **0.2549** \\ & F-score & 0.1529 & 0.1443 & 0.1593 & 0.1477 & 0.1582 & 0.1570 & 0.1511 & 0.1499 & 0.1441 & 0.1590 & **0.1597** \\ & Precision & 0.1344 & 0.1080 & 0.1453 & 0.1393 & 0.1393 & 0.1005 & 0.1537 & 0.0999 & 0.1495 & 0.1420 & 0.1533 & **0.1563** \\ & ARI & 0.0443 & 0.0155 & 0.0561 & 0.0467 & 0.0594 & 0.0012 & 0.0612 & 0.0603 & 0.0607 & 0.0476 & 0.0618 & **0.0642** \\ \hline \multirow{8}{*}{SUNRGBD} & ACC & 0.1859 & 0.1060 & 0.1824 & 0.1829 & 0.1010 & 0.1346 & 0.1898 & 0.1277 & 0.2155 & 0.1616 & 0.1379 & **0.2423** \\ & NMI & 0.1866 & 0.0084 & 0.2109 & 0.2161 & 0.1883 & 0.0976 & 0.2101 & 0.0728 & 0.2204 & 0.2202 & 0.1545 & **0.2207** \\ & Purity & 0.3022 & 0.1087 & 0.3274 & 0.3360 & 0.1119 & 0.1583 & 0.3257 & 0.1415 & 0.2303 & **0.3524** & 0.2803 & 0.3435 \\ & F-score & 0.1293 & 0.1213 & 0.1172 & 0.1221 & 0.0637 & 0.1200 & 0.1209 & 0.1215 & 0.1387 & 0.1509 & 0.0822 & **0.1541** \\ & Precision & 0.1057 & 0.0646 & 0.1849 & 0.1748 & 0.0364 & 0.0645 & 0.1822 & 0.0650 & 0.1007 & **0.2013** & 0.1550 & 0.1909 \\ & ARI & 0.0976 & 0.0323 & 0.0865 & 0.0916 & 0.0262 & 0.0251 & 0.0891 & 0.008 & 0.1026 & 0.0744 & 0.0496 & **0.1076** \\ \hline \multirow{8}{*}{Caltech-256} & ACC & 0.0845 & 0.0924 & 0.0854 & 0.1025 & 0.0467 & 0.0612 & 0.0761 & 0.0904 & 0.0723 & 0.0678 & 0.0932 & **0.1071** \\ & NMI & 0.2748 & 0.2764 & 0.2997 & 0.2786 & 0.1070 & 0.1486 &
the cluster accuracy on the three datasets noticeably. We conduct experiments with different values of \(\lambda\) selected from the range \([1e-9,1e-3]\); GCAE achieves ideal performance when \(\lambda\) is set to an appropriate value. Besides, Fig. 2(b) shows that the parameter \(k\), selected from the range \([3,8]\), affects the cluster accuracy, and the proposed GCAE achieves excellent performance with a good choice of \(k\). As shown in Fig. 2(c) and Fig. 2(d), varying \(t\) and \(\theta\) does not obviously influence the cluster results. It is notable that GCAE still achieves relatively stable and good performance as the dimension of the nonlinear kernel \(t\) and the small smoothing item \(\theta\) change within the ranges \([200,1400]\) and \([1e^{-8},1e^{-1}]\), respectively.
As shown in Fig. 3, we conduct comparison experiments on the length of the binary code to verify the proposed GCAE. We vary the binary code length and compute ACC, NMI, and Purity on the 100leaves, Caltech-101 and Cifar-10 datasets to show the performance of GCAE. Based on the figure, we select 128-bit binary codes for GCAE, which fully preserves the essential information for clustering. Besides, to demonstrate the relation between the rank of the affinity graphs and the length of the binary code, we plot three-dimensional statistical graphs of the clustering results (measured by ACC) obtained by adjusting the values of \(r\) and \(b\). The statistical results for the 100leaves, Caltech-101 and Cifar-10 datasets are shown in Fig. 4. We observe that GCAE obtains excellent performance with a good choice of \(r\) and \(b\), which are varied in the ranges \([20,200]\) and \([8,128]\), respectively. In conclusion, we select 128-bit binary codes and \(r=100\) on these datasets, which achieves the best clustering performance with the proposed GCAE algorithm.
We summarize the overall algorithm of the proposed GCAE method in Algorithm 1 and Algorithm 2, which present the low-rank affinity graph learning and the graph-collaborated auto-encoder hashing, respectively. The computational burden of GCAE mainly consists of these two sub-processes, which we analyze in detail. Specifically, the two closed-form solutions, i.e., Eq. 9 and Eq. 10, cost \(O(Nd^{v}r+Nr^{2}+r^{3})\). Therefore, the whole computational complexity of Algorithm 1 is \(O(N\times max(d^{v},r)\times r\times Iter)\), where we set \(Iter\) to 80. For Algorithm 2, we adopt an iterative optimization method to obtain the solution for each variable. Updating \(\mathbf{Z}^{v}\) is the most time-consuming part of GCAE: it costs \(O(N^{3}+N^{2}b+N^{2}r)\) because it requires computing a matrix inverse. We then need \(O(N^{2}b)\) to update \(\mathbf{W}^{v}\), which involves an SVD operation and matrix multiplications. After that, updating \(\mathbf{B}\) requires \(O(N^{2}b+Nbc)\), which includes the Frobenius norm and the ADPLM method. Besides, solving Eq. 22 and Eq. 23 needs \(O(Nc)\), and updating \(I^{v}\) costs \(O(rN)\) per iteration. Overall, the whole computational complexity of GCAE is \(O((N^{3}+3N^{2}b+N^{2}r+Nbc)p+Nmax(d^{v},r)rIter)\), where \(p\) is the number of epochs. For clarity, we summarize the running time of GCAE for each dataset in Table IV.
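As a sanity check of the stated complexity, the operation-count formula can be evaluated for two dataset sizes; \(b=128\), \(r=100\) and \(Iter=80\) follow the text, while the cluster count \(c\), feature dimension \(d^{v}\) and epoch count \(p\) below are assumptions for illustration, not the paper's measured runtimes.

```python
def gcae_ops(N, b, r, c, d, p, iters):
    """Operation count implied by O((N^3 + 3N^2 b + N^2 r + N b c) p + N max(d, r) r Iter)."""
    return (N**3 + 3 * N**2 * b + N**2 * r + N * b * c) * p + N * max(d, r) * r * iters

# 100leaves-sized vs. Cifar-10-subset-sized problems (c, d, p are assumed).
small = gcae_ops(N=1600,  b=128, r=100, c=100, d=768, p=50, iters=80)
large = gcae_ops(N=10000, b=128, r=100, c=10,  d=768, p=50, iters=80)
print(f"{large / small:.0f}x")   # grows close to (10000/1600)^3: the N^3 term dominates
```

This makes the scaling bottleneck explicit: the cubic term from the matrix inverse in the \(\mathbf{Z}^{v}\) update dwarfs every other contribution as \(N\) grows.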
We provide the convergence analysis of the proposed GCAE in this section. Fig. 5 shows the convergence curves of GCAE on the 100leaves, Caltech-101 and Cifar-10 datasets. In Fig. 5, the \(x\)-axis and \(y\)-axis present the number of iterations and the value of the objective function, respectively. The objective function values of GCAE decrease monotonically as the iterations increase. Furthermore, the objective values converge within 5-10 iterations, which verifies the effectiveness of the proposed optimization for each sub-process.
## V Conclusion
In this paper, we proposed a novel binary multi-view clustering method termed Graph-Collaborated Auto-encoder Hashing for Multi-view Binary Clustering (GCAE), which effectively constructs low-rank affinity graphs from each view and jointly learns binary codes with auto-encoders. GCAE constructs the graphs by utilizing auxiliary matrices to control the low-rank constraint, which reasonably preserves the essential information of the original data. GCAE then adopts auto-encoders to collaborate the multiple graphs, learning unified binary codes and obtaining cluster results. With the proposed optimization algorithm, the objective function converges quickly. Extensive experiments on five multi-view datasets demonstrated the superiority of GCAE. Unlike real-valued multi-view clustering methods, GCAE effectively obtains cluster results in Hamming space; compared with hash methods, GCAE reasonably utilizes the consistency and complementarity information across views. In future work, we intend to further improve the speed of the algorithm, which is the main deficiency of the proposed method.
## Acknowledgment
This work was supported in part by the National Natural Science Foundation of China Grant 62002041 and 62176037, Liaoning Fundamental Research Funds for Universities Grant LJKQZ2021010, Liaoning Doctoral Research Startup Fund Project Grant 2021-BS-075, Liaoning Province Applied Basic Research Project 22JH2/101300264 and Dalian Science and Technology Innovation Fund 2022JJ12GX019, 2021JJ12GX028 and 2022JJ12GX016.
arXiv:2306.09576 — Design and simulation of a novel 4H-SiC LGAD timing device
Authors: Keqi Wang, Tao Yang, Chenxi Fu, Li Gong, Songting Jiang, Xiaoshen Kang, Zaiyi Li, Hangrui Shi, Xin Shi, Weimin Song, Congcong Wang, Suyu Xiao, Zijun Xu, Xiyuan Zhang
Published: 2023-06-16 | http://arxiv.org/abs/2306.09576v1

# Design and simulation of a novel 4H-SiC LGAD timing device
###### Abstract
Silicon-based fast timing detectors have been widely used in high energy physics, nuclear physics, space exploration and other fields in recent years. However, silicon detectors often require complex low-temperature systems when operating in irradiation environments, and their detection performance decreases as the irradiation dose increases. Compared with silicon, silicon carbide (SiC) has a wider bandgap and higher atomic displacement energy, saturated electron drift velocity and thermal conductivity. Simultaneously, the low gain avalanche detector avoids the crosstalk and high noise of high-multiplication devices thanks to its moderate gain, and thus can maintain a large detector signal without increasing the noise. Thus, the 4H-SiC particle detector, and especially the low gain avalanche detector, has the potential to detect minimum ionizing particles (MIPs) in extreme irradiation and high temperature environments. In this work, the emphasis was placed on the design of a 4H-SiC Low Gain Avalanche Detector (LGAD), especially the epitaxial structure and the technical process, which play the main roles. In addition, a simulation tool, RASER (RAdiation SEmiconductoR), was developed to simulate the performance of the proposed 4H-SiC LGAD, including its electrical properties and time resolution. The working voltage and gain effectiveness of the LGAD were verified by the simulation of its electrical performance. The time resolution of the LGAD is (35.0 \(\pm\) 0.2) ps at a bias voltage of -800 V, which is better than that of the 4H-SiC PIN detector.
Keywords: Silicon carbide, LGAD, radiation resistance, time resolution
## 1 Introduction
The low gain avalanche detector (LGAD), as a novel detector technology, has attracted extensive attention in recent years. By adding a gain layer under the cathode of a PIN detector, impact-ionization multiplication of the electron-hole pairs occurs in the gain layer, resulting in excellent timing resolution[1]. In particular, driven by the requirements of the High Granularity Timing Detector (HGTD) in ATLAS and the End-cap Timing Layer (ETL) in CMS, silicon-based LGADs with 50 ps time resolution have been developed that can withstand radiation up to 2\(\times\)10\({}^{15}\)\(n_{eq}\)/cm\({}^{2}\)[2; 3; 4; 5]. However, it has been reported[6] that the charge collection efficiency (CCE) and time resolution of silicon-based LGADs deteriorate with increasing irradiation fluence, and that the devices no longer work normally above 2\(\times\)10\({}^{15}\)\(n_{eq}\)/cm\({}^{2}\). In addition, in order to reduce the power consumption and improve the signal-to-noise ratio (SNR), silicon-based LGAD devices need to operate at a low temperature of -30 \({}^{\circ}\)C[7; 8], which greatly increases the operating cost of the detectors. Moreover, the low-temperature cooling system not only enlarges the detector system, but also makes it more difficult to keep the system stable. Therefore, there is a strong demand for a semiconductor detector with strong radiation resistance, excellent time resolution and stable operation at room temperature or even higher.
Silicon carbide (SiC), as a third-generation semiconductor, has been investigated as a detector material for nearly 20 years[9]. In recent years, SiC has received more attention due to the interest of the semiconductor industry in the renewable energy revolution, leading to broader applications in power-efficient transistors, photovoltaic inverters, electric vehicle drive trains, etc. Compared with silicon, SiC has a larger bandgap and a higher atomic displacement energy, which give it strong potential for irradiation resistance[10]. In addition, its higher breakdown electric field, saturated electron drift velocity and thermal conductivity mean that a SiC detector offers a faster time response and lower temperature sensitivity[11]. Zhang's group reported a 4H-SiC detector with a time response of 117 ps to alpha particles[12]. However, there are very few reports on the time response to Minimum Ionizing Particles (MIPs). We have investigated the timing performance of a 4H-SiC PIN detector with a thickness of 100 \(\mu m\), whose time resolution for MIPs is 94 ps[13]. It is noteworthy that the average ionization rates of MIPs in 4H-SiC and Si are 55 electron-hole pairs/\(\mu m\) and 75 pairs/\(\mu m\), respectively, which means that the signal of a 4H-SiC detector is smaller than that of a Si detector of the same thickness. At present, the main problem of Si-based LGADs is the degradation of timing performance caused by high-intensity irradiation. Once a 4H-SiC device achieves the excellent timing performance of an LGAD, then, combined with its irradiation resistance and temperature stability, the 4H-SiC LGAD will be a candidate for high-pileup, high-radiation environments with stable operation at room temperature.
In this work, a novel 4H-SiC LGAD timing device is designed by adjusting the doping concentration and epitaxial structure, and its electrical characteristics are simulated using the RAdiation SEmiconductoR (RASER) software. The technical process has also been designed, and RASER is further used to simulate the time resolution of the 4H-SiC LGAD. The time resolution of the designed 4H-SiC LGAD, obtained by tuning the thickness and doping concentration, is essentially the same as that of Si-based LGAD detectors (34 ps)[14] and is clearly superior to that of the corresponding PIN device[13].
## 2 Design scheme of 4H-SiC LGAD
In this work, the design scheme of the 4H-SiC LGAD is introduced from two aspects: the epitaxial structure and the technical process.
### Epitaxial structure of 4H-SiC LGAD
In this work, we propose to fabricate the 4H-SiC LGAD timing device by full epitaxial growth. Different from Si LGADs, the hole multiplication rate in 4H-SiC is greater than that of electrons. To achieve effective carrier multiplication, N-type epitaxial layers, including the bulk layer (N\({}^{-}\)) and the gain layer (N\({}^{+}\)), are selected. In detail, due to the high average ionization energy of 4H-SiC, a relatively thick bulk layer is required so that an incident particle generates enough initial electron-hole pairs, which, after a low-gain multiplication of about 10, yield a signal (\(>\)4 fC) that the electronic readout system can register. Therefore, the thickness of the bulk layer of the 4H-SiC LGAD should be at least 50 \(\mu m\). In addition, the doping concentration of the 50 \(\mu m\) bulk layer should be as low as possible in order to deplete the whole bulk region at a reasonable operating voltage. At present, the epitaxial growth of 4H-SiC with a doping concentration of 10\({}^{14}\) cm\({}^{-3}\) can be realized in industry.
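The 4 fC figure can be checked with a one-line estimate using the \(\sim\)55 pairs/\(\mu m\) MIP ionization rate quoted in the introduction; this is a rough sketch that ignores charge-collection losses.

```python
pairs_per_um = 55        # MIP electron-hole pair yield in 4H-SiC (from the introduction)
thickness_um = 50        # minimum bulk thickness chosen above
gain = 10                # target low-gain multiplication
e = 1.602e-19            # elementary charge, C

charge_fC = pairs_per_um * thickness_um * gain * e * 1e15
print(f"{charge_fC:.1f} fC")   # 4.4 fC, just above the 4 fC readout requirement
```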
To achieve the low-gain multiplication of the 4H-SiC LGAD device, an N\({}^{+}\) gain layer should be epitaxially grown above the bulk layer. Under reverse bias, a sharply peaked electric field is generated in this region, so that the electron-hole pairs created by particles passing through the device undergo impact-ionization multiplication in this thin gain layer. Based on the experience with Si LGADs, the thickness of the gain layer ranges from 0.1 \(\mu m\) to 1 \(\mu m\); in this work, a 1 \(\mu m\) thick gain layer with better uniformity has been selected. Meanwhile, the doping concentration range is 1.4\(\times\)10\({}^{17}\) cm\({}^{-3}\)\(\sim\) 1.48\(\times\)10\({}^{17}\) cm\({}^{-3}\), which constrains the multiplication coefficient to a moderate value and meets the requirements of the industrial epitaxial process. In addition, a P\({}^{++}\) layer with a doping concentration of 5\(\times\)10\({}^{19}\) cm\({}^{-3}\) and a thickness of 0.3 \(\mu m\) is epitaxially grown on the N\({}^{+}\) gain layer. The highly doped P\({}^{++}\) layer readily forms an ohmic contact, improving the charge collection efficiency and time resolution.
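Under the depletion approximation, the field drop contributed by the gain layer can be estimated from its doping and thickness. This is a back-of-the-envelope sketch: the relative permittivity value is an assumption, and a TCAD simulation is needed for the real field profile.

```python
q = 1.602e-19                    # elementary charge, C
eps = 9.66 * 8.854e-12           # 4H-SiC permittivity (assumed eps_r = 9.66), F/m
N_gain = 1.4e17 * 1e6            # gain-layer doping from the text, m^-3
w = 1e-6                         # gain-layer thickness, m

dE = q * N_gain * w / eps        # field drop across the fully depleted gain layer, V/m
print(f"{dE / 1e8:.2f} MV/cm")   # ~2.6 MV/cm, comparable in order to the 4H-SiC breakdown field
```

The fact that this value sits near, but below, the breakdown field is consistent with the design goal of a moderate, low-gain multiplication rather than Geiger-mode breakdown.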
Overall, the epitaxial structure of the 4H-SiC LGAD is \(P^{++}/N_{gain}^{+}/N_{bulk}^{-}/N_{buffer}/N_{substrate}^{++}\), as shown in Figure 1(a), with aluminum and nitrogen ions used as the p-type and n-type dopants, respectively. The epitaxial structure of the 4H-SiC LGAD was prepared based on the above design. In order to verify the technological feasibility of the design scheme and the accuracy of the epitaxial parameters such as doping concentration and thickness, secondary ion mass spectrometry (SIMS) was carried out down to a depth of 3 \(\mu m\); the results are shown in Figure 1(b). The measured thickness and doping concentration of the device are consistent with the designed parameters, which supports our subsequent research on the time resolution of the 4H-SiC LGAD.
Figure 1: (a) Schematic diagram of SiC LGAD epitaxial structure with doping concentration and thickness. (b) Comparison of SIMS measurement of doping concentration and thickness and the designed parameters.
### Technical processes
It has been reported that the breakdown voltage of conventional planar 4H-SiC devices is mainly determined by the local electric field caused by the device boundary effect [15]. For the 4H-SiC LGAD designed in this work, the high-field region generated in the gain layer is close to the device surface, where the boundary effect is particularly pronounced, making premature breakdown more severe. In addition, the operating voltage of the 4H-SiC LGAD needs to exceed the full depletion voltage for the device to work properly, which means that the operating voltage is relatively high. Therefore, ensuring that the device does not break down prematurely is a major consideration for the technical process. Etching the terminal junction is an effective method to avoid premature breakdown caused by the boundary effect of the high electric field[16]. In this work, an etched mesa with a 7\({}^{\circ}\) bevel angle is used, and the technical process designed on the existing epitaxial wafer is shown in Figure 2(a). In addition, the design and preparation of the P-type ohmic contact electrodes are also very important for improving the performance of the 4H-SiC LGAD. A high-temperature annealed alloy system based on Ni/Ti/Al has been widely used to produce P-type ohmic contacts[17; 18], with a specific contact resistance down to 1.8\(\times\)10\({}^{-5}\)\(\Omega\)\(cm^{2}\). The same alloy system is used to fabricate both the P-type and N-type ohmic contact electrodes, with an annealing temperature of 800 \({}^{\circ}\)C[19]. A low-resistivity ohmic contact can thus be formed between the metal and the silicon carbide. The anode is the N\({}^{++}\) layer, and the cathode is the P\({}^{++}\) layer. Finally, in order to prevent gas from oxidizing the wafer, a layer of silica has been deposited as passivation on the exposed parts around the electrodes. By matching the silica thickness to the laser wavelength, a thickness of 364 nm is finally chosen.
It not only provides protection, but also serves as a transparency-enhancing film for subsequent laser tests. The first version of the 4H-SiC LGAD device is named SICAR (SIlicon CARbide) by our group.
To implement the above process, three masks have been designed, covering 8 structures (SICAR1-1\(\sim\)8), which are shown in Figure 2(b). Mask 1 defines the etching-mesa structure, Mask 2 defines the p-type electrodes, and Mask 3 defines the electrode pads of the 4H-SiC LGAD. To investigate the effects of size and shape on the electrical performance and time resolution, the 8 devices are designed with different sizes and corner radii. Specifically, the shape and electrodes of SICAR1-1 (5000 \(\times\) 5000 \(\mu m\)) follow the existing 4H-SiC PIN detector[13]. SICAR1-2 (1000 \(\times\) 1000 \(\mu m\)), a 5\(\times\)5 array of SICAR1-6 pads, is designed to measure the position resolution. Moreover, in order to study the multiplication non-uniformity of the 4H-SiC LGAD, a laser incidence hole with a radius of 25 \(\mu m\) is reserved on SICAR1-3 (1000 \(\times\) 1000 \(\mu m\)). SICAR1-4 (1000 \(\times\) 1000 \(\mu m\)) and SICAR1-5 (1000 \(\times\) 1000 \(\mu m\)) are 2\(\times\)2 arrays with array spacings of 50 \(\mu m\) and 100 \(\mu m\), respectively, designed to study the size of the detector "dead zone" under different array spacings. In order to study the influence of the corner radius on the leakage current, corner radii of 100 \(\mu m\), 200 \(\mu m\) and 500 \(\mu m\) are designed for the SICAR1-6 (1000 \(\times\) 1000 \(\mu m\)), SICAR1-7 (1000 \(\times\) 1000 \(\mu m\)) and SICAR1-8 (1000 \(\times\) 1000 \(\mu m\)) devices, respectively.
## 3 RASER
To estimate the I-V and C-V characteristics and the time resolution of SiC detectors, a simulation tool, RASER (RAdiation SEmiconductoR), has been developed [20]. Firstly, the open-source software DEVSIM is used to simulate the electrical performance, including the I-V and C-V characteristics. Unlike commercial simulation software, DEVSIM is highly extensible and interacts easily with Geant4 and other detector-simulation software. However, DEVSIM is still at an early stage of development, so all finite-element equations must be written by the user. As a TCAD device simulation package written in C++, with a Python front end, DEVSIM is capable of
\begin{table}
\begin{tabular}{l l l}
\hline
Label & Type & Radius [\(\mu m\)] \\
\hline
SICAR1-1 & Single & 500 \\
SICAR1-2 & \(5\times 5\) & 100 \\
SICAR1-3 & Single & 100 \\
SICAR1-4 & \(2\times 2\) & 100 \\
SICAR1-5 & \(2\times 2\) & 100 \\
SICAR1-6 & Single & 100 \\
SICAR1-7 & Single & 200 \\
SICAR1-8 & Single & 500 \\
\hline
\end{tabular}
\end{table}
Table 1: Device structures contained in the layout mask.
Figure 2: (a) Sketch of the structure of the LGAD (not to scale). (b) SiC LGAD \(2\times 2\)\(cm^{2}\) layout mask; pads of different sizes and corner radii are allocated and aligned in this mask.
simulating 1D, 2D and 3D structures with models describing advanced physical effects, using the control-volume approach to assemble partial differential equations (PDEs) on the simulation mesh.
Secondly, RASER, based on ROOT, FEniCS and Geant4, has been used to simulate the time resolution of the 4H-SiC PIN [13] and the 3D 4H-SiC detector [21]. In this work, RASER is also used to simulate the time resolution of the 4H-SiC LGAD. When an incident charged particle passes through the detector, it deposits a portion of its energy, producing electron-hole pairs. The pairs generated in the detector drift towards the electrodes under the electric field, creating an induced current on the electrodes. This current can be calculated using the Shockley-Ramo theorem:
\[I(t)=-q\overrightarrow{v}(\overrightarrow{r}(t))\cdot\overrightarrow{E}_{w} (\overrightarrow{r}(t)) \tag{1}\]
where \(\overrightarrow{r}\) is the drift path of the electron or hole, \(\overrightarrow{v}(\overrightarrow{r})\) is the drift velocity, and \(\overrightarrow{E}_{w}(\overrightarrow{r})\) is the weighting field at \(\overrightarrow{r}\). The total current is the sum of the induced currents generated by the electrons and holes.
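As a rough numerical sketch of equation (1) — not the RASER implementation; the geometry, charge and drift velocity below are assumed placeholder values — consider an idealized 1-D planar detector, where the weighting field reduces to \(1/d\):

```python
# Toy 1-D illustration of the Shockley-Ramo theorem (eq. 1).
# For a planar detector of thickness d the weighting field is
# E_w = 1/d, so a carrier of charge q drifting at speed v induces
# a constant current i = q*v/d until it is collected.
# All values below are illustrative, not the SICAR geometry.

Q_E = 1.602e-19        # elementary charge [C]
d = 100e-4             # assumed active thickness: 100 um, in cm
v_drift = 1.0e7        # assumed saturated drift velocity [cm/s]

def induced_current(q, v, weighting_field):
    """Shockley-Ramo: i = q * v . E_w (1-D, sign convention dropped)."""
    return q * v * weighting_field

i = induced_current(Q_E, v_drift, 1.0 / d)
t_drift = d / v_drift          # time for the carrier to cross the detector
q_induced = i * t_drift        # time-integrated induced charge

# The integrated induced charge equals the drifting charge itself,
# since the weighting potential changes by exactly 1 across the gap.
print(i, t_drift, q_induced)
```

Summing such contributions over the simulated electron and hole drift paths is what yields the total current described above.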
### The simulation of 4H-SiC PIN detector
RASER, based on the open-source software DEVSIM, can be used to simulate 4H-SiC detectors. To demonstrate the reliability of the RASER simulation tool, the 4H-SiC PIN detector [13] is selected as an example to compare simulation with experiment. The 4H-SiC PIN device under investigation is 5 mm \(\times\) 5 mm in size; it has an active epitaxial layer with a thickness of 100 \(\mu m\) and a doping concentration of 5.2\(\times\)10\({}^{13}\) cm\({}^{-3}\), and both the top and bottom electrodes are ohmic contacts. The comparison between the experimental C-V curve and the DEVSIM simulation is shown in Figure 3(a). The good agreement indicates that the geometry and doping inputs of the DEVSIM simulation are correct. Furthermore, for the I-V simulation of the 4H-SiC PIN detector, the Hurkx model has been added to the generation-recombination (G&R) model to account for the tunnelling effect, which brings the simulated results closer to the experimental ones. Under high electric field the simulation matches the experiment well, but significant differences remain under low electric field; this may be due to the complex defect features of SiC materials, which require further research. On this basis, the coefficients of the Hurkx model have been corrected mathematically, although the physical reasons are still unclear and require further study. The combination of the I-V results with the accurate C-V results indicates that RASER is capable of basic device simulation and has a certain reliability.
In addition, the influence of deep-level defects on the leakage current was also studied. Two kinds of deep-level defects in 4H-SiC materials, Z\({}_{1/2}\) and EH\({}_{6/7}\), were mainly investigated [23]. For Z\({}_{1/2}\), the defect concentration was varied from 1\(\times\)10\({}^{13}\) cm\({}^{-3}\) to 1\(\times\)10\({}^{15}\) cm\({}^{-3}\) and the capture cross section from 1\(\times\)10\({}^{-12}\) cm\({}^{2}\) to 1\(\times\)10\({}^{-16}\) cm\({}^{2}\). However, Z\({}_{1/2}\) has no significant effect on the leakage current, which may be due to its relatively low energy level. On the other hand, for EH\({}_{6/7}\), with an energy level of 1.25 eV to 1.73 eV, the leakage current increases slightly with the defect concentration, as shown in Figure 4(a), while changing the capture cross section has little effect on the leakage current, as shown in Figure 4(b). The simulation thus shows that deep-level defects have only a slight effect on the leakage current of the 4H-SiC PIN device and are not the dominant factor. Furthermore, it has been reported that macroscopic defects greatly affect the carrier lifetime of SiC devices and hence the leakage current [24; 25]. It is therefore particularly important to avoid macroscopic defects in the preparation of SiC detectors.
Figure 3: Comparison between DEVSIM simulations and experimental results: (a) C-V performance; (b) I-V performance.
### The simulation of 4H-SiC LGAD detector
RASER was used to simulate the I-V and C-V curves of the 4H-SiC LGAD. As shown in Figure 5(a), the breakdown voltage of this device, obtained from the I-V curve, is approximately 3700 V. The C-V curve, shown in Figure 5(b), yields the depletion voltage of the gain layer V\({}_{GL}\) and the full depletion voltage V\({}_{FD}\). In detail, unlike the PIN device, the C-V curve of the LGAD shows a plateau at around 130 V, which indicates that the gain layer is completely depleted at about 130 V. In addition, the full depletion voltage of the 4H-SiC LGAD obtained from Figure 5(b) is 400 V. Combined with the breakdown voltage, the working voltage range of the 4H-SiC LGAD can be taken as 400 V\(\sim\)3700 V.
Figure 5(a) and Figure 5(b) compare the I-V and C-V curves of the LGAD and the PIN. The two devices are identical in structure and doping, except that the PIN has no gain layer. In the I-V curves, the breakdown voltage V\({}_{BD}\) of the 4H-SiC LGAD is about 3700 V, smaller than that of the PIN, which does not break down even at a reverse voltage of 4000 V. The reason for this is that the high doping concentration of the gain layer in the LGAD creates an electric-field peak, which makes it easier for the carriers to accelerate and reach the breakdown energy. At the same voltage, the leakage current of the LGAD is larger than that of the PIN, again due to the presence of the gain layer. In the C-V curves, the LGAD has one more inflection point than the PIN: unlike the PIN devices, as the reverse voltage increases, the gain layer of the LGAD is depleted before the bulk layer.
The simulated I-V and C-V curves of the PIN and LGAD show that, unlike the PIN, the LGAD has a gain layer and a low-gain multiplication effect. The LGAD structure we have designed is therefore reasonable.
Figure 4: (a) Effect of different \(EH_{6/7}\) defect concentrations on leakage current. (b) Effect of capture cross-section on leakage current.
### The time resolution simulation of 4H-SiC LGAD detector
The time resolution of a detector is defined by the accuracy of the measured arrival time of a detected particle. It can be calculated by the following formula [26]:
\[\sigma_{t}^{2}=\sigma_{TimeWalk}^{2}+\sigma_{Landau}^{2}+\sigma_{Distortion}^{2 }+\sigma_{Jitter}^{2}+\sigma_{TDC}^{2} \tag{2}\]
where the terms \(\sigma_{TimeWalk}\) and \(\sigma_{Landau}\) arise from the random energy deposition of the detected particle, the corresponding physical process being simulated with Geant4. The term \(\sigma_{TimeWalk}\), the uncertainty in when the signal exceeds a given threshold, can be eliminated by the constant fraction discrimination (CFD) method. The term \(\sigma_{Distortion}\) is caused by the fluctuation of the carrier velocity, which can be calculated from the Langevin random motion of the carriers. Noise in the circuit contributes the term \(\sigma_{Jitter}\); since we set the electronics to be the same as in [13], Gaussian noise with parameters measured in [13] is used. Finally, \(\sigma_{TDC}\) is introduced by analog-to-digital conversion and is usually small enough to ignore.
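The contributions in equation (2) combine in quadrature, as a short sketch shows (the numbers below are placeholders, not the simulated SICAR values):

```python
import math

def total_time_resolution(*sigmas):
    """sigma_t from eq. (2): quadrature sum of the individual terms."""
    return math.sqrt(sum(s * s for s in sigmas))

# Illustrative budget in ps: Landau-dominated, with smaller
# distortion and jitter terms (time walk removed by CFD, TDC ignored).
sigma_t = total_time_resolution(30.0, 15.0, 5.0)
print(sigma_t)   # ~33.9 ps: the largest term dominates the sum
```

The quadrature sum is why shrinking any single sub-dominant term barely changes the total.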
We simulated 50,000 events of beta-particle detection with the SiC LGAD detector at -800 V bias voltage and histogrammed the time of arrival (ToA) of the particles to obtain the time resolution, which is (35.0 \(\pm\) 0.2) ps, as shown in Figure 6(a). We also simulated beta-particle detection at different detector working voltages. As Figure 6(b) shows, the time resolution of the 4H-SiC detector decreases (improves) with increasing reverse voltage. This is because the drift velocity of the carriers increases with voltage, so the carriers are collected at a faster rate.
Figure 5: The (a) current-voltage and (b) capacitance-voltage characteristics of the 4H-SiC LGAD and 4H-SiC PIN, simulated using DEVSIM.
## 4 Conclusion
In conclusion, the 4H-SiC LGAD, including its epitaxial structure and technical processing, has been designed and is currently in production. The electrical performance and time resolution of the 4H-SiC LGAD have been simulated, adjusting the doping concentration and device structure, using the open-source software DEVSIM. The combined I-V and C-V simulation results give a full depletion voltage of 400 V and a breakdown voltage of 3700 V, i.e. an operating range of 400 V\(\sim\)3700 V. Furthermore, the simulation yields a time resolution of (35.0 \(\pm\) 0.2) ps at a reverse bias voltage of 800 V, better than that of the SiC PIN detector. This suggests that the 4H-SiC LGAD detector has the potential to be used in high-irradiation, high-temperature environments, and this work paves the way for the future development of the 4H-SiC LGAD and its application in extreme environments.
Acknowledgments. This work is supported by the National Natural Science Foundation of China (No. 11961141014 and No. 12205321), the China Postdoctoral Science Foundation (2022M710085), the State Key Laboratory of Particle Detection and Electronics (No. SKLPDE-ZZ-202218 and No. SKLPDE-KF-202313), and the Natural Science Foundation of Shandong Province Youth Fund (ZR202111120161), under the CERN RD50 Collaboration framework.
# How Many Dark Neutrino Sectors Does Cosmology Allow?

Alan Zander, Manuel Ettengruber, Philipp Eller

arXiv:2308.00798v2, 2023-08-01 (http://arxiv.org/abs/2308.00798v2)
###### Abstract
We present the very first constraints on the number of Standard Model (SM) copies with an additional Dirac right-handed neutrino. We show that solving the hierarchy problem within this framework induces in turn a severe hierarchy between the neutrino Yukawa couplings. By demanding the absence of such unnatural hierarchies, we are even able to rule out the theory.
## I Introduction
The mass of the Higgs boson is affected by quantum corrections, which lead to a quadratic divergence that, in the absence of new physics, would tend to push its mass up to the cut-off of the Standard Model (SM), at around the Planck mass \(M_{P}=1.22\cdot 10^{19}\) GeV. Accounting for the discrepancy between the expected scale and the actual observed Higgs mass \(M_{\rm Higgs}\approx 125\) GeV [1] represents one of the major challenges in particle physics and it is known as the _hierarchy problem_[2; 3]. The more conventional approach relies on mechanisms that strive to explain the, otherwise unnatural, cancellation of terms necessary to make sense of the observed Higgs mass. However, there have been other attempts to address the hierarchy problem from a totally different perspective. This is the case for theories that assume a smaller fundamental scale of gravity, narrowing hereby the gap between the Higgs mass and the cut-off, or in other words, between the weak and gravity scale. The Planck scale is degraded to an effective gravity scale that results, for example, from the large size of extra dimensions [4; 5] or the large amount of extra particle species [6; 7]. In this work, we will focus on the latter approach of assuming "many species". More specifically, we will follow [6] and assume many copies of the SM.
Also worth mentioning are the very interesting dark matter (DM) candidates we immediately obtain by introducing many extra species. See, for example, [8] for a realization of DM within the framework with many species. Yet another unresolved question that can be addressed is the smallness of the active neutrino masses. If those masses arise through the Higgs mechanism as it is thought to be the case for the charged leptons and quarks, then it is puzzling why neutrinos should have a mass many orders of magnitudes smaller than the rest of the fermions, in particular within the same generation. The most established course of action in this regard is the well-known Seesaw mechanism [9; 10; 11; 12; 13], which imposes a Majorana nature on the neutrinos and implies the violation of lepton number conservation. It also requires the existence of a very heavy right-handed neutrino (RHN), whose Majorana mass is usually linked with more new physics [14]. With the framework investigated, however, we can tackle the neutrino mass problem by considering Dirac RHNs and invoking a large amount of extra species, which, because of unitarity, forces the neutrino Yukawa coupling to the Higgs to be small, resulting in very small neutrino masses. This has been developed in [15] and generalized in [16].
In this work, we will focus on the neutrino sector and its cosmological impact. The paper is organized as follows. In Section (II), we will engage in more detail with the model in question, while in Section (III) we discuss the cosmological production of dark neutrino sectors. This section is more technical and might be skipped in the first read. In Section (IV) we show our results followed by our conclusions in Section (V).
## II Model
It has been shown [6] that by introducing \(N_{\rm PS}\gg 1\) particle species, the fundamental scale of gravity \(M_{f}\) must fulfill the following relation
\[M_{f}\leq\frac{M_{P}}{\sqrt{N_{\rm PS}}}. \tag{1}\]
As mentioned before, we will follow here [6] and assume \(N\) copies of the SM. Note that since every SM copy consists of \(N_{\rm SM}={\cal O}(100)\) particle species, we have \(N_{\rm PS}=NN_{\rm SM}\). Since the number of particle species and of SM copies are equivalent and differ only by a factor of \({\cal O}(100)\), we will sometimes make no distinction between those two. Equation (1) makes evident that for a large number of extra species, or here a large number of SM copies \(N\gg 1\), the fundamental scale of gravity \(M_{f}\) is suppressed with respect to the Planck mass \(M_{P}\), such that it does not deviate much from the Electroweak (EW) scale. Or, from a different perspective, it would explain why the Planck mass is so large, and therefore the gravitational force so weak. It is worth mentioning that the elegance of the theory relies on the fact that we only need to introduce _one_ degree of freedom in order to solve the hierarchy problem, namely the number of SM copies \(N\). Furthermore, in contrast to the Higgs mass \(M_{\rm Higgs}\)
our parameter \(N\) is ultraviolet-insensitive. This insensitivity deserves more emphasis than the specific value of \(N\), which is just a number. For instance, \(N=10^{30}\) yields
\[M_{f}\lesssim 1\ \text{TeV}=\mathcal{O}(M_{\text{Higgs}}), \tag{2}\]
which would make the hierarchy problem completely disappear. Yet, we do not need to stick with \(N=10^{30}\), since _any_ number of extra SM copies \(N\gg 1\) contributes to the "softening" of the hierarchy problem, in the sense that the gap between EW and gravity scale becomes narrower. The complete solution to the hierarchy problem could be in this case a combination of different mechanisms.
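Equations (1) and (2) are easy to check numerically, taking \(N_{\rm SM}=100\) as an assumed order-of-magnitude value for the species per copy:

```python
import math

M_P = 1.22e19      # Planck mass [GeV]
N = 1e30           # number of SM copies
N_SM = 100         # species per copy, O(100) as stated in the text

# Eq. (1) with N_PS = N * N_SM: the bound on the gravity scale.
M_f_bound = M_P / math.sqrt(N * N_SM)
print(M_f_bound)   # ~1.2e3 GeV, i.e. O(1 TeV) as in eq. (2)
```

The TeV-scale bound only follows once the \(\mathcal{O}(100)\) species per copy are included in \(N_{\rm PS}\).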
Moreover it was shown in [15] that by introducing RHNs, a mass term will be generated after EW symmetry breaking and this will turn out to be small as a consequence of unitarity and of the large number of extra species. This is in agreement with the expected small, yet non-vanishing, active neutrino masses. For simplicity, we will only consider the one-flavor scenario as in [15]. The corresponding Lagrangian then reads
\[\mathcal{L}_{\text{Yukawa}}=\left(\overline{L}\epsilon H^{*}\right)_{i} \lambda_{ij}\nu_{R,j}+\text{h.c.}, \tag{3}\]
where \(H_{i}\) and \(L_{i}\) stand for the Higgs and lepton SU(2)-doublets of the \(i^{\text{th}}\) SM copy while \(\epsilon\equiv i\sigma_{2}\) is the totally antisymmetric SU(2) tensor (\(\sigma_{k}\) refers to the \(k^{\text{th}}\) Pauli-matrix). Here, \(\lambda_{ij}\) is a \(N\times N\) Yukawa matrix in the space of species and \(\nu_{R,j}\) is the RHN of the \(j^{\text{th}}\) SM copy. Assuming all SM copies interact the same way with the rest of copies, we arrive to the only possible configuration of the Yukawa matrix, namely
\[\lambda_{ij}=\begin{pmatrix}a&b&\ldots&b\\ b&a&\ldots&b\\ \vdots&\vdots&\ddots&\vdots\\ b&b&\ldots&a\end{pmatrix}. \tag{4}\]
As already mentioned, the Lagrangian of equation (3) generates a Dirac mass matrix, which we can easily diagonalize in order to determine the mass eigenvalues and eigenstates. We obtain \((N-1)\)-degenerate states with mass
\[m_{\nu}=(a-b)v, \tag{5}\]
where \(\langle H\rangle\equiv v=174\) GeV is the Higgs vacuum expectation value.
Both \(a\) and \(b\) were introduced in equation (4) as Yukawa couplings. Since they are of the same nature, we do not expect them to lie at totally different orders of magnitude, i.e. the ratio \(a/b\) should not become too large, \(a/b\gg 1\). It was also shown in [8] that, due to unitarity,
\[b\leq\frac{1}{\sqrt{N}}, \tag{6}\]
making the neutrino mass (5) small, for large \(N\). The philosophy in solving the neutrino mass problem within our model is very different from the Seesaw mechanism. The former gives an _infrared_ solution by introducing many light states, while the latter introduces one or few heavy states and is therefore an _ultraviolet_ solution.
Finally, there is one single heavier state with mass eigenvalue
\[m_{H}=(a-b+Nb)\,v. \tag{7}\]
The latter mass eigenstate might be very heavy, given that its mass scales with the number of particle species \(N\gg 1\).
Now, we can express the SM flavor eigenstate as a linear combination of the mass eigenstates. Without loss of generality, we denote the flavor eigenstate of our SM copy as \(\nu_{1}\):
\[\nu_{1}=\sqrt{\frac{N-1}{N}}\nu_{1}^{m}+\frac{1}{\sqrt{N}}\nu_{H}^{m}. \tag{8}\]
Here \(\nu_{1}^{m}\) corresponds to the mass given in equation (5), while \(\nu_{H}^{m}=\frac{1}{\sqrt{N}}\sum_{i}\nu_{i}\) is the heavier mass eigenstate with mass \(m_{H}\) from equation (7), which interacts to the same extent with all copies. It is interesting that the flavor eigenstate of our copy (and of every copy) can be expressed in terms of only two mass eigenstates: one of the degenerate states and the "public" state \(\nu_{H}^{m}\). The effect of the many species is captured by one single heavy state, and the model is (almost) reduced to the scenario of one active and one sterile neutrino (SN). The main difference lies in the fact that our heavy SN can decay into particles of all SM copies, which considerably changes its lifetime, and that its mixing angle \(\theta\) with active neutrinos is not a free parameter but is determined by the number of copies,
\[\text{sin}\theta=\frac{1}{\sqrt{N}}. \tag{9}\]
The mixing angle of equation (9) vanishes for increasing number of particle species \(N\rightarrow\infty\), recovering the no-new physics scenario in the SM neutrino sector.
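The eigenstructure behind equations (5), (7) and (9) can be verified numerically for a small \(N\) (the Yukawa values below are purely illustrative):

```python
import numpy as np

N, a, b = 50, 1.0e-6, 2.0e-8   # illustrative copy number and Yukawas

# Yukawa matrix of eq. (4): a on the diagonal, b everywhere else.
lam = b * np.ones((N, N)) + (a - b) * np.eye(N)
eigvals, eigvecs = np.linalg.eigh(lam)   # eigenvalues in ascending order

light = eigvals[:-1]    # N-1 degenerate states, eigenvalue (a - b)
heavy = eigvals[-1]     # single "public" state, eigenvalue (a - b + N*b)

# The heavy eigenvector is uniform over the copies, so every flavor
# state mixes with it by 1/sqrt(N), reproducing eq. (9).
mixing = abs(eigvecs[0, -1])
print(light[0], heavy, mixing, 1 / np.sqrt(N))
```

The heavy eigenvalue is non-degenerate, so its eigenvector (and hence the mixing) is unambiguous even though the light subspace basis is arbitrary.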
## III Production in the early universe
There are mainly two approaches when it comes to the production of both light (\(\nu_{i}^{m}\), \(i\geq 2\)) and heavy (\(\nu_{H}^{m}\)) SNs in the early universe. Either they achieve equilibrium at some point in the history of the universe, where they permanently interact with the primordial plasma, until they
eventually decouple from the thermal bath. After the decoupling time, the particle density decreases only due to the expansion of the universe and because of possible decays into lighter states (_Freeze-Out_). Or, they are always out of equilibrium and are only produced through inelastic processes like Higgs decays or oscillations of active neutrinos. This usually happens at high temperatures, such that at some point the production effectively ceases and again, the particle density would remain constant if it were not for the expansion of the universe and for the SN's instability (_Freeze-In_). We point out that only our SM copy can be present in the early universe (at least to the same extent), otherwise we would violate several cosmological constraints. Therefore, we will always presume a primordial thermal bath composed exclusively of particles of our copy. That said, the two main interactions that dominate SN production arise from the Yukawa coupling from equation (3) and from the mixing angle (9).
To determine the amount of SNs in the early universe, we solve the respective Boltzmann equations. If in equilibrium, SNs are simply Fermi-Dirac distributed. If SNs are relativistic, or at least they were at the time of decoupling, their number density reads
\[n_{F}(T)=\frac{3\zeta[3]}{2\pi^{2}}\left(\frac{g_{F}}{2}\right)T^{3}, \tag{10}\]
where \(g_{F}\) stands for the _internal_ degrees of freedom and \(T\) denotes the temperature of the SNs, which in general may differ from the temperature of the thermal bath.
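For later reference, the normalized density of equation (10) is a pure number (a one-line check):

```python
import math

ZETA3 = 1.2020569032   # Riemann zeta(3)

def n_fermion_over_T3(g_F=2):
    """Relativistic equilibrium fermion number density of eq. (10), per T^3."""
    return 3 * ZETA3 / (2 * math.pi ** 2) * (g_F / 2)

print(n_fermion_over_T3())   # ~0.183 for g_F = 2
```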
On the other hand, SNs that never achieve equilibrium can be produced through the couplings of equations (3) or (9). For a non-vanishing but small Yukawa coupling \(b\), Higgs decays populate the SNs, dominantly the light states, since these couple directly to the Higgs. The production occurs at temperatures comparable to the Higgs mass, \(T_{p}\sim M_{\rm Higgs}\), before the Higgs disappears permanently from the thermal bath. The _total_ light SN number density \(n_{\rm lSN}=\sum_{i\geq 2}n_{i}\), after production has ceased, can be approximated by [17]
\[n_{\rm lSN}(T)\approx\frac{3}{4\pi^{2}(1.66)\sqrt{g_{*}(T_{p})}}\left(\frac{ M_{P}\Gamma_{b}}{M_{\rm{Higgs}}^{2}}\right)\cdot T^{3}, \tag{11}\]
where \(\Gamma_{b}\approx\frac{(N-1)b^{2}}{16\pi}M_{\rm{Higgs}}\) is the _total_ decay rate of our Higgs into light SNs.
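To see when the freeze-in formula applies, equation (11) can be evaluated for assumed parameters; the values of \(N\) and \(b\) below are illustrative, chosen so the yield stays below the equilibrium value \(\approx 0.18\,T^{3}\) of equation (10):

```python
import math

M_P = 1.22e19     # Planck mass [GeV]
M_H = 125.0       # Higgs mass [GeV]
G_STAR = 106.75   # relativistic d.o.f. at T_p ~ M_Higgs (assumed SM value)

def n_lsn_over_T3(N, b):
    """Frozen-in light-SN number density of eq. (11), normalized to T^3."""
    gamma_b = (N - 1) * b ** 2 / (16 * math.pi) * M_H   # total H -> nu nu_R rate
    return 3 * M_P * gamma_b / (4 * math.pi ** 2 * 1.66
                                * math.sqrt(G_STAR) * M_H ** 2)

# Illustrative point: N = 1e6 copies, b = 1e-10, well below the
# unitarity bound 1/sqrt(N) = 1e-3 of eq. (6).
y = n_lsn_over_T3(1e6, 1e-10)
print(y)   # ~0.09 per T^3, below equilibrium, so freeze-in is consistent
```

For \(b\) near the unitarity bound the formula returns a yield far above equilibrium, signalling that the light SNs would instead thermalize.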
Furthermore, due to the mixing angle from (9), heavy SNs can be non-resonantly produced through oscillations induced via incoherent interactions of the active neutrinos with the thermal plasma. In the context of DM SN, this is also known as the Dodelson-Widrow mechanism [18]. In order to determine the number density of heavy SN \(n_{H}\), we resort to the Zeno-ansatz approach [19; 20],
\[\left(\frac{\partial}{\partial t}-Hp\frac{\partial}{\partial p} \right)f_{H}(p,t) =\Gamma_{\rm conv}\left[f_{\nu}(p,t)-f_{H}(p,t)\right]\] \[-\frac{m_{H}}{E}\frac{1}{\tau_{H}}f_{H}(p,t), \tag{12}\]
where \(\tau_{H}\) is the lifetime, \(\Gamma_{\rm conv}=\frac{\Gamma_{a}}{2}\left\langle P(\nu_{1}\rightarrow\nu_{H})\right\rangle\) is the effective conversion rate of heavy SN, while \(\Gamma_{a}\) is the interaction rate of active neutrinos with the plasma [21]. At temperatures \(T\ll M_{W}\) (\(M_{W}\) being the mass of the \(W\)-boson) [22],
\[\Gamma_{a}(p,T)\simeq\chi_{a}\frac{7\pi}{24}G_{F}^{2}T^{4}p, \tag{13}\]
where \(\chi_{e}=\frac{13}{9}\) and \(\chi_{\mu}=\chi_{\tau}=1\). The averaged transition probability is \(\left\langle P(\nu_{1}\rightarrow\nu_{H})\right\rangle=\frac{1}{2}{\rm sin}^{2} 2\theta\left[{\rm sin}^{2}2\theta+\left(\frac{2p}{\Delta m^{2}}D\right)^{2}+ \left({\rm cos}2\theta+\frac{2p}{\Delta m^{2}}V\right)^{2}\right]^{-1}\),
where \(\Delta m^{2}\equiv m_{H}^{2}-m_{\nu}^{2}\). Since active neutrinos are in equilibrium in the early universe, the _decoherence_ or _damping function_ \(D\) reduces to \(D=\frac{\Gamma_{a}}{2}\) [23]. The _weak potential_ \(V=V^{L}+V^{T}\) arises from neutrino neutral- and charged-current forward scattering on particles in the plasma [24]. Assuming no asymmetries that generate lepton number, one has [22; 24] \(V^{L}=0\) and \(V^{T}=\frac{28\pi G_{F}^{2}{\rm sin}^{2}\theta_{W}T^{4}p}{45\alpha}\left( \zeta_{a}+\frac{{\rm cos}^{2}\theta_{W}}{2}\right)\), where \(\alpha\) is the fine-structure constant, \(G_{F}\) the Fermi constant, \(\theta_{W}\) the Weinberg angle and \(\zeta_{a}(T)\approx\Theta\left(T-m_{l_{a}}\right)\) represents the contribution to the weak potential from the charged current, which depends on whether or not there is a background of charged leptons \(l_{a}\) in the plasma at temperature \(T\).
In equation (12) we can always neglect the last term at the time of maximal production \(T_{\rm max}\approx 133\ {\rm MeV}\left(\frac{m_{H}}{{\rm keV}}\right)^{\frac{1}{3}}\), since \(\Gamma_{\rm conv}\gg\frac{m_{H}}{E}\frac{1}{\tau_{H}}\). This will only become relevant for later times, when heavy SNs start to decay, suppressing their distribution function \(f_{H}\) exponentially. Moreover, contributions proportional to the initially vanishing small \(f_{H}\ll f_{\nu}\) can also be neglected for SNs out of equilibrium, as a first approximation. We can then solve equation (12) analytically to find the heavy SN number density between their production and decay [18]:
\[n_{H}(T)\approx 1.4\sqrt{\frac{10.8}{g_{*}(T_{\rm max})}}\left(\frac{m_{H}}{{ \rm keV}}\right)\left(\frac{10^{6}}{N}\right)n_{a}(T), \tag{14}\]
where we used \(a=e\) and \(n_{a}(T)\) is the number density of active neutrinos, which is identical to \(n_{F}\) in equation (10).
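Equation (14) can be evaluated directly; the heavy-SN mass and copy number below are illustrative choices, not fitted values:

```python
import math

def n_heavy_over_active(m_H_keV, N, g_star=10.8):
    """Heavy-SN abundance relative to active neutrinos, eq. (14)."""
    return 1.4 * math.sqrt(10.8 / g_star) * m_H_keV * (1e6 / N)

# A 1 keV heavy SN with N = 1e6 copies would be produced with
# roughly the active-neutrino abundance:
print(n_heavy_over_active(1.0, 1e6))   # 1.4
```

The linear scalings make the trade-off explicit: heavier \(\nu_{H}^{m}\) or fewer copies both enhance the non-resonant yield.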
Light SNs do not interact with our active neutrinos. Moreover, the degeneracy in mass makes the oscillation probability vanish. However, if heavy SNs are in
equilibrium, they will constantly interact with the primordial soup, and every time this happens there is a non-vanishing probability for the involved heavy state to oscillate into a light SN, populating the latter out of equilibrium. The light SNs can even achieve equilibrium for large enough mixing angles. Although there are \(N-2\) light SNs, the conversion rate entering equation (12) is \(\Gamma^{\rm lSN}_{\rm conv}=\frac{\Gamma_{\rm conv}}{N}\), since they interact with the heavy SNs. This cancels the scaling with the number of neutrino sectors \(\propto N\). Also, the weak potential and the damping function get rescaled, \(V\to V/N\), \(D\to D/N\), such that the temperature \(T_{\rm th}\) at which the mixing angle is no longer suppressed by thermal effects turns into \(T_{\rm th}\to T_{\rm th}N^{1/6}\). This temperature is usually close to the temperature of maximal production \(T_{\rm max}\), but not in this case: at \(T_{\rm th}\) there are still no heavy SNs interacting that could oscillate, because their mixing angle with active neutrinos is still highly suppressed. Hence, the maximal production temperature remains around \(T_{\rm max}\approx 133\ {\rm MeV}\left(\frac{m_{H}}{{\rm keV}}\right)^{\frac{1}{3}}\). This means that the number density of light SNs, summed over all copies, evolves very similarly to that of heavy SNs when not in equilibrium. For the sake of simplicity, we therefore roughly estimate \(n_{\rm lSN}(T)\) by the expression given in equation (14). Finally, light SNs can also be generated from heavy SN decays, but this contribution is mostly subdominant.
Now that we know the number densities \(n\), we are in the position of determining the energy density \(\rho=n\left\langle E\right\rangle\) of SNs. In general, for non-relativistic particles with mass \(m\), it is just \(\rho=mn\), whereas for relativistic particles \(\rho=\left\langle p\right\rangle n\), where \(\left\langle p\right\rangle\) is the average momentum. For species in equilibrium \(\left\langle p\right\rangle=3.15T\), while for particles generated from decays as in equation (11), we have \(\left\langle p\right\rangle=2.45T\). So depending on the temperature \(T\) we might use the one or the other limit for light or heavy SNs.
## IV Constraints
The conceivable presence of SNs in the early universe might have a great impact on the evolution of the universe. This allows us to constrain the theory by determining the viable parameters that are in agreement with cosmological observables. Our results are shown in Figure (1).
### Cosmic Expansion Rate (BBN, CMB)
Some of the greatest achievements of cosmology are the Cosmic Microwave Background (CMB) and Big Bang Nucleosynthesis (BBN). These are sensitive to the cosmic expansion rate, which increases with the energy density of the universe, to which also SNs contribute. Usually, the radiation energy density of the universe \(\rho_{R}\) is parametrized by the so-called _effective number of neutrino species_\(N_{\rm{eff}}\equiv N_{\rm{eff}}^{\rm{SM}}+\Delta N_{\rm{eff}}\), and is defined by
\[\rho_{R}=\rho_{\gamma}+\rho_{\nu}+\rho_{\rm{BSM}}\equiv\rho_{\gamma}\left(1+ \frac{7}{8}\left(\frac{4}{11}\right)^{4/3}N_{\rm{eff}}\right). \tag{15}\]
In the Standard Model (\(\rho_{\rm BSM}=0\)), \(N_{\rm eff}=N_{\rm eff}^{\rm SM}=3\) and \(\Delta N_{\rm eff}=0\). Strictly speaking, there are small corrections due to the partial heating of neutrinos during \(e^{\pm}\) annihilation; a precise calculation [25; 26] gives \(N_{\rm eff}^{\rm SM}\approx 3.045\).
So, in order not to change much the energy density of the universe, and hence the primordial element abundances, we must impose \(\Delta N_{\rm{eff}}\lesssim 0.2\)[27], at the time of BBN, \(T_{\rm{BBN}}\approx 1\) MeV. We include all SNs in computing \(\rho_{R}\), as long as heavy SNs are still relativistic and have not decayed away before BBN. In principle, the latter can induce an early matter-dominated epoch, when they turn non-relativistic. However, at the time of BBN, all non-relativistic SNs have decayed away. Similarly, CMB enables us to constrain heavy SNs that live long enough to decay between BBN and CMB, however with such a low number density that they do not affect \(\Delta N_{\rm{eff}}\) at the time of BBN. Nevertheless, they will eventually decay into light neutrino states that act as dark radiation at recombination, which we can constrain. The point is that while non-relativistic, the heavy SN energy density gets less diluted by the cosmic expansion, which at the time of \(\nu_{H}^{m}\)-decay, is transferred to dark radiation [28], mostly through the decay into three light neutrinos. In short, the non-resonant production of SNs via oscillations, and the production of light SNs from the decays of heavy SNs and Higgs bosons, can be strongly constrained by BBN and CMB, as we can see from Figure (1).
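Inverting equation (15) gives the conversion between extra radiation density and \(\Delta N_{\rm eff}\) — a small sketch, assuming only the standard \((4/11)^{4/3}\) neutrino-to-photon temperature ratio:

```python
# One unit of Delta N_eff corresponds to a fixed fraction of the
# photon energy density, from eq. (15):
FACTOR = (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0)   # ~4.40

def delta_neff(rho_bsm_over_rho_gamma):
    """Delta N_eff induced by extra relativistic energy density."""
    return FACTOR * rho_bsm_over_rho_gamma

# The BBN bound Delta N_eff < 0.2 thus caps any dark radiation at
# about 4.5% of the photon energy density at T_BBN:
max_ratio = 0.2 / FACTOR
print(FACTOR, max_ratio)
```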
### Overproduction
The total energy density _today_ should also not exceed the critical density \(\rho_{c}=\frac{3H_{0}^{2}}{8\pi G}=10.537h^{2}\ {\rm GeV}\ {\rm m}^{-3}\)[29]. Therefore, the SN energy density today should not exceed the DM density \(\Omega_{\rm{DM}}h^{2}=0.120\pm 0.001\)[27]; otherwise we would overclose the universe. If SNs constitute the whole of the DM, we must take into account constraints that apply to all forms of fermionic DM, such as "Tremaine-Gunn-like" limits [30; 31; 32] as well as Lyman-\(\alpha\) limits [33; 34; 35]. These rule out both heavy and light SNs as DM, in particular for non-resonantly produced \(\nu_{H}^{m}\). This is no surprise, since the Dodelson-Widrow mechanism is already excluded by a combination of Lyman-\(\alpha\) limits and X-ray observations [36] (here, by Lyman-\(\alpha\) limits and SN instability).
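The quoted value of the critical density can be checked directly from \(\rho_c = 3H_0^2/(8\pi G)\) with \(H_0 = 100\,h\) km/s/Mpc. The sketch below uses CODATA constants and is an illustrative verification, not part of the paper's analysis.

```python
# Illustrative check of rho_c = 3 H0^2 / (8 pi G) ≈ 10.537 h^2 GeV m^-3.
import math

G   = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
c   = 2.99792458e8           # speed of light, m/s
Mpc = 3.0856775814913673e22  # megaparsec in meters
GeV = 1.602176634e-10        # GeV in joules

H0_per_h  = 100e3 / Mpc                          # H0/h in s^-1 (100 km/s/Mpc)
rho_c_kg  = 3 * H0_per_h**2 / (8 * math.pi * G)  # mass density, kg m^-3 per h^2
rho_c_GeV = rho_c_kg * c**2 / GeV                # energy density, GeV m^-3 per h^2
```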
However, even at much lower densities \(\ll\Omega_{\rm{DM}}\), light SNs can have an impact on the matter power spectrum, suppressing structure formation on small scales due to their large free-streaming length [37]. Therefore, we require \(\Omega_{\rm{ISN}}h^{2}\lesssim 0.2\%\)[38], which sets stronger bounds (see Figure (1)) than imposing flatness alone.
Finally, we remark that our constraints, as is usual for cosmological considerations, may be circumvented by invoking non-standard cosmologies, for example by assuming a reheating temperature lower than the temperature at which SNs are most effectively produced. In contrast, there are upper bounds on \(N\) coming from axion physics that complement our findings and may be as strong as \(N\lesssim 10^{6}\)[39]. This further increases the pressure exerted on the theory under examination.
## V Conclusions
At the outset, we motivated theories with many particle species, focusing in particular on the possible existence of many Standard Model copies, each with an additional right-handed neutrino. Precisely these dark neutrino sectors can have important consequences for several cosmological observables: the CMB, the primordial element abundances, the flatness of the universe, and structure formation. This allowed us to place very strong limits on the number of extra Standard Model copies, especially for heavy sterile neutrinos with very large masses. As for the hierarchy problem, we concluded that a solution with many Standard Model copies (\(N\sim 10^{30}\)) requires, in turn, a hierarchy between the neutrino Yukawa couplings of at least \(a/b\gtrsim 10^{10}\), which seems highly unnatural given their shared nature. In other words, by imposing the absence of hierarchies of the sort \(a\gg b\), we were able to exclude practically the totality of the parameter space.
###### Acknowledgements.
We would like to thank Gia Dvali for helpful discussions. This work has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the Sonderforschungsbereich (Collaborative Research Center) SFB1258 'Neutrinos and Dark Matter in Astro- and Particle Physics'.
# Structural Similarity: When to Use Deep Generative Models on Imbalanced Image Dataset Augmentation

Chenqi Guo, Fabian Benitez-Quiroz, Qianli Feng, Aleix Martinez

arXiv:2303.04854v1 (http://arxiv.org/abs/2303.04854v1), 2023-03-08
###### Abstract
Improving performance on an imbalanced training set is one of the main challenges in present-day machine learning. One way to augment and thus re-balance an image dataset is through existing deep generative models, like class-conditional Generative Adversarial Networks (cGAN) or diffusion models, by synthesizing images for each of the tail classes. Our experiments on imbalanced image dataset classification show that the validation accuracy improvement with such a re-balancing method is related to the image similarity between different classes. Thus, to quantify this image dataset class similarity, we propose a measurement called Super-Sub Class Structural Similarity (SSIM-supSubCls) based on Structural Similarity (SSIM). A deep generative model data augmentation classification (GM-augCls) pipeline is also provided to verify that this metric correlates with the accuracy enhancement. We further quantify the relationship between them, discovering that the accuracy improvement decays exponentially with respect to the SSIM-supSubCls value.
## 1 Introduction
Machine learning with imbalanced datasets is still a challenging and open task. Many efforts have been made to mitigate the bias towards the majority classes and thus enhance the generalization of classifiers trained on such image datasets. Common strategies, such as adding per-class weights in the loss function [8] and re-balancing the dataset by over-sampling the minority classes or under-sampling the majority classes [5], are widely used. Among over-sampling techniques, geometric transformations like rotations or mirroring are popular but may interfere with orientation-related features in the image. SMOTE [6] addresses the over-sampling problem by creating synthetic data based on the nearest neighbors of a sample in an embedding or image space. Other techniques attempt to enlarge the minority classes using images from the web [19], knowledge transfer [7], or GAN architectures generating artificial images [4, 11, 20, 21]. For image-related tasks, one way to augment the dataset is through deep generative models, like class-conditional Generative Adversarial Networks (cGAN) or diffusion models, by synthesizing artificial images for the tail classes to make the training set balanced.

Figure 1: An example of data augmentation with our proposed deep generative model data augmentation classification (GM-augCls) pipeline on the iNaturalist-2019 Fungi dataset. (_Top_) Example original and synthesized images for visualization. (_Bottom_) Bar plot of the number of images in each class for the original and synthesized dataset. After the augmentation procedure, the new dataset becomes balanced. In this paper, a class similarity metric for image datasets, called Super-Sub Class Structural Similarity (SSIM-supSubCls), is constructed for measuring the image similarity between different classes. We further quantify the relationship between this metric value and the classification accuracy improvement after the GM-augCls procedure, discovering that the accuracy improvement decays exponentially with respect to the SSIM-supSubCls value.
However, some classification tasks are inherently harder than others, especially in the absence of data. For instance, the classification task in Figure 1 is intuitively easier than the one in Figure 2: the two classes in Figure 2 share similar background, object shape, and color, and distinguishing between these two bird species requires subject matter expertise. In such cases, deep generative models produce unfaithful reconstructions, since small but important differences between class object attributes are inherently difficult to generate. The downstream classification performance may therefore not improve and in some scenarios may even be negatively impacted. Hence, the question arises naturally: when can we use deep generative model data augmentation to refine classification performance?
In our work, we propose a metric measuring the image similarity between different classes of a dataset based on Structural Similarity (SSIM), called Super-Sub Class Structural Similarity (SSIM-supSubCls). For this metric, a higher value indicates higher class similarity. Furthermore, we provide a pipeline named deep generative model data augmentation classification (GM-augCls), which uses a StyleGAN2-based cGAN [22] or a guided-diffusion model [10] to generate images. The synthesized images are then passed through a classifier trained on the original dataset to calculate the likelihood of each new image belonging to the target class. Only images with a high likelihood are appended to the training set. Experiments reveal that the accuracy improvement decays exponentially with respect to the metric values. The overall idea is illustrated in Figure 1.
We summarize our contributions as follows:
* We propose a metric measuring image dataset class similarity (Section 3.3, SSIM-supSubCls). A lower metric value implies lower image similarity across different classes.
* We propose a data augmentation pipeline re-balancing the imbalanced image dataset using existing deep generative models, like cGAN or diffusion models, jointly with a classifier as image filtering technique.
* We evaluate our proposed metric and pipeline on imbalanced datasets, including the iNaturalist-2019 [24], flowers [1], UTKFace [3], and scene [2] datasets. Qualitative and quantitative results suggest that the pipeline can faithfully generate high-quality images. Experiments also show that the top-1 validation accuracy increment after dataset re-balancing decays exponentially with respect to the metric values. With this exponential curve fitted, one can easily predict the expected accuracy improvement for any image dataset from its SSIM-supSubCls value.
## 2 Related Works
### Imbalanced Dataset
Many real-world datasets are not uniformly distributed over the different classes. Such imbalanced datasets introduce biases towards the majority classes and affect the generalizability of classifiers trained on them. Widely used solutions include imposing an inductive bias, or statistical re-sampling or re-weighting, to alleviate the extreme imbalance. To enhance learning performance on imbalanced data, [17] sought a principled approach to selecting unlabeled data in the wild, scaling up the original dataset with a contrastive learning approach. Showing that real-world datasets often have a long-tailed and open-ended distribution, [19] defined Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimized the classification accuracy by handling tail recognition robustness with dynamic meta-embedding over a balanced test set. To deal with the shift in the test label distribution caused by training set imbalance, [23] derived a post-training prior re-balancing technique through a KL-divergence-based optimization.
### Data Augmentation and Deep Generative Models
When the training image dataset is imbalanced, the classification accuracy can deteriorate significantly, especially for the tail classes. One way to mitigate this problem is data augmentation on the minority classes. Traditional methods include geometric transformations like image rotations or translations, and re-sampling with SMOTE [6] or ADASYN [13]. Efforts have also been made to use deep generative models, like class-conditional GANs (cGAN) [22] or diffusion models [10], to restore the dataset balance by synthesizing minority-class samples.
[11] experimented with different GAN architectures generating new training data for classification tasks and showed that the GAN improved the results compared with the original imbalanced dataset.

Figure 2: An example from the iNaturalist-2019 Birds dataset with high image similarity between classes. Here class 202 and class 203 share similar background, bird shape, and color; hence, expert knowledge is required to distinguish between these two species.

[20] proposed a Balancing Generative Adversarial Network (BAGAN) architecture to generate new images for the minority classes. [4] used a Data Augmentation Generative Adversarial Network (DAGAN) by taking data from a source domain and generalising it to a different low-data target domain. Class-conditional GAN (cGAN) extends the standard unconditional GAN to learn data-label distributions and is hence capable of generating images belonging to specific classes. Observing that class-conditioning causes mode collapse in limited-data settings, where unconditional learning still achieves satisfactory generative ability, [22] suggested a transitional training strategy for cGANs that effectively prevents mode collapse by leveraging unconditional learning. This strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
A diffusion model is a parameterized Markov chain trained with variational inference to produce samples matching the data after a finite number of steps [16]. The transitions of this chain are learned to reverse a diffusion process, a Markov chain that gradually adds noise to the data in the direction opposite to sampling until the signal is destroyed. Showing that diffusion models can achieve image sample quality superior to current state-of-the-art GANs, [10] improved synthesized image quality with classifier guidance.
In our proposed data augmentation pipeline, both the cGAN with transitional training strategy [22] and the guided-diffusion model [10] are adopted and evaluated.
## 3 Method
In this section, we explain why we choose SSIM as the basis of our proposed metric (Section 3.1), how we group the dataset sub-classes (i.e., the original classes) into super-classes to compute the metric value (Section 3.2), how the metric is constructed (Section 3.3), and how we augment the imbalanced image dataset using our proposed pipeline (Section 3.4).
### Structural Similarity
As introduced in [26], the human visual system is able to assess the perceptual similarity between two images and is highly sensitive to structural differences. For measuring image differences, naive metrics such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR) cannot capture perceived visual quality. As an alternative, the Structural Similarity Index (SSIM) [25] compares the structures of the reference and the distorted image signals, which correlates better with human perception.
SSIM compares the local patterns of pixel intensities in images whose luminance and contrast have been normalized. It separates the similarity measurement into three comparisons: luminance, contrast, and structure. Suppose \(\mathbf{x}\) and \(\mathbf{y}\) are two images aligned with each other. Denote the mean intensities by \(\mu_{x}\) and \(\mu_{y}\), the standard deviations by \(\sigma_{x}\) and \(\sigma_{y}\), and the covariance by \(\sigma_{xy}\). Choose the constant \(C_{1}=(K_{1}L)^{2}\), where \(L\) is the dynamic range of the pixel values (255 for 8-bit grayscale images) and \(K_{1}\ll 1\) is a small constant; similarly, choose \(C_{2}=(K_{2}L)^{2}\) with \(K_{2}\ll 1\). These three components can then be combined to yield a final image similarity measurement:
\[\mathrm{SSIM}(\mathbf{x},\mathbf{y})=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2}) }{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \tag{1}\]
The SSIM metric equals one when the two images \(\mathbf{x}\) and \(\mathbf{y}\) are identical and decreases as differences become more relevant; a value of zero indicates no structural similarity. In this paper, we extend the concept of structural similarity to an image dataset class hierarchy, as shown in Section 3.2.
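As a concrete illustration of Eq. (1), a minimal global-statistics SSIM can be sketched as follows. Note that the published SSIM is computed over local sliding windows and averaged; this whole-image version is a simplification, and the default constants \(K_1=0.01\), \(K_2=0.03\) are the common choices rather than values stated in this paper.

```python
# Minimal sketch of Eq. (1): whole-image SSIM for two aligned grayscale images.
# The reference formulation applies this per local window and averages.
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, L: float = 255.0,
         K1: float = 0.01, K2: float = 0.03) -> float:
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

Identical images yield a value of one, and the value drops as the structures diverge, matching the behavior described above.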
### Super-class Information for Image Dataset
Real-world datasets usually follow long-tailed rather than uniform class distributions. Figure 3 shows the class distribution of iNaturalist-2019 Birds [12] as an example of such an imbalanced dataset.
Figure 3: (_Top_) Sample images of the original iNaturalist-2019 Birds dataset from different super-classes (here only 4 of 9 super-classes are presented for visualization). Though images are from different sub-classes, within each super-class they share similar bird attributes and thus are grouped together. (_Bottom_) Bar plot for its class distribution. Notice that the distribution of the original dataset is imbalanced and the one after GM-augCls is balanced.
A naive image dataset similarity measurement would be to directly compute the average SSIM of every image against every other image in the dataset. Unfortunately, this is computationally expensive for large-scale datasets. Thus, we alleviate the time complexity by bootstrapping. Essentially, what we want to calculate is the expectation of the SSIM within an image set, as in Eq. 2. In practice, each time we randomly pick a pair of images \(\mathbf{I}_{i}\neq\mathbf{I}_{j}\) from the set \(\mathcal{X}\) and evaluate their SSIM. This procedure is repeated enough times (here, \(2\times\) the number of images in the whole imbalanced dataset) for the average of the SSIM values to converge, and the final similarity measurement is the mean of the SSIM values over the sampled pairs.
\[\mathrm{SSIM}_{\mathrm{set}}(\mathcal{X})=\mathbb{E}_{\mathbf{I}_{i},\mathbf{I}_{j}\in\mathcal{X}}\left[\mathrm{SSIM}(\mathbf{I}_{i},\mathbf{I}_{j})\right] \tag{2}\]
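The bootstrapped estimator of Eq. (2) can be sketched as follows. This is a minimal sketch: `ssim_fn` stands in for any pairwise SSIM implementation and is an assumption, not the authors' code.

```python
# Sketch of the bootstrapped estimator of Eq. (2): average SSIM over random
# image pairs drawn from a set, using 2x the set size as the repetition count,
# as described in the text.
import random

def ssim_set(images, ssim_fn, n_pairs=None, seed=0):
    rng = random.Random(seed)
    if n_pairs is None:
        n_pairs = 2 * len(images)  # repetition count used in the paper
    total = 0.0
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(images)), 2)  # a pair with I_i != I_j
        total += ssim_fn(images[i], images[j])
    return total / n_pairs
```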
If we simply let \(\mathcal{X}\) consist of all the images in the whole imbalanced training set, we get a straightforward metric, called Merged-Class Structural Similarity (SSIM-mergeCls). However, by mixing images from all classes, we risk losing important information about the original dataset class structure. Thus, instead of using a flat structure (all classes at the same level) for the classification task, we use a hierarchical class structure based on some ontology or human annotations.
One may notice that many existing image datasets for classification tasks can be further grouped into broader classes (i.e., super-classes), each subsuming several of the original classes (i.e., sub-classes) that share the same image attributes under specific metrics [27]. This super-class information can be used in our scenario. For a natural-world creatures dataset like iNaturalist-2019 Birds in Figure 3, these super-classes are formed through human inspection in a top-down manner, as in Figure 4:
1. First, all the sub-classes in the original dataset are grouped according to their species. E.g., in Birds dataset, super-class 1 (Oriole), super-class 2 (Crow), super-class 3 (Sparrow), super-class 4 (Seagull) and super-class 5 (Eagle) are grouped in this step.
2. Then, based on the image background, the sub-classes that are difficult to cluster in step 1 are further grouped by their habitats. E.g., forest bird super-classes 6 and 7 versus aquatic bird super-classes 8 and 9.
3. Finally, based on the object details, the super-classes acquired in step 2 can be further grouped by bird size, shape, and color. E.g., large aquatic bird super-class 8 vs. small aquatic bird super-class 9.
Table 1 provides the number of classes in each dataset used in this study. Note that for the UTKFace, scene, and flowers datasets, each super-class contains only one sub-class, since the original datasets are already grouped by species. The grouping rules for the other datasets are provided in Supplemental Material Section S2. With this super-class information, we propose a metric that takes a narrower scope of image classes into consideration, called Super-Sub Class Structural Similarity (SSIM-supSubCls), yielding a more relevant class similarity measurement.
### SSIM-supSubCls Metric
In our proposed SSIM-supSubCls metric, rather than calculating the SSIM over the whole training set \(\mathcal{X}\), we compute it over each super-class set \(\mathcal{X}_{k}\), as in Eq. 3. In practice, each time we randomly select a pair of images \(\mathbf{I}_{a}\neq\mathbf{I}_{b}\) from the super-class set \(\mathcal{X}_{k}\) and evaluate their SSIM. This process is again repeated enough times (here, \(2\times\) the number of images in each super-class set \(\mathcal{X}_{k}\)) for the average of the SSIM values to converge. The similarity measurement for each super-class \(k\) is the mean of the SSIM values over the sampled pairs in \(\mathcal{X}_{k}\).
For classification, the classes that are most difficult to distinguish are the key factors limiting performance. Thus, our final SSIM-supSubCls measurement is the maximum of the SSIM values over all super-class sets \(\mathcal{X}_{k}\), which indicates the largest similarity among all super-classes.
\[\mathrm{SSIM}_{\mathrm{supSubCls}}(\mathcal{X})=\max_{\mathcal{X}_{k}\subset \mathcal{X}}\{\mathrm{SSIM}_{\mathrm{set}}(\mathcal{X}_{k})\} \tag{3}\]
This metric provides prior knowledge on whether deep generative model data augmentation will work for a given image classification dataset by measuring its image similarity within super-class sets. Deep generative models like cGANs or diffusion models can faithfully generate images from classes with large distinctions, such as different species, but struggle to reconstruct very subtle distinctions in the image objects, e.g., whether or not there are black feathers on a bird's wings (as in Figure 3 (_Top_), super-class 4, Seagulls). Hence, deep generative model data augmentation cannot improve classification performance on every dataset. Our proposed metric gives the researcher insight into which data augmentation method to use instead of blindly applying GM-augCls to any image dataset. From the definition in Eq. 3, a higher SSIM-supSubCls value indicates higher similarity among the super-classes in the dataset, making it harder for the deep generative model to reconstruct the image details that distinguish sub-classes within a super-class, and thus leaving less chance to benefit from GM-augCls.
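Eq. (3) then reduces to a maximum over per-super-class similarities. A minimal sketch, with a stub similarity function standing in for the bootstrapped estimator of Eq. (2) (the super-class names and stub values below are purely illustrative):

```python
# Sketch of Eq. (3): SSIM-supSubCls is the maximum per-super-class set
# similarity. `ssim_set_fn` is any implementation of the Eq. (2) estimator;
# `super_classes` maps a super-class id to its list of images.

def ssim_sup_sub_cls(super_classes, ssim_set_fn):
    """Return max_k SSIM_set(X_k) over the super-class image sets X_k."""
    return max(ssim_set_fn(images) for images in super_classes.values())

# Toy example: the most self-similar (hardest) super-class dominates the metric.
stub = {"seagull": [1, 2], "oriole": [3, 4]}
score = ssim_sup_sub_cls(stub, lambda imgs: 0.9 if imgs == [1, 2] else 0.4)
# score == 0.9
```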
### cGAN-aug Classification Pipeline
In this section we propose a pipeline to verify that our metric correlates with the effectiveness of deep generative model data augmentation in improving classification performance. An illustration of the pipeline, called GM-augCls, is provided in Figure 5. It shows that, iteratively, using existing deep generative models like cGANs or diffusion models together with classifiers, we can generate a new balanced dataset augmented by _qualified_ artificial images. By qualified, we mean that only synthesized images assigned a high class probability during the classification step are kept to enlarge the dataset.
In practice, we use a StyleGAN2 [18] based cGAN with transitional training strategy [22] or a guided-diffusion model [10] to generate artificial images as candidates to augment the imbalanced dataset. These deep generative models are trained on the original imbalanced training set. Then
| **Dataset** | UTKFace | Birds | Insects | scene | flowers | Fungi | Reptiles | Amphibians |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **sup-cls** (number of super-classes) | 5 | 9 | 14 | 6 | 5 | 4 | 2 | 4 |
| **sub-cls** (number of original sub-classes) | 5 | 126 | 141 | 6 | 5 | 12 | 39 | 10 |

Table 1: The number of classes in each dataset.
Figure 4: The resulting class hierarchy of iNaturalist-Birds dataset by grouping the original sub-classes. The whole dataset contains 9 super-classes and each super-class consists of various sub-classes.
Figure 5: Schematic diagram of our proposed Deep Generative Model Data Augmentation Classification (GM-augCls) method. First, a deep generative model (GM) \(M\) and a classifier \(C(\cdot)\) are trained on the whole imbalanced training set \(\mathcal{X}\). \(M\) is trained only once, while \(C(\cdot)\) is updated each time by training on the new dataset \(\mathcal{X}^{\prime}\) augmented with qualified new images. This procedure is iterated for several steps, and in every step \(C(\cdot)\) is replaced with the \(C^{\prime}(\cdot)\) acquired in the immediately preceding step. The augmented \(\mathcal{X}^{\prime\prime}\) is the final balanced dataset.
a ResNet [15] or Masked Autoencoder (MAE) [14] classifier, also trained on the original set with sub-class labels, is applied to the synthesized images to select the qualified ones. The selected images combined with the original ones compose our final balanced dataset.
Our goal is to enhance classification performance on an imbalanced image dataset. The deep generative model can generate images belonging to each sub-class, with the number of synthesized images under the user's control. However, not all new images fall into the desired sub-class, even with a cGAN capable of generating class-specific images, since the deep generative model is trained on the whole dataset. Thus, we need a classifier to pick only the images in the expected class. As shown in Fig 5 step 1, since at the beginning no artificial images have been used to augment the dataset, we use a classifier \(C(\cdot)\) trained on the whole original imbalanced set \(\mathcal{X}\) to classify each sub-class. Under a prediction probability threshold \(\alpha\), \(C(\cdot)\) selects the _qualified_ images from all the synthesized ones as new members of the augmented balanced set \(\mathcal{X}^{\prime}\). In this sense, the to-be-balanced dataset and the to-be-improved classifier bootstrap each other: on one hand we use the dataset to train the classifier, on the other hand we use the classifier to boost the dataset.
The above process is therefore iterated for several steps. In each step, the pretrained classifier \(C(\cdot)\) is replaced with the classifier \(C^{\prime}(\cdot)\) trained on the new set \(\mathcal{X}^{\prime}\) acquired in the immediately preceding step, until the validation accuracy with \(C^{\prime}(\cdot)\) no longer improves beyond a specified tolerance \(\epsilon\). In practice, this is repeated for at most two steps in our experiments. Finally, the augmented set \(\mathcal{X}^{\prime\prime}\) is the balanced dataset we want.
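The iterative loop of Fig. 5 can be sketched on toy data as follows. All components (`gm_sample`, `train_clf`, `predict_proba`, `accuracy`) are illustrative stand-ins, not the authors' implementation, and the sketch assumes the generator eventually produces a qualified image for every tail class.

```python
# Schematic, runnable sketch of the GM-augCls loop: generate candidates per
# tail class, keep only images the current classifier assigns probability
# >= alpha, retrain, and stop once the accuracy gain falls below eps.
import random
from collections import Counter

def gm_aug_cls(X, gm_sample, train_clf, predict_proba, accuracy,
               alpha=0.9, eps=1e-3, max_steps=2, seed=0):
    rng = random.Random(seed)
    C = train_clf(X)                  # classifier on the original imbalanced set
    best = accuracy(C)
    for _ in range(max_steps):
        counts = Counter(y for _, y in X)
        target = max(counts.values())  # balance every class up to the head class
        X_new = list(X)
        for cls, n in counts.items():
            need = target - n
            while need > 0:
                img = gm_sample(cls, rng)                 # candidate from the GM
                if predict_proba(C, img)[cls] >= alpha:   # keep "qualified" only
                    X_new.append((img, cls))
                    need -= 1
        C_new = train_clf(X_new)       # C'(.) trained on the augmented set
        if accuracy(C_new) - best < eps:                  # tolerance-based stop
            break
        C, best, X = C_new, accuracy(C_new), X_new
    return X, C
```

The `max_steps=2` default mirrors the at-most-two-step iteration reported in the text.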
### Selection on Classifier and Deep Generative Model
In our work, the classifier used to select qualified images and to validate classification accuracy is ResNet18 [15]. To show that our proposed metric and pipeline work regardless of the classifier selection, we also provide results with the Masked Autoencoder (MAE) ViT-H\({}_{128}\)[14], which achieves state-of-the-art results on datasets such as [12].
For the GAN architecture, we use a StyleGAN2 [18] based cGAN trained on the original imbalanced image dataset. Observing that class-conditional learning leads to mode collapse while unconditional learning still maintains satisfactory generative ability under limited-data settings, [22] proposed a learning approach that starts with an unconditional GAN and gradually injects class-conditional information into the generator and the objective function. Our pipeline with cGAN adopts this transitional training strategy.
Diffusion models, which can achieve image sample quality superior to current state-of-the-art GANs, are also evaluated in our pipeline. Specifically, we use guided-diffusion [10], which improves synthesized image quality with classifier guidance.
## 4 Experiments
### Experimental Settings
To evaluate the proposed methods, we carried out our main experiments on the iNaturalist-2019 [12], flowers [1], UTKFace [3], and scene [2] datasets, which have either long-tailed or otherwise imbalanced distributions. iNaturalist-2019 has around 265K images in its long-tailed training set and 3K images in a balanced validation set with 6 categories, each containing a different number of sub-classes. Here we use the Insects, Birds, Amphibians, Fungi, and Reptiles categories for our experiments. The flowers dataset contains nearly 4K images in an imbalanced training set with 5 sub-classes. The UTKFace dataset is a large-scale human face dataset with 5 ethnicities across a long age span; in this work we predict ethnicity. The imbalanced scene dataset contains about 25K images from a wide range of natural scenes with 6 sub-classes. The class distributions of all these datasets are shown in Supplemental Material Section S3.
In our experiments, the deep generative models and the ResNet18 and MAE classifiers are first trained on the original imbalanced set with sub-class rather than super-class labels. The cGAN models are trained until their Fréchet Inception Distance (FID) scores converge. The diffusion models are trained for 100k iterations, resumed from an ImageNet [9] pre-trained model. To guide the diffusion model during sampling, an additional classifier is also trained until its validation accuracy converges [10]. The ResNet18 classifiers are trained for 100 epochs and the MAE for 50 epochs, by which point their validation accuracy converges, with hyper-parameters kept the same throughout the whole procedure in each case.
In this paper, we report only the classification results at the final step. The maximum top-1 validation accuracies are provided before and after the GM-augCls pipeline. To verify whether SSIM-supSubCls correlates with the classification accuracy improvement, the metric values are measured on the original imbalanced training sets.
### Qualitative Evaluation Results
Table 2 provides the FID scores for the StyleGAN2-based cGAN and GD models. Figure 6 provides sample images comparing the original and GM-augCls-synthesized ones. They demonstrate that our data augmentation method can generate high-quality images for each sub-class after classifier selection.
### Quantitative Evaluation Results
We tested our metric and pipeline on the iNaturalist-2019, flowers, UTKFace, and scene datasets with the original balanced validation sets. Table 3 shows the top-1 validation accuracy with cGAN and guided-diffusion (GD) models trained on the original and the cGAN-augCls- or GD-augCls-augmented datasets. As an ablation study, results on the cGAN-augmented datasets _without_ image selection are also provided as a baseline.
As shown in Table 3, for the flowers, iNaturalist-2019 Fungi, Reptiles, and Amphibians datasets, compared with the ResNet18 classifier trained on the original imbalanced set, the ones trained on the cGAN-augCls balanced set achieve significantly improved accuracy. To confirm that this conclusion holds regardless of the classifier selection, we also conduct the experiments with the MAE classifier and obtain the same results. Supplemental Material Section S5 provides the learning curves for this procedure.
As a well-known regularization-based method, the class-balanced loss (CB-loss) [8] is specifically designed to add weights for each class in the loss function. For comparison, Table 4 also provides the top-1 validation accuracy when training ResNet18 with CB-loss on the original dataset. Compared with the results in Table 3, using the Effective Number Class-Balanced loss does not improve accuracy as much as our proposed pipeline.
To verify that our proposed metric correlates with the accuracy improvement, we measure the SSIM-mergeCls and SSIM-supSubCls values for each dataset; Table 5 summarizes them. The table shows that the deep generative model data augmentation method only works when the metric values are small. This observation confirms our hypothesis that, for an image dataset with high similarity between classes, the deep generative model struggles to faithfully reconstruct the subtle distinctions between class features, and the classifier struggles to distinguish the various classes, which leads to a lower classification accuracy improvement.
We further quantify the relationship between SSIM-supSubCls values and GM-augCls relative accuracy improvement in Table 5. As shown in Figure 7, the top1 validation accuracy increment decays exponentially with respect to the metric values, and this exponential function can be fitted as:
\[\hat{f}(x)=0.94^{202.74x-79.92} \tag{4}\]
With this exponential curve fitted, one can easily predict the expected accuracy improvement for any image dataset from its SSIM-supSubCls value. Table 6 provides goodness-of-fit (\(R^{2}\)) and Mean Absolute Error (MAE) measurements for the function \(\hat{f}\). The high \(R^{2}\) and low MAE values show that \(\hat{f}\) is highly effective in modeling the relationship between SSIM-supSubCls and the accuracy improvement obtained with our proposed data augmentation pipeline.
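Using Eq. (4) as a predictor is a one-liner; the sanity-check inputs below are the SSIM-supSubCls values reported in Table 5 (Fungi 0.0880, UTKFace 0.3834), and the expected ranges are read off the reported relative improvements:

```python
def predicted_improvement(ssim_supsubcls):
    """Expected relative top1 accuracy improvement (in %) from the
    fitted curve of Eq. (4): f_hat(x) = 0.94 ** (202.74 * x - 79.92)."""
    return 0.94 ** (202.74 * ssim_supsubcls - 79.92)

# Low SSIM-supSubCls (Fungi, 0.0880) predicts a large gain (tens of percent);
# high SSIM-supSubCls (UTKFace, 0.3834) predicts a small one (a few percent).
print(predicted_improvement(0.0880))
print(predicted_improvement(0.3834))
```

Both predictions fall close to the measured improvements in Table 5 (50.00% and 2.41%, respectively).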
Table 7 compares the state-of-the-art (SOTA) classification validation accuracy from [14] with ours on the whole iNaturalist-2019 dataset. From this table, our result with the Masked Autoencoder (MAE) is comparable to the SOTA one, even though the SOTA uses \(448\times 448\) input images while ours uses \(128\times 128\), which largely reduces running time and computational resources.
## 5 Conclusion and Discussion
We propose a metric called Super-Sub Class Structural Similarity (SSIM-supSubCls) to evaluate the image set
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Dataset** & **StyleGAN2 FID** & **GD FID** \\ \hline \hline UTKFace & 4.37 & - \\ Birds & 4.68 & 28.25 \\ Insects & 6.56 & - \\ scene & 9.84 & 54.34 \\ flowers & 15.21 & - \\ Fungi & 8.67 & 50.08 \\ Reptiles & 6.60 & - \\ Amphibians & 11.74 & - \\ \hline \end{tabular}
\end{table}
Table 2: FID scores for StyleGAN2 based cGAN and Guided Diffusion on each dataset.
Figure 6: Results for qualitative evaluation of our proposed GM-augCls pipeline. These sample images indicate that our pipeline can synthesize high quality images belonging to each sub-class even though the deep generative models are trained on the whole original dataset. More sample images are provided in Supplemental Material Section S4.
class similarity serving as an indicator of classification accuracy improvement after data re-balancing with deep generative models. We also provide a new pipeline achieving data augmentation efficiently for imbalanced image datasets, using cGAN or diffusion models and ResNet or Masked Autoencoder (MAE) classifiers. With the user controlling the number of images generated for each sub-class, the original dataset can be re-balanced with qualified images.
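The per-class generation budget follows directly from the class histogram; a minimal sketch with hypothetical class names and counts (the paper lets the user pick the target size, here defaulting to the largest class):

```python
def images_to_generate(class_counts, target=None):
    """Number of synthetic images needed per class to balance the set.
    `target` defaults to the largest class size."""
    target = max(class_counts.values()) if target is None else target
    return {c: max(target - n, 0) for c, n in class_counts.items()}

counts = {"head_class": 1200, "mid_class": 300, "tail_class": 40}
print(images_to_generate(counts))
# -> {'head_class': 0, 'mid_class': 900, 'tail_class': 1160}
```

The generative model then synthesizes (and the classifier filters) that many candidate images per class.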
We applied our pipeline to the imbalanced iNaturalist-2019, flowers, UTKFace and scene datasets, evaluating classification accuracy when trained on the original dataset and on the balanced dataset after augmentation. Experimental results reveal that the top1 validation accuracy increment decays exponentially with respect to the SSIM-supSubCls val
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**SOTA Acc\({}^{1}\)** & \multicolumn{2}{c|}{**Original Acc**} & \multicolumn{2}{c|}{**cGAN-augCls Acc**} \\ \cline{2-5} & **ResNet18** & **MAE** & **ResNet18** & **MAE** \\ \hline \hline
**88.3\%** & 54.4\% & **83.1\%** & 55.4\% & 82.5\% \\ \hline \end{tabular} \({}^{1}\) State-of-the-art (SOTA) classification validation accuracy with Masked Autoencoder ViT-H\({}_{448}\)[14]
\end{table}
Table 7: Comparison of the state-of-the-art (SOTA) and our top1 classification validation accuracy on the whole iNaturalist-2019 dataset.
Table 6: Goodness-of-fit \(R^{2}\) and Mean Absolute Error (MAE) measurements for the SSIM-supSubCls vs GM-augCls function \(\hat{f}\).
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Dataset** & **UTKFace** & **scene** & **flowers** & **Amphibians** \\ \hline \hline
**CB-loss Acc** & 73.65\% & 87.71\% & 75.86\% & 20.00\% \\ \hline \end{tabular}
\end{table}
Table 4: Top1 validation accuracy with ResNet18 CB-loss training on the original dataset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
**Dataset** & \multicolumn{2}{c|}{**Original Acc**} & \multicolumn{2}{c|}{**cGAN-aug Acc\({}^{1}\)**} & \multicolumn{2}{c|}{**cGAN-augCls Acc**} & **GD-augCls Acc** \\ \cline{2-8} & **ResNet18** & **MAE** & **ResNet18** & **MAE** & **ResNet18** & **MAE** & \\ \hline \hline UTKFace & 82.31\% & **87.63\%** & 83.32\% & 86.62\% & **84.29\%** & 86.82\% & - \\ Birds & 52.12\% & **80.42\%** & 52.12\% & 80.16\% & 54.50\% & 80.16\% & **55.03\%** \\ Insects & 61.23\% & **87.23\%** & 59.34\% & 86.29\% & **61.70\%** & **87.23\%** & - \\ scene & 90.12\% & 94.29\% & 90.53\% & 94.35\% & **90.65\%** & **94.41\%** & 90.24\% \\ flowers & 64.14\% & 95.40\% & 79.86\% & 95.86\% & **80.00\%** & **96.55\%** & - \\ Fungi & 50.00\% & **88.89\%** & 66.67\% & **88.89\%** & **75.00\%** & **88.89\%** & 69.44\% \\ Reptiles & 28.21\% & 81.20\% & 38.63\% & 79.49\% & **41.03\%** & **82.05\%** & - \\ Amphibians & 20.00\% & 74.67\% & **43.33\%** & 76.67\% & 36.67\% & **80.00\%** & - \\ \hline \end{tabular} \({}^{1}\) When trained on the cGAN augmented balanced dataset without classifier selection (as baseline)
\end{table}
Table 3: Quantitative results of our proposed pipeline with ResNet18 and Masked Autoencoder (MAE) for image selection and evaluation. Here we list the maximum top1 classification validation accuracy within 100 epochs for ResNet18 and 50 epochs for MAE. All classifiers are trained on the original imbalanced set and on the balanced set after the data augmentation pipeline, respectively.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Dataset** & **SSIM-mergeCls** & **SSIM-supSubCls** & **cGAN-augCls Acc Impr\({}^{1}\)** & **cGAN-augCls Works** \\ \hline \hline UTKFace & 0.3649 & 0.3834 & 2.41\% & \(\times\) \\ Birds & 0.1698 & 0.3742 & 4.57\% & \(\times\) \\ Insects & 0.1373 & 0.3115 & 0.77\% & \(\times\) \\ scene & 0.1462 & 0.2625 & 0.59\% & \(\times\) \\ flowers & 0.1319 & 0.1652 & 24.73\% & ✓ \\ Fungi & 0.0617 & 0.0880 & 50.00\% & ✓ \\ Reptiles & 0.0645 & 0.0793 & 45.44\% & ✓ \\ Amphibians & 0.0725 & 0.0792 & 83.35\% & ✓ \\ \hline \end{tabular} \({}^{1}\) The relative accuracy improvement in cGAN-augCls Acc with the ResNet18 classifier (calculated from Table 3, normalized by the original accuracy)
\end{table}
Table 5: This table provides the SSIM-mergeCls and SSIM-supSubCls metric values for each dataset. Based on the relative accuracy improvement, whether the cGAN-augCls pipeline works or not is also denoted. Note that the cGAN data augmentation only works when the SSIM-mergeCls values are no larger than the threshold 0.1319, or the SSIM-supSubCls values no larger than the threshold 0.1652.
ues regardless of model architecture or dataset. With this exponential curve fitted, one can easily predict the expected accuracy improvement for any image dataset from its metric value.
|
2306.08668 | On-Demand Generation of Indistinguishable Photons in the Telecom C-Band
using Quantum Dot Devices | Semiconductor quantum dots (QDs) enable the generation of single and
entangled photons, useful for various applications in photonic quantum
technologies. Specifically for quantum communication via fiber-optical
networks, operation in the telecom C-band centered around 1550$\,$nm is ideal.
The direct generation of QD-photons in this spectral range and with high
quantum-optical quality, however, remained challenging. Here, we demonstrate
the coherent on-demand generation of indistinguishable photons in the telecom
C-band from single QD devices consisting of InAs/InP QD-mesa structures
heterogeneously integrated with a metallic reflector on a silicon wafer. Using
pulsed two-photon resonant excitation of the biexciton-exciton radiative
cascade, we observe Rabi rotations up to pulse areas of $4\pi$ and a high
single-photon purity in terms of $g^{(2)}(0)=0.005(1)$ and $0.015(1)$ for
exciton and biexciton photons, respectively. Applying two independent
experimental methods, based on fitting Rabi rotations in the emission intensity
and performing photon cross-correlation measurements, we consistently obtain
preparation fidelities at the $\pi$-pulse exceeding 80$\%$. Finally, performing
Hong-Ou-Mandel-type two-photon interference experiments we obtain a
photon-indistinguishability of the full photon wave packet of up to $35(3)\%$,
representing a significant advancement in the photon-indistinguishability of
single photons emitted directly in the telecom C-band. | Daniel A. Vajner, Paweł Holewa, Emilia Zięba-Ostój, Maja Wasiluk, Martin von Helversen, Aurimas Sakanas, Alexander Huck, Kresten Yvind, Niels Gregersen, Anna Musiał, Marcin Syperek, Elizaveta Semenova, Tobias Heindel | 2023-06-14T17:59:03Z | http://arxiv.org/abs/2306.08668v2 | # On-demand Generation of Indistinguishable Photons in the Telecom C-Band using Quantum Dot Devices
###### Abstract
Semiconductor quantum dots (QDs) enable the generation of single and entangled photons for applications in quantum information and quantum communication. While QDs emitting in the 780 nm to 950 nm spectral range feature close-to-ideal single-photon purities and indistinguishabilities, they are not the best choice for applications in fiber-optical networks, due to the high optical losses in this wavelength regime. The preferable choice here is QDs operating in the lowest-loss spectral window around 1550 nm (telecom C-band). In this work, we demonstrate the coherent on-demand generation of indistinguishable photons in the telecom C-band from single QD devices consisting of InAs/InP QD-mesa structures heterogeneously integrated with a metallic reflector on a silicon wafer. Using pulsed two-photon resonant excitation of the biexciton-exciton radiative cascade, we observe Rabi rotations up to pulse areas of \(4\pi\) and a high single-photon purity in terms of \(g^{(2)}(0)=0.005(1)\) and 0.015(1) for exciton and biexciton photons, respectively. We obtain two-photon interference visibilities of up to \(35(3)\%\) in Hong-Ou-Mandel-type experiments by comparing co- and cross-polarized coincidences. This represents a significant advancement in the photon-indistinguishability of single photons emitted directly in the telecom C-band without wavelength conversion.
## I Introduction
Single indistinguishable photons are a key resource for many applications in quantum information ranging from quantum communication to distributed quantum computing. They are an essential requirement for quantum networks and the quantum internet [1]. While a plethora of quantum emitters enable the generation of single photons [2; 3], epitaxial semiconductor quantum dots (QD) turned out to be advantageous in many regards [4; 5; 6; 7]. Over the last decades, photons generated on-demand using QDs demonstrated unprecedented quantum optical properties in terms of high purity, brightness and indistinguishability and have been repeatedly employed in implementations of quantum communication [8]. So far, such close-to-ideal single-photon sources have only been demonstrated at wavelengths around 780 nm for GaAs/AlGaAs QDs [9; 10] and 930 nm for InGaAs/GaAs QDs [11; 12; 13; 14; 15]. For long-distance quantum information transfer via optical fibers, however, wavelengths around 1550 nm, i.e. in the telecom C-band, are required to benefit from lowest losses in optical fibers. To shift the emission of QDs to C-band wavelength, quantum frequency conversion of QD-photons emitted at shorter wavelengths can be used [16; 17; 18], which introduces additional conversion losses ultimately limiting the source efficiency. For this reason, QDs directly emitting photons at telecom wavelengths are desirable, requiring carefully tailored growth schemes.
One solution is to introduce metamorphic buffer layers to engineer the strain and size of InAs/InGaAs QDs, which shifts the emission to longer wavelengths [19; 20]. QDs grown on metamorphic buffers have been advanced in recent years [21; 22; 23] and recently triggered photon-indistinguishabilities of \(V\approx 10\%\) and \(V=14(2)\%\) were reported under quasi-resonant [24] and resonant [25] excitation, respectively. An alternative approach uses the InP material system for the growth of InAs/InP QDs naturally emitting at telecom C-band wavelengths. Various studies investigated the properties of InP-based QDs under above-band and quasi-resonant excitation [26; 27; 28; 29; 30; 31; 32; 33]. A notable recent advancement in this context concerns the demonstration of a scalable device platform, by deterministically integrating single InAs/InP QDs into circular Bragg grating cavities, resulting in a triggered, Purcell-enhanced emission and a photon-indistinguishability of \(V=19(3)\%\) under quasi-resonant excitation [34]. The pulsed on-demand generation of InP-based QD single photons at C-band wavelengths, however, is an important requirement for applications that has not been achieved to date.
This work presents studies on single InAs/InP QDs integrated into mesa structures using pulsed coherent excitation. Coherently driving the biexciton-exciton (XX-X) radiative cascade via triggered two-photon-resonant excitation (TPE) of the XX-state, we demonstrate the on-demand generation of single photons with high purity in terms of \(g^{(2)}(0)\approx 1\%\)
and record-high photon-indistinguishabilities of \(35(3)\%\) in the telecom C-band. Our work thus represents a notable advancement in the generation of flying qubits for quantum networking in optical fibers.
The InAs/InP QDs used in this work have been obtained by self-assembled Stranski-Krastanow growth in the near-critical regime via metalorganic vapor phase epitaxy (MOVPE) [35]. A bottom metallic mirror (Aluminum) in combination with a top mesa structure enhances the photon extraction efficiency to \(\approx 10\%\)[33], while the silicon (Si) substrate allows future CMOS integration (see Fig. 1(a)). For more information on the device fabrication and a detailed pre-characterization we refer the interested reader to Ref. [33]. The spectrum of a single QD-mesa under above-band excitation (continuous wave (CW) laser at \(980\,\mathrm{nm}\)) is shown in Fig. 1(b). Photons originating from the XX-X radiative cascade and a trion-state of the same QD are observed with wavelengths near \(1530\,\mathrm{nm}\) in the telecom C-band. The spectral separation of the XX- and X- emission (i.e. the XX binding energy) is determined to be \(3.0(1)\,\mathrm{meV}\), allowing for TPE of the biexciton state [36]. Figure 1(c) depicts a spectrum of the same QD from (b) under pulsed TPE, revealing the same spectral fingerprint of the QD together with scattered laser light at \(1530\,\mathrm{nm}\). Here, we used laser pulses with a spectral width of \(\approx 0.5\,\mathrm{nm}\) full-width half-maximum (FWHM) (i.e. \(10\,\mathrm{ps}\) pulse duration) sliced from \(2\,\mathrm{ps}\) laser pulses (picoEmerald, APE GmbH) using a 4f pulse-shaper. We extract upper bounds for the linewidths of \(180(17)\,\mathrm{\mu eV}\) and \(120(22)\,\mathrm{\mu eV}\) FWHM for the X- and XX-transition, respectively, limited by the spectrometer resolution.
The coherent population of the XX-X cascade under TPE can be confirmed via excitation-power dependent measurements. Figure 2(a) and (b) display the laser-power-dependent integrated emission intensities of the X- and XX-state, respectively. Here, clear Rabi rotations are observed up to pulse areas of \(4\pi\) accompanied by a noticeable damping effect. Note that, unlike for strictly resonant excitation, the Rabi frequency scales quadratically with the pulse area under TPE, resulting in equidistant Rabi oscillations as a function of the laser power. Note also that in the presence of damping the pulse area that leads to a complete state occupation inversion (the theoretical \(\pi\)-power) slightly deviates from the power that leads to a maximum in the emission. However, we denote the maximum of the measured Rabi rotation as \(\pi\)-power for simplicity. To estimate a lower bound of the preparation fidelity, we fit an exponentially damped \(\sin^{2}\)-function to our experimental data [37]. Extrapolating the envelope function of the oscillations (red dashed line) yields an estimated preparation fidelity, i.e. normalized occupation at \(\pi\)-power, of \(\mathcal{F}_{\mathrm{Prep}}\approx 70\%\). The observed preparation fidelity compares favorably or is comparable with previously reported values between \(49\%\) and \(85\%\) for QDs at C-band wavelength based on metamorphic buffer layers [23; 22].
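The preparation-fidelity estimate above can be sketched numerically. The exact fit function of Ref. [37] is not reproduced in the text, so the damped-sin² form below (oscillations equidistant in power, as stated for TPE) and the damping value — chosen only to reproduce the quoted \(\mathcal{F}_{\mathrm{Prep}}\approx 70\%\) at the \(\pi\)-power — are assumptions:

```python
import numpy as np

def rabi_model(P, P_pi, gamma):
    """Exponentially damped Rabi rotation vs. average power P.
    Under TPE the oscillations are equidistant in P; this specific
    functional form is a hypothetical stand-in for the fit of Ref. [37]."""
    return np.exp(-gamma * P) * np.sin(np.pi * P / (2.0 * P_pi)) ** 2

def preparation_fidelity(P_pi, gamma):
    """Envelope of the damped oscillation evaluated at the pi-power."""
    return np.exp(-gamma * P_pi)

P_pi = 11.0                     # uW, the pi-power quoted in the text
gamma = -np.log(0.7) / P_pi     # damping chosen to reproduce F_Prep ~ 70%
print(preparation_fidelity(P_pi, gamma))   # ~ 0.70
print(rabi_model(P_pi, P_pi, gamma))       # emission maximum near the pi-power
```

In a real analysis, `gamma` and `P_pi` would be free parameters fitted to the measured X/XX intensities.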
All experiments presented in the following were conducted under TPE with a \(\pi\)-pulse corresponding to \(P_{\pi}=11\,\mathrm{\mu W}\) average excitation power on the sample. Photons originating from the X- or XX-state were collected via an \(\mathrm{NA}=0.7\) aspheric lens, spectrally filtered using a tunable fiber-bandpass filter of \(0.4\,\mathrm{nm}\) bandwidth in combination with polarization suppression of the scattered excitation laser. Time-correlated single-photon counting experiments were performed using a superconducting nano-wire single-photon detector (SNSPD) system (SingleQuantum EOS by SingleQuantum BV) in combination with time-tagging electronics (quTag by quTools GmbH) with a combined timing resolution of \(\approx 50\,\mathrm{ps}\) FWHM.
To gain insight into the dynamics of the three-level system under study, we conducted time-resolved measurements (see Fig. 2(c)). We observe the typical behavior of the XX-X emission cascade. A fast exponential decay of the XX-state causes the X state to be transiently occupied, followed by a slower exponential decay of the X-state. Applying mono-exponential fits to the time-traces, we extract the respective decay constants as
\[t_{1,X}=1.279(9)\,\mathrm{ns},\qquad t_{1,XX}=0.333(5)\,\mathrm{ns}\,.\]
Interestingly, the decay of the XX-state is about 4-times faster than the decay of the X-state, which indicates the presence of dark states with short spin-flip times [39]. Investigating a larger number of quantum emitters, we find that this behavior is representative for the QD ensemble on this sample. Noteworthy, for photons generated via TPE, the indistinguishability is intrinsically limited by the ratio of the XX- and X-lifetime as \(V_{\mathrm{max\,TPE}}=1/(1+t_{1,XX}/t_{1,X})\)[40]. This results in a theoretical limit of \(\approx 80\%\) in our case, which significantly exceeds the value of \(67\%\) following from the naive expectation of \(t_{1,X}=2\times t_{1,XX}\). The intuitive explanation for the improved maximum indistinguishability is that a relatively short XX decay time reduces the timing jitter inherited by the X photon. Thus, QDs that display a larger \(t_{1,X}/t_{1,XX}\) ratio are promising candidates for combining entanglement generation with high indistinguishability. Technologically more involved approaches in this direction aim to exploit microcavities supporting an asymmetric Purcell enhancement for maximizing this lifetime-ratio [41].
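Plugging the measured lifetimes into the TPE limit \(V_{\mathrm{max\,TPE}}=1/(1+t_{1,XX}/t_{1,X})\) [40] reproduces the quoted numbers:

```python
def v_max_tpe(t1_xx, t1_x):
    """Intrinsic indistinguishability limit under TPE: V = 1/(1 + t_XX/t_X)."""
    return 1.0 / (1.0 + t1_xx / t1_x)

# Lifetimes measured in the text (ns):
print(v_max_tpe(0.333, 1.279))  # ~ 0.79, the "approx. 80%" limit quoted
print(v_max_tpe(1.0, 2.0))      # naive t_X = 2 * t_XX case -> 2/3 ~ 67%
```

The comparison makes explicit why a large \(t_{1,X}/t_{1,XX}\) ratio raises the attainable indistinguishability.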
Figure 1: (a) Illustration of the Si-compatible QD-mesa structure used in this study: a single InAs QD is embedded in an InP matrix with metallic bottom mirror for enhanced photon extraction efficiency. The mesa diameter is \(2\,\mathrm{\mu m}\). (b) Spectrum of a QD-mesa under above-band excitation at saturation power revealing telecom C-band photons originating from the X- (blue), XX- (green) and trion- (black) state, respectively. (c) Spectrum of the same QD under two-photon resonant excitation. Insets illustrate the excitation scheme used. All spectra are taken at \(4\,\mathrm{K}\), the sample is placed in a closed-cycle cryostat (Attocube) and the emission is collected with an \(\mathrm{NA}=0.7\) aspheric lens.
Next, we investigated the purity of the single photons in terms of the second-order auto-correlation function \(g^{(2)}(\tau)\) via Hanbury-Brown and Twiss type experiments. The resulting \(g^{(2)}(\tau)\)-histograms for X- and XX-photons are depicted in Fig. 2(d) and (e), respectively. The strong suppression of coincidences at zero time delay confirms the emission of single photons. The experimental data are approximated by a fit function corresponding to a sum of two-sided exponential decays and accounting for the noticeable blinking effect on short timescales (following Ref. [34]), while no temporal deconvolution or background subtraction was applied. The fit yields antibunching values of
\[g_{X}^{(2)}(0)=0.015(4)\,,\quad g_{XX}^{(2)}(0)=0.005(4)\]
where the errors have been determined from the quality of the fit. The decay times obtained from the fit are \(1.44(1)\,\mathrm{ns}\) and \(0.36(1)\,\mathrm{ns}\) for the X- and XX-state, respectively, in good agreement with the time-resolved photoluminescence measurements. We further extract the timescale of the blinking as \(\tau_{\mathrm{blink}}\approx 17(1)\,\mathrm{ns}\), which can be caused by either charging events or spectral wandering due to fluctuation in the QD's charge environment.
Finally, we explored the photon-indistinguishability of the coherently driven three-level system by means of Hong-Ou-Mandel (HOM) type two-photon interference experiments [42]. For this purpose, XX photons were interfered in an unbalanced Mach-Zehnder interferometer with \(12.5\,\mathrm{ns}\) delay with one input and two outputs. Here, indistinguishable photons cause coalescence in the two outputs ports, thus reducing the number of detected coincidences. Additionally, the polarization-state of the interfering photons is controlled in both arms of the interferometer, allowing for measurements in co- and cross-polarised configuration, whereas the latter serves as a reference to extract the two-photon-interference visibility \(V\).
The resulting experimental data for the co- and cross-polarized measurement configuration are presented in Fig. 3 as a close-up highlighting the zero-delay peak together with fits (solid lines) following Ref. [34] and accounting for the blinking effect. The reduction of coincidences due to two-photon interference is clearly visible. Additionally, the inset depicts the histograms for larger arrival time delays. Note, that the ratios of the coincidence side-peak areas are masked by the blinking effect compared to the expected behavior. This however does not affect the following analysis, which is solely based on the measured coincidences of the zero-delay peak areas for co- and cross-polarized measurements (without applying the fit model).
The two-photon interference visibility is extracted from our measurement data via \(V=1-(A_{\mathrm{co}}/A_{\mathrm{cross}})\), with the integrated areas \(A_{\mathrm{co}}\) and \(A_{\mathrm{cross}}\) of the co- and cross-polarized
Figure 3: HOM-type two-photon interference experiments of photons emitted by the XX-state for co- (red) and cross- (grey) polarized measurement configuration. Photon coalescence described by the Hong-Ou-Mandel effect is confirmed by the clear suppression of coincidences in the central peak for co-polarized photons. We observe record-high photon-indistinguishabilities of \(V_{raw}=29(2)\%\) and \(V_{4\,\mathrm{ns}}=35(3)\%\) by integrating the raw experimental data over the full \(12.5\,\mathrm{ns}\)- and a \(4\,\mathrm{ns}\)-wide delay-window, respectively. The inset shows the same data for a larger range of arrival time delays.
Figure 2: (a,b) Tuning the excitation laser power reveals Rabi rotations in the emission of X (blue, upper panel) and XX (green, lower panel) up to \(4\pi\), confirming the coherent population of the three-level system. Solid lines correspond to fits and red dashed lines to the extracted enveloping function. (c) The decay dynamics illustrate the cascaded emission under pulsed TPE, with the X-photons (blue line) appearing delayed with respect to the XX-photons (green). The XX-state decays \(\approx\)\(4\)-times faster than the X-state. (d,e) The second-order auto-correlation measurement confirms the single-photon-nature of X- (blue) and XX- (green) photons.
case, respectively. Integrating over the entire zero-delay time window, i.e. \(-6.25\) to \(+6.25\,\)ns (corresponding to the laser repetition rate of 80 MHz), yields \(V_{\text{raw}}=29(2)\%\), with the error inferred from the distribution of the integrated detection events of the non-interfering side-peaks. This photon-indistinguishability readily exceeds all previous reports on pulsed HOM experiments of telecom C-band photons directly generated via QDs (cf. Table 1), including pioneering work by Nawrath et al. [24; 25] as well as our most recent work in Ref. [34].
Note, that in these earlier reports, photons with a temporal separation of mostly 4 ns were interfered. Thus, for the sake of comparison, we also evaluate the two-photon interference visibility for a 4 ns-wide integration-window as \(V_{4\,\text{ns}}=35(3)\%\). Importantly, this analysis still covers more than 99% of all coincidences detected in the zero-delay peak. Thus, the increase in \(V\) is a result of the improved signal-to-noise ratio (due to reduced noise contributions by dark counts) rather than discarding signal events. In addition, by interfering photons with a temporal separation of 12.5 ns, the QD-excitonic states investigated in our work are potentially subject to larger dephasing compared to previous studies that used \(\Delta t=4\,\text{ns}\)[43]. While the photon-indistinguishability observed in our work represents a substantial improvement compared to previous reports on QD photons emitted directly at C-band wavelengths, it is not yet competitive with upconverted photons from QDs emitting at lower wavelengths[17; 18]. Further improvements in the photon-indistinguishability are expected by implementing charge control or stabilization via electrical gates.
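Once the zero-delay peak areas are integrated, the visibility extraction reduces to one line; the coincidence counts below are hypothetical, chosen only to reproduce \(V_{\mathrm{raw}}=29\%\):

```python
def hom_visibility(coinc_co, coinc_cross):
    """Two-photon interference visibility V = 1 - A_co / A_cross, where
    A_co and A_cross are the integrated zero-delay coincidences of the
    co- and cross-polarized HOM measurements."""
    return 1.0 - coinc_co / coinc_cross

# Hypothetical integrated zero-delay counts for co-/cross-polarized runs:
print(hom_visibility(710.0, 1000.0))  # -> 0.29, i.e. V_raw = 29%
```

The quoted uncertainty would follow from the spread of the non-interfering side-peak areas, which is not modeled here.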
The triggered on-demand generation of indistinguishable photons at telecom C-band wavelength via TPE achieved in our work represents an important advancement towards applications of quantum information covering large distances in optical fibers. Yet, for implementations of single-photon-based quantum communication, the most suitable excitation scheme needs to be selected depending on the specific use-case. For defining time-bin qubits and synchronizing sender and receiver units, pulsed excitation is needed, while coherent excitation provides the highest indistinguishability. Among all known approaches for pumping QD emitters, coherently driving the XX-X cascade via TPE promises the highest purity in terms of \(g^{(2)}(0)\), as it reduces the re-excitation probability[13]. In addition, the spectrally detuned excitation laser allows for straightforward spectral filtering of the emitted photons and for the generation of polarization-entangled photon-pairs[44]. While the maximum indistinguishability is limited in standard TPE[40], it can be further boosted by stimulating the XX-decay channel to distill highly indistinguishable X-photons [45; 46; 47].
Another important aspect in quantum communication is whether the emitted photon forms a pure or a mixed state in the photon number basis, which strongly depends on the excitation scheme[48]. For most quantum communication protocols the absence of photon number coherence (PNC) is required to avoid side-channel attacks and maintain security. As recently demonstrated for QD-generated photons at shorter wavelengths, standard TPE does not result in a significant degree of PNC in the emitted photon states, which can however be recovered and tuned by stimulating the XX-decay channel with a second laser pulse[49].
## II Conclusion
In summary, we demonstrated the pulsed coherent excitation of InAs/InP QDs emitting photons at telecom C-band wavelengths. Using triggered TPE of the XX-X radiative cascade, we coherently populate the three-level system, which is confirmed by the observation of Rabi rotations up to \(4\pi\). The observed two-photon-interference visibility of up to 35(3)% clearly surpasses previous results obtained for triggered photons in the telecom C-band directly emitted by QDs. The type of QD studied in our work exhibits a large ratio of exciton-to-biexciton lifetime, making them promising candidates for the generation of polarization-entangled photon-pairs with high photon-indistinguishability at C-band wavelength. The demonstrated on-demand generation of QD photons with record-high indistinguishability at wavelengths compatible with existing fiber infrastructure presents a significant step towards scalable quantum networks.
###### Acknowledgements.
The authors gratefully acknowledge helpful discussions and technical support by Yusuf Karli, Florian Kappe, Thomas Bracht, and Doris Reiter. We further acknowledge financial support by the German Federal Ministry of Education and Research (BMBF) via the project "QuSecure" (Grant No. 13N14876) within the funding program Photonic Research Germany, the BMBF joint project "tubLAN Q.0" (Grant No. 16KISQ087K), the Einstein Foundation via the Einstein Research Unit "Quantum Devices", the Danish National Research Foundation via the Research Centers of Excellence NanoPhoton (DNRF147) and the Center for Macroscopic Quantum States bigQ (DNRF142). P. H. was funded by the Polish National Science Center within the Etiuda 8 scholarship (Grant No. 2020/36/TST5/00511). N. G. acknowledges support from the European Research Council (ERCCoG "UNITY", Grant No. 865230), and from the Independent Research Fund Denmark (Grant No. DFF-9041-00046B).
\begin{table}
\begin{tabular}{l|l|l|l|l} QD Type & Device & ES & Visibility & Ref. \\ \hline InGaAs/InAs+ MB & CBG & QR & \(V_{\text{6 ns}}\approx 10\%\) & Ref.[24] \\ InGaAs/InAs+ MB & planar & R & \(V_{\text{4 ns}}=14(2)\%\) & Ref.[25] \\ InAs/InP & CBG & QR & \(V_{\text{4 ns}}=19(3)\%\) & Ref.[34] \\ InAs/InP & mesa & TPE & \(V_{\text{raw}}=29(2)\%\) & This work \\ InAs/InP & mesa & TPE & \(V_{\text{4 ns}}=35(3)\%\) & This work \\ \end{tabular}
\end{table}
Table 1: Comparison of achieved pulsed two-photon-interference visibilities for single photons emitted by QDs in the telecom C-Band. MB: Metamorphic Buffer, ES: Excitation Scheme (R: Resonant, QR: Quasi-Resonant, TPE: Two-Photon-Resonant), CBG: Circular Bragg grating. The subscript of the visibility \(V\) refers to the coincidence integration window when evaluating the HOM measurement.
## Data Availability Statement
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
|
2303.15781 | Momentum spirals in multiphoton pair production revisited | Spirals in multiphoton pair production are revisited by two counter-rotating
fields with a time delay, for different numbers of cycles in the pulse. Novel findings include that for subcycle fields, the remarkable spiral structure in the momentum spectrum can still be produced by a large time delay, in contrast to the previously studied supercycle case, where it is more easily generated by a small time delay. There also exists a range of critical polarization values for the appearance of spirals, depending on the cycle number. The relative
phase difference between two fields causes not only severe symmetry breaking of
the momentum spectra pattern and spiral, but also a significant change in the shape and number of the spiral arms. As for the number density, it is found to be more sensitive to the cycle number; in particular, it is enhanced by more than one order of magnitude for small-cycle pulses, while it is increased by a few times when the time delay is small. These results provide an abundant theoretical testbed for possible experimental observation of multiphoton pair production in the future. Meanwhile, the momentum signatures of the created particles can be regarded as a new probe of the laser field information. | Li-Na Hu, Orkash Amat, Li Wang, Adiljan Sawut, Hong-Hao Fan, B. S. Xie | 2023-03-28T07:51:25Z | http://arxiv.org/abs/2303.15781v2 | # Vortices in multiphoton pair production revisited
###### Abstract
Vortices in multiphoton pair production are revisited for two counter-rotating fields with a time delay and different numbers of cycles per pulse. Novel findings include that, for subcycle fields, a remarkable vortex structure in the momentum spectrum can still be produced by a large time delay, in contrast to the previously studied supercycle case, where it is more easily generated by a small time delay. There also exists a range of critical polarization values for the appearance of the vortices, depending on the cycle number. The relative phase difference between the two fields causes not only severe symmetry breaking of the momentum-spectrum pattern and of the vortex, but also a significant change in the shape and number of the vortex spirals. The number density is found to be more sensitive to the cycle number; in particular, it is enhanced by more than one order of magnitude for small-cycle pulses, while it increases by a few times when the time delay is small. These results provide an abundant theoretical testbed for possible future experimental observation of multiphoton pair production. Meanwhile, the particle momentum signatures can be regarded as a new probe of the information of the laser field that creates the pairs from the vacuum.
pacs: 12.20.Ds, 11.15.Tk
## I Introduction
In the past decades, research on electron-positron (\(e^{-}e^{+}\)) pair production from the vacuum in strong background fields has attracted much interest and many theoretical works have been performed [1; 2; 3; 4; 5; 6], although the Schwinger critical field strength \(E_{\rm cr}=m^{2}c^{3}/e\hbar\approx 1.3\times 10^{16}\) V/cm (where \(m\) and \(-e\) are the electron mass and charge) is still a few orders of magnitude higher than that of present laser fields and also of planned laser facilities such as the Extreme Light Infrastructure [7], the Exawatt Center for Extreme Light Studies, and x-ray free-electron lasers [8]. In 1997, however, the impressive E-144 experiment was performed at the Stanford Linear Accelerator Center (SLAC), in which 46.6 GeV electrons collided with a laser of about \(10^{18}\,\)W/cm\({}^{2}\)[9] and the production of \(4-5\) \(e^{-}e^{+}\) pairs was observed. Intrigued by this multiphoton pair-production experiment, and with the rapid development of high-intensity laser technology [10; 11; 12; 13], the multiphoton pair-creation mechanism offers more experimental opportunities in the future. Some important new developments, including the ponderomotive-force effect [14], node structures [15], and effective-mass signatures [16], have been revealed.
Recently, vortices have attracted more and more attention and have been widely investigated in atomic and molecular ionization [17; 18; 19], nonlinear optics [20], type-II superconductors [21], plasma physics [22; 23], atomic condensates [24], and so on. Moreover, our previous studies showed that significant vortex structures also exist in multiphoton pair production [25; 26]. Vortices formed with an even number of spiral arms in two counter-rotating circularly polarized (CP) one-color laser fields were reported in Ref. [25]. Vortices constituted by an odd number of spiral arms in two counter-rotating elliptically polarized two-color fields were then also discovered [26]. These initial studies indicated that the vortices in multiphoton pair production are sensitive to the field parameters.
On the other hand, it should be noticed that the previous investigations were restricted to a limited parameter range; for instance, either the number of cycles or the time delay between the two fields was fixed. For various cycle numbers and time delays, does a vortex still appear in multiphoton pair creation? How does the vortex change when a relative phase is introduced between the two fields? Since the momentum pattern and the vortex are very sensitive to the ellipticity of the polarized fields, what is the range of ellipticity in which vortices can be observed effectively?
To clarify the points mentioned above, in this paper we revisit the vortices in multiphoton pair production in two counter-rotating fields with a time delay by using the Dirac-Heisenberg-Wigner (DHW) formalism. The study focuses on the effects of the time delay and of the number of cycles in the pulse on the momentum vortex and on the number density of created particles, in two typical cases of relative carrier-envelope phase, \(0\) and \(\pi/2\). Without loss of generality, we consider four different time delays and three different cycle regimes: supercycle, subcycle, and the single cycle between them. It is found that an obvious vortex structure still appears in the momentum spectrum even in the subcycle case. Some novel features and interesting phenomena of the vortex are revealed.
We consider the following spatially homogeneous, time-varying electric field model composed of two counter-rotating fields with a time delay [25; 27],
\[\mathbf{E}(t)=\mathbf{E}_{1}(t)+\mathbf{E}_{2}(t), \tag{1}\]
with
\[\mathbf{E}_{1,2}(t)=f_{1,2}(t)\mathbf{g}_{1,2}(t), \tag{2}\]
where
\[f_{1,2}(t)=\frac{E_{1,2}}{\cosh(\frac{t\pm T}{\tau})}\,,\quad \mathbf{g}_{1,2}(t)=\left[\cos(\omega(t\pm T)+\phi_{1,2}),\delta_{1,2}\sin( \omega(t\pm T)+\phi_{1,2}),0\right]^{\mathsf{T}}. \tag{3}\]
Here the superscript \(\mathsf{T}\) denotes the transpose of the matrix, \(E_{1,2}=E_{0}/\sqrt{1+\delta_{1,2}^{2}}\) are the electric field strengths, \(|\delta_{1,2}|=1\) denotes circular polarization (we define \(\delta_{1}=-1\) as a right-handed CP field and \(\delta_{2}=1\) as a left-handed CP field [25]), \(\omega\) represents the field frequency, and \(\phi_{1,2}\) are the carrier-envelope phases (the corresponding relative phase is \(\Delta\phi=\phi_{2}-\phi_{1}\)). Furthermore, \(\tau=N\pi/\omega\) is the pulse duration, where \(N\) is the number of cycles in the individual pulse, and \(T=G\tau\) parameterizes the time delay, with \(G\) a dimensionless quantity; the two consecutive pulses are centered at \(\pm T=\pm G\tau\), so the time delay between their centers amounts to \(2T\). The main interest of this study is the dependence on the time delay \(T\) and on the number of cycles \(N\) in the single pulse. A set of typical field profiles along the \(x\)-direction for different time delays and cycle numbers is plotted in Fig. 1.
Note that throughout this paper we set \(E_{0}=0.1\sqrt{2}E_{\rm cr}\), \(\omega=0.6\), and \(\phi_{1}=0\). The natural units \(\hbar=c=1\) are used and all quantities are presented in terms of the electron mass \(m\); for example, the field frequency and the momentum are in units of \(m\), and the temporal scales of the electric field are in units of \(1/m\).
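As an illustration, the field of Eqs. (1)-(3) can be evaluated numerically. The following sketch is our own illustration (not code from the original study), in natural units \(\hbar=c=m=1\) with the parameter values quoted above.

```python
import numpy as np

# Sketch of the two-pulse field of Eqs. (1)-(3) in natural units (hbar = c = m = 1).
# Parameter values follow the text: E0 = 0.1*sqrt(2) E_cr, omega = 0.6,
# delta1 = -1 (right-handed CP), delta2 = +1 (left-handed CP).
E0 = 0.1 * np.sqrt(2.0)   # field strength in units of E_cr
OMEGA = 0.6               # field frequency in units of m

def single_pulse(t, t_shift, delta, phi, N):
    """One pulse of Eqs. (2)-(3): sech envelope times a rotating carrier,
    centred at t = -t_shift."""
    tau = N * np.pi / OMEGA                       # pulse duration
    amp = E0 / np.sqrt(1.0 + delta**2)            # E_{1,2}
    envelope = amp / np.cosh((t + t_shift) / tau)
    phase = OMEGA * (t + t_shift) + phi
    return envelope * np.array([np.cos(phase), delta * np.sin(phase), 0.0])

def total_field(t, T, N, phi1=0.0, phi2=0.0):
    """E(t) = E1(t) + E2(t) of Eq. (1); the pulse centres are separated by 2T."""
    return (single_pulse(t, +T, -1.0, phi1, N)    # first pulse,  delta1 = -1
            + single_pulse(t, -T, +1.0, phi2, N)) # second pulse, delta2 = +1
```

For \(T=0\) and \(\phi_{1}=\phi_{2}\), the \(y\)-components of the two counter-rotating pulses cancel for all \(t\) and the total field is linearly polarized along \(x\), with \(E_{x}(0)=2E_{0}/\sqrt{2}=0.2\,E_{\rm cr}\).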
The paper is organized as follows. In Sec. II, we briefly recall the DHW formalism to keep this work self-contained. In Sec. III, we examine the different vortex structures and signatures for various choices of the time delay and of the cycle number of the fields when the relative carrier-envelope phase is set to 0. The case of relative phase \(\pi/2\) is investigated in Sec. IV. In Sec. V, the number density is presented and analyzed. Finally, the conclusion and outlook are given in Sec. VI.
## II DHW formalism
The present study is based on the DHW formalism, which has been widely adopted to investigate vacuum pair production in strong background fields [28; 29; 30; 31; 32; 33; 34]. Since the detailed derivation of the DHW formalism has been given in Refs. [35; 36; 37], here we only present the key points of this approach.
Figure 1: (color online). A set of typical field profiles along the \(x\)-direction for different time delays and cycle numbers. Other field parameters are \(E_{1,2}=0.1\sqrt{2}E_{\rm cr}\), \(\delta_{1,2}=0\), \(\omega=0.6\) and \(\phi_{1,2}=0\).

We start from the gauge-covariant density operator of two Dirac field operators in the Heisenberg picture,
\[\hat{\mathcal{C}}_{\alpha\beta}\left(r,s\right)=\mathcal{U}\left(A,r,s\right)\, \left[\bar{\psi}_{\beta}\left(r-s/2\right),\psi_{\alpha}\left(r+s/2\right) \right], \tag{4}\]
where \(r\) denotes the center-of-mass coordinate and \(s\) the relative coordinate. The Wilson line factor
\[\mathcal{U}\left(A,r,s\right)=\exp\left(\mathrm{i}\ e\ s\int_{-1/2}^{1/2}d \xi\ A\left(r+\xi s\right)\right), \tag{5}\]
is used to guarantee the gauge invariance of the density operator; it involves the elementary charge \(e\) and the background gauge field \(A\).
The central quantity of the DHW approach is the covariant Wigner operator, defined as the Fourier transform of Eq. (4) with respect to the relative coordinate \(s\), i.e.,
\[\hat{\mathcal{W}}_{\alpha\beta}\left(r,p\right)=\frac{1}{2}\int d^{4}s\ \mathrm{e}^{\mathrm{i}ps}\ \hat{\mathcal{C}}_{\alpha\beta}\left(r,s\right). \tag{6}\]
By taking the vacuum expectation value of Eq. (6), we can obtain the covariant Wigner function
\[\mathbb{W}\left(r,p\right)=\langle\Phi|\hat{\mathcal{W}}\left(r,p\right)|\Phi\rangle. \tag{7}\]
Since the Wigner function takes values in the Dirac algebra, it can be decomposed into 16 covariant Wigner coefficients
\[\mathbb{W}\,=\frac{1}{4}\left(1\,\mathbb{S}+\mathrm{i}\gamma_{5}\mathbb{P}+ \gamma^{\mu}\mathbb{V}_{\mu}+\gamma^{\mu}\gamma_{5}\mathbb{A}_{\mu}+\sigma^{ \mu\nu}\mathbb{T}_{\mu\nu}\right), \tag{8}\]
where \(\mathbb{S}\), \(\mathbb{P}\), \(\mathbb{V}_{\mu}\), \(\mathbb{A}_{\mu}\) and \(\mathbb{T}_{\mu\nu}\) denote the scalar, pseudoscalar, vector, axial-vector and tensor components, respectively. Following Refs. [33; 34; 38; 39], the equation of motion for the Wigner function can be written as
\[D_{t}\mathbb{W}\,=-\frac{1}{2}\mathbf{D}_{\mathbf{x}}[\gamma^{0}\gamma, \mathbb{W}\,]+\mathrm{i}m[\gamma^{0},\mathbb{W}\,]-\mathrm{i}\mathbf{P}[ \gamma^{0}\gamma,\mathbb{W}\,], \tag{9}\]
where \(D_{t}\), \(\mathbf{D}_{\mathbf{x}}\) and \(\mathbf{P}\) are pseudodifferential operators.
Inserting Eq. (8) into Eq. (9), we obtain a set of partial differential equations (PDEs) for the 16 Wigner components. For the spatially uniform, time-dependent electric field of Eq. (2), applying the method of characteristics [38; 39; 40; 41; 29] and replacing the kinetic momentum \(\mathbf{p}\) with the canonical momentum \(\mathbf{q}\) via \(\mathbf{p}=\mathbf{q}-e\mathbf{A}(t)\) simplifies the PDEs to ordinary differential equations (ODEs) for only 10 Wigner components. The corresponding Wigner coefficients are
\[\mathbb{w}=\left(\mathrm{s},\mathbb{v}_{i},\mathbb{a}_{i},\mathbb{t}_{i} \right),\quad\mathbb{t}_{i}:=\mathbb{t}_{0i}-\mathbb{t}_{i0}\,. \tag{10}\]
For the specific derivation of these 10 equations we refer the reader to Refs. [33; 34; 35; 37]. The vacuum initial conditions are given [35; 36] by
\[\mathfrak{s}_{vac}=\frac{-2m}{\sqrt{\mathbf{p}^{2}+m^{2}}}\,,\quad\mathfrak{v}_ {i,vac}=\frac{-2p_{i}}{\sqrt{\mathbf{p}^{2}+m^{2}}}\,. \tag{11}\]
The single-particle momentum distribution function is defined as
\[f(\mathbf{q},t)=\frac{1}{2\Omega(\mathbf{q},t)}(\varepsilon-\varepsilon_{vac}), \tag{12}\]
where \(\Omega(\mathbf{q},t)=\sqrt{m^{2}+\mathbf{p}^{2}(t)}=\sqrt{m^{2}+(\mathbf{q}-e\mathbf{A}(t))^{2}}\) denotes the total energy of the particles and \(\varepsilon=m\mathfrak{s}+p_{i}\mathfrak{v}_{i}\) represents the phase-space energy density. To calculate the distribution function \(f(\mathbf{q},t)\) precisely, it is necessary to introduce an auxiliary three-dimensional vector [40; 41]
\[v_{i}(\mathbf{q},t):=\mathfrak{v}_{i}(\mathbf{p}(t),t)-(1-f(\mathbf{q},t)) \mathfrak{v}_{i,vac}(\mathbf{p}(t),t)\,. \tag{13}\]
Therefore, we can obtain the single-particle momentum distribution function \(f(\mathbf{q},t)\) by solving the following ODEs
\[\begin{split}&\dot{f}=\frac{e\mathbf{E}\cdot\mathbf{v}}{2\Omega},\\ &\dot{\mathbf{v}}=\frac{2}{\Omega^{2}}[(e\mathbf{E}\cdot\mathbf{p })\mathbf{p}-e\mathbf{E}\Omega^{2}](f-1)-\frac{(e\mathbf{E}\cdot\mathbf{v}) \mathbf{p}}{\Omega^{2}}-2\mathbf{p}\times\mathfrak{a}-2m\mathfrak{t},\\ &\dot{\mathfrak{a}}=-2\mathbf{p}\times\mathbf{v},\\ &\dot{\mathfrak{t}}=\frac{2}{m}[m^{2}\mathbf{v}-(\mathbf{p}\cdot \mathbf{v})\mathbf{p}],\end{split} \tag{14}\]
with the initial conditions \(f(\mathbf{q},-\infty)=0\), \(\mathbf{v}(\mathbf{q},-\infty)=\mathfrak{a}(\mathbf{q},-\infty)=\mathfrak{t}(\mathbf{q},-\infty)=0\). Here the dot denotes a total time derivative, and \(\mathfrak{a}(\mathbf{q},t)\) and \(\mathfrak{t}(\mathbf{q},t)\) are the three-dimensional vectors built from the corresponding Wigner components.
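A minimal numerical sketch of Eq. (14) is given below; it is our own illustration, not the code of the original study. We assume a single circularly polarized sech pulse (one of the pulses of Eq. (2)), set \(e=\hbar=c=m=1\), and obtain the vector potential by augmenting the state with \(\dot{\mathbf{A}}=-\mathbf{E}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the quantum kinetic ODE system, Eq. (14), for a single canonical
# momentum q. Assumptions (ours, for illustration): one CP sech pulse of
# Eq. (2), natural units hbar = c = e = m = 1, A(t) obtained via dA/dt = -E.
E0, OMEGA, N, DELTA = 0.1 * np.sqrt(2.0), 0.6, 4, -1.0
TAU = N * np.pi / OMEGA

def E_field(t):
    env = (E0 / np.sqrt(1.0 + DELTA**2)) / np.cosh(t / TAU)
    return env * np.array([np.cos(OMEGA * t), DELTA * np.sin(OMEGA * t), 0.0])

def rhs(t, y, q):
    f, v, a, tv, A = y[0], y[1:4], y[4:7], y[7:10], y[10:13]
    E = E_field(t)
    p = q - A                                   # kinetic momentum p(t) = q - eA(t)
    omega2 = 1.0 + p @ p                        # Omega^2 = m^2 + p^2, with m = 1
    omega = np.sqrt(omega2)
    fdot = E @ v / (2.0 * omega)
    vdot = (2.0 / omega2 * ((E @ p) * p - E * omega2) * (f - 1.0)
            - (E @ v) * p / omega2 - 2.0 * np.cross(p, a) - 2.0 * tv)
    adot = -2.0 * np.cross(p, v)
    tdot = 2.0 * (v - (p @ v) * p)              # (2/m)[m^2 v - (p.v)p], m = 1
    return np.concatenate(([fdot], vdot, adot, tdot, -E))

def pair_distribution(q, t_max=8 * TAU):
    y0 = np.zeros(13)                           # vacuum initial conditions
    sol = solve_ivp(rhs, (-t_max, t_max), y0, args=(np.asarray(q, float),),
                    method="DOP853", rtol=1e-8, atol=1e-12)
    return sol.y[0, -1]                         # f(q, t -> +infinity)
```

For instance, \(q=(0,0.66,0)\) lies near the 4-photon resonance \(2\sqrt{q^{2}+m^{2}}=4\omega\), where a small but nonzero asymptotic \(f\) is expected.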
Moreover, the number density of created pairs is obtained by integrating the distribution function \(f(\mathbf{q},t)\) over the full momentum space at \(t\to+\infty\), i.e.,
\[n=\lim_{t\to+\infty}\int\frac{d^{3}q}{(2\pi)^{3}}f(\mathbf{q},t)\,. \tag{15}\]
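The quadrature of Eq. (15) can be sketched as a discrete sum over a momentum grid. The mock Gaussian distribution below is our assumption, standing in for the solver output and chosen only so that the sum can be checked against the analytic integral.

```python
import numpy as np

# Sketch of Eq. (15): n = lim_{t->inf} \int d^3q f(q,t) / (2 pi)^3,
# approximated by a Riemann sum on a uniform Cartesian momentum grid.
def number_density(f, q_max=4.0, n_pts=81):
    q = np.linspace(-q_max, q_max, n_pts)
    dq = q[1] - q[0]
    qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")
    return f(qx, qy, qz).sum() * dq**3 / (2.0 * np.pi)**3

# Mock asymptotic distribution (an assumption standing in for the ODE output):
sigma = 0.5
f_mock = lambda qx, qy, qz: np.exp(-(qx**2 + qy**2 + qz**2) / sigma**2)

n_numeric = number_density(f_mock)
n_exact = sigma**3 / (8.0 * np.pi**1.5)   # \int e^{-q^2/sigma^2} d^3q = (sigma sqrt(pi))^3
```

With \(dq\ll\sigma\) and the grid extending to \(8\sigma\), the discrete sum reproduces the analytic value to high accuracy.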
## III Vortices for fields with relative phase \(\Delta\phi=0\)
In this section, we study the effects of the time delay, for different numbers of cycles in the pulse, on the momentum vortices in multiphoton pair production by two counter-rotating fields with relative carrier-envelope phase \(\Delta\phi=\phi_{2}-\phi_{1}=0\).
### \(N=4\)
Figure 2: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=4\) with different time delay parameters. From (a) to (d), the corresponding time delays are \(T=0,\tau,4\tau,8\tau\), respectively. Other electric field parameters are \(E_{1,2}=0.1\sqrt{2}E_{\rm cr}\), \(\delta_{1}=-1\), \(\delta_{2}=1\), \(\omega=0.6\), and \(\phi_{1,2}=0\).

The time delay was fixed in the previous study [26]; here we explore how the momentum spectrum changes when the time delay is varied. Before presenting our new findings in detail, the symmetry of the momentum spectrum in the case \(T=0\) is briefly discussed. For \(N=4\), the effects of \(T\) on the momentum spectra in the polarization plane for two counter-rotating fields are shown in Fig. 2. For \(T=0\), the momentum spectrum presents four bright curved moon-shaped structures and has a good axial symmetry in the \(q_{x}\) and \(q_{y}\) directions, see Fig. 2(a). The momentum distribution is mainly governed by the total particle energy \(\Omega(\mathbf{q},t)=\sqrt{m^{2}+(\mathbf{q}-e\mathbf{A}(t))^{2}}=\sqrt{m^{2}+(q_{x}-eA_{x}(t))^{2}+(q_{y}-eA_{y}(t))^{2}}\), and only for \(T=0\) is \(A_{x}(t)\) an odd function of \(t\) while \(A_{y}(t)=0\). Therefore, under time reversal, where the time \(t\) and the momenta \(q_{x}\) and \(q_{y}\) change sign, \(\Omega(\mathbf{q},t)\) stays invariant, which ensures the good axial symmetry of the momentum spectrum. For \(T=\tau\), the symmetry in the \(q_{x}\) direction still exists, but the symmetry in the \(q_{y}\) direction gradually disappears.
For varying \(T\), our new findings include the following: as the time delay increases to \(T=\tau\), the four curved moon-shaped structures in Fig. 2(a) are gradually elongated and rotated, which eventually leads to the generation of a vortex structure in the momentum spectrum, see Fig. 2(b). Importantly, this vortex consists of six spiral arms. As the time delay increases to \(T=4\tau\) and \(T=8\tau\), however, the momentum spectra exhibit a vortex pattern with eight arms, see Figs. 2(c) and (d). Compared to the case \(T=\tau\), the spiral arms become longer and more slender, resulting in a more pronounced vortex structure. In particular, for \(T=8\tau\) the vortex pattern almost becomes a quasi-Ramsey interference fringe pattern consisting of many concentric rings.
To understand the vortex structures in the momentum spectra described above, we employ a Wentzel-Kramers-Brillouin (WKB)-like approximation [41; 42] to gain a semiquantitative understanding of the numerical results. It is known that \(e^{-}e^{+}\) pairs are primarily created at the maxima of the electric field, i.e., at \(t=-T\) and \(t=T\) for the electric field of Eq. (2), and the creation process is dominated by the two pairs of turning points near \(t=-T\) and \(t=T\). According to the WKB-like approximation [41; 42; 25; 26; 43], for a given \(\mathbf{q}\), the pair-creation amplitude for the first field in our model can be expressed as \(A_{1}=\exp[-iK_{s}(\mathbf{q},t_{1}^{+})]\) and, correspondingly, that for the second field as \(A_{2}=\exp[-iK_{s}(\mathbf{q},t_{2}^{+})]\), where \(t_{1}\) and \(t_{2}\) are the turning points near \(t=-T\) and \(t=T\). Therefore, one obtains the momentum distribution function
\[f({\bf q}) = \sum_{s=\pm}|A_{1}+A_{2}|^{2} \tag{16}\] \[= \sum_{s=\pm}\left|e^{-iK_{s}({\bf q},t_{1}^{+})}+e^{-iK_{s}({\bf q },t_{2}^{+})}\right|^{2},\]
where \(K_{s}({\bf q},t)=K_{0}({\bf q},t)-sK_{xy}({\bf q},t)\), \(K_{0}({\bf q},t)=2\int_{-\infty}^{t}\Omega({\bf q},t^{\prime})dt^{\prime}\), \(K_{xy}({\bf q},t)=\epsilon_{\perp}\int_{-\infty}^{t}\frac{\dot{p}_{x}(t^{\prime})p_{y}(t^{\prime})-\dot{p}_{y}(t^{\prime})p_{x}(t^{\prime})}{\Omega({\bf q},t^{\prime})[p_{x}^{2}(t^{\prime})+p_{y}^{2}(t^{\prime})]}dt^{\prime}\), \(s=\pm 1\) labels the electron spin (\(s=0\) denotes a scalar particle), \(\Omega({\bf q},t)=\sqrt{m^{2}+[{\bf q}-e{\bf A}(t)]^{2}}\) and \(\epsilon_{\perp}=\sqrt{m^{2}+q_{z}^{2}}\). For a large time delay \(T\), the pair-production amplitude for the second field can be rewritten as \(A_{2}=\exp[i\theta_{s}({\bf q})]A_{1}\), where \(\theta_{s}({\bf q})={\rm Re}[K_{s}({\bf q},t_{2}^{+})-K_{s}({\bf q},t_{1}^{+})]\) is the phase accumulated between the two pulses [25; 26]. With this, Eq. (16) becomes
\[f({\bf q}) = \sum_{s=\pm}|A_{1}+e^{i\theta_{s}({\bf q})}A_{1}|^{2} \tag{17}\] \[= \sum_{s=\pm}2\left(1+\cos[\theta_{s}({\bf q})]\right)|A_{1}({\bf q})|^{2}\] \[\propto \{1+\cos[\theta_{0}({\bf q})]\}\,|A_{1}({\bf q})|^{2}.\]
Note that for a large \(T\), since both the electric field and the vector potential between \(t=-T\) and \(t=T\) are very small, \(\theta_{0}({\bf q})\) can be approximated as \(\theta_{0}({\bf q})\approx 4\sqrt{m^{2}+{\bf q}^{2}}\,T\).
Actually, the variation in the number and shape of the spiral arms of the vortex pattern is more conveniently understood in spherical coordinates \((q,\theta,\varphi)\), since the pair-creation amplitude in spherical coordinates provides a phase factor that reveals the rotation properties of a CP field. Similar to Refs. [25; 26], the amplitude \(A_{1}\) in spherical coordinates can be written as \(A_{1}\approx\exp(i\ell\delta_{1}\varphi)A_{0}(q,\theta,\varphi)\), where \(\ell\) is the number of photons absorbed in the multiphoton pair-production process and \(\varphi\) is the azimuthal angle, and the amplitude \(A_{2}\) can be expressed as \(A_{2}\approx\exp(i\ell\delta_{2}\varphi)\exp[i\theta_{0}(q,\theta,\varphi)]A_{0}(q,\theta,\varphi)\). Finally, combining Eqs. (16) and (17), we obtain the momentum distribution function in the polarization plane (polar angle \(\theta=\pi/2\), i.e., \(q_{z}=0\))
\[f(q,\varphi)\propto\{1+\cos[\theta_{0}(q,\varphi)+(\delta_{2}-\delta_{1})\ell \varphi]\}|A_{0}(q,\varphi)|^{2}, \tag{18}\]
here \(\theta_{0}(q,\varphi)\approx 4\sqrt{m^{2}+q^{2}}\,T\) for a large \(T\). It follows from Eq. (18) that the positions of the spiral-arm maxima are given by
\[q_{k^{\prime}}^{\rm max}(\varphi)=\sqrt{\Big{[}\frac{2k^{\prime}\pi-(\delta_ {2}-\delta_{1})\ell\varphi}{4T}\Big{]}^{2}-m^{2}}, \tag{19}\]
where \(k^{\prime}\) is an integer. Furthermore, Eq. (19) shows that the number of spiral arms is primarily determined by \(|(\delta_{2}-\delta_{1})\ell|\), as illustrated by the following numerical results.
For example, for the time delays \(T=4\tau\) and \(T=8\tau\) in Figs. 2(c) and (d), the helicities of the two counter-rotating CP fields are \(\delta_{1}=-1\) and \(\delta_{2}=1\), and their frequencies are \(\omega=0.6\). According to the energy conservation relation \(\ell\omega=2\sqrt{q^{2}+m_{*}^{2}}\) with the effective mass \(m_{*}\), the pair production corresponds to a 4-photon process, i.e., \(\ell=4\). Therefore, \(|(\delta_{2}-\delta_{1})\ell|=8\), which indicates that the vortex pattern is composed of 8 spiral arms, in good agreement with our numerical results.
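The arm-counting rule and the maxima positions of Eq. (19) can be checked with a few lines. This sketch is our own illustration (natural units \(m=1\), parameter values from the text), not code from the original study.

```python
import numpy as np

# Sketch of the spiral-arm predictions of Eqs. (18)-(19), natural units m = 1.
def n_arms(delta1, delta2, photons):
    """Number of spiral arms, |(delta2 - delta1) * ell|."""
    return abs(int((delta2 - delta1) * photons))

def q_max(k, phi, T, delta1=-1, delta2=1, photons=4, m=1.0):
    """Radial position of a spiral-arm maximum at azimuth phi, Eq. (19).
    Returns NaN where the radicand is negative (no maximum there)."""
    u = (2.0 * k * np.pi - (delta2 - delta1) * photons * phi) / (4.0 * T)
    r = u * u - m * m
    return np.sqrt(r) if r >= 0.0 else float("nan")
```

Rotating the azimuth by \(2\pi/8\) while shifting \(k^{\prime}\to k^{\prime}+1\) maps the arm pattern onto itself, consistent with the eight-fold symmetry \(|(\delta_{2}-\delta_{1})\ell|=8\).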
In addition, the change in the shape of the vortex spiral with increasing time delay can also be understood. According to Eq. (19), we obtain \(\varphi(q)=(2k^{\prime}\pi-4\sqrt{q^{2}+m^{2}}T)/[(\delta_{2}-\delta_{1})\ell]\). The absolute value of the derivative of this expression with respect to \(q\) is
\[|\mathrm{d}\varphi(q)/\mathrm{d}q|=|4T/(\delta_{2}-\delta_{1})\ell|\cdot(q/ \sqrt{q^{2}+m^{2}}). \tag{20}\]
Equation (20) shows that \(\varphi\) increases with \(q\) more quickly for a large \(T\) than for a small one. This indicates that the larger the time delay, the faster the vortex structure rotates, which causes the spiral arms of the vortex pattern to become thinner, longer, and more tightly wound. These results are consistent with the variation of the vortices in the momentum spectra shown in Figs. 2(c) and (d).
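The winding-rate comparison of Eq. (20) can be made concrete. This tiny sketch (our illustration, natural units \(m=1\)) evaluates \(|d\varphi/dq|\) for two delays.

```python
import numpy as np

# Sketch of Eq. (20): |dphi/dq| = |4T / ((delta2 - delta1) * ell)| * q / sqrt(q^2 + m^2).
def winding_rate(q, T, delta1=-1, delta2=1, photons=4, m=1.0):
    return abs(4.0 * T / ((delta2 - delta1) * photons)) * q / np.sqrt(q * q + m * m)

tau = 4.0 * np.pi / 0.6               # pulse duration for N = 4
slow = winding_rate(0.5, 4.0 * tau)   # T = 4*tau
fast = winding_rate(0.5, 8.0 * tau)   # T = 8*tau: arms wind twice as fast
```

Since the rate is linear in \(T\), doubling the delay exactly doubles the winding, matching the tighter spirals seen when going from Fig. 2(c) to Fig. 2(d).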
### \(N=2\)
In our previous work [25], the effect of a relatively large time delay on the momentum spectrum with relative phase \(\Delta\phi=\pi/2\) was considered. Here we add some details: first, the case of a relatively small time delay; second, the relative phase \(\Delta\phi=0\). Figure 3 shows the effects of \(T\) on the momentum distributions for \(N=2\). For \(T=0\), the result is almost the same as in Fig. 2(a) of Ref. [25] and is similar to that for \(N=4\): the momentum distribution still has a good axial symmetry in the \(q_{x}\) and \(q_{y}\) directions. As the time delay increases to \(T=\tau\), new features appear: the axial symmetry is destroyed, and the H-shaped momentum distribution, strongest near the center in Fig. 3(a), gradually expands outward and twists simultaneously, which eventually generates vortex structures, see Fig. 3(b). Meanwhile, the vortex pattern presents an obvious rotational symmetry. The reason is that, even though there is a time delay between the two counter-rotating fields, under time reversal, where the momenta \(q_{x}\) and \(q_{y}\) change sign, the total particle energy \(\Omega(\mathbf{q},t)\) remains almost invariant. Therefore, there is a pronounced rotational symmetry in the vortex structure, and the larger the time delay, the better this rotational symmetry. Moreover, this vortex consists of six inhomogeneous spiral arms. When \(T\) is small, the two fields are not yet completely separated and overlap, so there is a remarkable interference effect between them, which leads to the inhomogeneous vortex structure in Fig. 3(b).
As the time delay further increases to \(T=4\tau\) and \(T=8\tau\), compared to the case \(T=\tau\), the number of spiral arms changes from 6 to 8, and the distribution of the arms changes from inhomogeneous to homogeneous, as shown in Figs. 3(c) and (d). The number of spiral arms can be understood in the same way as for Fig. 2. The change in homogeneity is due to the fact that for large \(T\) the two fields are completely separated, so the interference effect between them is significantly reduced, which leads to a relatively uniform distribution of the vortex arms. In addition, according to Eq. (19), we can determine the positions of the eight spiral arms in the momentum spectrum. For instance, for \(T=4\tau\), the estimated positions of the maxima of the eight spiral arms are listed in Table 1. Compared with the numerical results in Fig. 3(c), we find good agreement, with errors of about 2%-6.5%.
Figure 3: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=2\) with different time delay parameters. From (a) to (d), the corresponding time delays are \(T=0,\tau,4\tau,8\tau\), respectively. Other electric field parameters are the same as in Fig. 2.
On the other hand, the vortex patterns show an obvious difference between the cases \(N=2\) and \(N=4\). For a time delay \(T=G\tau\) with the same \(G\), the vortex structure for \(N=2\) is more dispersed than that for \(N=4\); meanwhile, the spiral arms for \(N=2\) are shorter and thicker than those for \(N=4\). These phenomena can be understood as follows. Since the pulse duration in the field of Eq. (2) is \(\tau=N\pi/\omega\), the time delay is \(T=GN\pi/\omega\). Hence, for fixed \(G\), the smaller \(N\) is, the smaller the corresponding \(T\). According to Eq. (20), for small \(T\), \(\varphi\) varies slowly with increasing \(q\), which means that the vortex structure for \(N=2\) rotates more slowly than that for \(N=4\). This eventually makes the vortex structure for \(N=2\) more dispersed than that for \(N=4\), with shorter and thicker spiral arms.
### \(N=1\)
The number of cycles in the pulse was relatively large in previous studies [25; 26]; here we consider the effect of the time delay on pair production for small cycle numbers. For \(N=1\), the influence of different \(T\) on the momentum spectra is displayed in Fig. 4. For \(T=0\), the behavior is similar to the cases \(N=2\) and \(N=4\). With increasing time delay, however, we discover some important new phenomena. For \(T=\tau\), the elliptic momentum distribution in Fig. 4(a) is gradually distorted and elongated and, at the same time, a pronounced interference pattern can be observed, see Fig. 4(b). Moreover, the maximum value of the momentum spectrum in Fig. 4(b) is smaller than that in Fig. 4(a). These results are interpreted qualitatively within the semiclassical picture in the following paragraphs.
Importantly, as the time delay increases to \(T=4\tau\) and \(T=8\tau\), obvious vortex patterns consisting of eight spiral arms still exist in the momentum spectra, as shown in Figs. 4(c) and (d). This means that Eq. (19) remains approximately applicable for \(N=1\). Moreover, compared to the cases \(T=\tau\) and \(T=0\), the maximum values of the momentum spectra in Figs. 4(c) and (d) are smaller than those in Figs. 4(a) and (b). On the other hand, compared to the cases \(N=2\) and \(N=4\), the vortex patterns shrink significantly toward small momenta, because, as seen from the electric field of Eq. (2), when \(N\) decreases, the corresponding time delay and the range of effective interaction time for pair production also decrease, which leads to a smaller vortex distribution.
Figure 4: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=1\) with different time delay parameters. From (a) to (d), the corresponding time delays are \(T=0,\tau,4\tau,8\tau\), respectively. Other electric field parameters are the same as in Fig. 2.
It is known that the interference effects in the momentum spectrum and the number density of created particles are associated with the locations of the turning points in the complex \(t\) plane [44; 45; 46; 47; 48]. Specifically, the number density depends on the turning points nearest to the real \(t\) axis, called the dominant turning points, while the interference effects are governed by the distances between the dominant turning points along the real \(t\) axis. The turning-point structures corresponding to the maxima of the momentum distributions for different \(T\) in Fig. 4 are shown in Fig. 5. For \(T=0\) there is one pair of dominant turning points, see Fig. 5(a), while for \(T=\tau\) there are three pairs of dominant turning points, almost equidistant along the real \(t\) axis, see Fig. 5(b). It is well known that the closer the dominant turning points are to each other along the real axis, the stronger the interference effect in the momentum spectrum. Therefore, an obvious interference pattern appears in the momentum spectrum shown in Fig. 4(b). Moreover, the dominant turning points in Fig. 5(a) are closer to the real \(t\) axis than those in Fig. 5(b), and the closer the dominant turning points are to the real axis, the greater the number density of created particles. Thus \(n((q_{x}=0.03,q_{y}=0.91),t\rightarrow\infty)=1.91\times 10^{-4}\) in Fig. 4(a) is larger than \(n((q_{x}=0.03,q_{y}=0.02),t\rightarrow\infty)=2.87\times 10^{-5}\) in Fig. 4(b).

Figure 5: (color online). Contour plots of \(|\Omega(\mathbf{q},t)|^{2}\) in the complex \(t\) plane, showing the turning-point distribution where \(\Omega(\mathbf{q},t)=0\). These plots are for the cycle number \(N=1\); other field parameters are the same as in Fig. 4. From (a) to (d), the corresponding time delays are \(T=0,\tau,4\tau,8\tau\), respectively, and the corresponding maxima of the momentum spectra are located at \((q_{x}=0.03,q_{y}=0.91)\), \((q_{x}=0.03,q_{y}=0.02)\), \((q_{x}=-0.14,q_{y}=-0.37)\), \((q_{x}=-0.01,q_{y}=-0.43)\), respectively. The three dashed lines are guides for the eye.
As the time delay increases to \(T=4\tau\) and \(T=8\tau\), the distributions of the turning points become more complicated, as displayed in Figs. 5(c) and (d). There are four pairs of dominant turning points, and the distributions present an obvious periodic structure; each period contributes two pairs of turning points. This means that, in addition to the interference between the two periods (first-order interference), there is an interference within each period (second-order interference). We think that the periodicity of the turning points is primarily related to the generation of vortices in the momentum spectra, while the combined effect of the two orders of interference is mainly associated with the interference between the spiral arms. Accordingly, the turning-point distributions in Figs. 5(c) and (d) show a remarkable periodic structure, which leads to the generation of the vortices in Figs. 4(c) and (d).
Moreover, from Fig. 5(c) the distance along the real \(t\) axis between the dominant turning points of the two periods is \(\Delta\text{Re}(t)\approx 40\), while the corresponding distance in Fig. 5(d) is \(\Delta\text{Re}(t)\approx 85\). Meanwhile, the distance along the real \(t\) axis between the two pairs of dominant turning points within each period is \(\Delta\text{Re}(t)\approx 4\) in Fig. 5(c) and \(\Delta\text{Re}(t)\approx 5\) in Fig. 5(d). Therefore, the total interference of the turning-point distribution in Fig. 5(c) is stronger than that in Fig. 5(d), which demonstrates that the interference effect between the spiral arms in Fig. 4(c) is stronger than that in Fig. 4(d). Besides, the turning-point characteristics in Figs. 5(c) and (d) also reflect the fact that the time delay between the two fields in the case of Fig. 5(c) is smaller than that of Fig. 5(d), consistent with the information encoded in the electric field of Eq. (2). On the other hand, compared to the case \(T=\tau\), the dominant turning points in Figs. 5(c) and (d) are farther from the real \(t\) axis than those in Fig. 5(b), but have almost the same distances from the real axis as each other. This suggests that \(n((q_{x}=0.03,q_{y}=0.02),t\rightarrow\infty)=2.87\times 10^{-5}\) in Fig. 4(b) is larger than the densities in Figs. 4(c) and (d), while \(n((q_{x}=-0.14,q_{y}=-0.37),t\rightarrow\infty)=4.8\times 10^{-6}\) in Fig. 4(c) and \(n((q_{x}=-0.01,q_{y}=-0.43),t\rightarrow\infty)=4.9\times 10^{-6}\) in Fig. 4(d) are almost equal.
### \(N=0.8\) and \(N=0.5\)
When the cycle decreases to \(N=0.8\), we show the effects of \(T\) on the momentum spectra in Fig. 6. The results are similar to the case of \(N=1\), except that for \(T=4\tau\) and \(T=8\tau\) the momentum vortices are less pronounced than those in the case of \(N=1\). However, for \(T=8\tau\), a vortex structure composed of eight spiral arms can still be generated in the momentum spectrum. It indicates that an appropriate time delay in the subcycle regime also causes the generation of vortices in the momentum spectrum, which provides a new reference for possible experimental observation of multiphoton pair production in the future. Based on this finding, we further explore whether a vortex still exists when the cycle decreases to \(N=0.5\).
When the cycle decreases further to \(N=0.5\), the effects of \(T\) on the momentum spectra are shown in Fig. 7, where we find some new phenomena compared to the case of \(N=0.8\). As can be seen in Figs. 7(a) and (b), for \(T=0\) the strongest momentum distribution lies near the center, while as the time delay increases to \(T=\tau\), the momentum distribution along the \(q_{y}\) direction shifts rapidly toward large momenta; at the same time, the strongest momentum distribution splits into two parts that are far from the center. The reason is that \(e^{-}e^{+}\) pairs are mainly generated at the maxima of the electric field: for \(T=0\) there is a single maximum field intensity, located at \(t=0\) in the electric field Eq. (2), while for \(T=\tau\) there are two maxima, located at \(t=-\tau\) and \(t=\tau\), respectively, which leads to the generation of two maximum momentum distributions in Fig. 7(b). As the time delay increases further to \(T=4\tau\) and \(T=8\tau\), we find that the range of the momentum distribution along the \(q_{x}\) direction expands and a weak interference appears, see Figs. 7(c) and (d). This interference can be interpreted as the interference of particles created by the large peaks of the two counter-rotating fields.
Importantly, compared to the case of \(N=0.8\), it is found that even when the time delay increases to \(T=8\tau\), there is still no vortex structure in the momentum spectrum. It indicates that the vortex is very sensitive to the number of cycles, i.e., even if the time delay is large, vortices still cannot be generated if the cycle number is very small. Furthermore, from Fig. 7 we find that the time delay mainly affects the momentum separation in the \(q_{y}\) direction, and one can see from Fig. 6 that the number of cycles seems to primarily dominate the momentum distribution in the \(q_{x}\) direction, while the combined effect of time delay and the number of cycles governs the generation of the vortex structure.
Figure 6: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=0.8\) with different time delay parameters. From (a) to (d), the corresponding time delays are \(T=0,\tau,4\tau,8\tau\), respectively. Other electric field parameters are the same as in Fig. 2.
## IV Vortices for fields with relative phase \(\Delta\phi=\pi/2\)
In this section, the influence of time delay for different cycles in pulse on the momentum vortices in two counter-rotating fields with relative phase \(\Delta\phi=\phi_{2}-\phi_{1}=\pi/2\) is investigated. Note that since the results in the cases of \(N=4\) and \(N=2\) are almost identical to those for \(\Delta\phi=0\), except that all patterns are rotated by \(\Delta\phi/2=\pi/4\) counterclockwise, we do not show them here. In the following, we focus on the cases of \(N=1\) and \(N=0.5\).
Figure 7: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=0.5\) with different time delay parameters. From (a) to (d), the corresponding time delays are \(T=0,\tau,4\tau,8\tau\), respectively. Other electric field parameters are the same as in Fig. 2.
When \(N=1\), the effects of \(T\) on the momentum spectra are displayed in Fig. 8, where remarkable differences can be observed. Firstly, for \(T=0\), the axisymmetry of the momentum spectrum in the \(q_{x}\) and \(q_{y}\) directions is severely destroyed; however, since \(\phi_{2}=\pi/2\), if the polarization axes are rotated by \(\Delta\phi/2=\pi/4\) counterclockwise to produce new coordinates \((q^{\prime}_{x},q^{\prime}_{y})\), the symmetry in the \(q^{\prime}_{y}\) direction still exists while the symmetry in the \(q^{\prime}_{x}\) direction is broken, as shown in Fig. 8(a). The reason is that in the new coordinates, under time reversal the time \(t\) and the momenta \(q^{\prime}_{x}\) and \(q^{\prime}_{y}\) change sign, and the sign (odd/even property) of \(\Omega(\mathbf{q^{\prime}},t)=\sqrt{m^{2}+(q^{\prime}_{x}-eA^{\prime}_{x}(t))^{2}+(q^{\prime}_{y}-eA^{\prime}_{y}(t))^{2}}\) can remain invariant only in the \(q^{\prime}_{y}\) direction, while this invariance is violated in the \(q^{\prime}_{x}\) direction. Therefore, the momentum spectrum presents an axisymmetry only in the \(q^{\prime}_{y}\) direction. Secondly, as the time delay increases to \(T=4\tau\) and \(T=8\tau\), the rotational symmetry of the vortex is also severely broken, since the vortex pattern is mainly distributed in the third quadrant, see Figs. 8(c) and (d). This phenomenon can be understood based on the knowledge of turning points. We know
Figure 8: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=1\) with different time delay parameters. The field parameters are the same as in Fig. 4 except \(\phi_{2}=\pi/2\).
that the turning points structure is related to the solution of \(\Omega(\mathbf{q}^{\prime},t)=0\). For \(T=4\tau\) and \(T=8\tau\), when the polarization axes are rotated by \(\pi/4\) counterclockwise, \(\Omega(\mathbf{q}^{\prime},t)\) can be rewritten near \(q_{y}^{\prime}=0\) as \(\Omega(\mathbf{q}^{\prime},t)\approx\sqrt{m^{2}+f_{1}^{2}+f_{2}^{2}+{q_{x}^{\prime}}^{2}+2\alpha q_{x}^{\prime}}\), where \(\alpha\sim f_{1}(t)+f_{2}(t)\). Since \(\alpha>0\), it is easier to satisfy the equation \(\Omega(\mathbf{q}^{\prime},t)=0\) for \(q_{x}^{\prime}<0\), which means that the turning points are closer to the real \(t\) axis in the region \(q_{x}^{\prime}<0\); this leads to the dominant momentum being located in the third quadrant. Moreover, we find that the number of the corresponding spiral arms is significantly decreased and the shape of the arms becomes slender.
When the cycle decreases to \(N=0.5\), the influences of \(T\) on the momentum spectra are shown in Fig. 9. Compared to the case of \(\phi_{2}=0\) in Fig. 7, we discover some interesting phenomena beyond the fact that the axisymmetry of the momentum distribution in Fig. 9(a) is severely destroyed. With the increase of time delay, the maximum momentum distribution that was originally split into two
Figure 9: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for \(N=0.5\) with different time delay parameters. The field parameters are the same as in Fig. 7 except \(\phi_{2}=\pi/2\).
parts in Fig. 7 is merged into one part, and the range of the distribution is significantly shrunken, see Figs. 9(b), (c) and (d). This phenomenon is due to the fact that when \(\phi_{2}=\pi/2\), there is always only one maximum field strength in the electric field Eq. (2) as the time delay increases, and \(e^{-}e^{+}\) pairs are mainly created at the maximum of the electric field. Therefore, there exists only one maximum momentum distribution in the corresponding momentum spectra. Interestingly, for \(T=8\tau\), we find that a vortex structure is still generated in the momentum spectrum, whereas for \(\phi_{2}=0\) there are no vortices in this case. It indicates that the introduction of the carrier phase can lead to the generation of a vortex in the momentum spectrum even if the cycle number is very small.
Another interesting point is that we determined, by numerical calculations, the range of critical polarization values for the appearance of vortices in the momentum spectra for different cycles with \(\Delta\phi=0\) and \(\Delta\phi=\pi/2\). The results are shown in Table 2; note that we only consider \(T=8\tau\) for each cycle. It is found that in the two cases of relative phase, when the cycle decreases from \(N=2\) to \(N=0.5\), the polarization range for the transition from the appearance to the disappearance of vortices gradually decreases. Moreover, for \(N=0.5\), there is never a vortex in the momentum spectrum when \(\Delta\phi=0\), while when \(\Delta\phi=\pi/2\) the momentum spectrum exhibits a vortex, and the polarization range of the vortex transition is \(0.4\sim 0.3\).
In order to observe the change in polarization values during the vortex transition, we select some examples from Table 2, as shown in Fig. 10. One can see from Figs. 10(a) and (b) that for \(N=2\) with \(\Delta\phi=0\), there is still a vortex in the momentum spectrum as \(|\delta_{1,2}|\) decreases to \(0.6\), while when \(|\delta_{1,2}|\) further decreases to \(0.5\), the vortex gradually disappears and the momentum distribution shrinks primarily toward the small momentum direction. From Figs. 10(c) and (d), it is found that for \(N=1\) with \(\Delta\phi=\pi/2\), the momentum spectrum still exhibits a vortex as \(|\delta_{1,2}|\) is reduced to \(0.5\), while when \(|\delta_{1,2}|\) is further reduced to \(0.4\), the vortex gradually disappears. These results show that we can observe vortex patterns not only in the two counter-rotating CP fields (\(|\delta_{1,2}|=1\)), but
\begin{table}
\begin{tabular}{c c c c}
number of cycles & \(N=2\) & \(N=1\) & \(N=0.5\) \\ \hline
\(|\delta_{1,2}|\left(\Delta\phi=0\right)\) & \(0.6\sim 0.5\) & \(0.5\sim 0.4\) & \(--\) \\
\(|\delta_{1,2}|\left(\Delta\phi=\pi/2\right)\) & \(0.6\sim 0.5\) & \(0.5\sim 0.4\) & \(0.4\sim 0.3\) \\
\end{tabular}
\end{table}
Table 2: Critical polarization range of the transition of vortices appearance/disappearance for various cycles with \(\Delta\phi=0\) and \(\Delta\phi=\pi/2\), respectively, when \(T=8\tau\) is given. Note that the \(--\) denotes the absence of vortex under the studied parameters.
also in two counter-rotating elliptically polarized fields (\(|\delta_{1,2}|\approx 0.5\)). This greatly relaxes the field polarization required to observe vortices effectively.
It should be noted that \(N=4\) is a special case, in which there is no vortex transition but rather a vortex splitting. The details are as follows: we can observe vortex patterns at all polarizations, but at \(|\delta_{1,2}|=1\) the vortex structure is relatively uniform and forms a single globular pattern, while at \(|\delta_{1,2}|\in[0.9,0.5]\) it is split into two parts, and at \(|\delta_{1,2}|\in[0.4,0]\) it is further split into four parts. We do not display these results here.
Based on the effect of field polarization on vortex formation and change mentioned above, it is worth noting that by adjusting the polarization, we can control not only the appearance or disappearance of the vortex pattern, but also the location of its presence.
Figure 10: (color online). Momentum spectra of created particles in the polarization plane (where \(q_{z}=0\)) for various polarization value with different \(N\) and \(\Delta\phi\). Where the time delay is set as \(T=8\tau\), (a) and (b) correspond to the case of \(N=2\) and \(\Delta\phi=0\), (c) and (d) correspond to the case of \(N=1\) and \(\Delta\phi=\pi/2\).
## V Number density
In this section, the effects of time delay and cycle in pulse on the number density of created particles are investigated for the relative phases \(\Delta\phi=0\) and \(\Delta\phi=\pi/2\), respectively. Note that, for comparison between the results, the following studies are performed under the same laser field energy [49], and we select several different field parameters.
The number density dependence on \(T\) for various \(N\) is shown in Fig. 11. In the case of \(\Delta\phi=0\), one can see from Fig. 11(a) that when \(T\) is fixed, the number density does not change obviously at large \(N\), while it is significantly enhanced by about one order of magnitude at small \(N\). When \(N\) is fixed, the number density tends to a constant at large \(T\), while it is increased at least fivefold at small \(T\). Combining the above, we can conclude that either a small \(T\) or a small \(N\) is beneficial for \(e^{-}e^{+}\) pair production. On the other hand, according to the green rectangle markings in the figure, we find that when \(N\) is larger, vortices instead start to be generated in the momentum spectrum at
Figure 11: (color online). Number density of created particles under the same laser field energy dependence on time delays for various cycles with different phase parameters \(\Delta\phi=0\) for (a) and \(\Delta\phi=\pi/2\) for (b), respectively. Other electric field parameters are the same as in Fig. 2. Note that the green rectangle marks the minimal \(T\) when the vortex is generated.
smaller \(T\). In particular, when \(N=0.8\) and \(N=1\), the corresponding number densities differ by only about a factor of two, but the minimal \(T\) for vortex generation is significantly different. Specifically, when \(N=0.8\), a vortex structure appears in the momentum spectrum at \(T=8\tau\), while when \(N=1\), it appears at \(T=4\tau\). It indicates that, without losing much number density, we can obtain the vortex pattern at a smaller time delay by adjusting the above two parameters flexibly.
In the case of \(\Delta\phi=\pi/2\), the results shown in Fig. 11(b) are similar to those of Fig. 11(a). The difference is that for \(T=0\), the number density for \(\Delta\phi=0\) is slightly larger than that for \(\Delta\phi=\pi/2\) when \(N=0.5\) and \(N=0.8\). Moreover, for \(N=0.5\), there is no vortex structure in the case of \(\Delta\phi=0\), while in the case of \(\Delta\phi=\pi/2\) the momentum spectrum exhibits a vortex pattern. This indicates that the introduction of the phase has little effect on the number density of the generated particles, while it mainly affects the momentum vortices.
To see more clearly how the number density of created particles varies with \(N\) for small \(T\), we display Fig. 12. It is found that in the cases of \(\Delta\phi=0\) and \(\Delta\phi=\pi/2\), when \(T\) is fixed, the
Figure 12: (color online). Number density of created particles under the same laser field energy dependence on cycles in pulse for small time delay with different relative phase parameters. The electric field parameters are the same as in Fig. 11.
corresponding number density is enhanced by about one order of magnitude with the decrease of \(N\), while when \(N\) is fixed, it is increased by a few times with reducing \(T\). These results indicate that the number density is more sensitive to the number of cycles in pulse.
## VI Conclusion and Outlook
In summary, we revisit the vortices in multiphoton pair creation by two counter-rotating fields with time delay for different cycles using the DHW formalism. The focus is on two cases of the relative carrier envelope phase, \(0\) and \(\pi/2\), and the effects of different time delays and cycles on the number density are further examined. Meanwhile, some typical vortex structures are semiquantitatively analyzed by employing the WKB-like approximation method. Moreover, we provide some qualitative understanding of the results through the corresponding turning points structure.
For the momentum vortex, it is sensitive to time delays and cycles in pulse. Compared to previous studies [25; 26], we found some interesting new results. With the increase of either time delay or cycle number, the spiral arms of the vortex become thinner and longer; meanwhile, the number of spiral arms is significantly increased. Importantly, for a small cycle \(N=0.8\), the momentum spectrum still exhibits an obvious vortex pattern. Moreover, the carrier phase plays a crucial role in multiphoton pair production, destroying not only the axisymmetry of the momentum spectrum but also the rotational symmetry of the momentum vortex. In addition, the number of spiral arms is decreased due to the introduction of the carrier phase. More importantly, for \(\Delta\phi=\pi/2\), there still exists a vortex pattern in the momentum spectrum when the cycle decreases to \(N=0.5\). On the other hand, we also found the range of critical polarization values for vortex appearance corresponding to different cycle numbers with relative carrier envelope phase \(0\) and \(\pi/2\), respectively. Based on these results, it is feasible to regard the momentum signatures as a new probe of the laser field information. For example, through the symmetry breaking, we can probe information about the relative phase; through the thinning shape and the number of spiral arms, one can detect information about the cycles in pulse; and through the presence or absence of vortices in the momentum spectrum, we can probe information about the time delay.
For the number density of created particles, it is insensitive to relative phase, but sensitive to time delay and cycle number. We found that either small time delay or cycle increases the number density significantly. Specifically, the number density is increased at least five times at small time
delay, and it is enhanced about one order of magnitude at small cycle. While for either large time delay or pulse cycle, the number density tends to be a constant. Interestingly, it is found that without losing much number density, we can obtain the vortex pattern at a smaller time delay by adjusting the above two parameters flexibly. This is important since it may provide a possibility of broader parameter ranges for realizing the vortices in multiphoton pair production.
These results indicate that time delay, the number of cycles in pulse and carrier envelope phase play an extremely important role in vortices of multiphoton pair creation by two counter-rotating fields. While we have only investigated two typical cases of the carrier phase, we believe that the results have exhibited many important features about the vortices of multiphoton pair production.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11935008 and No. 11875007. The computation was carried out at the HSCC of the Beijing Normal University.
# Structured factorization for single-cell gene expression data

Antonio Canale, Luisa Galtarossa, Davide Risso, Lorenzo Schiavon, Giovanni Toto

2023-05-19, arXiv:2305.11669v1 (http://arxiv.org/abs/2305.11669v1)
###### Abstract
Single-cell gene expression data are often characterized by large matrices, where the number of cells may be lower than the number of genes of interest. Factorization models have emerged as powerful tools to condense the available information through a sparse decomposition into lower rank matrices. In this work, we adapt and implement a recent Bayesian class of generalized factor models to count data and, specifically, to model the covariance between genes. The developed methodology also allows one to include exogenous information within the prior, such that recognition of covariance structures between genes is favoured. In this work, we use biological pathways as external information to induce sparsity patterns within the loadings matrix. This approach facilitates the interpretation of loadings columns and the corresponding latent factors, which can be regarded as unobserved cell covariates. We demonstrate the effectiveness of our model on single-cell RNA sequencing data obtained from lung adenocarcinoma cell lines, revealing promising insights into the role of pathways in characterizing gene relationships and extracting valuable information about unobserved cell traits.
**Keywords:** Count data; Factor analysis; Pathways; Shrinkage prior; Rounded continuous data.
## 1 Introduction
### Single-cell RNA sequencing data
Single-cell RNA sequencing (scRNA-seq) has become a widely used tool to characterize gene expression of thousands of cells at transcriptome-wide resolution. By sequencing RNA molecules from individual cells, scRNA-seq provides a count-based measure of relative gene expression. Compared to previous "bulk" technologies, single-cell sequencing unlocks the possibility to analyze rare cell types, to discover new cell types, and to study the heterogeneity of gene expression in cell populations of interest (Wagner et al., 2016). This is particularly relevant in cancer, as tumours interact with the surrounding tissues, known as the tumour microenvironment; this interaction is associated with prognosis, response to treatment, and survival (Wu and Dai, 2017). Studying tumour samples at single-cell resolution allows for the discovery of cell sub-populations that potentially respond differently to treatment (Xue et al., 2020) or that are differentially abundant across patients or disease stages (Becker et al., 2022), making it a promising tool for personalized medicine.
For each cell, scRNA-seq data consist of counts that represent the expression of each gene in that cell. In a typical experiment, in addition to the cells by genes expression matrix, several supporting variables are collected for each of the analyzed cells and for each of the measured genes; we name "covariates" the former and "meta-covariates" the latter.
We denote the matrix of gene expression as \(y\); such matrix, of dimension \(n\times p\), contains the counts for \(p\) genes measured on \(n\) cells. The matrix containing the covariates, \(x\), is a \(n\times d\) matrix where \(d\) indicates the number of covariates for each cell. The covariates are cell-specific features, typically containing quality control information, such as the number of mapped or aligned reads and the total counts, as well as phenotypic information, such as the tissue or donor. In addition, the matrix containing the meta-covariates is indicated with \(w\); it has dimensions \(p\times q\) and contains the \(q\) meta-covariates for each gene, i.e. gene-specific features containing technical, e.g., gene length or GC-content, and biological, e.g., pathway membership, information.
Gene expression data at single-cell resolution are highly informative, allowing researchers to characterize cells at the finest level, and their applications to cancer research, immunology, and developmental biology have already led to novel insights. However, scRNA-seq data are challenging: they are high-dimensional count data, characterized by high variance and an abundance of zeros. Hence, exploratory models are needed to facilitate the summary, visualization and clustering of cells, and to identify novel biological hypotheses to be tested with targeted experiments.
### High dimensional count data challenges
A default strategy for modelling RNA-seq count data consists in using standard parametric distributions such as the Poisson (Marioni et al., 2008) or the negative binomial (Anders and Huber, 2010; Robinson and Smyth, 2008). Even if simple in terms of computation and interpretation, such standard models
have some limitations. For instance, even the negative binomial may be unable to capture the zero inflation and multimodality of gene-wise distributions often observed in scRNA-seq (Jiang et al., 2022).
A different - unfortunately still common - approach forgets the count nature of the data and treats them as continuous. A common practice, often used also in different applications in which count data are observed, consists of log- or square-root-transforming the observed counts, subsequently applying methods designed for continuous or Gaussian data. However, transformations to Gaussianity are ineffective for small counts (Warton, 2018), while log-transformations introduce difficulties in the presence of zeros (O'Hara and Kotze, 2010). This practice has been strongly criticized in our motivating context of scRNA-seq data (e.g., Townes et al., 2019). More broadly, these approaches are not well-defined for count data: the data-generating process for a continuously-transformed Gaussian model cannot produce counts, which immediately amplifies model misspecification, limits interpretability, and undermines the reliability of inference and predictive distributions.
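A minimal numerical illustration (our own, not from the paper) of why such transformations are problematic at low counts: for a Poisson variable with small mean, the expected value of \(\log(1+y)\) differs markedly from \(\log(1+E[y])\), so Gaussian methods applied to transformed counts target a distorted quantity.

```python
import math

# For low-mean counts, E[log(1 + Y)] is far from log(1 + E[Y]) by
# Jensen's inequality, so Gaussian methods applied to log(y + 1)
# estimate a biased version of the log-scale expression level.
def e_log1p_poisson(lam, kmax=60):
    # E[log(1 + Y)] for Y ~ Poisson(lam), truncating the negligible tail.
    return sum(math.exp(-lam) * lam**k / math.factorial(k) * math.log1p(k)
               for k in range(kmax + 1))

lam = 0.5  # a typical low per-gene scRNA-seq expression level
print(math.log1p(lam), e_log1p_poisson(lam))  # 0.405... vs 0.313...
```

The gap shrinks as the mean grows, which is why log-transform pipelines work tolerably for bulk RNA-seq but break down for the sparse counts typical of single cells.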
To overcome these limitations and challenges, we introduce a flexible Bayesian framework specifically designed to handle complex count-valued data. Our proposed framework incorporates a continuous latent variable representation, similar to the approaches employed by Canale and Dunson (2011) and Kowal and Canale (2020). By adopting this specification, we are able to effectively capture the various characteristics associated with high-dimensional count probability mass functions. In addition, to account for the intricate dependence structures present in the multivariate count vector, we follow the common practice that leverages factorization models. This approach allows us to express the high-dimensional covariance matrix as a combination of a limited number of rank-one matrices. Recently, Schiavon et al. (2022) introduced a general class of infinite factorization models capable of handling continuous, binary, and count data. Notably, this class of models promotes sparsity in the matrix of factor loadings by effectively incorporating information from covariate and meta-covariate vectors.
In the next Section, we provide a detailed description of our proposed approach, which we refer to as cosin (COunt data Structured INfinite factorization). Subsequently, in Section 3, we apply this model to a motivating example involving lung adenocarcinoma scRNA-Seq data. To assess the validity and generality of our approach, as well as to compare its performance against state-of-the-art scRNA-seq methods, we present a comprehensive simulation experiment in Section 4. Finally, in Section 5, we engage in a thorough discussion of the proposed approach, its generalization and extensions, and its possible applications beyond scRNA-seq studies.
## 2 Model and prior specification
For each cell \(i=1,\ldots,n\), scRNA-seq data can be treated as a \(p\)-dimensional vector of integer valued random variables \(y_{i}\in\mathbb{N}^{p}\) where \(\mathbb{N}\) is the set of natural numbers. Along with the \(n\times p\) data matrix \(y\), additional external information for each cell and each gene are also available. Let \(x_{i}\) be the \(d\)-dimensional vector of cell-specific covariates and \(w_{j}\) for \(j=1,\ldots,p\) be the \(q\)-dimensional vector of gene-specific meta-covariates with \(w_{j}^{\top}=(w_{Tj}^{\top},w_{Bj}^{\top})\) where \(w_{Tj}\) is the \(q_{T}\) dimensional subvector of
technical meta-covariates and \(w_{Bj}\) is the \(q_{B}\) dimensional subvector of biological meta-covariates and \(q=q_{T}+q_{B}\).
### Model specification
Following Kowal and Canale (2020) we introduce a continuous-valued latent matrix \(z\) related to the observed count-valued matrix \(y\) via a simultaneous transformation and rounding operator \(\mathcal{S}\colon\mathbb{R}\to\mathbb{N}\) with \(\mathcal{S}(\cdot)=\mathcal{H}(\mathcal{G}(\cdot))\). Specifically, the rounding operator is such that \(\mathcal{H}(t)=\ell\) if \(t\in\mathcal{A}_{\ell}\) and \(\{\mathcal{A}_{\ell}\}_{\ell=0}^{\infty}\) is a known partition of \(\mathbb{R}\). Here, we adopt the floor function defined by \(\mathcal{A}_{\ell}=[\ell,\ell+1)\). As discussed in Kowal and Canale (2020), rounding alone is suboptimal, particularly when the original data are counts. The popularity of log-linear models for count data thus suggests to specify the transformation operator \(\mathcal{G}\) as the exponential transformation. Thus, the single entry \(y_{ij}\) of \(y\) is linked to a latent \(z_{ij}\) via the operator \(\mathcal{S}\) and specifically \(y_{ij}=\ell\) if \(\exp\{z_{ij}\}\in[\ell,\ell+1)\).
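Concretely, with the floor partition and \(\mathcal{G}=\exp\), the operator reduces to \(y_{ij}=\lfloor\exp\{z_{ij}\}\rfloor\), so the preimage of a count \(\ell\geq 1\) is \(z_{ij}\in[\log\ell,\log(\ell+1))\) and any \(z_{ij}<0\) yields a zero. A minimal sketch:

```python
import math

def S(z):
    # S = H(G(z)) with G = exp and H the floor partition A_l = [l, l+1):
    # a latent real z maps to the count floor(exp(z)).
    return math.floor(math.exp(z))

# The preimage of a count l >= 1 is the interval [log(l), log(l + 1)),
# while any negative latent value gives a zero; this is how the latent
# Gaussian scale accommodates the abundance of zeros in scRNA-seq data.
print([S(z) for z in (-2.0, -0.1, 0.0, 0.5, math.log(7) + 1e-9)])
# -> [0, 0, 1, 1, 7]
```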
The latent variables \(z_{ij}\) are modelled via
\[z_{ij}=x_{i}^{T}\beta_{j}+\epsilon_{ij}, \tag{1}\]
where \(x_{i}^{T}\beta_{j}\) is the conditional expectation of \(z_{ij}\) and \(\epsilon_{ij}\) is a zero-mean Gaussian error term. Note that we are assuming a linear relation between the mean of the latent variables and the set of cell-specific covariates \(x_{i}\) and that this relation is changing with \(j\), i.e. we assume that the same set of covariates may impact differently the different columns of the matrix \(z\).
The Gaussian error term captures all the residual variability not modeled by the linear predictor in (1). Consistently with this, we exploit a factor analytic representation that allows us to express \(\epsilon_{i}\) as the linear combinations of latent \(k\)-dimensional factors \(\eta_{i}\). More formally we let
\[\epsilon_{ij}=\sum_{h=1}^{k}\lambda_{jh}\eta_{ih}+\varepsilon_{ij}, \tag{2}\]
where \(\lambda_{jh}\) is an element of the \(p\times k\) factor loadings matrix \(\Lambda\) and \(\eta_{ih}\) is an element of the \(h\)-th latent factor \(\eta_{\cdot h}\), with \(h=1,\ldots,k\). The vectors \(\varepsilon_{i}\) represent the remaining noise and are iid according to a \(p\)-variate Gaussian distribution \(N(0,\Sigma)\), with diagonal covariance matrix \(\Sigma\). In matrix notation, the error matrix \(\epsilon\) is equal to a sum of \(n\times p\) rank-one matrices identified by the vector product \(\eta_{\cdot h}\lambda_{\cdot h}^{\top}\). We refer to such matrices as rank-one additive contributions \(C_{h}\). Notably, if the number \(k\) of these contributions is \(k\leq p\), the factor representation leads to a parsimonious model.
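The generative mechanism in (1)-(2) can be sketched end-to-end as follows; the sizes and parameter values are illustrative toy choices, not estimates from any data:

```python
import math, random

random.seed(1)
n, p, d, k = 5, 4, 2, 2  # toy sizes: cells, genes, covariates, factors

# Toy parameters (illustrative only).
beta = [[random.gauss(0, 0.3) for _ in range(d)] for _ in range(p)]  # p x d
Lam = [[random.gauss(0, 0.5) for _ in range(k)] for _ in range(p)]   # p x k
x = [[1.0, random.gauss(0, 1)] for _ in range(n)]                    # n x d

y = []
for i in range(n):
    eta = [random.gauss(0, 1) for _ in range(k)]  # latent factors eta_i
    row = []
    for j in range(p):
        mean = sum(x[i][a] * beta[j][a] for a in range(d))   # Eq. (1)
        eps = (sum(Lam[j][h] * eta[h] for h in range(k))
               + random.gauss(0, 0.2))                        # Eq. (2)
        z = mean + eps
        row.append(math.floor(math.exp(z)))  # y_ij = S(z_ij)
    y.append(row)

print(y)  # an n x p matrix of small nonnegative counts
```

Because each cell shares one draw of \(\eta_{i}\) across all genes, the loadings \(\Lambda\) induce the between-gene covariance that the factorization is designed to capture.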
### Prior specification
Following a Bayesian approach, we elicit suitable prior distributions for the model parameters. The conditional expectation of \(z_{ij}\) is modelled through a linear combination of a vector of cell-specific covariates \(x_{i}\) weighted by a vector of regression parameters \(\beta_{j}\), which differs over the genes \(j=1,\ldots,p\).
The availability of gene-specific prior information \(w_{j}\) allows one to model the regression parameters accordingly. Indeed, one may expect that the expression of a gene in a certain cell depends on the cell traits \(x_{i}\), but with such relation varying according to the gene characteristics \(w_{j}\). It is well known, for example, that the gene length and its sequence composition (e.g., the proportion of guanine and cytosine nucleotides, known as GC-content) influence gene expression quantification, potentially in sample-specific ways (Risso et al., 2011; Love et al., 2016). Hence, we specify
\[\beta_{j}\sim N_{d}(\Gamma_{T}w_{jT},\sigma_{\beta}^{2}I_{d})\]
where \(\Gamma_{T}\) is a \(d\times q_{T}\) coefficient matrix that models how the technical characteristics \(w_{jT}\) of the genes impact the cell quality-control parameters. In multivariate regression, such a hierarchical structure on the mean process is common when additional information on the column entities is available (see, e.g., Ovaskainen and Abrego, 2020, for ecological applications). For instance, one may expect that the impact of the number of mapped reads on the expression of gene \(j\) varies according to the gene's technical traits. We set the prior of the \(\Gamma_{T}\) entries as independent standard Gaussian random variables.
Inspired by such structure on the mean, we exploit the structured increasing shrinkage prior introduced by Schiavon et al. (2022) to induce a gene-specific effect also on the loadings \(\Lambda\), which model the impact of the latent cell traits \(\eta_{\cdot h}\left(h=1,\ldots,k\right)\). Consistently with this, the variance of each loading element is decomposed through the product of a factor-specific scale \(\theta_{h}\) and a local scale \(\phi_{jh}\) leading to the following hierarchical prior
\[\lambda_{jh}\sim N(0,\theta_{h}\phi_{jh}),\quad\theta_{h}=\vartheta_ {h}\rho_{h},\] \[\vartheta_{h}^{-1}\sim\text{Ga}(a_{\theta},b_{\theta}),\quad \rho_{h}\ \sim\text{Ber}(1-\pi_{h}),\]
where \(\text{Ga}(a_{\theta},b_{\theta})\) indicates a gamma distribution with mean \(a_{\theta}/b_{\theta}\) and variance \(a_{\theta}/b_{\theta}^{2}\) and \(\text{Ber}(1-\pi_{h})\) is a Bernoulli distribution with mean \(1-\pi_{h}\).
In such construction, \(\pi_{h}\) is the probability of factor \(h\) being shrunk to zero and is defined according to the stick-breaking construction
\[\pi_{h}=\sum_{l=1}^{h}u_{l},\quad u_{l}=v_{l}\prod_{m=1}^{l-1}(1-v_{m}),\quad v _{m}\sim\text{Be}(1,\alpha),\]
where \(\text{Be}(a,b)\) indicates the beta distribution with mean \(a/(a+b)\). Under this cumulative construction, \(\pi_{h+1}>\pi_{h}\) for any \(h>0\) and \(\lim_{h\to\infty}\pi_{h}=1\) almost surely. The probability of being shrunk is increasing over the index \(h=1,\ldots,H\), allowing for an infinite factorization model (Bhattacharya and Dunson, 2011) when \(k\) is set equal to \(+\infty\), which can be approximated by a truncated version of the same model. Legramanti et al. (2020), who first introduced the cumulative stick-breaking construction to define a class of infinite factor models, note that the prior expected number of non-shrunk columns of \(\Lambda\) is \(E(\sum_{h=1}^{\infty}\rho_{h})=\alpha\), suggesting setting \(\alpha\) equal to the expected number of active latent factors.
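A minimal numerical sketch of this cumulative stick-breaking construction (the truncation level \(H\), \(\alpha\), and the seed are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, H = 5.0, 25

v = rng.beta(1.0, alpha, size=H)                  # v_m ~ Be(1, alpha)
sticks = np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
u = v * sticks                                    # u_l = v_l * prod_{m<l} (1 - v_m)
pi = np.cumsum(u)                                 # pi_h = sum_{l<=h} u_l

# pi_h is non-decreasing in h and bounded by 1
assert np.all(np.diff(pi) >= 0) and pi[-1] <= 1.0
rho = rng.random(H) > pi                          # rho_h ~ Ber(1 - pi_h)
```

The monotonicity check mirrors the property \(\pi_{h+1}>\pi_{h}\): later factors are increasingly likely to be shrunk to zero.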
The scale \(\phi_{jh}\) has a Bernoulli prior distribution and regulates the local shrinkage of the loadings. We model the local behaviour according to the mean equation
\[E(\phi_{jh})=c_{p}\text{logit}^{-1}(w_{jB}^{\top}\gamma_{hB}),\quad\gamma_{hB} \sim N(0,\sigma_{\gamma}^{2}I_{q}),\]
where \(\text{logit}^{-1}(x)=e^{x}/(1+e^{x})\), \(c_{p}\in(0,1)\) is a possible offset, and \(\gamma_{hB}\) is the \(h\)th column vector of a \(q_{B}\times k\) matrix \(\Gamma_{B}\) with independent standard Gaussian prior. The vector \(w_{jB}\) represents the realization of \(q_{B}\) available gene-specific meta-covariates that we think could influence the effect of the latent unobserved covariates \(\eta_{\cdot h}\left(h=1,\ldots,k\right)\), mirroring their role in the specification of the covariate effects \(\beta\). Coefficients of the unobserved covariates are thus shrunk jointly in similar genes, i.e. genes with similar meta-covariates. In particular, we consider as meta-covariates the binary vector \(w_{jB}\) that indicates the biological pathways including gene \(j\). We use pathway meta-covariates here, as we expect the factor loadings to be influenced by the biological processes that genes contribute to. In other words, we expect that genes that interact in a given biological process act in a coordinated way in defining the factors inferred by our model. We use pathway membership as a proxy for biological process, as usually done in bioinformatics (Khatri et al., 2012).
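The local-scale prior can be sketched as follows; the dimensions, offset \(c_{p}\), and \(\sigma_{\gamma}^{2}\) below are illustrative values, and \(\phi_{jh}\) is drawn as a Bernoulli variable with the stated mean:

```python
import numpy as np

# Illustrative dimensions and hyperparameters (not from the paper)
rng = np.random.default_rng(2)
p, qB, k, c_p, sigma_g = 50, 6, 4, 0.5, 1.0

W_B = rng.integers(0, 2, size=(p, qB))            # binary pathway membership w_jB
Gamma_B = sigma_g * rng.standard_normal((qB, k))  # gamma_hB ~ N(0, sigma_g^2 I)

# E(phi_jh) = c_p * logit^{-1}(w_jB' gamma_hB)
mean_phi = c_p / (1.0 + np.exp(-(W_B @ Gamma_B)))
phi = rng.random((p, k)) < mean_phi               # Bernoulli local scales
assert np.all(mean_phi <= c_p)
```

Genes sharing pathway memberships share the same \(w_{jB}\) and hence the same inclusion probability, which is how loadings get shrunk jointly in similar genes.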
### Posterior computation and point estimation
Bayesian inference uses the posterior distribution of model parameters, which is approximated through Markov chain Monte Carlo (MCMC) sampling. Following common practice in infinite factor models (Bhattacharya and Dunson, 2011; Legramanti et al., 2020; Schiavon et al., 2022), we use an adaptive Gibbs algorithm to infer the number of active factors while drawing from the posterior distribution. To ensure convergence of the Markov chain, as stated in Theorem 5 of Roberts and Rosenthal (2007), the value of the number of factors is adapted at certain iterations with exponentially decreasing probability.
At the adaptive iterations, active factors are identified as those characterized by a non-zero loadings column, i.e. \(\rho_{h}=1\), and the redundant factors are discarded. Given the number of factors \(k\) at a certain iteration, model parameters are drawn from the corresponding posterior full conditional distributions. The detailed steps of the adaptive Gibbs sampler are reported in Appendix A in the supporting information.
In Bayesian analysis, point-wise estimates are usually obtained by approximating the parameters' posterior expectations via Monte Carlo averages over the samples drawn during the MCMC. However, it is well-known in the Bayesian factor model literature that the sample average cannot informatively summarize the posterior distribution of \(\Lambda\) and \(\eta\), due to their non-identifiability. In fact, both \(\Lambda\) and \(\eta\) are only identifiable up to an arbitrary rotation \(P\) with \(PP^{\top}=I_{k}\), causing sampling of such parameters from possibly different rotational alignments in different Gibbs iterations. Given the sign symmetry of possible rotations, Monte Carlo averages would result in poorly informative point-wise estimates around zero. On the other hand, the non-identifiable possible rotations of the rank-one contributions \(C_{h}=\eta_{\cdot h}\lambda_{\cdot h}^{\top}\) are limited to the class of permutations of the indices \(h=1,2,\ldots\). Then, focusing on rank-one contributions, identifiability is achieved by following the steps below.
1. Order the contributions \(C_{1}^{(T)},\ldots,C_{k}^{(T)}\) sampled at the last iteration \(T\) of the Gibbs algorithm in decreasing order of their Frobenius norm.
2. Use the re-ordered contributions of the last iteration \(C_{1^{*}}^{(T)},\ldots,C_{k^{*}}^{(T)}\) as a reference.
3. For each Gibbs iteration \(t<T\), re-order the contributions as follows. For \(h^{*}=1,\ldots,k\), the contribution \(C_{h^{*}}\) corresponds to the non-re-ordered contribution \(C_{h}\) with index \[h=\text{argmin}_{l\in\mathbb{H}_{h}}||C_{h^{*}}^{(T)}-C_{l}^{(t)}||_{F},\] where \(\mathbb{H}_{h}\) is the set of \(k-h^{*}+1\) indices of the contributions not yet re-ordered and \(||A||_{F}\) denotes the Frobenius norm of matrix \(A\).
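The greedy alignment in steps 1-3 can be sketched with a small helper (the function name and the check on a permuted copy are illustrative, not from the paper):

```python
import numpy as np

def align_contributions(C_ref, C_t):
    """Greedy re-ordering of step 3: match each reference contribution
    C_ref[h*] to the not-yet-matched C_t[l] closest in Frobenius norm."""
    remaining = list(range(len(C_ref)))
    order = []
    for h_star in range(len(C_ref)):
        dists = [np.linalg.norm(C_ref[h_star] - C_t[l]) for l in remaining]
        order.append(remaining.pop(int(np.argmin(dists))))
    return [C_t[l] for l in order]

# Illustrative check: a permuted copy is mapped back onto the reference order
rng = np.random.default_rng(3)
C_ref = [rng.standard_normal((5, 7)) for _ in range(3)]
C_t = [C_ref[2], C_ref[0], C_ref[1]]
aligned = align_contributions(C_ref, C_t)
assert all(np.allclose(a, r) for a, r in zip(aligned, C_ref))
```

After alignment, averaging the re-ordered draws across iterations gives the point estimates \(\bar{C}_{h}\) described next.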
To obtain point-wise estimates, we compute, for each \(h=1,\ldots,k\), the sample mean \(\bar{C}_{h}=(\sum_{t=1}^{T}C_{h^{*}}^{(t)})/T\) over the re-ordered contributions. To investigate the behaviour of the factors scores \(\eta\), we select a representative iteration of the Gibbs sampler by following the procedure described in Schiavon et al. (2022). For alternative post-processing algorithms to align the samples of \(\Lambda\) or \(\eta\) we refer the reader to McParland et al. (2014), Assmann et al. (2016), and Roy et al. (2021).
## 3 Lung adenocarcinoma scRNA-seq data
We analyze a subset of the study of three lung adenocarcinoma cell lines measured by scRNA-seq by Tian et al. (2019). The original data are high quality in terms of exon mapping and unique transcript counts per cell and have already undergone pre-processing and quality control by the original authors using the scPipe workflow (Tian et al., 2018). Cells with high percentages of mitochondrial genes and less than 1000 detected genes are excluded, and only genes belonging to the 20 largest pathways are retained. The resulting data consist of a gene expression matrix of \(n=199\) cells and \(p=949\) genes. Along with this gene expression matrix, quality control information about the cells \(x\) and the genes \(w_{T}\) is also available. The matrix \(x\) includes technical features, such as the number of unaligned reads, the number of reads mapped to exons or introns, the number of expressed genes, and the total number of counts, leading to \(d=9\) covariates. In addition, cell line information is also available, indicating for any cell whether it belongs to cancer cell line H1975, H2228, or HCC827. We chose not to use such information as a covariate in the model, to mimic a typical situation in scRNA-seq, in which the identity of the cells is not known in advance. Hence, it represents a useful benchmark to assess the capacity of the model to reconstruct unobserved covariates in high-dimensional settings. The technical meta-covariate matrix \(w_{T}\) includes the length and the GC-content of each gene. Consistently with the motivations previously mentioned, for each gene \(j\), we also define a biological meta-covariate binary vector \(w_{jB}\) with \(m\)th entry equal to \(1\) if gene \(j\) belongs to the \(m\)th biological pathway. The list of \(q_{B}=20\) pathways considered is reported in Table A2 of the Appendix.
We apply cosin with latent continuous Gaussian variable \(z\) specified as in (1)-(2). Parameter prior distributions follow the hierarchical structure discussed in Section 2. Given the high dimensionality
of the data set, we may expect a sufficiently large number of latent factors; hence we set \(\alpha=10\). To favour variable selection, we shrink the covariate coefficients by setting \(\sigma_{\beta}^{2}=1/3\). Additional details and MCMC settings are reported in the supporting information.
First, we discuss the results obtained for the mean of the process and the cell covariate effects. Summaries of the matrix \(\beta\) are reported in Table A3 in the Appendix. The last column illustrates that, for any covariate, only a small proportion of the \(\beta\)'s have \(0.9\)-level posterior credible intervals not including zero. This fact suggests a low importance of the quality control cell features, confirming that the data are of high quality. As one may expect, the total number of genes detected per cell seems positively associated with gene expression, but only for a few genes is this effect relevant. The variables describing the number of reads mapped to introns and to mitochondrial genes are those relevantly impacting the highest number of genes. The direction of such impacts varies across genes, with variability partially explained by the technical characteristics of the genes, i.e. their length and GC-content.
Figure A5, reported in the Appendix, displays the posterior distributions of the \(\Gamma_{T}\) coefficients, illustrating the influence of meta-covariates on the covariate effects. The GC-content and the length of the genes seem to have an impact mainly on the effect of the number of reads mapped to introns and those with ambiguous mapping.
Figure 1: First four rank-one contribution matrices. Genes are ordered according to the COVID-19 disease biological pathway: the genes within the grey square on the right of each contribution matrix belong to the COVID-19 pathway. Rows are re-arranged according to cell lines.
We focus the remaining part of this section on the results obtained thanks to the innovative treatment of the residual term allowed by cosin. The adaptive Gibbs sampler identifies \(15\) active factors. The rank-one contributions \(C_{h}\) allow one to decompose the underlying signal into rank-one additive matrices, which aid interpretation as discussed below.
Figure 1 displays, as an example, the estimated \(C_{1}\), \(C_{2}\), \(C_{3}\), and \(C_{4}\) matrices. To facilitate interpretation, we re-arranged the column order according to membership in the COVID-19 pathway. The first contribution positively impacts the expression of the genes related to the COVID-19 pathway. The second contribution influences the expression of the genes in an opposite way, yet still indicating the importance of the pathway information to decompose the residual term. Such evidence of strong association between the COVID-19 pathway and lung cancer cells may suggest a similar inflammation pathway for lung adenocarcinoma and COVID-19. In fact, COVID-19 promotes activation of the NF-\(\kappa\)B pathway via the Ang II type 1 receptor (AT1R), followed by interleukin 6 (IL-6) production (Perco et al., 2021). Such activation of the innate immune system, which triggers overproduction of pro-inflammatory cytokines, including IL-6, can result in a systemic inflammatory response (Perco et al., 2021). A similar process happens in lung cancer, in which immune response and cytokines play an important role. For instance, overexpression of IL-6 is associated with tumor progression through inhibition of cancer cell apoptosis, stimulation of angiogenesis, and drug resistance (Guo et al., 2012). A further indication of the similar molecular mechanisms of COVID-19 and cancer is the repurposing of several cancer treatments as experimental treatments for severe COVID-19 (Jafarzadeh et al., 2020). The third and fourth contributions are characterized by a completely different pattern. Their effect is no longer homogeneous across the rows, but it reveals the existence of a cell stratification, with respect to the cell line, not captured by the mean of the process.
To investigate the ability of the model to recognize possibly interpretable latent unobserved covariates, in Figure 2 we plot a representative posterior draw of the cell factor scores \(\eta_{\cdot 3}\) and \(\eta_{\cdot 4}\), chosen following the recommendations of Section 2.3. Cells are coloured according to the cell lines in order to assess correspondence with the three clusters revealed by our approach. We can appreciate a clear pattern suggesting that the structure imposed on the latent part of the model aids in recognizing and reconstructing possible missing covariates.
Notably, the contributions that explain heterogeneity among cells are generally not characterized by a differentiated impact on the genes with respect to the biological pathways. In other terms, considering the full model specification for the data point \(y_{ij}\)
\[y_{ij}=\lfloor\exp(z_{ij})\rfloor=\left\lfloor\exp(x_{i}^{\top}\beta_{j})\left\{\,\prod_{h=1}^{15}\exp(C_{hij})\right\}\exp(\varepsilon_{ij})\right\rfloor,\]
we observe a multiplicative factorization in different contributions of the cell line role and the genes pathway role in characterizing the gene expression.
Latent factors not discussed here are more difficult to interpret. However, they contribute to explaining the residual variance, and may possibly be important in guiding new biological discoveries in future
studies.
Additional insights can be gained by exploring the structure of the sparse covariance matrix \(\Omega=\Lambda\Lambda^{\top}+\Sigma\). The graph constructed from the posterior mean of its inverse (i.e. the partial correlation matrix), reported in Figure 3, reveals the presence of gene communities, with genes belonging to the same pathway tending to cluster together. For instance, the cluster observed at the bottom of the graph is mainly constituted by the genes belonging to the cell cycle and metabolic and cancer pathways, while the genes related to the COVID-19 pathway are mainly distributed in the communities at the top right corner of the graph. This structure is favoured, yet not imposed, by the dependence on the meta-covariates induced by the structured increasing shrinkage specification on the loadings matrix \(\Lambda\). In addition, the graph highlights the genes which stand out as hub nodes, since they are correlated with large groups of genes, or link different clusters. Notable hub genes include cell cycle kinases, such as CDK1 and CDK4, essential for G1/S and G2/M phase transitions, and for cell cycle control (Nurse et al., 1998). Because of the central role of the cell cycle in cancer progression, these genes have been proposed as therapeutic targets for inhibitors in lung adenocarcinoma treatment (Asghar et al., 2015). Other hub nodes that have been associated with cancer in the literature include
Figure 2: Representation of the \(199\) cells plotted on the plane identified by factor scores \(\eta_{\cdot 3}\) and \(\eta_{\cdot 4}\). Points are coloured according to the cell lines.
HSP90AA1 and ITGB1, which belong to pathways in cancers and to the PI3K-AKT signaling pathway, and B2M that is involved in immune resistance promoting cancer transformation of the cells and escape from therapies (Pereira et al., 2017).
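The graph just described is built from the inverse of \(\Omega=\Lambda\Lambda^{\top}+\Sigma\). A minimal sketch of the partial-correlation computation and edge thresholding (the dimensions, seed, and threshold reuse the 0.025 cutoff from Figure 3; everything else is illustrative):

```python
import numpy as np

# Illustrative low-dimensional stand-in for the fitted Lambda and Sigma
rng = np.random.default_rng(5)
p, k = 20, 3
Lam = rng.standard_normal((p, k))
Sigma = np.diag(rng.uniform(0.5, 1.5, p))
Omega = Lam @ Lam.T + Sigma               # factor-model covariance

P = np.linalg.inv(Omega)                  # precision matrix
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)        # partial correlations off-diagonal
np.fill_diagonal(partial_corr, 1.0)

# Keep only edges with |partial correlation| above the 0.025 cutoff
edges = np.abs(np.triu(partial_corr, 1)) > 0.025
```

Each retained entry of `edges` corresponds to an edge of the gene graph, with thickness proportional to the latent partial correlation.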
## 4 Simulation study
To illustrate the validity and generality of cosin, we assess its performance through a simulation study. As a competitor, we use the generalized principal component analysis (or glm-pca, Townes
Figure 3: Graphical representation based on the posterior mean of the inverse of the correlation matrices estimated by the model. Edge thicknesses are proportional to the latent partial correlations between genes. Values below 0.025 are not reported. Nodes are positioned using a Fruchterman-Reingold force-direct algorithm.
et al., 2019), representing the current state-of-the-art in dimension reduction for scRNA-seq data. We are interested in evaluating both the predictive capacity of the model and its ability to recognize and isolate the underlying signal. In order to assess the main novel aspects of our approach, which lie in how count data and residual error terms are treated, we consider the zero-mean data generating process
\[y_{ij}=\lfloor\exp(z_{ij})\rfloor,\,z_{ij}=C_{1ij}+C_{2ij}+C_{3ij}+\varepsilon_ {ij},\,\varepsilon_{ij}\sim N(0,\sigma^{2}).\]
To mimic the situation observed in the application discussed in Section 3, we induce row-wise and column-wise sparsity over the contribution matrices \(C_{2}\) and \(C_{3}\), respectively. In particular, we specify \(C_{hij}=\eta_{ih}\lambda_{jh}\) with
\[\eta_{\cdot 1},\eta_{\cdot 3} \sim N_{n}(0,I_{n}),\quad\lambda_{\cdot 1},\lambda_{\cdot 2}\sim N _{p}(0,I_{p}),\] \[\eta_{i2} \sim N(0,0.05^{2}),\quad i>n/2,\quad\eta_{l2}=1,\quad l\leq n/2,\] \[\lambda_{j3} \sim N_{p}(0,0.05^{2}),\quad j>p/2,\quad\lambda_{m3}=1,\quad m\leq p /2.\]
Information on sparse columns is provided to the competing models through a single meta-covariate vector \(w_{B}\) with entries \(w_{j}\) equal to one for \(j\leq p/2\) and zero otherwise. The role of such a meta-covariate is analogous to that of a biological gene-specific meta-covariate in the scRNA-seq data application. cosin and glm-pca are also estimated ignoring meta-covariate information to assess their robustness to the absence of informative gene traits. Row-wise sparsity, on the other hand, simulates the unobserved cell stratification and thus must be entirely inferred during model estimation.
Six different scenarios are obtained varying the data dimensions \((n,p)\) over the set \(\{(50,100),\)\((200,100),(200,1000)\}\) and idiosyncratic variance \(\sigma^{2}\) over \(\{0.1^{2},1\}\). For each scenario, we simulate \(50\) synthetic data sets.
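One replicate of this data generating process can be sketched as follows (the function name and seed are illustrative; the sparsity patterns follow the specification above):

```python
import numpy as np

def simulate(n, p, sigma, seed=0):
    """Sketch of the zero-mean generating process with row-wise sparsity
    on C2 and column-wise sparsity on C3."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal((n, 3))   # columns 1 and 3 stay N(0, I_n)
    lam = rng.standard_normal((p, 3))   # columns 1 and 2 stay N(0, I_p)
    eta[n // 2:, 1] = 0.05 * rng.standard_normal(n - n // 2)  # eta_{i2}, i > n/2
    eta[:n // 2, 1] = 1.0                                     # eta_{l2} = 1, l <= n/2
    lam[p // 2:, 2] = 0.05 * rng.standard_normal(p - p // 2)  # lambda_{j3}, j > p/2
    lam[:p // 2, 2] = 1.0                                     # lambda_{m3} = 1, m <= p/2
    C = [np.outer(eta[:, h], lam[:, h]) for h in range(3)]
    z = C[0] + C[1] + C[2] + sigma * rng.standard_normal((n, p))
    return np.floor(np.exp(z)), C

y, C = simulate(50, 100, 1.0)
assert y.shape == (50, 100) and np.all(y >= 0)
```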
To assess goodness-of-fit, we perform an out-of-sample prediction task, randomly removing a sample \(\mathcal{S}\) of \(25\%\) of entries in \(y\). We fit cosin, cosin without meta-covariates, and glm-pca ignoring entries of \(\mathcal{S}\) and compute the mean absolute error of model \(m\) defined as
\[\text{MAE}_{m}=\frac{1}{|\mathcal{S}|}\sum_{l\in\mathcal{S}}\left|y_{l}-\hat{y }_{l}^{(m)}\right|,\]
where \(\hat{y}_{l}^{(m)}\) is the value predicted by the model \(m\).
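The hold-out MAE computation can be sketched as follows; the synthetic data and the trivial mean predictor are illustrative stand-ins for the fitted models:

```python
import numpy as np

# Illustrative stand-in data and predictor (not a fitted model)
rng = np.random.default_rng(4)
y = rng.poisson(3.0, size=(200, 100)).astype(float)
S = rng.random(y.shape) < 0.25             # held-out 25% of entries
y_hat = np.full_like(y, y[~S].mean())      # trivial baseline prediction

# MAE_m over the held-out set S
mae = np.abs(y[S] - y_hat[S]).mean()
assert mae >= 0.0
```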
For cosin, we set hyperparameters \(\alpha=5\), \(\sigma_{\gamma}^{2}=1\), \(a_{\theta}=b_{\theta}=1\), \(a_{\sigma}=b_{\sigma}=1\), and \(c_{p}=0.5\). The posterior distribution is approximated via the Gibbs algorithm reported in Appendix A in the supporting information. We compute the MAE of cosin by averaging over all the MAEs calculated from the parameters sampled at every iteration of the Gibbs sampler. Unlike our proposal, the competitor model glm-pca is not equipped with a methodology to infer the number of latent components \(k\), which must be specified before estimation. To favour a fair comparison, we estimate the glm-pca models under different values of \(k\in\{2,3,4,5,6,7,8\}\), using as a benchmark the specification characterized by the lowest MAE.
Figure 4 provides a summary of the results for scenarios with \(\sigma^{2}=1\). Our proposed approach shows the best performance across all scenarios. Although meta-covariates have a minor impact on model fitting, they play a crucial role in providing valuable and interpretable estimates, as discussed in Section 3. The full results are presented in Table A4, reported in the Appendix.
Finally, we evaluated the ability of the different methods to reconstruct the original underlying signal. This assessment is based on the difference between the original rank-one contributions and the contributions estimated by the competing approaches. In particular, Table 1 reports the root mean squared errors on the three contributions under the cosin and glm-pca models fitted with meta-covariates on the full data matrices used for the predictive comparison.
Figure 4: Boxplot of the out-of-sample MAE of the competing models under different values of \((n,p)\) with \(\sigma^{2}=1\). Since the R package glmpca does not allow for out-of-sample imputation, we adopt the glm-pca approximation proposed by Townes et al. (2019), i.e. we perform principal component analysis on Pearson residuals of the null Poisson model.
The estimated contributions of each model are ordered by minimizing the sum of the contribution RMSEs, while the number of latent components \(k\) in the glm-pca models is set equal to the value minimizing the MAE. For cosin, the reported RMSE is the average of the RMSEs computed at every Gibbs iteration. Results point out that the latent constructs identified by glm-pca are far from the original contributions generating the data, suggesting that the use of cosin can bring some advantages in decoupling the several layers that explain the residual variance in a highly multivariate context.
## 5 Discussion
In this study, we introduced a novel method called cosin which provides a joint modeling approach for multivariate count data through latent factor models. This approach was specifically motivated by scRNA-seq applications, which involve complex count data. The empirical performance observed in real and synthetic data sets demonstrates that cosin achieves competitive results in terms of model fitting compared to existing methods, indicating its efficacy as a modeling framework.
One key advantage of cosin is that it allows for the modeling of latent sparsity through the use of meta-covariates. This feature is particularly useful in identifying latent contributions that have biological interpretation. As discussed in Section 3, cosin was successfully applied to a scRNA-seq dataset
\begin{table}
\begin{tabular}{l r r r r r r} \((n,p,\sigma)\) & \multicolumn{3}{c}{cosin} & \multicolumn{3}{c}{glm-pca} \\ & \(C_{1}\) & \(C_{2}\) & \(C_{3}\) & \(C_{1}\) & \(C_{2}\) & \(C_{3}\) \\ \hline \((50,100,0.10)\) & 0.63 & 0.58 & 0.39 & 1.85 & 1.76 & 0.84 \\ & (0.25) & (0.28) & (0.26) & (2.13) & (1.25) & (0.18) \\ \((50,100,1.00)\) & 0.62 & 0.57 & 0.39 & 0.63 & 0.79 & 0.76 \\ & (0.30) & (0.26) & (0.14) & (0.17) & (0.16) & (0.13) \\ \((200,100,0.10)\) & 0.54 & 0.45 & 0.25 & 1.56 & 1.44 & 0.74 \\ & (0.35) & (0.33) & (0.19) & (0.58) & (0.78) & (0.03) \\ \((200,100,1.00)\) & 0.48 & 0.46 & 0.26 & 0.48 & 0.66 & 0.74 \\ & (0.41) & (0.32) & (0.05) & (0.12) & (0.11) & (0.07) \\ \((200,1000,0.10)\) & 0.87 & 0.47 & 0.63 & 0.99 & 0.86 & 0.72 \\ & (0.12) & (0.08) & (0.06) & (0.23) & (0.42) & (0.05) \\ \((200,1000,1.00)\) & 0.62 & 0.55 & 0.53 & 0.81 & 0.68 & 0.73 \\ & (0.12) & (0.07) & (0.04) & (0.61) & (0.12) & (0.05) \\ \end{tabular}
\end{table}
Table 1: Median of contribution RMSE in 50 replicates, with varying \((n,p,\sigma)\). Interquartile range is reported in parenthesis.
on lung adenocarcinoma, where it identified specific latent contributions \(C_{h}\) that were associated with different cell types and pathways. These findings highlight the potential utility of cosin for uncovering biologically meaningful patterns in high-dimensional count data.
One important aspect to consider is the role of the intercept in the prior mean of the coefficients \(\beta_{j}\). Indeed, this is a gene-wise intercept that helps the model account for the differences in sequencing depth across cells, similarly to what is achieved with the gene-wise intercept in GLM-PCA (Townes et al., 2019) and with offsets in more traditional frequentist models (Robinson et al., 2010; Love et al., 2014).
Clearly, our results suggest that cosin is applicable beyond genomics and can be used in any context dealing with complex high-dimensional counts. This versatility makes cosin a valuable tool for researchers working in diverse fields.
## Acknowledgements
Davide Risso and Giovanni Toto are supported by the National Cancer Institute of the National Institutes of Health (U24CA180996). Davide Risso is also supported by EU funding within the MUR PNRR "National Center for HPC, big data and quantum computing" (Project no. CN00000013 CN1).
---

# Renormalization of shell model of turbulence

Mahendra K. Verma, Shadab Alam. arXiv:2306.10027v1 (2023-06-03), http://arxiv.org/abs/2306.10027v1
###### Abstract
Renormalization enables a systematic scale-by-scale analysis of multiscale systems. In this paper, we employ _renormalization group_ (RG) to the shell model of turbulence and show that the RG equation is satisfied by \(|u_{n}|^{2}=K_{\mathrm{Ko}}\epsilon^{2/3}k_{n}^{-2/3}\) and \(\nu_{n}=\nu_{*}\sqrt{K_{\mathrm{Ko}}}\,\epsilon^{1/3}k_{n}^{-4/3}\), where \(k_{n},u_{n}\) are the wavenumber and velocity of shell \(n\); \(\nu_{*},K_{\mathrm{Ko}}\) are the RG and Kolmogorov constants; and \(\epsilon\) is the energy dissipation rate. We find that \(\nu_{*}\approx 0.5\) and \(K_{\mathrm{Ko}}\approx 1.7\), consistent with earlier RG works on the Navier-Stokes equation. We verify the theoretical predictions using numerical simulations.
## I Introduction
Renormalization group (RG) analysis has been employed to model turbulence. Orszag [1] and Forster _et al._[2] performed some of the first perturbative renormalization analyses. Yakhot and Orszag [3] performed detailed analysis using \(\epsilon\) expansion. The other perturbative RG works are by Zhou _et al._[4], Zhou [5], McComb and Shanmugasundaram [6; 7], McComb [8; 9], Eyink [10], Martin _et al._[11], Bhattacharjee [12], and Adzhemyan _et al._[13]. Among these works, McComb, Zhou, and coworkers employed self-consistent RG (using "dressed Green's function") that has nonperturbative features. The above set of works show that the renormalized turbulent viscosity \(\nu(k)\sim k^{-4/3}\), where \(k\) is the wavenumber.
Recently, researchers have employed _exact renormalization group equation_ (ERGE) to turbulence [14; 15; 16]. Here, either sharp or smooth filter is employed during coarsening. A more formal implementation of ERGE is via functional renormalization group (FRG). Tomassini [17], Fontaine _et al._[18], and Canet [19] employed FRG to hydrodynamic turbulence and shell model. They derived formulas for the velocity correlations and multiscaling exponents. For Navier-Stokes equation, Canet [19] reported \(\nu(k)\approx k^{-1}\), rather than \(\nu(k)\sim k^{-4/3}\). Fedorenko _et al._[20] performed FRG to decaying Burgers, hydrodynamic, and quasi-geostrophic turbulence. Among many results, Fedorenko _et al._[20] showed that for hydrodynamic turbulence, the second-order structure function scales as \(l\) (the distance between two points), rather than Kolmogorov's predictions of \(l^{2/3}\).
Mejia-Monasterio and Muratore-Ginanneschi [21] performed nonperturbative renormalization group analysis of stochastic Navier-Stokes equation with power-law forcing. Here, they renormalized the viscosity, the forcing amplitude, and the coupling constants. Using field-theoretic tools, Biferale _et al._[22] constructed optimal subgrid closure for the shell models; they related the closure scheme to _large-eddy simulations_. In addition, Eyink [10] used _operator product expansion_ (OPE) and discovered _multiscaling_ for the shell model. Some other notable field-theoretic works (not RG) on turbulence are [23; 24; 25; 26].
In this paper, we employ an RG scheme based on the differential equation, as in [3; 4; 8]. Note that the shell model involves discrete wavenumbers; hence its renormalization does not involve the complex integrations of hydrodynamic turbulence. For the inviscid shell model, our RG procedure yields \(\nu(k)=0\) as the solution of the RG equation, which is similar to the Gaussian fixed point of Wilson's \(\phi^{4}\) theory [27]. We verify several RG predictions using numerical simulations of the shell model. We use the temporal autocorrelation function of the velocity field to compute the renormalized viscosity [28; 29].
In one of the important works on hydrodynamic turbulence, Kraichnan [30] argued that large-scale structures sweep the small-scale fluctuations; this phenomenon is referred to as the _sweeping effect_. These interactions are naturally multiscale (spanning many wavenumbers). Note, however, that such multiscale interactions are absent in the shell models, which have local interactions among the wavenumber shells. Hence, we expect the sweeping effect to be suppressed in the shell model. This is precisely what we observe in our RG calculation of the shell model.
In this paper, we compute the renormalized viscosity in the shell model using momentum-space RG proposed by Wilson [27] (see Sec. II). Here, we assume that the coarse-grained velocity field is random satisfying time-stationarity. Our calculation does not require quasi-Gaussian approximation for the velocity field. In Sec. III, we compute the energy flux of the shell model; here, we assume the velocity field to be quasi-Gaussian. The flux calculation enables us to compute Kolmogorov's constant. Interestingly, our predictions for the shell model are quite close to those for the Navier-Stokes equation. In Sec. IV, we extend our RG calculation to show that sweeping effect is suppressed in the shell model.
In Sec. V, we describe how we verify the theoretical predictions using numerical simulations. We observe that the numerical results are in good agreement with the theoretical predictions. In Sec. VI, we compare our results with those from earlier works. We conclude in Sec. VII.
## II Renormalization of viscosity
The Sabra shell model is [31; 32; 33; 34; 35; 36]
\[\frac{du_{n}}{dt}+\bar{\nu}k_{n}^{2}u_{n} = -i\lambda[a_{1}k_{n}u_{n+1}^{*}u_{n+2}+a_{2}k_{n-1}u_{n-1}^{*}u_{n+1} \tag{1}\] \[\quad-a_{3}k_{n-2}u_{n-1}u_{n-2}]+f_{n},\]
where \(u_{n}\) represents the velocity field for shell \(n\); \(\bar{\nu}\) is the _microscopic kinematic viscosity_; \(a_{1},a_{2},a_{3}\) are constants with \(a_{1}+a_{2}+a_{3}=0\); and \(k_{n}=k_{0}b^{n}\) with \(b\) a constant. In this paper, we choose \(a_{1}=1,a_{2}=-1+1/b\), \(a_{3}=-1/b\), and \(b\) in the range \((1.2,2)\). Here, \(f_{n}\) represents the forcing, which is employed at small \(n\)'s. This forcing injects energy at large scales, which cascades to small scales as the energy flux. Note that the triadic interactions of hydrodynamic turbulence are modelled better by the Sabra model than by the GOY model [31].
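A minimal numerical sketch of the Sabra right-hand side (1) with \(\lambda=1\) and the coefficients above; the zero-padding of boundary shells and the shell count are assumptions of this sketch. With \(\bar{\nu}=0\) and \(f_{n}=0\), the nonlinear term conserves the total energy \(\sum_{n}|u_{n}|^{2}\) (a consequence of \(a_{1}+a_{2}+a_{3}=0\)), which the final check verifies:

```python
import numpy as np

def sabra_rhs(u, k, nu, b, f):
    """Right-hand side of the Sabra model (1) with a1 = 1, a2 = -1 + 1/b,
    a3 = -1/b; boundary shells use zero-padded neighbours (an assumption)."""
    a1, a2, a3 = 1.0, -1.0 + 1.0 / b, -1.0 / b
    up = np.concatenate([u, [0.0, 0.0]])   # up[n+1] = u_{n+1}, up[n+2] = u_{n+2}
    um = np.concatenate([[0.0, 0.0], u])   # um[n+1] = u_{n-1}, um[n]   = u_{n-2}
    nl = np.empty(len(u), dtype=complex)
    for n in range(len(u)):
        nl[n] = (a1 * k[n] * np.conj(up[n + 1]) * up[n + 2]
                 + (a2 * k[n - 1] * np.conj(um[n + 1]) * up[n + 1] if n >= 1 else 0.0)
                 - (a3 * k[n - 2] * um[n + 1] * um[n] if n >= 2 else 0.0))
    return -nu * k**2 * u - 1j * nl + f

# With nu = 0 and f = 0, dE/dt = 2 Re<u, du/dt> vanishes up to round-off
N, b = 10, 2.0
k = b ** np.arange(N)
rng = np.random.default_rng(6)
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.exp(-np.arange(N))
f = np.zeros(N, dtype=complex)
dE_dt = 2.0 * np.real(np.vdot(u, sabra_rhs(u, k, 0.0, b, f)))
assert abs(dE_dt) < 1e-10
```

The same right-hand side can be fed to any standard time stepper (e.g. RK4) to reproduce forced-dissipative runs; the conservation check above only exercises the inviscid, unforced nonlinear term.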
The coupling constant (coefficient of the nonlinear term) \(\lambda\) is not renormalized due to the Galilean invariance [9; 2; 37], and we set \(\lambda=1\). Refer to Appendix A for details. In addition, we consider \(u_{n}\) to be random, as in fully-developed turbulence, rather than introducing a separate noise term in the inertial range [5; 6; 37; 38]. Thus, we avoid noise renormalization. In this self-consistent approach, we renormalize only the viscosity.
Following Wilson [15], we coarse-grain the system over a wavenumber shell, and compute the consequent correction to the viscosity. The wavenumber space is already divided in the shell model of turbulence, which makes the computation simpler than that for Navier-Stokes equation. The locality of interactions too simplifies the RG calculation. We denote the renormalized viscosity at wavenumber \(k_{n}\) by \(\nu_{n}\).
Renormalization is often performed in \(({\bf k},\omega)\) space. However, for the shell model, the renormalization calculation in \(({\bf k},t)\) space is concise and convenient. Hence, we adopt this scheme. In Appendix B, we will briefly discuss the renormalization of the shell model in \(({\bf k},\omega)\) space.
For computing the renormalized viscosity at \(k_{n}\) in the inertial range where \(f_{n}=0\), we coarse-grain the system by averaging over \(u_{n+1}^{*>}(t)\) and \(u_{n+2}^{>}(t)\) (see Fig. 1). Following RG convention, we label the variables to be averaged using \(>\) symbol, whereas those to be retained using \(<\) symbol. Under this notation,
\[\left(\frac{d}{dt}+\bar{\nu}k_{n}^{2}\right)u_{n}^{<}(t) = -i[a_{1}k_{n}u_{n+1}^{*>}(t)u_{n+2}^{>}(t) \tag{2}\] \[+a_{2}k_{n-1}u_{n-1}^{*<}(t)u_{n+1}^{>}(t)\] \[-a_{3}k_{n-2}u_{n-1}^{<}(t)u_{n-2}^{<}(t)].\]
The variables with \(<\) superscript remain unaltered under coarse-graining. However, \(u_{n+1}^{>}(t)\) and \(u_{n+2}^{>}(t)\) variables are assumed to be random with zero mean. Note that \(u^{>}\) variables need not be Gaussian. Under these assumptions,
\[\left\langle u_{n-1}^{*<}(t)u_{n+1}^{>}(t)\right\rangle = u_{n-1}^{*<}(t)\left\langle u_{n+1}^{>}(t)\right\rangle=0, \tag{3}\] \[\left\langle u_{n-2}^{<}(t)u_{n-1}^{<}(t)\right\rangle = u_{n-2}^{<}(t)u_{n-1}^{<}(t). \tag{4}\]
Based on the above simplification,
\[\left(\frac{d}{dt}+\bar{\nu}k_{n}^{2}\right)u_{n}^{<}(t)-ia_{3}k_ {n-2}u_{n-1}^{<}(t)u_{n-2}^{<}(t) \tag{5}\] \[=-ia_{1}k_{n}\left\langle u_{n+1}^{*>}(t)u_{n+2}^{>}(t)\right\rangle.\]
To compute the right-hand side (RHS) of Eq. (5), we evaluate \(u_{n+1}^{*>}(t)\) and \(u_{n+2}^{>}(t)\) using Green's function technique. For example,
\[u_{n+2}^{>}(t) = \int_{0}^{t}dt^{\prime}G_{n+2}(t-t^{\prime})\times(-i)[a_{1}k_{n+2} u_{n+3}^{*>}(t^{\prime})u_{n+4}^{>}(t^{\prime}) \tag{6}\] \[+a_{2}k_{n+1}u_{n+1}^{*>}(t^{\prime})u_{n+3}^{>}(t^{\prime})\] \[-a_{3}k_{n}u_{n}^{<}(t^{\prime})u_{n+1}^{>}(t^{\prime})],\]
where \(G_{n+2}(t-t^{\prime})\) is the Green's function. Note, however, that the terms involving \(u_{n+3}^{*>}(t)\) and \(u_{n+4}^{>}(t)\) do not contribute at this stage. Hence,
\[u_{n+2}^{>}(t) = \int_{0}^{t}dt^{\prime}G_{n+2}(t-t^{\prime})ia_{3}k_{n}u_{n}^{<}( t^{\prime})u_{n+1}^{>}(t^{\prime}). \tag{7}\]
Substitution of Eq. (7) in the RHS of Eq. (5) yields
\[I_{1} = \int_{0}^{t}dt^{\prime}G_{n+2}(t-t^{\prime})a_{1}a_{3}k_{n}^{2}u_{ n}^{<}(t^{\prime})\left\langle u_{n+1}^{*>}(t)u_{n+1}^{>}(t^{\prime})\right\rangle \tag{8}\] \[= a_{1}a_{3}k_{n}^{2}\int_{0}^{t}dt^{\prime}G_{n+2}(t-t^{\prime}) \bar{C}_{n+1}(t-t^{\prime})u_{n}^{<}(t^{\prime}),\]
where \(\bar{C}_{n+1}(t-t^{\prime})\) is the unequal-time correlation.
In the self-consistent RG procedure, it is assumed that the decay rates of the Green's and correlation functions are determined by the renormalized viscosity [9; 24]. Hence,
\[G_{n}(t-t^{\prime}) = \theta(t-t^{\prime})\exp[-\nu_{n}k_{n}^{2}(t-t^{\prime})], \tag{9}\] \[\bar{C}_{n}(t-t^{\prime}) = C_{n}(t)\exp[-\nu_{n}k_{n}^{2}(t-t^{\prime})], \tag{10}\]
where \(C_{n}(t)\) is equal-time correlation (\(t=t^{\prime}\)), and \(\theta(t-t^{\prime})\) is the step function. Note that \(G_{n}(\tau)\) and \(\bar{C}_{n}(\tau)\) decay
Figure 1: Division of wavenumber shells into \(<\) and \(>\) partitions during the computation of \(\nu_{n}\) at \(k=k_{n}\). Under the coarse-graining, the \(u^{<}\) variables are unaltered, whereas \(u^{>}\) variables are averaged out.
with a time scale of \(\tau_{c}=(\nu_{n}k_{n}^{2})^{-1}\). Equations (9, 10) are valid for \(\tau<\tau_{c}\), after which \(C_{n}\) and \(G_{n}\) decay rapidly to zero [1; 28; 29; 39; 40].
Substitution of \(G_{n}(t-t^{\prime})\) and \(\bar{C}_{n}(t-t^{\prime})\) of Eqs. (9, 10) in Eq. (8) yields
\[I_{1} = a_{1}a_{3}k_{n}^{2}C_{n+1}(t)\] \[\times\int_{0}^{t}dt^{\prime}\exp[-(\nu_{n+1}k_{n+1}^{2}+\nu_{n+2}k_{n+2}^{2})(t-t^{\prime})]u_{n}^{<}(t^{\prime}). \tag{11}\]
Now, we employ the Markovian approximation, according to which the integral of Eq. (11) gets maximal contributions from \(t^{\prime}\) near \(t\) [1]. This is possible when \(\nu_{n}k_{n}^{2}\gg 1\) [1]. Since the integral is peaked near \(t=t^{\prime}\), \(u_{n}(t^{\prime})\to u_{n}(t)\) and
\[I_{1} = \frac{a_{1}a_{3}k_{n}^{2}C_{n+1}(t)}{\nu_{n+1}k_{n+1}^{2}+\nu_{n+ 2}k_{n+2}^{2}}u_{n}^{<}(t). \tag{12}\]
Such assumptions are made in Eddy-damped Quasi-normal Markovian (EDQNM) approximation of hydrodynamic turbulence [1].
The RHS of Eq. (5) has another contribution to \(\nu_{n}\), which is computed by expanding \(u_{n+1}^{*>}(t)\) using the Green's function. Following a similar approach as above, we compute the new term as
\[I_{2} = \frac{a_{1}a_{2}k_{n}^{2}C_{n+2}(t)}{\nu_{n+1}k_{n+1}^{2}+\nu_{n+ 2}k_{n+2}^{2}}u_{n}^{<}(t). \tag{13}\]
The Feynman diagrams associated with \(I_{1}\) and \(I_{2}\) are exhibited in Fig. 2. Here, the loop-diagrams represent the _self-energy_ in which the wavy and solid lines are the Green's function and correlation function respectively.
These calculations reveal that the RHS of Eq.(5) is proportional to \(u_{n}^{<}\). Hence, the prefactors of \(I_{1}\) and \(I_{2}\) will provide corrections to \(\bar{\nu}\) to yield \(\nu_{n}\). That is,
\[\nu_{n}k_{n}^{2}=\bar{\nu}k_{n}^{2}-\frac{a_{1}k_{n}^{2}[a_{3}C_{n+1}(t)+a_{2} C_{n+2}(t)]}{\nu_{n+1}k_{n+1}^{2}+\nu_{n+2}k_{n+2}^{2}}. \tag{14}\]
Note, however, that \(\bar{\nu}\ll\nu_{n}\). Hence,
\[\nu_{n}k_{n}^{2}=-\frac{a_{1}k_{n}^{2}[a_{3}C_{n+1}(t)+a_{2}C_{n+2}(t)]}{\nu_{ n+1}k_{n+1}^{2}+\nu_{n+2}k_{n+2}^{2}}. \tag{15}\]
Note that we compute the renormalized viscosity at each coarse-graining step. At the present level, \(\nu_{n+1},\nu_{n+2},...\) have already been computed, whereas \(\nu_{n-1},\nu_{n-2},...\) would be computed at subsequent stages. Also note that during the computation of \(\nu_{n-1}\), \(u_{n}^{>}\) and \(u_{n+1}^{>}\) would belong to the \(>\) shells.
In Eq. (15), \(\nu_{n}\) and \(C_{n}\) are both unknowns. The RG equation for the Navier-Stokes equation too has a similar implicit form. Zhou _et al._ [38], and McComb and Shanmugasundaram [6] employed a self-consistent procedure to solve such an implicit equation (also see [5; 37; 41]). Following these authors, we attempt the following forms for \(C_{n}(t)\) and \(\nu_{n}\), which are inspired by Kolmogorov's theory of turbulence:
\[C_{n}(t) = K_{\rm Ko}\epsilon^{2/3}k_{n}^{-2/3}, \tag{16}\] \[\nu_{n}k_{n}^{2} = \nu_{*}K_{\rm Ko}^{1/2}\epsilon^{1/3}k_{n}^{2/3}, \tag{17}\]
where \(K_{\rm Ko}\) is Kolmogorov's constant, \(\epsilon\) is the viscous dissipation rate, and \(\nu_{*}\) is the RG constant associated with \(\nu_{n}\). Substitution of the above in Eq. (15) yields
\[\nu_{*}^{2}=-\frac{a_{1}(a_{3}b^{-2/3}+a_{2}b^{-4/3})}{b^{2/3}+b^{4/3}}. \tag{18}\]
In Fig. 3, we plot \(\nu_{*}\) for \(b\) ranging from 1.2 to 2.0. Here, \(\nu_{*}\approx 0.5\); in particular, \(\nu_{*}\approx 0.48\) for \(b=1.5\). The \(\nu_{*}\) computed above is remarkably close to that for the Navier-Stokes equation [5; 37; 38; 41; 42; 43], which gives credence to the RG computation described in this paper.
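Since Eq. (18) is closed-form, the \(\nu_{*}(b)\) curve of Fig. 3 can be reproduced in a few lines; a minimal sketch (Python/NumPy):

```python
import numpy as np

def nu_star(b):
    # RG constant from Eq. (18), with a1 = 1, a2 = -1 + 1/b, a3 = -1/b
    a1, a2, a3 = 1.0, -1.0 + 1.0 / b, -1.0 / b
    nu2 = -a1 * (a3 * b ** (-2 / 3) + a2 * b ** (-4 / 3)) \
          / (b ** (2 / 3) + b ** (4 / 3))
    return float(np.sqrt(nu2))

# Sample the range of b used in the paper
curve = {round(b, 1): nu_star(b) for b in np.arange(1.2, 2.01, 0.1)}
```

Evaluating `nu_star(1.5)` gives approximately 0.48, the value quoted for \(b=1.5\).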
It is important to note that the above derivation does not require quasi-Gaussian assumption for \(u^{>}\) variables. We only need to assume time-stationarity for these variables. In addition, we approximate \(\langle u^{<}u^{>}\rangle=u^{<}\langle u^{>}\rangle=0\), rather than expanding it further. These assumptions and local interactions in the shell model provide simplification in comparison to the RG calculations for the Navier-Stokes equation [5; 6; 37; 38; 41].
Equation (17) yields
\[\nu_{n+1}/\nu_{n}=(k_{n+1}/k_{n})^{-4/3}=b^{-4/3}. \tag{19}\]
As is customary in quantum field theory [44], we make a change of variable as \(b=\exp(l)\), with which
\[\nu_{n+1}=\nu_{n}\exp(-4l/3)\approx\nu_{n}[1-4l/3], \tag{20}\]
when \(b\to 1\) or \(l\to 0\). Hence,
\[\frac{d\nu}{dl}\approx-\frac{4}{3}\nu. \tag{21}\]
Figure 2: Feynman diagrams associated with the viscosity renormalization. These diagrams are related to the RHS of Eq. (5).
Therefore, \(\nu_{n}\) increases with the decrease of \(k_{n}\), akin to the running coupling constant in quantum chromodynamics. Note, however, that \(\nu_{n}\) is not the coupling constant; instead, it is the coefficient of the viscous term, which is linear (analogous to the mass term in quantum field theory). We remark that the scaling of Eqs. (16, 17, 21) breaks down when \(k\to 1/L\), where \(L\) is the system size.
The dominant frequency at \(k=k_{n}\) is
\[\omega_{n}\sim\nu_{n}k_{n}^{2}\sim\epsilon^{1/3}k_{n}^{2/3}. \tag{22}\]
For small \(k_{n}\), \(\omega_{n}\to 0\). This is one of the assumptions of RG schemes in \(({\bf k},\omega)\) space. Refer to Appendix B for details.
For \(\bar{\nu}=0\), \(f_{n}=0\), and \(\delta\)-correlated (white noise) initial condition, \(u_{n}\) remains \(\delta\)-correlated, as in Euler turbulence [30, 45, 46]. Therefore, \(\left<u_{n+1}^{*>}(t)u_{n+2}^{>}(t)\right>=0\) [see Eq. (5)], leading to no correction or renormalization of the viscosity. Thus, \(\nu_{n}=0\) for the inviscid shell model. This solution corresponds to the _Gaussian fixed point_ in Wilson's \(\phi^{4}\) theory [27].
In Sec. III, we will compute the energy flux for the shell model using field-theoretic techniques.
## III Energy Flux Computation
In this section, we compute the energy flux for the shell model perturbatively. The energy flux at \(k=k_{n}\) is defined as [33, 35, 47]
\[\Pi_{n} = 2a_{3}k_{n-1}\Im[\langle u_{n-1}^{*}(t)u_{n}^{*}(t)u_{n+1}(t) \rangle] \tag{23}\] \[-2a_{1}k_{n}\Im[\langle u_{n}^{*}(t)u_{n+1}^{*}(t)u_{n+2}(t) \rangle].\]
We compute \(\langle\Pi_{n}\rangle\) by averaging Eq. (23) under the assumption that \(u_{n}(t)\)'s in the inertial range are quasi-Gaussian with zero mean, an assumption used in eddy-damped quasi-normal Markovian (EDQNM) approximation and in direct interaction approximation (DIA) [1, 23]. To zeroth order, \(\langle\Pi_{n}\rangle=0\), which is the energy flux for Euler turbulence; this flux corresponds to the Gaussian fixed point, \(\nu=0\).
However, \(\langle\Pi_{n}\rangle\neq 0\) to the first order of perturbation. The Feynman diagrams associated with the first order in perturbation for the first and second terms of Eq. (23) are exhibited in Figs. 4 and 5 respectively. Let us analyze the expansion of the first Feynman diagram of Fig. 4. Here, \(u_{n+1}(t)\) has been expanded as
\[u_{n+1}(t) = \int_{0}^{t}dt^{\prime}G_{n+1}(t-t^{\prime})[-ia_{1}k_{n+1}u_{n+2 }^{*}(t^{\prime})u_{n+3}(t^{\prime}) \tag{24}\] \[-ia_{2}k_{n}u_{n}^{*}(t^{\prime})u_{n+2}(t^{\prime})\] \[+ia_{3}k_{n-1}u_{n}(t^{\prime})u_{n-1}(t^{\prime})],\]
substitution of which in the first term of Eq. (23), or in
Figure 3: For the shell model with various \(b\)’s, the RG constant \(\nu_{*}\) computed using RG (solid blue curve) and using numerical simulations (blue circles). Also, Kolmogorov’s constant \(K_{\rm Ko}\) computed using field theory (solid red curve) and using numerical simulations (red squares). The analytical and numerical \(\nu_{*}\)’s match quite well, but numerical \(K_{\rm Ko}\) is around 1.6 times smaller than the analytical counterpart.
Figure 4: Feynman diagrams associated with the first term of Eq. (23).
the first Feynman diagram of Fig. 4, yields
\[I_{3} = 2a_{3}^{2}k_{n-1}^{2}\int_{0}^{t}dt^{\prime}\exp[-\nu_{n+1}k_{n+1}^ {2}(t-t^{\prime})] \tag{25}\] \[\times\Im[i\left<u_{n}^{*}(t)u_{n-1}^{*}(t)u_{n}(t^{\prime})u_{n-1 }(t^{\prime})\right>]\] \[= 2a_{3}^{2}k_{n-1}^{2}\int_{0}^{t}dt^{\prime}\exp[-\nu_{n+1}k_{n+1 }^{2}(t-t^{\prime})]\] \[\times\left<u_{n}^{*}(t)u_{n}(t^{\prime})\right>\left<u_{n-1}^{*} (t)u_{n-1}(t^{\prime})\right>\] \[= 2a_{3}^{2}k_{n-1}^{2}\frac{C_{n}C_{n-1}}{\nu_{n-1}k_{n-1}^{2}+ \nu_{n}k_{n}^{2}+\nu_{n+1}k_{n+1}^{2}}.\]
In the above derivation, we use the following properties:
1. \(\left<u_{n}(t)u_{m}^{*}(t^{\prime})\right>=\delta_{n,m}C_{n}(t)\exp[-\nu_{n}k_ {n}^{2}(t-t^{\prime})]\).
2. \(\left<abcd\right>=\left<ab\right>\left<cd\right>+\left<ac\right>\left<bd \right>+\left<ad\right>\left<bc\right>\) when \(a,b,c,d\) are Gaussian variables.
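Property 2 is the Isserlis (Wick) theorem for zero-mean Gaussian variables, and it is easily verified by Monte Carlo. In the sketch below, the lower-triangular mixing matrix used to generate correlated Gaussians is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
g = rng.standard_normal((4, 500_000))
# Arbitrary mixing to produce four correlated zero-mean Gaussian variables
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.3, 0.4, 1.0, 0.0],
              [0.2, 0.1, 0.6, 1.0]])
w, x, y, z = M @ g

# <abcd> versus the sum over the three pairings
lhs = np.mean(w * x * y * z)
rhs = (np.mean(w * x) * np.mean(y * z)
       + np.mean(w * y) * np.mean(x * z)
       + np.mean(w * z) * np.mean(x * y))
```

With \(5\times 10^{5}\) samples, `lhs` and `rhs` agree to within the Monte Carlo error.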
Using similar analysis, we derive the other integrals of the energy flux as
\[I_{4} = 2a_{2}a_{3}k_{n-1}^{2}C_{n-1}C_{n+1}/\text{denr1.}, \tag{26}\] \[I_{5} = 2a_{1}a_{3}k_{n-1}^{2}C_{n}C_{n+1}/\text{denr1.},\] (27) \[I_{6} = -2a_{1}^{2}k_{n}^{2}C_{n+1}C_{n+2}/\text{denr2.},\] (28) \[I_{7} = -2a_{1}a_{2}k_{n}^{2}C_{n}C_{n+2}/\text{denr2.},\] (29) \[I_{8} = -2a_{1}a_{3}k_{n}^{2}C_{n}C_{n+1}/\text{denr2.}, \tag{30}\]
with
\[\text{denr1.}=\nu_{n-1}k_{n-1}^{2}+\nu_{n}k_{n}^{2}+\nu_{n+1}k_{ n+1}^{2}, \tag{31}\] \[\text{denr2.}=\nu_{n}k_{n}^{2}+\nu_{n+1}k_{n+1}^{2}+\nu_{n+2}k_{ n+2}^{2}. \tag{32}\]
The integrals \(I_{3},I_{4},I_{5}\) correspond to the first term of Eq. (23), whereas \(I_{6},I_{7},I_{8}\) correspond to the second term of Eq. (23). By adding \(I_{3}\) to \(I_{8}\) and using \(k_{n}=k_{0}b^{n}\), we derive
\[\left<\Pi_{n}\right>=\epsilon=\frac{K_{\text{Ko}}^{3/2}}{\nu_{*}}\frac{\text{ numr}}{1+b^{2/3}+b^{4/3}}, \tag{33}\]
where
\[\text{numr} = 2a_{3}b^{-4/3}(a_{1}b^{-2/3}+a_{2}+a_{3}b^{2/3}) \tag{34}\] \[-2a_{1}(a_{1}b^{-2}+a_{2}b^{-4/3}+a_{3}b^{-2/3}).\]
Equation (33) reveals that the energy flux is independent of wavenumber, consistent with Kolmogorov's theory of turbulence [48; 49; 50]. Using Eq. (33), we compute \(K_{\text{Ko}}\) and plot it in Fig. 3. We observe \(K_{\text{Ko}}\) to be a weak function of \(b\). In particular, for \(b=1.5\), \(K_{\text{Ko}}\approx 1.71\), which is close to the theoretical, experimental, and numerical values of Kolmogorov's constant [50; 37].
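The quoted value \(K_{\rm Ko}\approx 1.71\) for \(b=1.5\) follows by solving Eq. (33) for \(K_{\rm Ko}\) with \(\langle\Pi_{n}\rangle=\epsilon\) and \(\nu_{*}\) taken from Eq. (18); a short check (Python):

```python
def kolmogorov_constant(b):
    """Solve Eq. (33) for K_Ko, using nu_* from Eq. (18)."""
    a1, a2, a3 = 1.0, -1.0 + 1.0 / b, -1.0 / b
    nu_s = (-a1 * (a3 * b ** (-2 / 3) + a2 * b ** (-4 / 3))
            / (b ** (2 / 3) + b ** (4 / 3))) ** 0.5
    numr = (2 * a3 * b ** (-4 / 3) * (a1 * b ** (-2 / 3) + a2 + a3 * b ** (2 / 3))
            - 2 * a1 * (a1 * b ** -2 + a2 * b ** (-4 / 3) + a3 * b ** (-2 / 3)))
    # Eq. (33) with <Pi_n> = eps:  1 = (K^{3/2}/nu_*) * numr / (1 + b^{2/3} + b^{4/3})
    K32 = nu_s * (1 + b ** (2 / 3) + b ** (4 / 3)) / numr
    return K32 ** (2 / 3)
```

For example, `kolmogorov_constant(1.5)` evaluates to roughly 1.72, consistent with the value quoted in the text.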
## IV Sweeping effect in shell model
Kraichnan [30] showed that large-scale flow structures sweep smaller ones, a phenomenon called the _sweeping effect_. Here, large-scale velocity structures interact with small-scale ones. Kraichnan [30] observed that the sweeping effect leads to a \(k^{-3/2}\) energy spectrum, rather than the usual \(k^{-5/3}\) spectrum. To overcome this discrepancy, Kraichnan [51] proposed the Lagrangian-History Closure Approximation for turbulence. Note that the shell model involves only local interactions, which drastically reduce the sweeping effect.
To test the sweeping effect in the field-theoretic calculation of shell model, we introduce a term \(iU_{0}k_{n}u_{n}\) in the left-hand side of Eq. (1), where \(U_{0}\), a constant, represents the mean flow. Under renormalization, the above term appears as \(iU_{n}k_{n}u_{n}^{<}\), where \(U_{n}\) represents the renormalized parameter corresponding to \(U_{0}\). With \(U_{n}\), the RG flow equation [Eq. (15)] gets transformed to
\[iU_{n}k_{n}+\nu_{n}k_{n}^{2}=-\frac{a_{1}k_{n}^{2}[a_{3}C_{n+1}+a _{2}C_{n+2}]}{\text{denr.}}, \tag{35}\] \[\text{denr.}=i[U_{n+1}k_{n+1}+U_{n+2}k_{n+2}]+\nu_{n+1}k_{n+1}^{2} +\nu_{n+2}k_{n+2}^{2}. \tag{36}\]
Using dimensional analysis, we argue that
\[U_{n}=U_{*}\epsilon^{1/3}k_{n}^{-1/3}, \tag{37}\]
Substitution of Eqs. (16), (17), and (37) in Eq. (35) yields
\[(iU_{*}+\nu_{*})^{2}=-\frac{a_{1}[a_{3}b^{-2/3}+a_{2}b^{-4/3}]}{b^{2/3}+b^{4/3}}. \tag{38}\]
Figure 5: Feynman diagrams associated with the second term of Eq. (23).
The only possible solution of Eq. (38) is
\[U_{*}=0,\text{ or }U_{n}=0, \tag{39}\]
and \(\nu_{*}\) is given by the same formula as Eq. (18). Hence, the sweeping effect is absent in the RG calculation of the shell model, and \(\nu(k)\) is independent of \(U_{0}\). However, in Sec. V, we show that the numerical results deviate from the above prediction.
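The conclusion \(U_{*}=0\) can be seen directly from Eq. (38): its right-hand side equals \(\nu_{*}^{2}>0\) of Eq. (18), so the root \(z=iU_{*}+\nu_{*}\) with \(\nu_{*}>0\) is purely real. A quick numerical confirmation (Python; the choice \(b=1.5\) is for illustration):

```python
import cmath

b = 1.5
a1, a2, a3 = 1.0, -1.0 + 1.0 / b, -1.0 / b
# Right-hand side of Eq. (38); real and positive for this range of b
rhs = -a1 * (a3 * b ** (-2 / 3) + a2 * b ** (-4 / 3)) / (b ** (2 / 3) + b ** (4 / 3))
z = cmath.sqrt(rhs)              # principal root of z^2 = rhs, with Re z > 0
U_star, nu_s = z.imag, z.real    # z = i U_* + nu_*
```

The root is purely real, so \(U_{*}=0\) and \(\nu_{*}\) reduces to the value of Eq. (18).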
## V Numerical verification
To test the predictions of the above field-theoretic calculations, we solve the Sabra shell model, Eq. (1), numerically. We employ 40 shells, \(\nu=10^{-6}\), \(U_{0}=0\), and the fourth-order Runge-Kutta (RK4) time-marching scheme with \(dt=10^{-5}\). The shell model is forced randomly at shells \(n=0\) and 1 so as to provide a constant energy supply rate; we choose \(\epsilon=2\) for all our runs. To test the dependence of \(\nu\) and \(K_{\text{Ko}}\) on \(b\), we vary \(b\) from 1.2 to 2 in steps of 0.1. We also perform another simulation with \(U_{0}=0.5\) and \(b=1.5\) to test the field-theoretic predictions on the sweeping effect. We carry out the simulations for 2000 eddy turnover times, and report the energy spectra and fluxes after the system has reached a steady state.
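A stripped-down version of such a simulation is sketched below (Python/NumPy). The parameters here are illustrative only — 10 shells, inviscid, unforced, and zero-padded boundary shells — not the 40-shell forced runs described above. In this inviscid, unforced setting the nonlinear term conserves the total energy, which provides a sanity check on the RK4 integrator:

```python
import numpy as np

def sabra_rhs(u, k, b, nu=0.0):
    # Sabra right-hand side, Eq. (1), with lambda = 1 and f_n = 0
    a1, a2, a3 = 1.0, -1.0 + 1.0 / b, -1.0 / b
    N = u.size
    up = np.zeros(N + 4, dtype=complex); up[2:-2] = u
    kp = np.zeros(N + 4); kp[2:-2] = k
    n = np.arange(2, N + 2)
    return (-1j * (a1 * kp[n] * np.conj(up[n + 1]) * up[n + 2]
                   + a2 * kp[n - 1] * np.conj(up[n - 1]) * up[n + 1]
                   - a3 * kp[n - 2] * up[n - 1] * up[n - 2])
            - nu * kp[n] ** 2 * up[n])

def rk4_step(u, dt, k, b, nu=0.0):
    # Classical fourth-order Runge-Kutta step
    k1 = sabra_rhs(u, k, b, nu)
    k2 = sabra_rhs(u + 0.5 * dt * k1, k, b, nu)
    k3 = sabra_rhs(u + 0.5 * dt * k2, k, b, nu)
    k4 = sabra_rhs(u + dt * k3, k, b, nu)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(1)
b, N, dt = 1.5, 10, 5e-4
k = b ** np.arange(1, N + 1)
u = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
E0 = float(np.sum(np.abs(u) ** 2))
for _ in range(200):                      # inviscid, unforced evolution
    u = rk4_step(u, dt, k, b)
energy_drift = abs(float(np.sum(np.abs(u) ** 2)) - E0) / E0
```

The relative energy drift stays far below round-off-limited RK4 accuracy for these step sizes, confirming that the discretized nonlinear term conserves \(\sum_{n}|u_{n}|^{2}\).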
As expected, for \(U_{0}=0\) and finite \(\nu\), in the inertial range, the energy spectrum \(C_{n}\sim k^{-2/3}\) and the energy flux \(\Pi_{n}\approx\epsilon=2\) [33; 35; 36]. See the red curves of Fig. 6 for an illustration for \(b=1.5\). Using Eq. (16), we compute Kolmogorov's constant for various \(b\)'s and plot the values in Fig. 3. We observe that for \(b=1.5\), \(K_{\text{Ko}}=1.05\), which is approximately 1.6 times smaller than the theoretically predicted value of 1.71 (for the shell model). See Fig. 3 for an illustration. This discrepancy between the numerical and analytical \(K_{\text{Ko}}\) is possibly due to the various approximations employed in the theoretical calculations, an issue that needs a closer investigation.
For the \(U_{0}=0.5\) run, we again observe Kolmogorov's spectrum (apart from a hump) and constant energy flux (blue curves in Fig. 6). Here, \(K_{\text{Ko}}\approx 0.92\). For the special case with \(\bar{\nu}=0\) and white noise initial condition, numerical simulation yields \(C_{n}\approx\text{constant}\) and nearly zero energy flux, consistent with the field-theoretic predictions. We illustrate the above energy spectrum and flux in the insets of Fig. 6.
To validate the renormalized viscosity of Eq. (17), we compute \(\nu_{n}\) numerically using the normalized correlation function \(R_{n}(\tau)\), which is defined as
\[R_{n}(\tau)=\frac{\bar{C}_{n}(\tau)}{C_{n}}, \tag{40}\]
where \(\bar{C}_{n}(\tau)\) and \(C_{n}\) are the unequal-time and equal-time correlations, respectively [see Eq. (10)].
We observe that the numerically-computed \(R_{n}(\tau)\) is real. For small \(\tau\) and inertial range \(k_{n}\)'s, \(R_{n}(\tau)\approx\exp[-k_{n}^{2/3}\tau]\), which is consistent with Eqs. (10, 17). As illustrated in Fig. 7, for \(b=1.5\) and \(U_{0}=0\),
\[R_{n}(\tau)\approx 1.03\exp(-0.57k_{n}^{2/3}\tau) \tag{41}\]
when \(\nu_{n}k_{n}^{2}\tau\lessapprox 1\). A comparison of Eq. (41) with Eq. (10) reveals that \(\nu_{*}\approx 0.57/(K_{\text{Ko}}^{1/2}\epsilon^{1/3})\approx 0.44\), which is in good agreement with the RG prediction of 0.48 (see Sec. II). However, we cautiously remark that the numerical \(\nu_{*}\) has significant errors.
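The extraction of \(\nu_{*}\) from \(R_{n}(\tau)\) amounts to fitting an exponential decay and inverting Eq. (17). The sketch below illustrates the procedure on synthetic data standing in for the simulation output (the decay rate, noise level, and shell index are assumed purely for illustration):

```python
import numpy as np

# Assumed parameters mimicking the b = 1.5, U_0 = 0 run
nu_s, K, eps, b, n = 0.48, 1.05, 2.0, 1.5, 7
k_n = b ** n
decay = nu_s * np.sqrt(K) * eps ** (1 / 3) * k_n ** (2 / 3)   # nu_n k_n^2, Eq. (17)

rng = np.random.default_rng(2)
tau = np.linspace(0.0, 1.0 / decay, 50)                       # nu_n k_n^2 tau <~ 1
R = np.exp(-decay * tau) * (1.0 + 0.01 * rng.standard_normal(tau.size))

# Fit log R = -gamma * tau, then invert Eq. (17) to recover nu_*
gamma = -np.polyfit(tau, np.log(R), 1)[0]
nu_star_fit = gamma / (np.sqrt(K) * eps ** (1 / 3) * k_n ** (2 / 3))
```

With low noise, the fitted \(\nu_{*}\) recovers the input value to within a few percent; on real simulation data the scatter is, of course, considerably larger.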
For \(U_{0}=0.5\), we compute \(R_{n}(\tau)\) and fit it with \(\exp(-\nu_{n}k_{n}^{2}\tau)\). In Fig. 8 we plot \(R_{n}(\tau)\) for shells \(n=5\) to 9. We observe that \(R_{n}(\tau)\) for \(U_{0}=0.5\) is steeper than the RG predictions. This is contrary to the RG prediction that the mean flow does not affect the renormalized viscosity. Clearly, the analytical computation underestimates the dissipation arising due to nonzero \(U_{0}\). This issue needs a closer examination, which will be pursued in the future.
In Sec. VI, we compare our results with earlier RG works on the shell model and hydrodynamic turbulence.
Figure 6: For the numerical simulation of shell model with \(b=1.5\): (a) plots of normalized energy spectra \(C_{n}\epsilon^{-2/3}k_{n}^{2/3}\) vs. \(k_{n}\) for \(U_{0}=0\) (red curve) and \(U_{0}=0.5\) (blue curve). (b) The corresponding energy fluxes \(\Pi_{n}\) are shown using the same color convention. We observe \(C_{n}\sim k_{n}^{-2/3}\) and constant \(\Pi_{n}\) in the inertial range. The insets in (a,b) exhibit \(C_{n}\) and \(\Pi_{n}\) for the \(\nu=0\) case (equilibrium behaviour).
## VI Comparison with earlier works
There are only a few works on renormalization group analysis of the shell model. Recently, Fontaine _et al._ [18] performed a functional renormalization group (FRG) analysis of the shell model and computed the multiscaling exponents. They observed that
\[C_{n}\sim k_{n}^{-\alpha_{E}}, \tag{42}\] \[\nu_{n}k_{n}^{2}\sim k_{n}^{\alpha_{\nu}}, \tag{43}\]
with \(\alpha_{E}=0.633\pm 0.004\) and \(\alpha_{\nu}=-0.741\pm 0.01\). Substitution of the above in the renormalization group equation [Eq. (15)] yields
\[\alpha_{E}+2\alpha_{\nu}=2. \tag{44}\]
Note that Fontaine _et al._[18]'s \(\alpha_{E}\) and \(\alpha_{\nu}\) satisfy Eq. (44) to a good approximation. Fontaine _et al._[18] reported that the proportionality constant for \(C_{2}(\tau=0)\), which is \(K_{\rm Ko}\), is approximately 1.15.
In a different application of field theory, Eyink [10] employed operator product expansion to the shell model and computed various correlations and structure functions. Note, however, that Fontaine _et al._[18] and Eyink [10] do not report the RG constant \(\nu_{*}\) in their calculation. We remark that the multiscaling exponents are related to the fluctuations in the energy flux, i.e., for \(\left\langle\Pi_{n}^{2}\right\rangle\)[52; 53; 42]. The self-consistent calculation presented in this paper may be extendible to the computation of \(\left\langle\Pi_{n}^{2}\right\rangle\).
It is important to compare our predictions on \(\nu_{*}\) and \(K_{\rm Ko}\) with the past works on hydrodynamic turbulence. Yakhot and Orszag [3] observed that \(\nu_{*}=0.39\) and \(K_{\rm Ko}=1.617\). McComb and Shanmugasundaram [7] computed that \(\nu_{*}\approx 0.40\) and \(K_{\rm Ko}\approx 1.8\). Zhou _et al._ [38] also reported \(\nu_{*}\approx 0.40\). Our field-theoretic computation of the shell model yields \(\nu_{*}\approx 0.50\) and \(K_{\rm Ko}\approx 1.7\), with minor variations depending on the value of \(b\). Using ERG, Tomassini [17] showed that \(E(k)\sim k^{-1.666\pm 0.001}\), whereas \(K_{\rm Ko}\) lies in the range of 1.124 to 1.785 depending on the chosen function. Clearly, the shell model predictions are reasonably close to those of the earlier works on hydrodynamic turbulence.
There are subtle differences between the RG schemes for the shell model and hydrodynamic turbulence. The RG procedure for the shell model does not involve any integral, and hence is simpler than that for hydrodynamic turbulence. In addition, we make fewer assumptions in the RG implementation of the shell model. For example, the \(u^{>}\) variables are assumed to be time-stationary, but not necessarily quasi-Gaussian. Note that many past RG works assume that \(u^{>}\) is quasi-Gaussian (see e.g., [9]). In addition, the RG computation of the shell model is nearly exact. In Eq. (5), we substitute the expansions of \(u_{n+1}^{*>}\) and \(u_{n+2}^{>}\) one after the other, and then solve for \(\nu_{n}\) under the Markovian approximation. Also, note that the local interactions in the shell model suppress the _sweeping effect_ proposed by Kraichnan [30].
We conclude in the next section.
## VII Conclusions
In this paper, we employ RG analysis to the shell model of turbulence, and show that a combination of Kolmogorov's spectrum \(C_{n}=K_{\rm Ko}\epsilon^{2/3}k_{n}^{-2/3}\) and \(\nu_{n}=\nu_{*}K_{\rm Ko}^{1/2}\epsilon^{1/3}k_{n}^{-4/3}\) is a solution of the RG flow equation. Our calculations predict that for \(b=1.5\), \(\nu_{*}\approx 0.48\) and \(K_{\rm Ko}\approx 1.71\), which are in good agreement with the numerical results, except that the numerical \(K_{\rm Ko}\) is around 1.6 times smaller than the theoretical prediction. Note that the field-theoretic predictions for the shell model and the Navier-Stokes equation are close to each other [37; 6; 38].
The computation employed in this paper can be easily generalized to the shell models for scalar and magneto-hydrodynamic turbulence. We also believe that the fluctuations in the energy flux for the shell model could be computed using the method outlined in this paper.
###### Acknowledgements.
We thank Soumyadeep Chatterjee for help in drawing the Feynman diagrams of the paper. We thank the anonymous referees for useful comments and suggestions. This work is supported by the project 6104-1 from the Indo-French Centre for the Promotion of Advanced Research (IFCPAR/CEFIPRA), and the project PHY/DST/2020455 by the Department of Science and Technology, India.
## Appendix A Galilean invariance leads to non-renormalizability of the coupling constant
It can be easily shown that the coupling constant \(\lambda\) of the Navier-Stokes equation (NSE) remains unchanged under renormalization due to Galilean invariance [2; 9; 37]. Here, the derivation is reproduced in brief.
We write the renormalized Navier-Stokes equation as
\[\partial_{t}{\bf u}({\bf x},t)+\lambda{\bf u}({\bf x},t)\cdot\nabla{\bf u}({ \bf x},t)=-\nabla p({\bf x},t)+\nu\nabla^{2}{\bf u}({\bf x},t), \tag{35}\]
where \({\bf u}({\bf x},t),p({\bf x},t)\) are the velocity and pressure fields respectively, \(\lambda\) is a measure of the nonlinear interaction, and \(\nu\) is the kinematic viscosity. Note that \(\lambda=1\) for the original NSE, but it may get renormalized under scaling.
We consider two reference frames: the _lab reference frame_, where the fluid has mean velocity \({\bf U}_{0}=U_{0}\hat{x}\), and the _moving reference frame_, where the velocity is \({\bf u}^{\prime}({\bf x}^{\prime},t^{\prime})\) with zero mean. We denote the variables in the lab frame using unprimed variables, and those in the moving frame using primed variables. The variables in the two reference frames are related to each other via the Galilean transformation, which is
\[x=x^{\prime}+U_{0}t^{\prime};\quad y=y^{\prime};\quad z=z^{\prime};\quad t=t^{ \prime}; \tag{36}\]
\[\partial_{x}=\partial_{x^{\prime}};\quad\partial_{y}=\partial_{y^{\prime}};\quad\partial_{z}=\partial_{z^{\prime}};\quad\partial_{t}=\partial_{t^{\prime}}-U_{0}\partial_{x^{\prime}}; \tag{37}\]
\[{\bf u}({\bf x},t)=U_{0}\hat{x}+{\bf u}^{\prime}({\bf x}^{\prime},t^{\prime}); \quad p({\bf x},t)=p^{\prime}({\bf x}^{\prime},t^{\prime}), \tag{38}\]
substitution of which in Eq. (35) yields
\[\partial_{t^{\prime}}{\bf u}^{\prime}({\bf x}^{\prime},t^{\prime}) +\lambda[U_{0}\hat{x}+{\bf u}^{\prime}({\bf x}^{\prime},t^{\prime})]\cdot \nabla^{\prime}{\bf u}^{\prime}({\bf x}^{\prime},t^{\prime})\] \[-U_{0}\partial_{x^{\prime}}{\bf u}^{\prime}({\bf x}^{\prime},t^{ \prime})=-\nabla^{\prime}p^{\prime}({\bf x},t)+\nu\nabla^{\prime 2}{\bf u}^{ \prime}({\bf x}^{\prime},t^{\prime}). \tag{39}\]
Note that Eq. (39) is transformed to Eq. (35) in primed variables only if
\[\lambda=1. \tag{40}\]
Thus, it has been shown that \(\lambda\) is unchanged under RG due to Galilean invariance. For further discussion, refer to Forster _et al._[2] and McComb [9; 37].
The shell model is written in Fourier space. Hence, it is not possible to extend the above derivation to the shell model. However, using the analogy between the shell model and Navier-Stokes equation, it is reasonable to assume that \(\lambda=1\) for the shell model, and that \(\lambda\) remains unaltered under RG operation.
## Appendix B Renormalization of shell model in \(({\bf k},\omega)\) space
In this Appendix, we will briefly discuss renormalization of the shell model in \(({\bf k},\omega)\) space. Note that the shell model is already divided in \({\bf k}\) space. The forward and inverse Fourier transforms of \(u_{n}\) are defined as follows:
\[u_{n}(t) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}u_{n}(\omega)\exp[-i \omega t], \tag{41}\] \[u_{n}(\omega) = \int_{-\infty}^{\infty}dt\;u_{n}(t)\exp[i\omega t]. \tag{42}\]
Fourier transform of Eq. (1) yields the following equation for \(u_{n}^{<}(\omega)\):
\[(-i\omega+\bar{\nu}k_{n}^{2})u_{n}^{<}(\omega) = -i\int\frac{d\omega^{\prime}}{2\pi}[a_{1}k_{n}u_{n+1}^{*>}( \omega^{\prime}-\omega)u_{n+2}^{>}(\omega^{\prime}) \tag{43}\] \[+a_{2}k_{n-1}u_{n-1}^{*<}(\omega^{\prime}-\omega)u_{n+1}^{>}( \omega^{\prime})\] \[-a_{3}k_{n-2}u_{n-2}^{<}(\omega-\omega^{\prime})u_{n-1}^{<}( \omega^{\prime})].\]
We perform ensemble averaging over \(u_{n+1}^{>}\) and \(u_{n+2}^{>}\) variables. Following the method of Sec. II, we arrive at
\[\left<u_{n-1}^{*<}(\omega^{\prime}-\omega)u_{n+1}^{>}(\omega^{ \prime})\right> = 0, \tag{44}\] \[\left<u_{n-2}^{<}(\omega-\omega^{\prime})u_{n-1}^{<}(\omega^{ \prime})\right> = u_{n-2}^{<}(\omega-\omega^{\prime})u_{n-1}^{<}(\omega^{\prime}).\]
Consequently, only the first term of Eq. (43) yields a nonzero correction to the viscosity.
The renormalized viscosity receives contributions from the two Feynman diagrams of Fig. 2. For the first loop diagram, we expand \(u_{n+2}^{>}(\omega^{\prime})\) as follows
\[(-i\omega^{\prime}+\nu_{n+2}k_{n+2}^{2})u_{n+2}^{>}(\omega^{\prime })=\] \[-i\int\frac{d\omega^{\prime\prime}}{2\pi}[a_{1}k_{n+2}u_{n+3}^{*>} (\omega^{\prime\prime}-\omega^{\prime})u_{n+4}^{>}(\omega^{\prime\prime})\] \[+a_{2}k_{n+1}u_{n+1}^{*>}(\omega^{\prime\prime}-\omega^{ \prime})u_{n+3}^{>}(\omega^{\prime\prime})\] \[-a_{3}k_{n}u_{n}^{<}(\omega^{\prime}-\omega^{\prime\prime})u_{n+1}^ {>}(\omega^{\prime\prime})]. \tag{45}\]
Note, however, that \(u_{n+3}^{*>}(\omega)\) and \(u_{n+4}^{>}(\omega)\) are absent at this stage of the expansion. Hence, only the last term of Eq. (45) survives. Therefore,
\[u_{n+2}^{>}(\omega^{\prime})=ia_{3}k_{n}\int\frac{d\omega^{\prime\prime}}{2\pi }\frac{u_{n}^{<}(\omega^{\prime}-\omega^{\prime\prime})u_{n+1}^{>}(\omega^{ \prime\prime})}{-i\omega^{\prime}+\nu_{n+2}k_{n+2}^{2}}, \tag{46}\]
substitution of which in the first term on the RHS of Eq. (43) yields
\[X = -ia_{1}k_{n}\int\frac{d\omega^{\prime}}{2\pi}\left\langle u_{n+1}^{ \ast>}(\omega^{\prime}-\omega)u_{n+2}^{>}(\omega^{\prime})\right\rangle \tag{25}\] \[= a_{1}a_{3}k_{n}^{2}\int\frac{d\omega^{\prime}}{2\pi}\frac{d \omega^{\prime\prime}}{2\pi}\frac{\left\langle u_{n+1}^{\ast>}(\omega^{\prime} -\omega)u_{n+1}^{>}(\omega^{\prime\prime})\right\rangle}{-i\omega^{\prime}+ \nu_{n+2}k_{n+2}^{2}}\] \[\qquad\qquad\qquad\qquad\times u_{n}^{<}(\omega^{\prime}-\omega^{ \prime\prime}),\] \[= \left[a_{1}a_{3}k_{n}^{2}\int\frac{d\omega^{\prime}}{2\pi}\frac{ \left\langle|u_{n+1}^{>}(\omega^{\prime}-\omega)|^{2}\right\rangle}{-i\omega^ {\prime}+\nu_{n+2}k_{n+2}^{2}}\right]u_{n}^{<}(\omega).\]
Since \(\nu_{n}\) is computed in the long-time limit, we set \(\omega\to 0\) in the above integral. Hence, the square-bracketed term of Eq. (25) is
\[I_{1} = a_{1}a_{3}k_{n}^{2}\int\frac{d\omega^{\prime}}{2\pi}\frac{\left\langle| u_{n+1}^{>}(\omega^{\prime})|^{2}\right\rangle}{-i\omega^{\prime}+\nu_{n+2}k_{n+2 }^{2}}. \tag{26}\]
Now, we employ Wiener-Khinchin theorem to simplify the frequency spectrum as
\[\left\langle|u_{n+1}^{>}(\omega^{\prime})|^{2}\right\rangle=\int_{-\infty}^{ \infty}d\tau\,\bar{C}_{n+1}(\tau)\exp(-i\omega^{\prime}\tau), \tag{27}\]
where \(\bar{C}_{n+1}(\tau)\) is the unequal-time correlation function defined in Eq. (10). With this,
\[I_{1} = a_{1}a_{3}k_{n}^{2}\int_{-\infty}^{\infty}d\tau\bar{C}_{n+1}(\tau)\int \frac{d\omega^{\prime}}{2\pi}\frac{\exp(-i\omega^{\prime}\tau)}{-i\omega^{ \prime}+\nu_{n+2}k_{n+2}^{2}}. \tag{28}\]
An application of the contour integral over the lower half of the \(\omega^{\prime}\) plane yields
\[I_{1} = a_{1}a_{3}k_{n}^{2}C_{n+1} \tag{29}\] \[\times\int_{0}^{\infty}d\tau\exp[-(\nu_{n+1}k_{n+1}^{2}+\nu_{n+2 }k_{n+2}^{2})\tau]\] \[= \frac{a_{1}a_{3}k_{n}^{2}C_{n+1}}{\nu_{n+1}k_{n+1}^{2}+\nu_{n+2}k _{n+2}^{2}}.\]
The second Feynman diagram of Fig. 2 yields
\[I_{2}=\frac{a_{1}a_{2}k_{n}^{2}C_{n+2}}{\nu_{n+1}k_{n+1}^{2}+\nu_{n+2}k_{n+2}^ {2}}. \tag{30}\]
The steps beyond this point are the same as those described in Sec. II.
|
2309.01977 | PyTomography: A Python Library for Quantitative Medical Image
Reconstruction | There is a need for open-source libraries in emission tomography that (i) use
modern and popular backend code to encourage community contributions and (ii)
offer support for the multitude of reconstruction algorithms available in
recent literature, such as those that employ artificial intelligence. The
purpose of this research was to create and evaluate a GPU-accelerated,
open-source, and user-friendly image reconstruction library, designed to serve
as a central platform for the development, validation, and deployment of
various tomographic reconstruction algorithms. PyTomography was developed using
Python and inherits the GPU-accelerated functionality of PyTorch and
parallelproj for fast computations. Its flexible and modular design decouples
system matrices, likelihoods, and reconstruction algorithms, simplifying the
process of integrating new imaging modalities using various python tools.
Example use cases demonstrate the software capabilities in parallel hole SPECT
and listmode PET imaging. Overall, we have developed and publicly share
PyTomography, a highly optimized and user-friendly software for medical image
reconstruction, with a class hierarchy that fosters the development of novel
imaging applications. | Lucas Polson, Roberto Fedrigo, Chenguang Li, Maziar Sabouri, Obed Dzikunu, Shadab Ahamed, Nikolaos Karakatsanis, Arman Rahmim, Carlos Uribe | 2023-09-05T06:12:39Z | http://arxiv.org/abs/2309.01977v4 | # PyTomography: A Python Library for Quantitative Medical Image Reconstruction
###### Abstract
**Background:** There is a scarcity of open-source libraries in medical imaging dedicated to both (i) the development and deployment of novel reconstruction algorithms and (ii) support for clinical data.
**Purpose:** To create and evaluate a GPU-accelerated, open-source, and user-friendly image reconstruction library, designed to serve as a central platform for the development, validation, and deployment of novel tomographic reconstruction algorithms.
**Methods:** PyTomography was developed using Python and inherits the GPU-accelerated functionality of PyTorch for fast computations. The software uses a modular design that decouples the system matrix from reconstruction algorithms, simplifying the process of integrating new imaging modalities or developing novel reconstruction techniques. As example developments, SPECT reconstruction in PyTomography is validated against both vendor-specific software and alternative open-source libraries. Bayesian reconstruction algorithms are implemented and validated.
**Results:** PyTomography is consistent with both vendor-software and alternative open source libraries for standard SPECT clinical reconstruction, while providing significant computational advantages. As example applications, Bayesian reconstruction algorithms incorporating anatomical information are compared to the ordered subset expectation maximum (OSEM) algorithm.
**Conclusions:** We have developed and publicly shared PyTomography, a highly optimized and user-friendly software for quantitative image reconstruction of medical images, with a class hierarchy that fosters the development of novel imaging applications.
###### Contents
* I Introduction
* II Materials and Methods
* II.A Software Architecture
* II.B Mathematical Description
* II.B.1 Transforms and Projections
* II.B.2 Reconstruction Algorithms
* II.C Examples
* III Results
* IV Discussion
* V Conclusion
* VI Appendix
* VI.A Relevant Links
* VI.B Validation of PyTomography
## 1 Introduction
Medical imaging forms a cornerstone in modern healthcare by providing visual and quantitative information about internal body structures and functions. It enables early detection [1], accurate diagnosis, and precise treatment planning for a wide range of medical conditions [2, 3, 4], contributing to improved patient outcomes and enhanced medical decision-making. Reconstruction algorithms are routinely used to generate 3D images that can be used for both research and clinical decision making [5].
While the development of new reconstruction techniques remains an active field of research [6, 7, 8, 9], it can often be difficult to share and disseminate one's findings. Additionally, while manufacturers of tomographic modalities such as Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) cameras provide their own internal reconstruction algorithms, users cannot access all the implementation details; this limits reproducibility in scientific studies focused on assessing reconstruction algorithms. These issues could be solved by moving to an open source paradigm for medical image reconstruction. Here, the imaging community could easily contribute to the implementation of novel reconstruction and correction algorithms in a way that is open and transparent. Such a framework would allow for the standardization needed to produce comparable results, improving the reliability, validity, and reproducibility of findings. This could ultimately help accelerate the development and translation of new diagnostic imaging capabilities into clinical practice.
A few tools have already been developed to try to address some of these issues. STIR [10] and Castor [11] are examples of two open source image reconstruction frameworks written in C++. However, there are some considerations: (i) the C++ architectures may be difficult to understand for users and potential contributors who will likely be most familiar with Python, (ii) both libraries are limited to running on CPU only, and (iii) neither library appears to provide native support for the DICOM standard. While the Python-based library TomoPy [12] and the ASTRA toolbox [13] support a variety of GPU-accelerated tomographic image reconstruction algorithms, they lack extensive SPECT/PET system modeling. To address these limitations, the Python library PyTomography was created as a collaborative medical imaging platform, designed to serve as a central hub for researchers to share, validate, and deploy novel reconstruction techniques. While the present focus of PyTomography
is the development and validation of reconstruction algorithms for SPECT, other imaging modalities and reconstruction algorithms can be added using the building blocks provided.
Overall, the aim of this study is to show the capabilities of the developed software library PyTomography. To demonstrate the software, we will explore example applications in \({}^{177}\)Lu-PSMA-617 dosimetry. In what follows, we elaborate on our methods and results for PyTomography, which is made publicly available to download; relevant links can be found in Section VI.
## II Materials and Methods
### Software Architecture
An overview of the main components of PyTomography is shown in Figure 1. The main submodules are summarized as follows.
1. **Metadata**. Metadata classes contain all auxiliary information needed for image reconstruction such as voxel dimensions, angles of projections, and detector distance of measured projections.
2. **Transforms**. Transform classes are used to model the various physical effects involved in the imaging process such as attenuation and image blurring. While they function as independent mathematical operators, they are most commonly used as components of a system matrix. Transforms are defined by the mathematical property that they are linear endomorphisms.
3. **Projectors**. Unlike transforms, projectors are mathematical operators that map between different vector spaces. The system matrix, which maps from object space to projection space and represents a model of the imaging system, is an example of a projector.
4. **Priors**. Prior functions define a relative likelihood for a particular object prediction. They are used in Bayesian reconstruction algorithms.
5. **Algorithms**. Algorithms are used for image reconstruction. They require a set of measured projection data, a system matrix, and corresponding metadata.
6. **Callbacks**. Callbacks are used to obtain and store statistics throughout iterative image reconstruction algorithms.
7. **Data**. The standard data type used to represent 3D objects and projections is the torch.Tensor class of PyTorch.
Metadata and transforms are used to build the system matrix, which has functionality for forward and back projections (Equations 1 and 2). Reconstruction algorithms take as input the system matrix, measured projection data, and optional Bayesian Prior functions. The Callback class is used to compute statistics dependent on iteration number in iterative algorithms; it can be used to generate bias-variance curves in various regions of the body, for example.
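The decoupling described above can be caricatured with abstract base classes. This is an illustrative sketch only: the class and method names below (`Transform.forward`, `SystemMatrix.backward`, etc.) are hypothetical stand-ins, not PyTomography's actual API.

```python
from abc import ABC, abstractmethod

class Transform(ABC):
    """Linear endomorphism within one space, e.g. attenuation or PSF blurring."""
    @abstractmethod
    def forward(self, x): ...
    @abstractmethod
    def backward(self, x): ...   # adjoint (transpose) of forward

class SystemMatrix(ABC):
    """Projector H: object space -> projection space, built from transforms."""
    def __init__(self, obj2obj, proj2proj):
        self.obj2obj = obj2obj      # transforms A_n acting in object space
        self.proj2proj = proj2proj  # transforms B_k acting in projection space
    @abstractmethod
    def forward(self, f): ...       # g = H f
    @abstractmethod
    def backward(self, g): ...      # f' = H^T g

class Algorithm(ABC):
    """Reconstruction algorithm: depends only on data, a projector, and an
    optional prior -- never on modality-specific details."""
    def __init__(self, projections, system_matrix, prior=None):
        self.g, self.H, self.prior = projections, system_matrix, prior
    @abstractmethod
    def __call__(self, n_iters): ...
```

Because an `Algorithm` only touches the projector interface, a new imaging modality is integrated by writing a new `SystemMatrix` subclass, without changing any reconstruction code.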
PyTomography has native input/output functionality for interfile and DICOM data. One of the primary focuses of the library is reconstruction of clinical data, which requires processing data from DICOM files such that it can be used in reconstruction. In SPECT imaging, PyTomography offers functionality for the following required steps:
1. Collimator properties of commercial scanner models are used to generate required parameters for the modeling of the point-spread function (PSF). PyTomography uses public data sheets from manufacturers to store collimator information in internal data files.
2. Conversion and alignment of CT scans to create attenuation maps. PyTomography assumes a bilinear relationship between Hounsfield Units and linear attenuation coefficients, and uses the cortical bone peak from measured CT data to obtain an effective CT energy required for such a conversion.
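Step 2 (HU-to-\(\mu\) conversion) can be sketched as a piecewise-linear map with an air-water segment and a water-bone segment. The slope values below are illustrative placeholders, not PyTomography's calibrated coefficients, which depend on the effective CT energy extracted from the cortical bone peak.

```python
import numpy as np

def hu_to_mu(hu, mu_water, mu_bone, hu_bone=1000.0):
    """Bilinear HU -> linear attenuation coefficient conversion (1/cm).

    Below 0 HU, interpolate between air (mu = 0 at -1000 HU) and water;
    above 0 HU, interpolate between water and cortical bone. mu_water and
    mu_bone are placeholder slopes for illustration only.
    """
    hu = np.asarray(hu, dtype=float)
    soft = mu_water * (hu + 1000.0) / 1000.0                # air..water segment
    bone = mu_water + (mu_bone - mu_water) * hu / hu_bone   # water..bone segment
    return np.clip(np.where(hu <= 0.0, soft, bone), 0.0, None)

mu = hu_to_mu([-1000.0, 0.0, 1000.0], mu_water=0.15, mu_bone=0.35)
```

Both segments meet continuously at 0 HU (water), so the map is monotone over the clinical HU range.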
### Mathematical Description
The mathematical conventions used in this paper are as follows. Vectors are represented by lower case letters, while linear operators are represented by upper case letters. The product of two vectors \(vw\) and division of two vectors \(v/w\) are inferred to be point-wise operations. Components of a vector are given by \(v_{i}\), and components of linear operators are given by \(A_{ij}\).
#### ii.b.1 Transforms and Projections
In the paradigm of tomographic medical imaging, there are two main vector spaces: the "projection" space \(\mathcal{V}\) consisting of measured projection data \(g\) and the "object" space \(\mathcal{U}\) consisting of 3D objects \(f\). Since measured and reconstructed data are digitized, \(\mathcal{U}\) and \(\mathcal{V}\) here are finite dimensional vector spaces. In SPECT imaging, for example, \(\mathcal{V}\) consists of a set of 2D images acquired by counting photons emitted by the radiopharmaceutical, and \(\mathcal{U}\) represents the spatial distribution of radioactivity concentration.
A linear imaging system can be characterized by a system matrix \(H\), which maps from object space to projection space. Its components \(H_{ij}\) represent the probability that a photon emitted from voxel \(j\) is detected in detector element \(i\). The imaging process can be succinctly expressed as
\[g=Hf \tag{1}\]
The transpose \(H^{T}\) can be used to map from projection space to object space:
\[f^{\prime}=H^{T}g \tag{2}\]
Figure 1: Overview of the main components of PyTomography. Sample classes are shown in white boxes; these do not comprise the full set of classes available in the software.

Equations 1 and 2 are typically referred to as Forward Projection (FP) and Back Projection (BP), respectively. In practice, \(H\) is typically too large to store in computer memory. FP is thus implemented using (i) rotations, (ii) a sequence of linear endomorphisms in \(\mathcal{U}\) given by \(\{A_{n}\}\), (iii) summation along a particular axis, and (iv) a sequence of linear endomorphisms in \(\mathcal{V}\) given by \(\{B_{k}\}\). BP is given by the transpose of this operation sequence. This sequence of operations, traditionally known as the "rotate and sum" technique, is able to directly model PET and parallel collimator SPECT, and can be used in fan/cone beam CT and diverging/converging/pinhole collimator SPECT provided either (i) rebinning in \(\mathcal{V}\) or (ii) spatial deformations in \(\mathcal{U}\) are employed. While this operation sequence permits a computationally efficient and simple implementation for GPU-based systems, it also has limitations; De Man et al. [14] showed that alternative "distance-driven" approaches could be used to reduce artifacts and improve image resolution in CT imaging.
Linear endomorphisms \(A_{i}\) and \(B_{i}\) used in steps (ii) and (iv) of the operation sequence are implemented as standalone classes within the "transforms" submodule of PyTomography; operations \(A_{i}\) are identified as object-to-object transforms, and operations \(B_{i}\) are identified as projection-to-projection transforms. These operations are used to model phenomena such as attenuation correction and resolution recovery in SPECT/PET imaging. System matrices of various imaging systems are implemented as classes within the "projectors" submodule in PyTomography; they are instantiated using transforms that model all the necessary features of the imaging modality.
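A toy version of the rotate-and-sum operation sequence can make the FP/BP pairing concrete. The sketch below restricts rotations to exact multiples of 90 degrees so that each rotation is a permutation and the adjoint relation \(\langle Hf,g\rangle=\langle f,H^{T}g\rangle\) holds exactly; a real SPECT projector interposes the transforms \(A_{n}\) and \(B_{k}\) and uses interpolating rotations at many angles.

```python
import numpy as np

def forward_project(f, n_angles=4):
    """Rotate-and-sum FP, g = H f, for a square 2D object.

    Each projection: rotate the object to the view angle, then sum along
    one axis (the collimator's line-of-response direction).
    """
    return np.stack([np.rot90(f, k).sum(axis=0) for k in range(n_angles)])

def back_project(g, shape, n_angles=4):
    """Adjoint H^T: broadcast each projection along the summed axis,
    rotate back, and accumulate over angles."""
    f = np.zeros(shape)
    for k in range(n_angles):
        f += np.rot90(np.broadcast_to(g[k], shape), -k)
    return f

# adjointness check: <H f, g> == <f, H^T g>
rng = np.random.default_rng(0)
f, g = rng.random((8, 8)), rng.random((4, 8))
assert np.isclose((forward_project(f) * g).sum(),
                  (f * back_project(g, f.shape)).sum())
```

The adjointness check is a useful unit test for any FP/BP pair, since iterative algorithms silently misbehave when BP is not the exact transpose of FP.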
#### Reconstruction Algorithms
An image reconstruction algorithm \(A\) uses measured data \(g\) (i.e. projections) to estimate a corresponding object \(f\) that would produce projections \(g\) given a system matrix \(H\). This can be expressed as
\[\hat{f}=A(g,H,...) \tag{3}\]
where... includes all additional hyperparameters required for the algorithm. All reconstruction algorithms are implemented as classes within the "algorithms" submodule of PyTomography.
The present focus of PyTomography is statistical, iterative reconstruction algorithms, such as the ordered-subset expectation maximization (OSEM) algorithm [15]. OSEM assumes that \(f\) is a Poisson distributed random vector (this is the case in nuclear medicine imaging modalities) with mean \(\bar{f}\), and is used to reconstruct the maximum likelihood estimate \(\hat{f}\)
of \(\bar{f}\) from measured projection data \(g\). To speed up computation, OSEM partitions \(g\) into \(M\) subsets of different projection angles (each with \(S_{m}\) elements), and a modified system matrix \(H_{m}\) considers forward projection for subset \(m\). The standard version of OSEM used in SPECT reconstruction can be expressed as
\[\hat{f}^{n,m+1}=\left[\frac{1}{H_{m}^{T}1}H_{m}^{T}\left(\frac{g_{m}}{H_{m}\hat {f}^{n,m}+s}\right)\right]\hat{f}^{n,m} \tag{4}\]
where \(g_{m}\) is a partitioned subset of the projection data, \(s\) is the estimated scatter contribution, \(1\in\mathcal{U}\) is a vector containing all 1's, and \(\hat{f}^{n,m}\) is defined such that \(n\) is the iteration index and \(\hat{f}^{n+1,0}\equiv\hat{f}^{n,M}\).
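Equation 4 can be prototyped with a dense toy system matrix, partitioning detector rows into subsets in place of projection-angle subsets. This is a sketch under those simplifications, not PyTomography's implementation.

```python
import numpy as np

def osem(g, H, n_iter=300, n_subsets=4, scatter=0.0, eps=1e-12):
    """Toy OSEM per Eq. 4: f <- [ H_m^T( g_m / (H_m f + s) ) / (H_m^T 1) ] * f.

    H is a dense matrix (rows = detector elements, columns = voxels); row
    subsets stand in for the projection-angle subsets used in SPECT.
    """
    n_det, n_vox = H.shape
    subsets = [np.arange(m, n_det, n_subsets) for m in range(n_subsets)]
    f = np.ones(n_vox)
    for _ in range(n_iter):                 # one iteration = pass over all subsets
        for idx in subsets:
            Hm, gm = H[idx], g[idx]
            sens = Hm.T @ np.ones(len(idx))            # H_m^T 1
            ratio = gm / (Hm @ f + scatter + eps)      # g_m / (H_m f + s)
            f = f * (Hm.T @ ratio) / (sens + eps)
    return f

rng = np.random.default_rng(1)
H = rng.random((16, 6))
f_true = rng.random(6) * 10
g = H @ f_true                              # noiseless, consistent projections
f_hat = osem(g, H)
# for consistent data, H f_hat approaches g as iterations increase
```

Note the multiplicative update automatically preserves non-negativity of \(\hat{f}\), one reason EM-family algorithms are standard in emission tomography.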
PyTomography also supports incorporation of Bayesian Priors \(V(\hat{f}):\mathcal{U}\rightarrow\mathbb{R}\) in statistical reconstruction algorithms. Bayesian Priors are implemented as standalone classes within the "priors" submodule of PyTomography; they are used to compute \(\nabla_{f}V\). There are currently two implemented variations of OSEM that enable inclusion of Bayesian Priors: the one-step-late (OSL) algorithm [16], implemented as
\[\hat{f}^{n,m+1}=\left[\frac{1}{H_{m}^{T}1+\beta\nabla_{f^{n,m}}V}H_{m}^{T} \left(\frac{g_{m}}{H_{m}\hat{f}^{n,m}+s}\right)\right]\hat{f}^{n,m} \tag{5}\]
and the block sequential regularized expectation maximization (BSREM) algorithm [17], implemented as
\[\hat{f}^{n,m+1}=\hat{f}^{n,m}+\alpha_{n}D\left[H_{m}^{T}\left(\frac{g_{m}}{H_ {m}\hat{f}^{n,m}+s}-1\right)-\beta\nabla_{f^{n,m}}V\right] \tag{6}\]
where \(\beta\) is a constant used to scale the strength of the prior, \(D=\left(S_{m}/M\cdot H^{T}1\right)^{-1}\) is the scaling matrix, and \(\alpha_{n}\) represents the relaxation sequence. In this paper, no relaxation is used (\(\alpha_{n}=1\) for all \(n\)). PyTomography also has support for the kernelized expectation maximization (KEM) algorithm [18], given by
\[\hat{\alpha}^{n,m+1}=\left[\frac{1}{K^{T}H_{m}^{T}1}K^{T}H_{m}^{T}\left(\frac {g_{m}}{H_{m}K\hat{\alpha}^{n,m}+s}\right)\right]\hat{\alpha}^{n,m} \tag{7}\]
where \(K\) is a kernel operator consisting of fixed basis functions, and \(\hat{\alpha}^{n,m+1}\) are used as scaling factors for each basis function. The reconstructed image estimate is given by \(\hat{f}^{n,m}=K\hat{\alpha}^{n,m}\).
While PyTomography offers implementations of many priors, such as the quadratic and log-cosh priors, the prior featured in this paper is the relative difference penalty (RDP) [19], defined as
\[V(\hat{f})=\sum_{i}\sum_{k\in N_{i}}w_{ik}\frac{(\hat{f}_{i}-\hat{f}_{k})^{2}}{( \hat{f}_{i}+\hat{f}_{k})+\gamma|\hat{f}_{i}-\hat{f}_{k}|} \tag{8}\]
where \(N_{i}\) is the set of voxels immediately neighbouring \(i\), and \(w_{ik}\) represents a weighting between voxels \(i\) and \(k\). Usage of this prior tends to smooth an image and reduce noise, but it also tends to blur boundaries at the edges of organs. One way to mitigate this effect is to replace \(N_{i}\) in Equation 8 with a reduced set of neighbours which lie in a similar anatomical region. Since anatomical information is used, this is commonly referred to as an anatomical prior (AP); variations of this technique have recently become popular in clinical practice due to enhanced lesion quantitation and detectability in bone SPECT/CT [6]. In this paper, use of an AP refers to using only the 8 most similar neighbours based on absolute differences of HU values in a CT scan or attenuation map; RDP with the use of AP is referred to as RDP-AP.
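Equation 8 and the anatomical neighbour selection can be sketched on flattened images with precomputed neighbour lists. This is illustrative only; `rdp_value` and `anatomical_neighbours` are hypothetical helper names, not PyTomography's API.

```python
import numpy as np

def rdp_value(f, neighbours, weights, gamma=2.0, eps=1e-12):
    """Relative difference penalty of Eq. 8 on a flattened image f.

    neighbours[i] holds the voxel indices in N_i; weights[i] the matching w_ik.
    """
    V = 0.0
    for i, (nbrs, w) in enumerate(zip(neighbours, weights)):
        d = f[i] - f[nbrs]
        V += np.sum(w * d**2 / (f[i] + f[nbrs] + gamma * np.abs(d) + eps))
    return V

def anatomical_neighbours(mu, full_neighbours, n_keep=8):
    """Anatomical prior: keep only the n_keep neighbours of each voxel that
    are most similar in HU / attenuation value (smallest |mu_k - mu_i|)."""
    kept = []
    for i, nbrs in enumerate(full_neighbours):
        nbrs = np.asarray(nbrs)
        order = np.argsort(np.abs(mu[nbrs] - mu[i]))
        kept.append(nbrs[order[:n_keep]])
    return kept

# a uniform image incurs no penalty
f = np.array([2.0, 2.0, 2.0])
nbrs = [np.array([1]), np.array([0, 2]), np.array([1])]
w = [np.ones(1), np.ones(2), np.ones(1)]
assert rdp_value(f, nbrs, w) == 0.0
```

Restricting \(N_{i}\) to anatomically similar neighbours means the penalty never averages across an organ boundary visible in the CT, which is exactly the edge-preservation effect described above.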
### Examples
This section describes the multiple examples used to demonstrate the capabilities of PyTomography. Each example features the use of OSEM, BSREM, and KEM; relevant hyperparameters for each algorithm are shown in Table 1. Reconstruction is explored on both Monte Carlo simulated data (SIMIND) and clinical data (DICOM). Validation of PyTomography against other open-source software, and for each data type, is shown in the appendix.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Algorithm** & OSEM & BSREM & KEM \\ \hline \hline Subsets & 8 & 8 & 8 \\ \hline Neighbourhood Size & - & \(3\times 3\times 3\) & \(5\times 5\times 5\) \\ \hline \(k\) Nearest Neighbours & - & 8 & 40 \\ \hline Kernel & - & - & Eq. 5 of Vuohijoki et al. [6] \\ \hline Prior & - & RDP-AP & - \\ \hline \end{tabular}
\end{table}
Table 1: Reconstruction parameters used in this paper. Neighbourhood size corresponds to the kernel size in KEM, and prior neighbourhood for BSREM. Nearest neighbours were obtained from attenuation maps.
The two examples are listed below:
1. **SIMIND**: This example highlights a use case of PyTomography for reconstruction of SPECT image data obtained from the SIMIND Monte Carlo simulation program [20]. It features reconstructions of digital phantom data representing a \({}^{177}\)Lu-PSMA-617 SPECT scan 38.6 minutes post-injection. The digital phantom used for simulation was created using the extended-cardiac torso (XCAT) phantom [21], where organ concentrations were obtained using a physiologically-based pharmacokinetic (PBPK) model [22]; the model includes major relevant physiological and molecular events and consists of 112 coupled ordinary differential equations. Relevant SIMIND acquisition parameters included 120 projections, a radial distance of 25 cm, pixel spacing of \(3\;\mathrm{mm}\times 3\;\mathrm{mm}\), and dimensions of \(128\times 384\). Reconstructions were performed for 120 iterations. Organ masks were obtained using bilinear downsampling from ground truth phantoms; voxels with greater than 50% organ volume were included in the mask.
2. **DICOM**: This example highlights SPECT reconstruction on publicly available data from the Deep Blue data repository [23]. Patients received radiopharmaceutical therapy with \({}^{177}\)Lu-DOTATATE for neuroendocrine tumours (4 cycles of 7.4 GBq/cycle administered every 8 weeks); scans were taken at 4 different time points in the week following a therapeutic dose. Images consisted of 120 projections of shape \(256\times 256\) with resolution \(2.4\;\mathrm{mm}\times 2.4\;\mathrm{mm}\). Reconstructions were performed for 50 iterations for each algorithm. Organ masks for the liver and kidneys were obtained using a CT segmentation model [24]. Masks were then downsampled and aligned with SPECT images using bilinear interpolation; voxels with greater than 50% organ volume were included in the downsampled mask.
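The >50% organ-volume rule used in both examples amounts to block-averaging the binary mask and thresholding the occupied fraction. A minimal sketch for integer downsampling factors (illustrative; the actual pipeline also aligns the mask to the SPECT grid by interpolation):

```python
import numpy as np

def downsample_mask(mask, factor):
    """Downsample a binary mask by an integer factor per axis; keep voxels
    whose occupied fraction exceeds 50%."""
    m = np.asarray(mask, dtype=float)
    shape = tuple(s // factor for s in m.shape)
    trimmed = m[tuple(slice(0, s * factor) for s in shape)]
    # reshape each axis into (coarse, factor) pairs, then average the fine axes
    blocks = trimmed.reshape(*sum(((s, factor) for s in shape), ()))
    frac = blocks.mean(axis=tuple(range(1, 2 * len(shape), 2)))
    return frac > 0.5

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
coarse = downsample_mask(mask, 2)   # only the fully occupied 2x2 block survives
```

The strict `> 0.5` threshold means a coarse voxel exactly half inside the organ is excluded, which biases small structures toward under-coverage; this is a design choice, not a library requirement.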
All computation was performed using a Microsoft Azure virtual machine (Standard NC6s v3) with 6 CPUs (Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz), 112 GB of RAM, and a Tesla V100 GPU. Python scripts and notebooks used to obtain these results can be found at [https://github.com/qurit/PyTomography_paper_code](https://github.com/qurit/PyTomography_paper_code).
## III Results
For the SIMIND data, the time required for reconstruction was 412.8 s (3.44 s/iteration) for OSEM, 434.8 s (3.62 s/iteration) for BSREM, and 601.8 s (5.02 s/iteration) for KEM; sample reconstructions with corresponding recovery coefficient (RC) curves are shown for each algorithm in Figure 2. BSREM clearly produced the least noisy images, and had the best RCs for the lungs, liver, kidneys, and bladder. KEM produced an image estimate with an intermediate level of noise, and had the best RC for the salivary glands. OSEM yielded the noisiest images, and had the worst RCs in all regions except the salivary glands.
Figure 2: Left: Central coronal slices corresponding to truth, OSEM, BSREM, and KEM after 120 iterations of reconstruction. Right: Recovery coefficient vs. iteration number for sample organs; line widths are representative of error estimates based on reconstructions of 5 independent noise realizations. Ground truth phantoms are representative of a typical \({}^{177}\)Lu-PSMA-617 scan, and were generated using PBPK modeling and the XCAT software; SPECT acquisition was simulated using SIMIND.

The time required to reconstruct the Deep Blue DICOM data was 464.7 s (9.29 s/iteration) for OSEM, 453.1 s (9.06 s/iteration) for BSREM, and 1825 s (36.50 s/iteration) for KEM. Central coronal slices are shown in Figure 3; the patient had a region of high uptake in the liver, indicative of a liver lesion. Both BSREM and KEM can qualitatively be observed to confine activity within the boundary of the kidneys, with BSREM exhibiting slightly more of this behaviour. In addition, OSEM produced the most observed noise in the liver. A common artifact of both BSREM and KEM, however, is the noise in the liver lesion; use of anatomical nearest neighbours may cause the lesion activity noise profile to resemble the CT noise profile. The mean predicted counts in the kidneys were 1.98 (OSEM), 2.07 (BSREM), and 2.02 (KEM); these relative proportions are consistent with the RCs of Figure 2, suggesting that BSREM may yield the largest and most accurate recovery coefficients.
## IV Discussion
This technical report described the software architecture of PyTomography, and presented use cases on both SIMIND and DICOM data. The short times required for reconstruction permit extensive studies on phantom and patient data that may include multiple patients, noise realizations, and reconstruction algorithms. Validation of PyTomography against alternative reconstruction libraries is shown in the appendix.
Figure 3: Coronal slices of activity concentration (colored) and Hounsfield Unit values (greyscale) reconstructed using OSEM, BSREM, and KEM. Data corresponds to the first time point of patient 4 from the “Lu-177 DOTATATE Anonymized Patient Datasets: Multi-Time Point Lu-177 SPECT/CT Scans” dataset [23].

As example applications, the use of OSEM, BSREM, and KEM was explored in \({}^{177}\)Lu SPECT reconstruction, where BSREM featured the use of the relative difference prior with similar anatomical neighbours. It should be emphasized that the purpose of this technical report was not to rank the different algorithms, but rather to demonstrate the capabilities of the software library. While some evidence was shown to favour BSREM over OSEM and KEM for kidney and liver dosimetry based on better recovery coefficients, this evidence is not strong enough to establish any extensive ranking of the different reconstruction algorithms. An extensive study that aims to quantitatively compare algorithms should involve (i) multiple XCAT activity and anatomical configurations, (ii) inclusion of patient motion, (iii) the effect of misalignment between SPECT and CT images, (iv) different hyperparameters in each reconstruction algorithm, and (v) should seek to use real patient data to validate any phantom studies. In addition, recovery coefficients alone are not a sufficient metric to rank algorithms; studies should also consider source-to-background ratios as well as error analysis based on intra- and inter-patient variability.
While the RDP-AP prior was featured heavily in this study, the framework of PyTomography is designed to foster the development and validation of other novel prior functions as well. In addition, while KEM was featured using anatomically-based basis functions, the library permits use of any external image for construction of the basis functions. Different priors and basis functions may also have significant implications for (i) quantitative PET/SPECT imaging in dosimetry, and (ii) qualitative observer based studies in lesion detection and classification. Since the library is open source, newly developed functionality can be easily tested and verified by many independent research groups with their own data. The immediate goals for future development in PyTomography at the time of publication are
1. Fast PSF modeling for high energy SPECT isotopes, such as \({}^{131}\)I and \({}^{225}\)Ac.
2. System matrix modeling of "switching" detector SPECT systems, such as General Electric's StarGuide system.
3. System matrix modeling of PET systems.
4. Development of a 3D Slicer [25] extension.
We highly encourage and appreciate open-source contributions to assist in the items listed above; those who wish to contribute are encouraged to read the developers' guide on
the documentation website. Contributions could also include the implementation of novel reconstruction algorithms and modeling of different imaging systems (e.g. pinhole collimator SPECT, fan beam CT, etc.).
## V Conclusion
This work describes the Python library PyTomography and highlights specific use cases in SPECT imaging. Implementation using the GPU-accelerated functionality of PyTorch permits extremely fast reconstruction times compared to other open source alternatives. The class hierarchy provides flexibility when developing novel reconstruction techniques, but is also straightforward to use with traditional algorithms. The purpose of PyTomography is to create a transparent and computationally efficient library for medical image reconstruction, where novel reconstruction techniques are implemented, shared, and evaluated by experts in the community.
## VI Appendix
### Relevant Links
Links to the various webpages of PyTomography are contained in Table 2: GitHub contains the source code, readthedocs contains all relevant documentation, and PyPI is the host of the built source files.
### Validation of PyTomography
This section contains two examples used to validate PyTomography against other reconstruction software:
\begin{table}
\begin{tabular}{|l|l|} \hline
**Website** & **Link** \\ \hline Github & [https://github.com/qurit/PyTomography](https://github.com/qurit/PyTomography) \\ Readthedocs & [https://pytomography.readthedocs.io/en/latest/](https://pytomography.readthedocs.io/en/latest/) \\ PyPI & [https://pypi.org/project/pytomography/](https://pypi.org/project/pytomography/) \\ \hline \end{tabular}
\end{table}
Table 2: Relevant Webpages

1. **SIMIND Data**: Radioactivity concentrations in a digital phantom were selected to correspond to a 24-hour post-injection prostate cancer patient (1700 MBq total activity) imaged via the prostate-specific membrane antigen (PSMA)-targeting radiopharmaceutical, \({}^{177}\)Lu-PSMA-617. Simulated data was reconstructed with OSEM (2 iterations, 8 subsets) using (i) STIR and (ii) PyTomography. Reconstructions are shown in Figure 4 and are nearly identical, with error margins \(<5\%\) for all regions with non-negligible activity concentration (\(>0.05\) MBq/mL). The reconstruction time in PyTomography (5.4 s) was significantly faster than the time required in STIR (51542.2 s), owing to the speed-up factor of GPU implementation.
2. **DICOM Data**: A Jaszczak phantom was filled with \({}^{177}\)Lu and scanned at 180 projections (\(128\times 128\), resolution \(4.8\;\mathrm{mm}\times 4.8\;\mathrm{mm}\)) using a Siemens Symbia T Series SPECT/CT scanner with Medium Energy collimators. Reconstruction was performed with OSEM (4 iterations, 10 subsets) using (i) the manufacturer scanner software and (ii) PyTomography. Reconstructions and corresponding 1-dimensional profiles are shown in Figure 5. The reconstruction performed using PyTomography and the on-board scanner software are similar; the root mean squared error (0.119 counts) provides a quantitative metric for this similarity.
Figure 4: Central coronal slices (left) and 1-dimensional profiles (right) of reconstructed SIMIND SPECT projection data. 1-dimensional profiles correspond to the superior-inferior location of the lines shown on the MIPs, and show the activity concentration at the central anterior-posterior plane.
Figure 5: Sample axial slice and 1D profiles of the Jaszczak phantom reconstructed using the scanner software and PyTomography.
## Acknowledgements
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) CGS D Award 569711, NSERC Discovery Grants RGPIN-2019-06467 and RGPIN-2021-02965, as well as computational resources and services provided by Microsoft for Health. |
2301.10080 | Practical Synchronization for OTFS | In the existing literature on joint timing and frequency synchronization of
orthogonal time frequency space modulation (OTFS), practically infeasible
impulse pilot with large peak-to-average power ratio (PAPR) is deployed. Hence,
in this paper, we propose a timing offset (TO) and carrier frequency offset
(CFO) estimation for OTFS over a linear time-varying (LTV) channel, using a low
PAPR pilot structure. The proposed technique utilizes the recently proposed
practically feasible pilot structure with a cyclic prefix (PCP). We exploit the
periodic properties of PCP in both delay and time domains to find the starting
point of each OTFS block. Furthermore, we propose a two-stage CFO estimation
technique with over an order of magnitude higher estimation accuracy than the
existing estimator using the impulse pilot. In the first stage, a coarse CFO
estimate is obtained which is refined in the second stage, through our proposed
maximum likelihood (ML) based approach. The proposed ML-based approach deploys
the generalized complex exponential basis expansion model (GCE-BEM) to capture
the time variations of the channel, absorb them into the pilot and provide an
accurate CFO estimate. Since our proposed synchronization technique utilizes
the same pilot deployed for channel estimation, it does not require any
additional overhead. Finally, we evaluate the performance of our proposed
synchronization technique through simulations. We also compare and show the
superior performance of our proposed technique to the only other existing joint
TO and CFO estimation method in OTFS literature. | Mohsen Bayat, Sanoopkumar P. S., Arman Farhang | 2023-01-24T15:33:54Z | http://arxiv.org/abs/2301.10080v1 | # Practical Synchronization for OTFS
###### Abstract
In the existing literature on joint timing and frequency synchronization of orthogonal time frequency space modulation (OTFS), practically infeasible impulse pilot with large peak-to-average power ratio (PAPR) is deployed. Hence, in this paper, we propose a timing offset (TO) and carrier frequency offset (CFO) estimation for OTFS over a linear time-varying (LTV) channel, using a low PAPR pilot structure. The proposed technique utilizes the recently proposed practically feasible pilot structure with a cyclic prefix (PCP). We exploit the periodic properties of PCP in both delay and time domains to find the starting point of each OTFS block. Furthermore, we propose a two-stage CFO estimation technique with over an order of magnitude higher estimation accuracy than the existing estimator using the impulse pilot. In the first stage, a coarse CFO estimate is obtained which is refined in the second stage, through our proposed maximum likelihood (ML) based approach. The proposed ML-based approach deploys the generalized complex exponential basis expansion model (GCE-BEM) to capture the time variations of the channel, absorb them into the pilot and provide an accurate CFO estimate. Since our proposed synchronization technique utilizes the same pilot deployed for channel estimation, it does not require any additional overhead. Finally, we evaluate the performance of our proposed synchronization technique through simulations. We also compare and show the superior performance of our proposed technique to the only other existing joint TO and CFO estimation method in OTFS literature.
OTFS, timing offset estimation, carrier frequency offset estimation, maximum likelihood estimation.
## I Introduction
Orthogonal time-frequency space (OTFS) is a prominent waveform candidate for the sixth-generation (6G) wireless communication systems due to its robustness to time-varying wireless channels and backward compatibility with orthogonal frequency division multiplexing (OFDM), [1]. Unlike OFDM, OTFS places modulated data symbols in the delay-Doppler (DD) domain and spreads them over the time-frequency (TF) plane, thus exploiting the full diversity gain of the time and frequency selective channel in high mobility scenarios, [2]. Since the performance of multicarrier modulations depends on the orthogonality among subcarriers, timing and carrier synchronization are of paramount importance in modern wireless communication systems. Even though there exists a large body of work on OFDM synchronization [3], OTFS literature on this topic is still in its infancy.
Timing offset (TO) and/or carrier frequency offset (CFO) estimation in OTFS are addressed in [4, 5, 6, 7]. A threshold-based TO estimation technique for OTFS uplink transmission using a random access (RA) preamble is presented in [4]. In [5], the authors developed a correlation-based method for TO estimation in OTFS downlink transmission, which uses a preamble consisting of a linear frequency-modulated (LFM) waveform and two OTFS symbols. Due to the rapid variation of the channel in high mobility scenarios, the TO estimated at the base station using the preamble would be outdated during the data transmission phase. Hence, the methods developed in [4] and [5] are not suitable for high-mobility scenarios, and CFO estimation is not addressed in either [4] or [5].
A time domain joint channel-CFO estimation and time domain equalization technique for OTFS is presented in [6]. In [6], the CFO is estimated and compensated as a part of the channel, which hinders pre-compensation of the CFO at the user terminal and makes the approach unsuitable for uplink transmissions. Recently, in [7], we addressed the TO and CFO estimation in the OTFS systems, where we developed a correlation-based estimation scheme that exploited the periodic structure of the pilot in the delay-time domain. The embedded impulse pilot which is proposed in [8] and widely used for channel estimation in OTFS systems is employed in [6] and [7].
Accurate TO and CFO synchronization using the methods proposed in [6] and [7] requires a high-power impulse pilot. However, the high power of the impulse pilot and the zero guards around it increase the peak-to-average power ratio (PAPR) of the transmitted signal [9]. The high PAPR of the transmitted signal results in a reduction in the efficiency of the power amplifier at the radio frequency (RF) front end, [10]. Hence, the widely used impulse pilot is not suitable for practical applications. To address this issue, in [9] we proposed a novel embedded pilot which uses a constant amplitude pilot sequence with a cyclic prefix (CP), called pilot with cyclic prefix (PCP), placed along multiple delay bins of a single Doppler bin in the delay-Doppler domain. PCP significantly reduces the PAPR of the transmitted signal and thus, it is more suitable for practical applications than the impulse pilot.
Based on the above, the very limited literature available on the synchronization in OTFS uses practically infeasible pilot structures. Hence, to address this issue in this paper, we develop a practical TO and CFO estimation for OTFS, using the PCP deployed for channel estimation in [9]. We exploit the periodicity of the PCP in the delay-time domain and propose a correlation-based TO and coarse CFO estimation. Furthermore, to improve the accuracy of CFO estimation we approximate the time variation of the channel using a generalized complex exponential basis expansion model (GCE-BEM) [11]. We propose a maximum likelihood (ML) CFO fine estimation technique that provides over an order of magnitude higher estimation accuracy than the existing estimator using
the impulse pilot. To corroborate our claims, we analyze the performance of our proposed TO and CFO estimation techniques through simulations. In our simulations, we study the mean and variance of the TO estimation error and the mean square error (MSE) of the CFO estimates.
The rest of this paper is organized as follows. Section II describes the system model. The proposed estimation techniques are presented in Sections III and IV, respectively, and their performance is evaluated by simulations in Section V. Finally, Section VI concludes the paper.
\(Notations\): Scalar values, vectors, and matrices are denoted by normal letters, boldface lowercase, and boldface uppercase letters, respectively. \(\mathrm{diag}[.]\), \(\mathrm{blkdiag}[.]\), \((.)_{k}\), and \(\max_{k}\{.\}\) denote a diagonal matrix, a block diagonal matrix, the \(k^{\mathrm{th}}\) circular shift, and the maximum of a function with respect to \(k\), respectively. \(|.|\), \(\angle\), and \(\mathrm{Re}\{.\}\) denote the gain, phase, and real part of the complex argument, respectively. \(\mathbf{I}_{N}\) is an identity matrix of size \(N\times N\). The superscripts \((.)^{\mathrm{H}}\), \((.)^{\mathrm{T}}\) and \((.)^{-1}\) indicate the Hermitian, transpose, and inverse operations, respectively. \(\mathbb{C}^{M\times N}\) stands for the set of complex-valued matrices of size \(M\times N\) and the Kronecker product is denoted by \(\otimes\). Finally, \(\mathcal{O}(.)\) denotes the order of complexity of a function.
## II System Model
We consider an OTFS system with \(M\) delay and \(N\) Doppler bins, with a Doppler resolution of \(\Delta\nu=\frac{1}{MNT_{\mathrm{s}}}\) and a delay resolution of \(\Delta\tau=T_{\mathrm{s}}\), where \(T_{\mathrm{s}}\) is the sampling period [1]. The data symbols and the pilot sequence are multiplexed together to form the delay-Doppler domain OTFS transmitted block \(\mathbf{D}\in\mathbb{C}^{M\times N}\) with the elements \(D[m,n]\) for \(m=0,\ldots,M-1\) and \(n=0,\ldots,N-1\). In this paper, we deploy the pilot structure, PCP, that was originally proposed for channel estimation in [9], also for synchronization. Considering \(L\) as the channel length, in PCP, a constant amplitude Zadoff-Chu (ZC) sequence with length \(L\) is placed in the Doppler bin \(n_{\mathrm{p}}\) and the delay bins \(m_{\mathrm{p}},\ldots,m_{\mathrm{p}}+L-1\). The last \(L-1\) samples of this sequence are then appended as a CP in the Doppler bin \(n_{\mathrm{p}}\) and the delay bins \(m_{\mathrm{p}}-L,\ldots,m_{\mathrm{p}}-1\). The remaining Doppler bins within the pilot region are set to zero. The delay-Doppler domain OTFS block, where the PCP is embedded with the data symbols, is shown in Fig. 1.
The OTFS transmitter spreads the symbols \(D[m,n]\) from the delay-Doppler to the delay-time domain by taking the inverse discrete Fourier transform (IDFT) across the Doppler dimension, [2],
\[X[m,l]=\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}D[m,n]e^{\frac{j2\pi ln}{N}}, \tag{1}\]
where \(l=0,\ldots,N-1\) is the time index and \(m=0,\ldots,M-1\) is the delay index. The delay-time domain signal is then converted to the serial stream \(\mathbf{x}=[x[0],\ldots,x[MN-1]]^{\mathrm{T}}\), where \(x[lM+m]=X[m,l]\). Finally, the OTFS transmit signal \(s[k]\) is formed by appending a CP with the length \(L_{\mathrm{CP}}\geq L-1\) at the beginning of the OTFS block. Assuming ideal pulse-shaping, the received delay-time signal for \(B\) OTFS blocks, after transmission over the linear time-varying (LTV) channel and in presence of TO and CFO, can be represented as
\[r[k]=e^{j\frac{2\pi\varepsilon k}{MN}}\sum_{i=0}^{B-1}\sum_{\ell=0}^{L-1}h[\ell,k]s[k-\ell-\theta-iN_{\mathrm{T}}]+\eta[k], \tag{2}\]
where \(0\leq k\leq L_{\mathrm{CP}}+MN-1\), \(\theta\) and \(\varepsilon\) are the TO and CFO values, normalized by the delay and Doppler spacings, respectively, and \(N_{\mathrm{T}}=MN+L_{\mathrm{CP}}\). \(h[\ell,k]\) is the delay-time domain channel impulse response of the \(\ell^{\mathrm{th}}\) delay tap at the \(k^{\mathrm{th}}\) time instant. \(\eta[k]\) is the complex additive white Gaussian noise (AWGN) with the variance \(\sigma_{\eta}^{2}\).
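As a concrete illustration of the transmitter mapping, the spreading in (1) and the serialisation \(x[lM+m]=X[m,l]\) can be sketched in a few lines of numpy. This is a toy example with arbitrary sizes and random symbols, omitting the CP, channel, TO and CFO:

```python
import numpy as np

# Toy sizes and random complex symbols (illustrative only, not the paper's setup).
M, N = 8, 4
rng = np.random.default_rng(0)
D = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Eq. (1): IDFT across the Doppler dimension; numpy's ifft carries a 1/N factor,
# so rescale to match the 1/sqrt(N) normalisation.
X = np.sqrt(N) * np.fft.ifft(D, axis=1)

# Serialisation x[l*M + m] = X[m, l]: read the delay-time grid column by column.
x = X.T.reshape(-1)
assert np.allclose(x[1 * M + 3], X[3, 1])

# The mapping is inverted by a DFT across the time dimension.
assert np.allclose(np.fft.fft(X, axis=1) / np.sqrt(N), D)
```

The last assertion simply confirms that the unitary IDFT/DFT pair in (1) is lossless, so the receiver can undo the spreading after synchronization.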
## III Proposed TO Estimation Technique
In this section, we present our proposed TO estimation technique for OTFS using PCP. PCP has a very attractive dual periodicity property in the delay-time domain. The periodicity of PCP in the delay dimension is due to the presence of CP. Meanwhile, the periodicity in the time domain is due to the spreading effect of the OTFS transmitter which scales and then repeats each pilot sample across the time dimension. We exploit this dual periodicity of the PCP in the delay and time dimensions to estimate the TO. We consider the TO as \(\theta=\theta_{\mathrm{d}}+M\theta_{\mathrm{t}}\), where \(\theta_{\mathrm{d}}\) and \(\theta_{\mathrm{t}}\) are the TO in delay and time dimensions, respectively. Our proposed TO estimator finds \(\theta_{\mathrm{d}}\) and \(\theta_{\mathrm{t}}\) in two stages, without any estimation range limitation.
### _TO estimation in delay dimension_
The delay-time domain pilot sequence in the delay dimension can be split into two identical halves, each with the length \(L-1\), see Fig. 1. Assuming that the time variation of the channel within the pilot duration in delay, i.e., \(2L-1\) samples, is negligible, the LTV channel over this duration can be considered as linear time-invariant (LTI). Hence, the periodic property of the pilot along the delay dimension is preserved and the TO can be estimated by searching for two identical halves in the received pilot signal. However, channel time variations along the time dimension are not negligible. As it was shown in [7], the identical parts of the pilot along the time dimension should be brought as close as possible to each other to exploit the periodicity in time for TO estimation. The extreme case for this is satisfied for PCP in the delay-time domain as all the pilot samples in a given delay bin \(m\in\{m_{\mathrm{p}}-L,\ldots,m_{\mathrm{p}}+L-1\}\) have the same amplitude and the linear phase of \(2\pi n_{\mathrm{p}}l/N\) for \(l=0,1,\ldots,N-1\). For
Fig. 1: PCP structure in both delay-Doppler and delay-time domains.
instance, when \(n_{\rm p}=N/2\), the adjacent pilot samples across each delay row have a phase difference of \(\pi\), see Fig. 1. The impulse pilot in [7] can only exploit the diversity in \(L\) delay bins for TO estimation. In contrast, PCP achieves an improved timing estimation performance, as it takes advantage of the full diversity provided by all the \(2L-1\) delay bins allocated to pilot transmission.
As shown in Fig. 2, our proposed TO estimator searches for a periodic signal with identical parts in both delay and time dimensions using a sliding window. To find the periodic sequence in the delay dimension, we define a window with two halves covering \(L-1\) samples each. The window slides across the delay dimension to find two pairs of data samples with the highest similarity. In fact, the window searches for the pilot sequences with two identical halves along the delay dimension. To cast this process into a mathematical formulation, using the received signal \(r[k]\), we define the timing metric
\[P_{\rm d}[m]=\sum_{i=0}^{N-1}\sum_{u=0}^{L-2}r^{*}[iM+m+u]r[iM+m+u+L], \tag{3}\]
that can be efficiently implemented in an iterative manner as
\[P_{\rm d}[m+1]=P_{\rm d}[m]-\sum_{i=0}^{N-1}r^{*}[iM+m]r[iM+m+L]\] \[\qquad+\sum_{i=0}^{N-1}r^{*}[iM+m+L-1]r[iM+m+2L-1], \tag{4}\]
where \(m\!=\!0,\ldots,M-1\). Consequently, \(\theta_{\rm d}\) is estimated by finding the peak of this timing metric as
\[\hat{\theta}_{\rm d}=\arg\max_{m}\{|P_{\rm d}[m]|\}-(m_{\rm p}-L)-L_{\rm CP}- \lfloor\mu_{h}\rfloor, \tag{5}\]
where \(\mu_{h}=\frac{\sum_{\ell=0}^{L-1}(\ell+1)\alpha_{\ell}^{2}}{\sum_{\ell=0}^{L-1}\alpha_{\ell}^{2}}\) is the mean delay that is imposed by the channel and \(\alpha_{\ell}\) for \(\ell=0,\ldots,L-1\) represents the channel power delay profile (PDP), [12]. The multipath effect of the channel leads to a bias in the TO estimate that can be corrected with knowledge of the channel's first-order moment [13]. The proposed estimator even works without this knowledge by increasing the CP length by \(\lfloor\mu_{h}\rfloor\) samples.
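Since the window in (3) shifts by one delay bin at a time, the metric can be updated per shift by subtracting the departing lag-\(L\) product of each time slot and adding the arriving one. A minimal numpy sanity check (random data, toy sizes) that this sliding update matches the direct sum in (3):

```python
import numpy as np

M, N, L = 16, 4, 5                        # toy delay/Doppler sizes and channel length
rng = np.random.default_rng(1)
r = rng.standard_normal(M * N + 2 * L) + 1j * rng.standard_normal(M * N + 2 * L)

def P_d_direct(m):
    # Eq. (3): lag-L correlation over L-1 samples in each of the N time slots.
    return sum(np.conj(r[i * M + m + u]) * r[i * M + m + u + L]
               for i in range(N) for u in range(L - 1))

# Sliding update: subtract the departing (u = 0) products and add the arriving
# (u = L-2, shifted by one bin) products, once per time slot.
P = P_d_direct(0)
for m in range(M - 1):
    P = (P
         - sum(np.conj(r[i * M + m]) * r[i * M + m + L] for i in range(N))
         + sum(np.conj(r[i * M + m + L - 1]) * r[i * M + m + 2 * L - 1]
               for i in range(N)))
    assert np.allclose(P, P_d_direct(m + 1))
```

The update costs \(2N\) complex multiplications per delay shift instead of the \(N(L-1)\) of the direct evaluation, which is the point of the iterative form.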
### _TO estimation in time dimension_
The peak of the correlation function \(P_{\rm d}[m]\) on the row \(m^{\prime}_{\rm p}-L=\hat{\theta}_{\rm d}+(m_{\rm p}-L)+L_{\rm CP}+\lfloor\mu _{h}\rfloor\) of the delay-time grid can provide an estimate of \(\theta_{\rm t}\), where \(m^{\prime}_{\rm p}=\hat{\theta}_{\rm d}+m_{\rm p}\). However, this estimate is not very accurate, and even a single-sample error in time cannot be afforded, as an estimation error of one sample in \(\theta_{\rm t}\) leads to an effective error of \(M\) samples in the final TO estimate. Hence, a highly accurate estimate of \(\theta_{\rm t}\) is required. Thus, to estimate \(\theta_{\rm t}\), we deploy a sliding window with the length \(2N-1\) that covers the delay bins \(m^{\prime}_{\rm p}-L,\ldots,m^{\prime}_{\rm p}+L-1\) and slides along time, see Fig. 3. This window calculates the correlation between every two adjacent samples in time for all \(2L-1\) delay bins in the pilot region. This process can be mathematically shown as
\[P_{\rm t}[l]=\sum_{i=m^{\prime}_{\rm p}-L}^{m^{\prime}_{\rm p}+L-1}\sum_{v=0}^ {N-2}r^{*}[(l+v)M+i]r[(l+v+1)M+i], \tag{6}\]
that can be iteratively implemented as
\[P_{\rm t}[l+1]=P_{\rm t}[l]-\sum_{i=m^{\prime}_{\rm p}-L}^{m^{ \prime}_{\rm p}+L-1}r^{*}[lM+i]r[(l+1)M+i]\] \[+\sum_{i=m^{\prime}_{\rm p}-L}^{m^{\prime}_{\rm p}+L-1}r^{*}[(l+ N-1)M+i]r[(l+N)M+i], \tag{7}\]
where \(l=0,1,...,N-1\). Hence, \(\theta_{\rm t}\) is estimated by finding the peak of \(P_{\rm t}\), i.e.,
\[\hat{\theta}_{\rm t}=\arg\max_{l}\{|P_{\rm t}[l]|\}. \tag{8}\]
Fig. 4 shows a snapshot of the timing metrics at the SNR of \(20\) dB for an OTFS system with \(M=128\) and \(N=32\) for both LTI and LTV channels.
After correcting the TO with \(\hat{\theta}=\hat{\theta}_{\rm d}+M\hat{\theta}_{\rm t}\), in the following section, we propose a novel two-stage CFO estimation technique. Our proposed technique finds a coarse estimate of the CFO by using the angle of the timing metric that we used for TO estimation, and then this estimate is refined by using our proposed ML estimation technique.
Fig. 3: Sliding window for estimation of \(\theta_{\rm t}\).
Fig. 2: Sliding window for estimation of \(\theta_{\rm d}\).
## IV CFO Estimation Using Maximum Likelihood
In the presence of a CFO, the adjacent pilot samples along the time dimension have a phase difference of \(2\pi(n_{\rm p}+\varepsilon)/N\). Considering this phase difference, and averaging the correlation angle over the delay bins allocated to the pilot at the timing instant \(\hat{\theta}_{\rm t}\) from (6),
\[\Upsilon_{\hat{\theta}_{\rm t}}=\!\frac{1}{2L-1}\sum_{i=m_{\rm p}^{ \prime}-L}^{m_{\rm p}^{\prime}+L-1}\angle\sum_{v=0}^{N-2}(r^{*}[(\hat{\theta}_{ \rm t}+v)M+i]\times\] \[r[(\hat{\theta}_{\rm t}+v+1)M+i]), \tag{9}\]
a coarse CFO estimate can be obtained as
\[\hat{\varepsilon}_{\rm c}=\frac{N}{2\pi}\Upsilon_{\hat{\theta}_{\rm t}}-n_{ \rm p}. \tag{10}\]
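The coarse estimator (9)–(10) can be exercised on a hypothetical, noiseless toy signal: the TO is assumed already corrected, and each pilot delay row carries an arbitrary amplitude together with the linear phase \(2\pi(n_{\rm p}+\varepsilon)l/N\) across the time slots. On such data the estimate is exact:

```python
import numpy as np

M, N, L = 8, 16, 3
n_p, eps = 2, 0.3                          # pilot Doppler bin and true normalised CFO
pilot_rows = range(2 * L - 1)              # the 2L-1 pilot delay rows (TO already corrected)
rng = np.random.default_rng(2)
a = rng.standard_normal(2 * L - 1) + 1j * rng.standard_normal(2 * L - 1)

# Noiseless received pilot: each delay row keeps a constant amplitude and the
# linear phase 2*pi*(n_p + eps)*v/N across the time slots v.
r = np.zeros(M * N, dtype=complex)
for j, i in enumerate(pilot_rows):
    for v in range(N):
        r[v * M + i] = a[j] * np.exp(1j * 2 * np.pi * (n_p + eps) * v / N)

# Eq. (9): average the angle of the lag-1 time correlation over the pilot rows.
Upsilon = np.mean([np.angle(sum(np.conj(r[v * M + i]) * r[(v + 1) * M + i]
                                for v in range(N - 1)))
                   for i in pilot_rows])
# Eq. (10): remove the known pilot phase slope n_p to leave the CFO.
eps_hat = N / (2 * np.pi) * Upsilon - n_p
assert abs(eps_hat - eps) < 1e-9
```

With noise and channel time variation the angle average is only approximate, which is exactly why the ML refinement stage below is needed.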
To refine this CFO estimate and improve the estimation accuracy, in the following, we develop an ML-based technique as a fine CFO estimation stage.
After correcting the TO, the received pilots in the delay bins \(m_{\rm p}\) to \(m_{\rm p}+L-1\) over all the bins along the time dimension are used for CFO estimation. For ease of explanation, in the rest of the paper, the received pilots refer to the received signal in the delay bins \(m_{\rm p}\) to \(m_{\rm p}+L-1\). After removing the CP from the received pilot, and stacking the resulting signals at different time slots, \(\mathbf{r}_{l,{\rm p}}=[r[L_{\rm CP}+lM+m_{\rm p}],\ldots,r[L_{\rm CP}+lM+m_{ \rm p}+L-1]]^{\rm T}\), into the vector \(\mathbf{r}^{\rm p}=[\mathbf{r}_{0,{\rm p}}^{\rm T},\mathbf{r}_{1,{\rm p}}^{\rm T}, \ldots,\mathbf{r}_{N-1,{\rm p}}^{\rm T}]^{\rm T}\in\mathbb{C}^{NL\times 1}\), using (2), \(\mathbf{r}^{\rm p}\) can be expressed as,
\[\mathbf{r}^{\rm p}=\boldsymbol{\Gamma}(\varepsilon)\mathbf{H}\mathbf{s}^{\rm p }+\boldsymbol{\eta}, \tag{11}\]
where, \(\boldsymbol{\Gamma}(\varepsilon)\!=\!{\rm blkdiag}[\boldsymbol{\Gamma}_{0}, \boldsymbol{\Gamma}_{1},\ldots,\boldsymbol{\Gamma}_{N-1}]\) with
\[\boldsymbol{\Gamma}_{l}={\rm diag}[e^{\frac{j2\pi\varepsilon(L_{\rm CP}+lM+m_{ \rm p})}{N_{\rm T}}},e^{\frac{j2\pi\varepsilon(L_{\rm CP}+lM+m_{\rm p}+1)}{N_{\rm T }}},\ldots,\]

\[e^{\frac{j2\pi\varepsilon(L_{\rm CP}+lM+m_{\rm p}+L-1)}{N_{\rm T}}}], \tag{12}\]
and \(\mathbf{H}={\rm blkdiag}[\mathbf{H}_{0},\mathbf{H}_{1},...,\mathbf{H}_{N-1}]\) with \(\mathbf{H}_{l}\) being the channel matrix, whose structure is shown in (13), on the top of the next page. In (11), \(\mathbf{s}^{\rm p}=[\mathbf{s}_{0,{\rm p}}^{\rm T},\ldots,\mathbf{s}_{N-1,{\rm p }}^{\rm T}]^{\rm T}\in\mathbb{C}^{NL\times 1}\), \(\mathbf{s}_{l,{\rm p}}\) is the transmitted delay-time pilot samples in the delay bins \(m_{\rm p}\) to \(m_{\rm p}+L-1\) and the time slot \(l\). \(\boldsymbol{\eta}\sim\mathcal{CN}(\mathbf{0},\sigma_{\eta}^{2}\mathbf{I}_{LN})\) is the AWGN vector of size \(LN\times 1\) that affects the pilot. By interchanging the convolution order in (11), the received pilot at the time slot \(l\) can be expressed as
\[\mathbf{r}_{l,{\rm p}}=\boldsymbol{\Gamma}_{l}(\varepsilon)\mathbf{A}_{l,{ \rm p}}\mathbf{h}_{l}+\boldsymbol{\eta}_{l}, \tag{14}\]
where \(\mathbf{A}_{l,{\rm p}}=[\mathbf{S}_{l,{\rm p}}^{0},...,\mathbf{S}_{l,{\rm p}}^ {L-1}]\), \(\mathbf{S}_{l,{\rm p}}^{\ell}={\rm diag}[(\mathbf{s}_{l,{\rm p}})_{\ell}]\), \(\mathbf{h}_{l}=[(\mathbf{h}_{l}^{0})^{\rm T},(\mathbf{h}_{l}^{1})^{\rm T },\ldots,(\mathbf{h}_{l}^{L-1})^{\rm T}]^{\rm T}\), and \(\mathbf{h}_{l}^{\ell}=[h[\ell,L_{\rm CP}+lM+m_{\rm p}],h[\ell,L_{\rm CP}+lM+m_{ \rm p}+1],\ldots,h[\ell,L_{\rm CP}+lM+m_{\rm p}+L-1]]^{\rm T}\) for \(\ell=0,1,\ldots,L-1\). \(\boldsymbol{\eta}_{l}\in\mathbb{C}^{L\times 1}\) is the AWGN at time slot \(l\) affecting the pilot.
BEM-based methods are used for approximating the time variation of the LTV channels. Depending on the basis functions, different BEM methods such as complex exponential based (CE-BEM) [14], polynomial based BEM [15], Karhunen-Loeve (KL) decomposition based BEM [16], are presented in the literature. Due to its simplicity and high accuracy, in this paper, we use the oversampled GCE-BEM [11] to approximate the channel time variation in the delay-time domain. The channel coefficient for the \(\ell^{\rm th}\) path at the time instant \(k\) can be expressed using GCE-BEM as
\[h[\ell,k]=\sum_{q=0}^{Q-1}B[k,q]c_{\ell}(q), \tag{15}\]
where \(B[k,q]=e^{\frac{j2\pi k(q-Q/2)}{KMN}}\), \(0\leq k\leq MN-1\), \(0\leq\ell\leq L-1\) and \(0\leq q\leq Q-1\). For an accurate approximation of the time variation of the channel, the oversampling factor and the number of basis functions are chosen as \(K\!\geq\!1\) and \(Q=\lceil 2K\nu_{\max}(MNT_{s})\rceil+1\), respectively, [11]. Using (15), \(\mathbf{h}_{l}\) can be expressed in terms of the BEM coefficients as
\[\mathbf{h}_{l}=(\mathbf{I}_{L}\otimes\mathbf{B}_{l}^{\rm p})\mathbf{c}, \tag{16}\]
where \(\mathbf{B}_{l}^{\rm p}=B[k,q]\ \forall k\in\{L_{\rm CP}+lM+m_{\rm p},L_{\rm CP}+lM+m_{ \rm p}+1,\ldots,L_{\rm CP}+lM+m_{\rm p}+L-1\}\), \(\mathbf{c}=[\mathbf{c}_{0}^{\rm T},\mathbf{c}_{1}^{\rm T},\ldots,\mathbf{c}_{L- 1}^{\rm T}]^{\rm T}\) and \(\mathbf{c}_{\ell}=[c_{\ell}(0),c_{\ell}(1),\ldots,c_{\ell}(Q-1)]^{\rm T}\). Inserting (16) into (14), \(\mathbf{r}^{\rm p}\) in (11) can be approximated using GCE-BEM as
\[\mathbf{r}^{\rm p}=\boldsymbol{\Gamma}(\varepsilon)\mathbf{G}\mathbf{c}+ \boldsymbol{\eta}, \tag{17}\]
where \(\mathbf{G}=[\mathbf{G}_{0}^{\rm T},\mathbf{G}_{1}^{\rm T},\ldots,\mathbf{G}_{N-1 }^{\rm T}]^{\rm T}\) and \(\mathbf{G}_{l}=\mathbf{A}_{l,{\rm p}}(\mathbf{I}_{L}\otimes\mathbf{B}_{l}^{\rm p})\) for \(l=0,1,\ldots,N-1\).
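To make the basis concrete, the sketch below builds a GCE-BEM basis matrix, taking \(B[k,q]=e^{j2\pi k(q-Q/2)/(KMN)}\) as the assumed form with toy sizes, and checks that a channel tap whose Doppler sits on the oversampled grid is represented exactly. Off-grid Dopplers are only approximated, which is the reason for oversampling with \(K>1\):

```python
import numpy as np

MN, K, Q = 64, 4, 5                        # block length, oversampling factor, no. of bases
k = np.arange(MN)

# GCE-BEM basis matrix: column q is the complex exponential at frequency (q - Q/2)/(K*MN).
B = np.exp(1j * 2 * np.pi * np.outer(k, np.arange(Q) - Q / 2) / (K * MN))

# A tap whose Doppler lies on the oversampled grid is an exact linear
# combination of the basis columns; least squares recovers it with zero residual.
q0 = 3
h = 0.7 * np.exp(1j * 2 * np.pi * k * (q0 - Q / 2) / (K * MN))
c, *_ = np.linalg.lstsq(B, h, rcond=None)
assert np.linalg.norm(B @ c - h) < 1e-9
```

The vector `c` plays the role of the per-tap coefficients \(\mathbf{c}_{\ell}\) in (15)–(16); in the estimator these coefficients are unknowns fitted jointly with the CFO.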
For a given pair \((\mathbf{c},\varepsilon)\), the vector \(\mathbf{r}^{\rm p}\) is assumed to have the Gaussian distribution with the mean \(\boldsymbol{\Gamma}(\varepsilon)\mathbf{G}\mathbf{c}\) and covariance matrix \(\sigma_{\eta}^{2}\mathbf{I}_{LN}\), [17]. Thus, the joint probability density function of \(\mathbf{r}^{\rm p}\), parameterized by \((\tilde{\mathbf{c}},\tilde{\varepsilon})\), is given by
\[f(\mathbf{r}^{\rm p};\tilde{\mathbf{c}},\tilde{\varepsilon})=\frac{1}{(\pi \sigma_{\eta}^{2})^{NL}}e^{\frac{-1}{\sigma_{\eta}^{2}}[\mathbf{r}^{\rm p}- \boldsymbol{\Gamma}(\tilde{\varepsilon})\mathbf{G}\tilde{\mathbf{c}}]^{\rm H}[ \mathbf{r}^{\rm p}-\boldsymbol{\Gamma}(\tilde{\varepsilon})\mathbf{G}\tilde{ \mathbf{c}}]}. \tag{18}\]
Thus, the ML estimates of the BEM coefficient vector and the CFO are obtained as
\[(\hat{\mathbf{c}},\hat{\varepsilon})=\arg\max_{\tilde{\mathbf{c}},\tilde{\varepsilon}}\{f( \mathbf{r}^{\rm p};\tilde{\mathbf{c}},\tilde{\varepsilon})\}. \tag{19}\]
Taking the logarithm and removing the constant terms, the estimation problem in (19) can be simplified as
\[(\hat{\mathbf{c}},\hat{\varepsilon})=\arg\max_{\tilde{\mathbf{c}},\tilde{ \varepsilon}}\{g(\tilde{\mathbf{c}},\tilde{\varepsilon})\}, \tag{20}\]
where \(g(\tilde{\mathbf{c}},\tilde{\varepsilon})=\frac{-1}{\sigma_{\eta}^{2}}[ \mathbf{r}^{\rm p}-\boldsymbol{\Gamma}(\tilde{\varepsilon})\mathbf{G}\tilde{ \mathbf{c}}]^{\rm H}[\mathbf{r}^{\rm p}-\boldsymbol{\Gamma}(\tilde{ \varepsilon})\mathbf{G}\tilde{\mathbf{c}}]\) is the joint cost function. The maximization problem in (20) can be solved in two steps. In step 1, we find the \(\tilde{\mathbf{c}}\) which maximizes the joint cost function parameterized by \(\tilde{\varepsilon}\). In step 2, the \(\tilde{\mathbf{c}}\) obtained in
Fig. 4: One snapshot of the timing metrics in LTI and LTV channels.
step 1 is used to find a new cost function for \(\tilde{\varepsilon}\) and we perform a grid search in the vicinity of the coarse CFO estimate to find the fine CFO estimate which maximizes the cost function for \(\tilde{\varepsilon}\). Thus, we fix \(\tilde{\varepsilon}\), and the \(\tilde{\mathbf{c}}\) that maximizes \(g(\tilde{\mathbf{c}},\tilde{\varepsilon})\) can be obtained as
\[\tilde{\mathbf{c}}(\tilde{\varepsilon})=(\mathbf{G}^{\mathrm{H}}\mathbf{G})^{- 1}\mathbf{G}^{\mathrm{H}}\mathbf{\Gamma}^{\mathrm{H}}(\tilde{\varepsilon}) \mathbf{r}^{\mathrm{p}}. \tag{21}\]
Substituting \(\tilde{\mathbf{c}}(\tilde{\varepsilon})\) into \(g(\tilde{\mathbf{c}},\tilde{\varepsilon})\), the cost function for CFO can be obtained as
\[g_{\mathrm{CFO}}(\tilde{\varepsilon})=(\mathbf{r}^{\mathrm{p}})^{\mathrm{H}} \mathbf{\Gamma}(\tilde{\varepsilon})\mathbf{\Lambda}\mathbf{\Gamma}^{\mathrm{ H}}(\tilde{\varepsilon})\mathbf{r}^{\mathrm{p}}, \tag{22}\]
where \(\mathbf{\Lambda}=\mathbf{G}(\mathbf{G}^{\mathrm{H}}\mathbf{G})^{-1}\mathbf{G} ^{\mathrm{H}}\). The fine estimate of CFO can then be obtained using a single-dimensional search centered around the coarse CFO estimate \(\hat{\varepsilon}_{\mathrm{c}}\) in (10) as
\[\hat{\varepsilon}=\arg\max_{\tilde{\varepsilon}}\{g_{\mathrm{CFO}}(\tilde{ \varepsilon})\}. \tag{23}\]
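The two-step separable search can be sketched end-to-end on synthetic noiseless data. Here \(\mathbf{G}\) is a random stand-in (hypothetical, not the structured matrix of the paper): substituting the least-squares coefficients (21) into the cost yields the CFO-only function (22), and the grid search (23) then recovers the true CFO:

```python
import numpy as np

NL, QL, N_T = 48, 6, 64                    # stacked pilot length, no. of coefficients, N_T
rng = np.random.default_rng(3)
G = rng.standard_normal((NL, QL)) + 1j * rng.standard_normal((NL, QL))
c = rng.standard_normal(QL) + 1j * rng.standard_normal(QL)

def Gamma(eps):
    # Diagonal CFO phase ramp, returned as a vector, cf. (12).
    return np.exp(1j * 2 * np.pi * eps * np.arange(NL) / N_T)

eps_true = 0.24
r = Gamma(eps_true) * (G @ c)              # noiseless instance of the model (17)

# Projector Lambda = G (G^H G)^{-1} G^H and the CFO cost function (22).
Lam = G @ np.linalg.inv(G.conj().T @ G) @ G.conj().T
def cost(eps):
    v = Gamma(eps).conj() * r              # Gamma^H(eps) r
    return (v.conj() @ (Lam @ v)).real

# Grid search (23); the grid stands in for the neighbourhood of the coarse estimate.
grid = np.arange(-0.5, 0.5, 0.01)
eps_hat = grid[np.argmax([cost(e) for e in grid])]
assert abs(eps_hat - eps_true) < 1e-6
```

At the true CFO the de-rotated vector \(\boldsymbol{\Gamma}^{\rm H}(\varepsilon)\mathbf{r}^{\rm p}\) lies entirely in the column space of \(\mathbf{G}\), so the projected energy in (22) is maximal there, which is what the grid search exploits.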
In addition, channel estimation can also be developed using the estimated CFO. The BEM coefficients can be estimated after obtaining the CFO estimate as \(\hat{\mathbf{c}}=(\mathbf{G}^{\mathrm{H}}\mathbf{G})^{-1}\mathbf{G}^{\mathrm{ H}}\mathbf{\Gamma}^{\mathrm{H}}(\hat{\varepsilon})\mathbf{r}^{\mathrm{p}}\), and finally the complete LTV channel in delay-time can be estimated using (15).
Regarding complexity, the periodic structure of the pilot in the delay-time domain can reduce the complexity of the maximum-likelihood estimator. In other words, the existing repetition of every \(L\) pilot sample leads to a reduction of the complexity of the estimator by a factor of \(L\). Additionally, \(\mathbf{\Lambda}\) is a symmetric matrix that provides the opportunity to only calculate half of the cost function, which reduces the complexity by a factor of 2. Thus, the cost function in (22) can be calculated in the form
\[g_{\mathrm{CFO}}(\tilde{\varepsilon})=-\beta[0]+2\mathrm{Re}\{\sum_{m=0}^{N-1} \beta[m]e^{\frac{j2\pi m\tilde{\varepsilon}}{N}}\}, \tag{24}\]
and
\[\beta[m]=\sum_{k=0}^{NL-1-mL}\mathbf{\Lambda}[(k+mL)_{NL},k]\mathbf{r}^{ \mathrm{p}*}[(k+mL)_{NL}]\mathbf{r}^{\mathrm{p}}[k], \tag{25}\]
where \(\mathbf{\Lambda}[i,j]\) and \(\mathbf{r}^{\mathrm{p}}[i]\) are the \([i,j]^{\mathrm{th}}\) and \(i^{\mathrm{th}}\) entries of the \(\mathbf{\Lambda}\) and \(\mathbf{r}^{\mathrm{p}}\), respectively. Calculating the cost function \(g_{\mathrm{CFO}}(\tilde{\varepsilon})\) using (22) requires \(\mathcal{O}(N^{2}L^{2})\) complex multiplications. However, it can be reduced to \(\mathcal{O}(\frac{N^{2}L}{2})\) by using (24) and (25).
## V Simulation Results
In this section, we numerically analyze the performance of both the proposed estimation techniques. The reduced-OTFS system with 16-QAM symbols in this analysis is formed by \(M=64,128,256\) and \(N=64,32,16\) delay and Doppler bins, respectively [18]. We use the extended vehicular A (EVA) channel model with length \(L=21\), [19], the bandwidth of \(8.25\) MHz and the delay-Doppler resolution \((\Delta\tau,\Delta\nu)=(121.21\,\mathrm{nsec},2.01\,\mathrm{kHz})\). The power of the PCP is set to 40 dB and the pilot is inserted at the center of the Doppler axis within delay bins, \(m_{\mathrm{p}}-L\) to \(m_{\mathrm{p}}+L-1\), [9]. To model the channel with GCE-BEM, we choose the oversampling factor \(K=4\) and the order of BEM basis functions \(Q=1,3,6,8\) for different normalized maximum Doppler spreads of \(\nu_{\mathrm{max}}=0,0.66,1.64,2.73\) kHz, respectively [11]. Throughout our simulations, the normalized TO and CFO values are randomly generated from a uniform distribution in the range \([-\frac{MN}{2},\frac{MN}{2})\) and \([-\frac{N-\nu_{\mathrm{max}}T}{2},\frac{N-\nu_{\mathrm{max}}T}{2})\), respectively, where \(T=MNT_{\mathrm{s}}\) is the total time duration of an OTFS block.
In Fig. 5, we compare the performance of our proposed TO estimation technique using PCP with the impulse pilot-assisted method proposed in [7]. We analyze the estimation error mean and variance as a function of signal-to-noise ratio (SNR), for the normalized Doppler spread of \(\nu_{\mathrm{max}}\!\!T\!\approx\!1.36\). The PDP of the channel leads to a small bias in the proposed estimation technique. Although we removed \(\lfloor\!\mu_{\mathrm{h}}\!\rfloor\) from the estimated TO in (5), the existing bias in Fig. 5 originates from the fractional part of the mean delay in the channel. This figure also shows that, for the same pilot power level, the proposed technique outperforms the technique in [7]. Fig. 6 shows the performance of the proposed TO estimation technique as a function of the normalized Doppler spread, for different combinations of \(M\) and \(N\), and a fixed \(MN=4096\). It can be observed that as the normalized Doppler spread increases, estimation accuracy also increases. This improvement originates from the diversity provided by the time-selectivity of the channel.

Fig. 5: TO estimation comparison for the impulse pilot and PCP where \(M\!\!=\!\!128\) and \(N\!\!=\!\!32\).

Fig. 6: TO estimation comparison vs. normalized maximum Doppler spread for PCP at SNR=20 dB.
To analyze the performance of our proposed CFO estimation technique, we assume perfect knowledge of TO. In Fig. 7, we evaluate the MSE performance of our proposed CFO estimation technique as a function of SNR and compare it with the CFO estimation technique proposed in [7]. It can be observed that the CFO estimation proposed in this paper gives an order of magnitude better estimation accuracy than the method using impulse pilot in [7]. In Fig. 8, we study the performance of our proposed CFO estimator as a function of Doppler spread. Furthermore, our results show that the MSE performance degrades as the Doppler spread or the number of delay bins increases. Higher \(M\) leads to a channel with more time variation within a column of the OTFS block which is the main reason for CFO estimation degradation.
## VI Conclusion
In this paper, we proposed TO and CFO estimation techniques for OTFS using PCP. We exploited the periodicity properties of PCP in the delay-time domain to develop correlation-based metrics for TO and CFO estimation. Furthermore, the TO estimation accuracy is improved using the diversity offered by the constant amplitude PCP in different delay bins. To improve the accuracy of the CFO estimate, we approximated the time variation of the channel using GCE-BEM and developed an ML estimator. Additionally, we developed low-complexity implementation strategies for the proposed techniques. Since we use the same practical pilot deployed for channel estimation, the proposed method does not impart additional overhead for synchronization. Hence, the proposed synchronization methods are highly apt for practical OTFS systems in high-mobility scenarios in the envisioned 6G systems.
# Uniqueness of real ring spectra up to higher homotopy

Jack Morgan Davies

arXiv:2305.02173v1, 3 May 2023
###### Abstract
We discuss a notion of _uniqueness up to \(n\)-homotopy_ and study examples from stable homotopy theory. In particular, we show that the \(q\)-expansion map from elliptic cohomology to topological \(K\)-theory is unique up to \(3\)-homotopy, away from the prime \(2\), and that upon taking \(p\)-completions and \(\mathbf{F}_{p}^{\times}\)-homotopy fixed points, this map is uniquely defined up to \((2p-3)\)-homotopy. Using this, we prove new relationships between Adams operations on connective and dualisable topological modular forms--other applications, including a construction of a connective model of Behrens' \(Q(N)\) spectra away from \(2N\), will be explored elsewhere. The technical tool facilitating this uniqueness is a variant of the Goerss-Hopkins obstruction theory for _real_ spectra, which applies to various elliptic cohomology and topological \(K\)-theories with a trivial complex conjugation action as well as some of their homotopy fixed points.
###### Contents
* 1 Definitions and notation
* 1.1 Uniqueness up to \(n\)-homotopy
* 1.2 \(\mathrm{K}(1)\)-local homology theories and comodules
* 2 Real spectra and their Goerss-Hopkins obstruction theory
* 2.1 Real \(\psi\)-modules and \(\bar{\psi}\)-modules
* 2.2 Real spectra and their Goerss-Hopkins spectral sequence (Th.E)
* 3 Uniqueness of maps between real \(\mathbf{E}_{\infty}\)-rings
* 3.1 Real elliptic cohomology and \(K\)-theories
* 3.2 Uniqueness of \(p\)-adic topological \(q\)-expansion map (Ths.A and B)
* 3.3 Higher functoriality of elliptic Adams operations (Ths.C and D)
## Introduction
Inside a \(1\)-category \(\mathcal{C}\), objects with a universal property are unique up to unique isomorphism, meaning the mapping set between any two such objects contains a single element. If \(\mathcal{C}=\mathrm{h}_{1}\mathcal{D}\) is the homotopy \(1\)-category of an \(\infty\)-category \(\mathcal{D}\), then uniqueness in \(\mathcal{C}\) up to unique isomorphism translates to uniqueness in \(\mathcal{D}\)_up to \(1\)-homotopy_. More generally, we say an object in \(\mathcal{D}\) with a property \(P\) is _unique up to \(n\)-homotopy_ if all mapping spaces between any two objects with this property \(P\) are \((n-1)\)-connected, meaning their homotopy groups \(\pi_{d}\) vanish in degrees \(0\leq d\leq n-1\). In theory, studying the uniqueness of objects up to \(n\)-homotopy is most elegant for \(n=\infty\), however, in practice, there are many objects of interest inside an \(\infty\)-category without an obvious or viable universal property.
In this article, we are interested in \(\mathbf{E}_{\infty}\)-rings of low chromatic height such as elliptic cohomology and topological \(K\)-theories, and our goal is to prove that certain morphisms between such \(\mathbf{E}_{\infty}\)-rings are unique up to \(n\)-homotopy for some finite \(n\). To state our first theorem, let us write \(\mathrm{tmf}\) for the \(\mathbf{E}_{\infty}\)-ring of _connective topological modular forms_ and \(\mathrm{KO}\llbracket q\rrbracket\) for _real Tate \(K\)-theory_; we recall these definitions in SS1.2.
**Theorem A**.: _A map of \(\mathbf{E}_{\infty}\)-rings \(\mathrm{tmf}\to\mathrm{KO}\llbracket q\rrbracket[\frac{1}{2}]\) is uniquely determined by its effect upon applying complex \(K\)-theory homology in degree \(0\), up to \(3\)-homotopy.1_
Footnote 1: As one will see in our proof of Th. A, the \(\mathrm{tmf}\) in Th. A can be replaced with \(\mathrm{TMF}\) (but not with \(\mathrm{Tmf}\) using our techniques; see Rmk.3.21), and the \(\mathrm{KO}\llbracket q\rrbracket[\frac{1}{2}]\) with \(\mathrm{KO}(\!(q)\!)[\frac{1}{2}]\) or \(\mathrm{KO}[\frac{1}{2}]\); see Th.3.12 for more \(p\)-adic generalisations.
For example, the topological \(q\)-expansion map \(\mathrm{tmf}\to\mathrm{KO}\llbracket q\rrbracket\), itself a map of \(\mathbf{E}_{\infty}\)-rings, is uniquely determined up to \(3\)-homotopy, away from the prime \(2\).2 This theorem generalises the usual statement that such morphisms between the \(\mathrm{K}(1)\)-localisations of these \(\mathbf{E}_{\infty}\)-rings are unique up to \(1\)-homotopy; see [16, Pr.A.6] for example. We also show that when completed at an odd prime \(p\), the above theorem admits generalisations to a large class of elliptic cohomology and topological \(K\)-theories, such as \(\mathrm{Tmf}\), \(\mathrm{TMF}_{0}(N)\), and \(\mathrm{KO}\); see Th.3.12. In another direction, if we complete at a prime \(p\) and replace \(\mathrm{tmf}_{p}\) and \(\mathrm{KO}\llbracket q\rrbracket_{p}\) with their \(G\)-homotopy fixed points for a nontrivial finite subgroup \(G\leq\mathbf{F}_{p}^{\times}\), where \(\mathbf{F}_{p}^{\times}\) acts on these \(\mathbf{E}_{\infty}\)-rings through stable \(p\)-adic Adams operations, we obtain a stronger uniqueness statement depending on the order of \(G\).
Footnote 2: It is crucial that we work away from the prime \(2\) in this article. At the end of the day, this is because the homotopy groups of \(\mathrm{KO}_{2}\) are not supported in degrees divisible by \(4\); see Rmk.3.14.
**Theorem B**.: _Let \(p\) be an odd prime and \(G\leq\mathbf{Z}_{p}^{\times}\) be a nontrivial finite subgroup with order \(g\). A map of \(\mathbf{E}_{\infty}\)-rings \(\mathrm{tmf}_{p}^{hG}\to\mathrm{K}\llbracket q\rrbracket_{p}^{hG}\) is uniquely determined by its effect on the zeroth \(p\)-adic complex \(K\)-theory homology group up to \((2g-1)\)-homotopy._
When \(G=\{\pm 1\}\), this is a \(p\)-complete version of Th.A. On the other extreme, if \(G=\mathbf{F}_{p}^{\times}\) is the maximal finite subgroup, we obtain a uniqueness statement up to \((2p-3)\)-homotopy. In this latter case, the \(\mathbf{E}_{\infty}\)-ring \(\mathrm{K}\llbracket q\rrbracket_{p}^{hG}\) is a Tate \(K\)-theory analogue of the Adams summand \(\mathrm{L}=\mathrm{K}^{h\mathbf{F}_{p}^{\times}}\) and \(\mathrm{tmf}_{p}^{h\mathbf{F}_{p}^{\times}}\) is the _height \(2\) Adams summand_ of [11, SS3.1].
Both Ths.A and B are proven with applications in mind. Indeed, we treat these theorems almost as an "\(n\)-homotopy extension property" for the localisation map \(\operatorname{Tmf}\to\operatorname{TMF}\), either away from \(2\) or completed at an odd prime. To be precise, we use the above statements to prove that there is a family of homotopies between the stable Adams operations \(\psi^{k}\) acting on \(\operatorname{tmf}\) from [1, Th.A]:
\[\psi^{k}\psi^{\ell}\simeq\psi^{k\ell}\colon\operatorname{tmf}[\frac{1}{k\ell} ]\to\operatorname{tmf}[\frac{1}{k\ell}]\]
Moreover, this family of \(1\)-homotopies of morphisms of \(\mathbf{E}_{\infty}\)-rings can be chosen to be associative and unital up to \(2\)-homotopy. This is succinctly captured by the following result.
**Theorem C**.: _There is a homotopy of morphisms of \(\mathbf{E}_{\infty}\)-rings \(\psi^{-1}\simeq\operatorname{id}\colon\operatorname{tmf}[\frac{1}{2}]\to \operatorname{tmf}[\frac{1}{2}]\) which is associative up to \(2\)-homotopy. More generally, fixing an integer \(n\), there is a functor_
\[\Psi^{n}\colon B(\prod_{p\mid n}\mathbf{N})\to\operatorname{h}_{2}\operatorname{ CAlg}\]
_sending the unique point of \(B(\prod_{p\mid n}\mathbf{N})\) to \(\operatorname{tmf}[\frac{1}{2n}]\) and \((p_{1}^{e_{1}},\dots,p_{r}^{e_{r}})\) to \(\psi^{\prod_{i}p_{i}^{e_{i}}}\), where \(\operatorname{h}_{2}\operatorname{CAlg}\) denotes the homotopy \(2\)-category of \(\mathbf{E}_{\infty}\)-rings._
Using the techniques of [1], where the operations \(\psi^{k}\) are constructed, it is not clear that \(\psi^{-1}\simeq\operatorname{id}\), let alone that the functor analogous to \(\Psi^{n}\) into the homotopy \(1\)-category of \(\mathbf{E}_{\infty}\)-rings even exists. If we "invert \(\Delta\)" and replace \(\operatorname{tmf}\) with the spectrum of periodic topological modular forms \(\operatorname{TMF}\), then the corresponding statements for Adams operations can be found in [1, Th.D]--in fact, these results for \(\operatorname{TMF}\) are crucial in our proof of Th.C. Other applications of Th.A include the construction of connective models of Behrens' \(Q(N)\) spectra as well as operations thereupon; this is current work-in-progress. Such a construction was the original inspiration for this article.
Of course, there is also a \(p\)-adic version of Th.C depending on a choice of \(G\leq\mathbf{Z}_{p}^{\times}\).
**Theorem D**.: _Fix an odd prime \(p\) and \(G\leq\mathbf{Z}_{p}^{\times}\) a nontrivial finite subgroup of order \(g\). Then there is a functor_
\[\Psi_{p,G}\colon B\mathbf{Z}_{p}^{\times}/G\to\operatorname{h}_{2g-2} \operatorname{CAlg}\]
_sending the unique point of \(B\mathbf{Z}_{p}^{\times}/G\) to \(\operatorname{tmf}_{p}^{\operatorname{h}G}\) and \(\overline{k}\in\mathbf{Z}_{p}^{\times}/G\) to \(\psi^{k}\)._
Setting \(G=\mathbf{F}_{p}^{\times}\) so that \(g=p-1\), this theorem and Th.B support the moral that stable homotopy theory, especially chromatic homotopy theory, becomes more sparse as the prime \(p\) grows.
To prove all of the above theorems we use a variant of Goerss-Hopkins obstruction theory at odd primes \(p\). Classically, one would use \(p\)-complete complex \(K\)-theory as the base cohomology theory to compute obstruction groups in the category of \(\operatorname{K}(1)\)-local \(\mathbf{E}_{\infty}\)-rings. If we did this we would only obtain uniqueness in Th.A up to \(1\)-homotopy, which is well-known, and the rest of our applications would falter. To amend this, we use a variant of Goerss-Hopkins obstruction theory now based on \(\operatorname{K}^{hG}\) for a nontrivial finite subgroup \(G\leq\mathbf{Z}_{p}^{\times}\) following an idea of Behrens at the prime \(p=2\). The following is the analogue of the results of the appendix [1, SS12.A]; see Df.2.4 for the definition of a _real_\(\mathbf{E}_{\infty}\)-ring.
**Theorem E**.: _Fix an odd prime \(p\), a nontrivial finite subgroup \(G\leq{\bf Z}_{p}^{\times}\), and two \(p\)-complete real \({\bf E}_{\infty}\)-rings \(A,B\) where \(B\) is \({\rm K}(1)\)-local. Let us also write \({\rm K}^{\!hG}={\rm L}\). Then for any map of \({\bf E}_{\infty}\)-rings \(f\colon A\to B\) there exists a second quadrant spectral sequence converging to \(\pi_{t-s}({\rm CAlg}(A,B),f)\) with \(E_{2}\)-page given as_
\[E_{2}^{0,0}={\rm Alg}_{{\rm L}_{*}}^{\bar{\theta}}({\rm L}_{*}^{\wedge}A,{\rm L }_{*}^{\wedge}B)\]
\[E_{2}^{s,t}=H_{\bar{\theta}}^{s}({\rm L}_{*}^{\wedge}A/{\rm L}_{*},\Omega^{t}{\rm L}_{*}^{\wedge}B)\qquad t\geq 1\]
_and zero elsewhere._
The fact that both TMF and \({\rm KO}\llbracket q\rrbracket\) are real, essentially because the Adams operation \(\psi^{-1}\) acts trivially on these spectra, allows us to use the above theorem combined with a vanishing statement to prove a \(p\)-adic version of Th. A. More generally, we use an algebro-geometric criterion to show that many of our favourite elliptic cohomology theories and topological \(K\)-theories are real; see Th.3.5.
### Outline
This article has three sections: background, construction of our obstruction theory with real \(\theta\)-algebras and real spectra, and the applications of this obstruction theory to Th.A and Th.C. In more detail:
* In SS 1.1, the concept of _uniqueness up to \(n\)-homotopy_, in the sense we will use it, is formally defined and discussed.
* In SS1.2, we recall definitions from \({\rm K}(1)\)-local stable homotopy theory such as \({\rm K}(1)\)-local homology theories, \(\theta\)-algebras, and their reduced variants.
* In SS2.1, we define and discuss real \(\theta\)-algebras and real reduced \(\theta\)-algebras, which we call real \(\bar{\theta}\)-algebras. In particular, we show that the categories of graded real \(\theta\)-algebras and graded real \(\bar{\theta}\)-algebras are equivalent and compare their André-Quillen cohomology.
* In SS2.2, the notion of _real_ spectra is introduced, which allows us to prove Th.E.
* In SS3.1, we discuss an algebro-geometric condition which implies that certain elliptic cohomology or topological \(K\)-theories are real, meaning \(\{\pm 1\}\)-real. These techniques are then also used to show various other \(G\)-homotopy fixed point spectra are real.
* In SS3.2, we prove a vanishing statement for the André-Quillen cohomology of \(\bar{\theta}\)-algebras. A generalisation of Th.A is then proven as a corollary of this vanishing statement together with Th.E, along with Th.B.
* In SS 3.3, the \({\bf E}_{\infty}\)-ring Tmf is decomposed into the periodic spectra TMF and \({\rm KO}\llbracket q\rrbracket\) glued together along \({\rm KO}(\!(q)\!)\). The 1- and 2-homotopies between Adams operations on TMF and \({\rm KO}\llbracket q\rrbracket\) (using their modular description) are then glued together on \({\rm KO}(\!(q)\!)\) using Th.B which yields Th.D. We then obtain Th.C by further gluing in some rational information.
### Notation
Throughout this article, \(p\) will be an odd3 prime, \(G\) will denote a nontrivial finite subgroup of \(\mathbf{Z}_{p}^{\times}\), which we often think of as a subgroup of the group of lifts \(\mathbf{F}_{p}^{\times}\leq\mathbf{Z}_{p}^{\times}\), and \(\Gamma=\mathbf{Z}_{p}^{\times}/G\) will denote the quotient group. We will also use \(g\) to denote the order of \(G\). Let K be the \(\mathbf{E}_{\infty}\)-ring of _(\(p\)-complete) periodic complex \(K\)-theory_, and write \(\mathrm{L}=\mathrm{K}^{hG}\) for the \(G\)-homotopy fixed points of K with respect to the stable \(p\)-adic Adams operations. In particular, when \(G=\{\pm 1\}\) we have \(\mathrm{L}=\mathrm{KO}\), and when \(G=\mathbf{F}_{p}^{\times}\) this is the classical periodic Adams summand. Both K and L have \(\mathbf{E}_{\infty}\)-actions of \(\mathbf{Z}_{p}^{\times}\) and \(\mathbf{Z}_{p}^{\times}/G=\Gamma\), respectively, given by these Adams operations \(\psi^{k}\); see [10, SS5.5] for example.
Footnote 3: Many of the statements of this article hold for \(p=2\), however, the vanishing statement Pr.3.13 at \(p=2\) has to be altered to reflect the \(2\)-local homotopy groups of KO; see [11, 12]. This alteration then ruins the prime \(2\) versions of all of our results from the introduction. For this reason, as well as the facts that many arguments are streamlined at odd primes and Behrens has made many of our statements here at \(p=2\) (see [11, SS12.7 & §12.A]), we restrict this whole article to odd primes \(p\).
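For orientation, the homotopy of L can be computed directly from that of K (a standard computation, recorded here as it is used implicitly throughout): the stable \(p\)-adic Adams operation \(\psi^{k}\) acts on the Bott class \(\beta\in\pi_{2}\mathrm{K}\) by \(\psi^{k}(\beta)=k\beta\), and the \(G\)-homotopy fixed point spectral sequence collapses as \(g\) is invertible, giving

\[\pi_{2n}\mathrm{L}\simeq(\pi_{2n}\mathrm{K})^{G}\simeq\begin{cases}\mathbf{Z}_{p}\cdot\beta^{n}&g\mid n\\ 0&\text{otherwise}\end{cases}\qquad\pi_{2n+1}\mathrm{L}=0.\]

In particular, \(\mathrm{L}_{*}\simeq\mathbf{Z}_{p}[\beta^{\pm g}]\) is concentrated in degrees divisible by \(2g\); for \(G=\{\pm 1\}\) this recovers \(\pi_{*}\mathrm{KO}_{p}\simeq\mathbf{Z}_{p}[\beta^{\pm 2}]\) at odd primes.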
### Acknowledgements
Thank you to Venkata Sai Narayana Bavisetty for stimulating discussions about \(Q(N)\) which eventually led to this project. I'd like to also thank Tommy Lundemo and Lennart Meier for their stimulating conversations and feedback.
## 1 Definitions and notation
We will freely use the language of \(\infty\)-categories to ground our homotopy theory.
### Uniqueness up to \(n\)-homotopy
Our study of uniqueness in homotopy theory will be quite superficial and specialised, to keep the discussion elementary.
**Definition 1.1**.: Let \(f\colon X\to Y\) be a map of spaces and \(y\) be a point in \(Y\). Write \(M_{y}=X\times_{Y}\{y\}\) for the _moduli space of lifts of \(y\)_. For a lift \(x\) of \(y\), so an element \(x\in M_{y}\), we say \(x\) is _unique up to \(1\)-homotopy_ if \(M_{y}\) is connected, i.e., \(\pi_{0}M_{y}\) is the singleton set \(\{x\}\). For \(2\leq n\leq\infty\), we say that a unique lift \(x\) of \(y\) up to \(1\)-homotopy is _unique up to \(n\)-homotopy_ if the group \(\pi_{k}(M_{y},x)\) vanishes for all \(0<k<n\).4
Footnote 4: If \(x\) is unique up to \(\infty\)-homotopy, one often says that \(x\) is _unique up to contractible choice_.
The above definition is motivated by a simple observation: say we have a unique lift \(x\) of \(y\) up to \(1\)-homotopy, and let \(x^{\prime}\) be another lift of \(y\). This uniqueness of \(x\) yields a homotopy \(h\colon\Delta^{1}\to M_{y}\) from \(x\) to \(x^{\prime}\). To ask whether this homotopy \(h\) recognising the uniqueness of \(x\) is itself unique up to \(2\)-homotopy, take another homotopy \(h^{\prime}\colon\Delta^{1}\to M_{y}\) from \(x\) to \(x^{\prime}\). The composite path \(h^{\prime}\star h^{-1}\colon\Delta^{1}\to M_{y}\) determines a loop \(\gamma\) in \(M_{y}\) based at \(x\). If \(x\) is unique up to \(2\)-homotopy, then \(\pi_{1}(M_{y},x)\) vanishes and this loop \(\gamma\) can be contracted to the constant loop at \(x\) by a \(2\)-homotopy \(H\colon\Delta^{2}\to M_{y}\). This \(2\)-homotopy \(H\) also takes the form of a \(2\)-homotopy between \(h\) and \(h^{\prime}\), witnessing that homotopies recognising the uniqueness of \(x\) are homotopic. The story goes on, with uniqueness up to \(n\)-homotopy meaning that potentially different \((n-1)\)-homotopies can be glued together to form a map \(\partial\Delta^{n+1}\to M_{y}\), which can be lifted to a map from \(\Delta^{n+1}\) if \(\pi_{n}(M_{y},x)\) vanishes.
The reader is encouraged to consider examples such as the uniqueness of maps of pointed spaces \(S^{n}\to S^{n}\) recognising multiplication by \(2\) on \(\pi_{n}\) for various \(n\). For a more complicated example also relevant to this article, it is shown in [13, Th.A] that the sheaf \(\mathcal{O}^{\mathrm{top}}\) defining Tmf is uniquely determined up to \(1\)-homotopy by the fact that it defines natural elliptic cohomology theories on the small affine étale site of \(\mathcal{M}_{\mathrm{Ell}}\).5
Footnote 5: We believe that \(\mathcal{O}^{\mathrm{top}}[\frac{1}{2}]\) is uniquely determined by this property up to \(3\)-homotopy by combining the ideas of [13] together with Th.A.
The following is a simple criterion for calculating homotopical uniqueness.
**Proposition 1.2**.: _Let \(f\colon X\to Y\) be a map of spaces and \(y\) be an element of \(Y\) such that the connected component of \(y\) is contractible, for example, if \(Y\) itself is discrete. Then a unique lift \(x\) of \(y\) up to \(1\)-homotopy is unique up to \(n\)-homotopy if and only if \(\pi_{k}(X,x)\) vanishes for \(0<k<n\)._
Proof.: Writing \(Y_{y}\) for the connected component of \(y\) and \(X_{y}=X\times_{Y}Y_{y}\), notice there is a natural equivalence of spaces \(M_{y}\simeq X_{y}\times_{Y_{y}}\{y\}\) from the pasting lemma for Cartesian diagrams. This leads to the fibration \(M_{y}\to X_{y}\to Y_{y}\) over the point \(y\), now viewed as an element in \(Y_{y}\). The desired result now follows from the long exact sequence of the above fibration.
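Pr.1.2 makes the spheres example suggested above concrete (a sketch, using only standard identifications and computations of homotopy groups of spheres): take \(X=\operatorname{Map}_{*}(S^{n},S^{n})\), let \(Y=\operatorname{Hom}(\pi_{n}S^{n},\pi_{n}S^{n})\simeq\mathbf{Z}\) be discrete, let \(f\) send a pointed map to its effect on \(\pi_{n}\), and set \(y=2\). A lift \(x\) of \(y\) exists and is unique up to \(1\)-homotopy, as \(\pi_{0}\operatorname{Map}_{*}(S^{n},S^{n})\simeq\mathbf{Z}\) via degree, and Pr.1.2 identifies the relevant obstruction groups as

\[\pi_{k}(\operatorname{Map}_{*}(S^{n},S^{n}),x)\simeq\pi_{k}\Omega^{n}S^{n}\simeq\pi_{n+k}S^{n}.\]

For \(n\geq 3\) the lift is therefore unique up to \(1\)-homotopy but not up to \(2\)-homotopy, since \(\pi_{n+1}S^{n}\simeq\mathbf{Z}/2\), while for \(n=1\) the space \(\operatorname{Map}_{*}(S^{1},S^{1})\simeq\Omega S^{1}\simeq\mathbf{Z}\) has contractible components, so the lift is unique up to \(\infty\)-homotopy.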
### \(\mathrm{K}(1)\)-local homology theories and comodules
Fix an odd prime \(p\) and implicitly work \(\mathrm{K}(1)\)-locally. In particular, our algebra will be completed at a prime \(p\), hence tensor products will be \(p\)-completed over \(\mathbf{Z}_{p}\). We will also write stacks with a subscript \(\mathbf{Z}_{p}\) which indicates the Cartesian product with the formal stack \(\operatorname{Spf}\mathbf{Z}_{p}\) (as opposed to \(\operatorname{Spec}\mathbf{Z}_{p}\)).
**Definition 1.3**.: Let \(E\) and \(X\) be spectra. Define the _(\(\mathrm{K}(1)\)-local) \(E\)-homology of \(X\)_ by the formula
\[E_{*}^{\wedge}X=\pi_{*}L(E\otimes X)\]
where \(L\) is localisation at the first Morava \(K\)-theory \(\mathrm{K}(1)\).
This is not a homology theory in the classical sense, as infinite direct sums are not preserved, however, it is the sensible replacement when working with the \(\mathrm{K}(1)\)-local stable homotopy category; see [10, SS9]. For spectra \(X\) such that \(\mathrm{K}_{*}^{\wedge}X\) is torsion-free, it follows that \(\mathrm{K}_{*}^{\wedge}X\), which _a priori_ is only \(L\)-complete, is actually \(p\)-complete. As we will often make this torsion-free assumption, we will sometimes call \(\mathrm{K}_{*}^{\wedge}X\) the \(p\)-adic \(\mathrm{K}\)-homology of \(X\).
The natural home for our \(\mathrm{K}\)- and \(\mathrm{L}\)-homology groups are the following algebraic categories.
**Definition 1.4**.: A _graded Morava module_ or _graded \(\psi\)-module_ is a \(\mathbf{Z}\)-graded \(\mathrm{K}_{*}\)-module \(M_{*}\) such that each \(M_{n}\) is \(L\)-complete together with a continuous \(\mathbf{Z}_{p}^{\times}\)-action, which we denote as Adams operations \(\psi^{k}\) for \(k\in\mathbf{Z}_{p}^{\times}\), such that for all \(a\in\mathrm{K}_{*}\) and \(m\in M_{*}\) we have \(\psi^{k}(am)=\psi^{k}(a)\psi^{k}(m)\) for all \(k\in\mathbf{Z}_{p}^{\times}\). There is a related ungraded notion over \(\mathbf{Z}_{p}\), where the action of \(\mathbf{Z}_{p}^{\times}\) on \(\mathbf{Z}_{p}\) is trivial. Denote the categories of \(\psi\)-modules (resp. graded \(\psi\)-modules) by \(\mathrm{Mod}_{\mathbf{Z}_{p}}^{\psi}\) (resp. \(\mathrm{Mod}_{\mathrm{K}_{*}}^{\psi}\)). For a fixed nontrivial finite subgroup \(G\leq\mathbf{Z}_{p}^{\times}\), a _reduced graded Morava module_ or _graded \(\bar{\psi}\)-module_ is a \(\mathbf{Z}\)-graded \(L\)-complete \(\mathrm{L}_{*}\)-module \(M_{*}\) with a continuous \(\Gamma\)-action with the same compatibility as \(\psi\)-modules, with \(\mathrm{L}\) replacing \(\mathrm{K}\). There is also an obvious ungraded notion of \(\bar{\psi}\)-module over \(\mathbf{Z}_{p}\). Write \(\mathrm{Mod}_{\mathbf{Z}_{p}}^{\bar{\psi}}\) and \(\mathrm{Mod}_{\mathrm{L}_{*}}^{\bar{\psi}}\) for these categories of \(\bar{\psi}\)-modules.
The appropriate multiplicative objects in these categories are \(\theta\)-algebras.
**Definition 1.5**.: A _\(\theta\)-algebra_ is a \(\mathbf{Z}_{p}\)-algebra \(A_{0}\) equipped with the structure of a \(\psi\)-module and a \(\mathbf{Z}_{p}^{\times}\)-equivariant homogeneous operator \(\theta\) with \(\theta(1)=0\) such that the following two conditions hold for all \(x,y\in A_{0}\):
1. \(\theta(xy)=x^{p}\theta(y)+\theta(x)y^{p}+p\theta(x)\theta(y)\)
2. \(\theta(x+y)=\theta(x)+\theta(y)-\frac{1}{p}\sum_{i=1}^{p-1}{p\choose i}x^{i}y^ {p-i}\)
There is also a graded version, whose definition we safely leave to the reader, as all graded \(\theta\)-algebras we will see in this article are of the form \(\mathrm{K}_{*}\mathop{\otimes}A_{0}\). Write \(\mathrm{Alg}_{\mathrm{K}_{*}}^{\theta}\) and \(\mathrm{Alg}_{\mathbf{Z}_{p}}^{\theta}\) for the categories of graded and ungraded \(\theta\)-algebras. A _reduced graded \(\theta\)-algebra_ is a \(\mathbf{Z}\)-graded \(\mathrm{L}_{*}\)-algebra \(A_{*}\) equipped with the structure of a \(\bar{\psi}\)-module and a \(\Gamma\)-equivariant operation \(\theta\) satisfying the same axioms as above. Also define ungraded \(\bar{\theta}\)-algebras over \(\mathbf{Z}_{p}\), and write \(\mathrm{Alg}_{\mathrm{L}_{*}}^{\bar{\theta}}\) and \(\mathrm{Alg}_{\mathbf{Z}_{p}}^{\bar{\theta}}\) for the categories of graded and ungraded \(\bar{\theta}\)-algebras.
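For orientation, the axioms of Df.1.5 admit a standard repackaging, recorded here as a reminder (it is not needed logically in what follows): if one defines \(\psi(x)=x^{p}+p\theta(x)\), then conditions 1 and 2 are precisely the statements that \(\psi\) is multiplicative and additive, so that \(\psi\) is a ring endomorphism lifting the Frobenius:

\[\psi(xy)=\psi(x)\psi(y),\qquad\psi(x+y)=\psi(x)+\psi(y),\qquad\psi(x)\equiv x^{p}\bmod p.\]

In the topological examples of Th.1.6 below, this \(\psi\) is the operation induced by the \(p\)-th Adams operation \(\psi^{p}\).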
The following crucial theorem connects these algebraic notions to homotopy theory.
**Theorem 1.6**.: _Let \(X\) be a spectrum and \(A\) be an \(\mathbf{E}_{\infty}\)-ring. Then \(\mathrm{K}_{*}^{\wedge}X\) is a \(\psi\)-module, \(\mathrm{K}_{*}^{\wedge}A\) is a \(\theta\)-algebra, \(\mathrm{L}_{*}^{\wedge}X\) is a \(\bar{\psi}\)-module, and \(\mathrm{L}_{*}^{\wedge}A\) is a \(\bar{\theta}\)-algebra._
Proof.: We refer the reader to [11, 12, 13] for the first two statements, and the second two follow as \(\mathrm{L}_{*}^{\wedge}Y\) can be computed for any spectrum \(Y\) as the homotopy groups of \((\mathrm{K}\mathop{\otimes}Y)^{hG}\) by \(\mathrm{K}(1)\)-local ambidexterity and a collapsing \(G\)-homotopy fixed point spectral sequence, as \(|G|\) divides \(|\mathbf{F}_{p}^{\times}|=p-1\), which is invertible. Indeed, as discussed in [12, 13] (where a reference to [18, Th.1.5] is given), ambidexterity states that the natural map
\[Z_{hG}\xrightarrow{\simeq}Z^{hG}\]
is a \(\mathrm{K}(1)\)-equivalence for all spectra \(Z\) with \(G\)-action, leading to the chain of equivalences
\[\mathrm{L}\otimes Y\xleftarrow{\simeq}\mathrm{K}_{hG}\otimes Y\xleftarrow{\simeq}(\mathrm{K}\otimes Y)_{hG}\xrightarrow{\simeq}(\mathrm{K}\otimes Y)^{hG}\]
using the fact that the smash product is cocontinuous in each variable.
The forgetful functor from \(\theta\)-algebras to \(\psi\)-modules has a left adjoint written as \(\mathbf{P}_{\theta}\). We will also write \(\mathbf{P}_{\theta,*}\) for the graded analogue, as well as \(\mathbf{P}_{\bar{\theta}}\) and \(\mathbf{P}_{\bar{\theta},*}\) in the reduced cases. One can now use such "free functors" to construct simplicial resolutions and define André-Quillen cohomology groups; see [10, SS4] or [10, SS2.4]. For us, given a graded \(\theta\)-algebra \(A_{*}\) and a \(\psi\)-module \(M_{*}\), we define \(H^{s}_{\theta}(A_{*}/\operatorname{K}_{*},M_{*})\) as the cohomology of the cochain complex associated with the cosimplicial object
\[\operatorname{Alg}^{\theta}_{\operatorname{K}_{*}/A_{*}}(P_{\bullet},A_{*} \ltimes M_{*})\]
where \(P_{\bullet}\to A_{*}\) is a simplicial resolution of \(A_{*}\) by free graded \(\theta\)-algebras on \(\psi\)-modules which are projective as \(\operatorname{K}_{*}\)-modules. The same goes for \(\bar{\theta}\)-algebras.
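Concretely, in cohomological degree zero this recovers derivations: a map of \(\theta\)-algebras over \(A_{*}\) into the square-zero extension \(A_{*}\ltimes M_{*}\) is the same datum as a \(\theta\)-derivation, so (a standard identification, recorded for orientation)

\[H_{\theta}^{0}(A_{*}/\operatorname{K}_{*},M_{*})\simeq\operatorname{Der}_{\operatorname{K}_{*}}^{\theta}(A_{*},M_{*}),\]

and similarly in the reduced case for \(\bar{\theta}\)-algebras.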
## 2 Real spectra and their Goerss-Hopkins obstruction theory
The notion of a _real spectrum_ will be useful for us, as this strong condition allows us to reduce many statements about L-homology to statements about K-homology. When \(G=\{\pm 1\}\leq\mathbf{Z}_{p}^{\times}\), we will see in SS3.1 that many of our favourite elliptic cohomology and topological \(K\)-theories, such as KO, are real with respect to \(G=\{\pm 1\}\).
Fix an odd prime \(p\), a nontrivial finite subgroup \(G\leq\mathbf{Z}_{p}^{\times}\), and implicitly localise everything at \(\operatorname{K}(1)\).
### Real \(\psi\)-modules and \(\bar{\psi}\)-modules
The following notions of _real_\(\theta\)-algebras and spectra are blatantly adapted from Behrens' definition [13, 14].
**Definition 2.1**.: Given a graded \(\psi\)-module \(M_{*}\), we say that \(M_{*}\) is _real_ if the natural map of \(\psi\)-modules \(\operatorname{K}_{*}\otimes M_{0}\to M_{*}\) is an isomorphism and if the \(G\)-action on \(M_{0}\) is trivial. Given a graded \(\bar{\psi}\)-module \(N_{*}\), we say that \(N_{*}\) is _real_ if the natural map \(\operatorname{L}_{*}\otimes N_{0}\to N_{*}\) is an isomorphism. If we have to clarify which \(G\) the adjective real refers to, we will write _real with respect to \(G\)_.
The justification for this definition is given by the following theorem. First, note that there is a functor
\[F\colon\operatorname{Mod}^{\bar{\psi}}_{\operatorname{L}_{*}}\to\operatorname {Mod}^{\psi}_{\operatorname{K}_{*}}\qquad M_{*}\mapsto\operatorname{K}_{*} \otimes_{\operatorname{L}_{*}}M_{*}\]
where the \(\mathbf{Z}_{p}^{\times}\)-action on \(FM_{*}\) can be described by noting that \(\operatorname{L}\to\operatorname{K}\) is \(\mathbf{Z}_{p}^{\times}\)-equivariant, where the action on \(\operatorname{L}\) is restricted from its natural \(\Gamma\)-action.
**Theorem 2.2**.: _Fix an odd prime \(p\) and a nontrivial finite subgroup \(G\leq\mathbf{Z}_{p}^{\times}\). The functors_
\[F\colon\operatorname{Mod}^{\bar{\psi}}_{\operatorname{L}_{*}}\to\operatorname {Mod}^{\psi}_{\operatorname{K}_{*}}\qquad M_{*}\mapsto\operatorname{K}_{*} \otimes_{\operatorname{L}_{*}}M_{*}=LM_{*}\]
\[(-)^{G}\colon\operatorname{Mod}^{\psi}_{\operatorname{K}_{*}}\to\operatorname{Mod}^{\bar{\psi}}_{\operatorname{L}_{*}}\qquad N_{*}\mapsto N_{*}^{G}\]
_restrict to an equivalence between real \(\bar{\psi}\)-modules and real \(\psi\)-modules. Moreover, both of these functors lift to \(\operatorname{Alg}^{\bar{\theta}}_{\operatorname{L}_{*}}\) and \(\operatorname{Alg}^{\theta}_{\operatorname{K}_{*}}\) through the evident forgetful functors, and hence also restrict to equivalences between real \(\bar{\theta}\)-algebras and real \(\theta\)-algebras._
Proof.: The natural isomorphisms
\[(\operatorname{K}_{*}\otimes_{\operatorname{L}_{*}}M_{*})^{G}\simeq( \operatorname{K}_{*}\otimes M_{0})^{G}\simeq\operatorname{L}_{*}\otimes M_{0} \simeq M_{*}\] \[\operatorname{K}_{*}\otimes_{\operatorname{L}_{*}}N_{*}^{G}\simeq \operatorname{K}_{*}\otimes_{\operatorname{L}_{*}}(\operatorname{K}_{*} \otimes N_{0})^{G}\simeq\operatorname{K}_{*}\otimes_{\operatorname{L}_{*}}( \operatorname{L}_{*}\otimes N_{0})\simeq N_{*}\]
show these functors are mutual inverses when restricted to real modules. Above we have used that \(G\)-fixed points commute with \(p\)-completed tensor products, which follows as the additive norm \(X_{G}\to X^{G}\) from coinvariants to invariants is an isomorphism, the order of \(G\) being invertible by the divisibility relation \(|G|\mid|\mathbf{F}_{p}^{\times}|=p-1\).
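The averaging argument invoked here can be made explicit: since \(g=|G|\) is invertible in any \(p\)-complete module \(M\), the operator

\[e=\frac{1}{g}\sum_{\gamma\in G}\gamma\colon M\to M\]

is an idempotent with image \(M^{G}\), splitting the inclusion \(M^{G}\hookrightarrow M\). Consequently \(M_{G}\xrightarrow{\simeq}M^{G}\) and \(H^{s}(G,M)=0\) for \(s>0\), which is also the reason the homotopy fixed point spectral sequence (2.9) below collapses.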
A key observation, used many times below, is that for a real spectrum \(X\), either of \(\operatorname{L}_{*}^{\wedge}X\) and \(\operatorname{K}_{*}^{\wedge}X\) determines the other; this is an immediate consequence of Pr.2.5 and the above theorem. Another corollary of the above is that we can calculate the André-Quillen cohomology of a real \(\bar{\theta}\)-algebra \(A_{*}\) as the André-Quillen cohomology of \(FA_{*}\) as a \(\theta\)-algebra.
**Corollary 2.3**.: _Given a graded real \(\bar{\theta}\)-algebra \(A_{*}\) and a graded real \(\bar{\psi}\)-module \(M_{*}\), then the functor \(F\) induces an isomorphism of abelian groups_
\[H_{\bar{\theta}}^{s}(A_{*}/\operatorname{L}_{*},M_{*})\simeq H_{\theta}^{s}(FA_{*}/\operatorname{K}_{*},FM_{*}),\qquad s\geq 0.\]
Proof.: The assumption that \(A_{*}\) is real yields \(A_{*}=\operatorname{L}_{*}\otimes A_{0}\), so let us choose a free resolution \(P_{\bullet}\) of \(A_{*}\) as a graded \(\bar{\theta}\)-algebra of the form \(P_{\bullet}=\operatorname{L}_{*}\otimes P_{\bullet}^{\prime}\) where \(P_{\bullet}^{\prime}\to A_{0}\) is a free simplicial \(\bar{\theta}\)-algebra resolution. Let us choose \(P_{\bullet}^{\prime}\) a little more carefully. As explained by Behrens for the prime \(p=2\) in [1, SS12.A] (on the page above formula (A.2)), we can choose topological generators \(\{x_{\alpha}\}\) of \(A_{0}\) as a \(\bar{\theta}\)-algebra, and we may further take each \(x_{\alpha}\) to have open isotropy inside \(\Gamma=\mathbf{Z}_{p}^{\times}/G\), choosing other generators if necessary. Hence, for each \(\alpha\) there is a \(j\geq 1\) such that the generator \(x_{\alpha}\) defines a map of \(\bar{\theta}\)-algebras
\[x_{\alpha}\colon\mathbf{P}_{\bar{\theta}}\left(\mathbf{Z}_{p}[(\mathbf{Z}/p^{ j}\mathbf{Z})^{\times}/G]\right)\to A_{0}\]
such that the map of \(\bar{\theta}\)-algebras
\[P_{0}^{\prime}=\mathbf{P}_{\bar{\theta}}\left(\bigoplus_{\alpha}\mathbf{Z}_{p }[(\mathbf{Z}/p^{j}\mathbf{Z})^{\times}/G]\right)\to A_{0}\]
defined by all of the generators, where the direct sum above is implicitly \(L\)-completed, is surjective. By restricting the \(\Gamma\)-action on the \(\bar{\theta}\)-algebra above to a \(\mathbf{Z}_{p}^{\times}\)-action along the quotient of groups \(\mathbf{Z}_{p}^{\times}\to\Gamma\), we see that \(P_{0}^{\prime}\) is also the start of a free simplicial resolution for \(A_{0}\) as a \(\theta\)-algebra. Indeed, both free functors \(\mathbf{P}_{\theta}\) and \(\mathbf{P}_{\bar{\theta}}\) simply adjoin a free operator \(\theta\) commuting with the \(\mathbf{Z}_{p}^{\times}\)- or \(\Gamma\)-action and satisfying the axioms of Df.1.5. These operators agree if \(G\) acts trivially, as it does on \(P_{0}^{\prime}\). Similarly, analysing the kernel of \(P_{0}^{\prime}\to A_{0}\) leads to \(P_{1}^{\prime}\to P_{0}^{\prime}\), and inductively we obtain the whole free simplicial resolution \(P_{\bullet}^{\prime}\) of \(A_{0}\), importantly, as **both** a \(\theta\)-algebra and a \(\bar{\theta}\)-algebra. In particular, as \(F\) does not alter degree zero, we see that \(FP_{\bullet}\to FA_{*}\) is a free simplicial resolution of \(FA_{*}\) as a graded \(\theta\)-algebra. This leads to the isomorphism of cosimplicial objects
\[\operatorname{Alg}_{\operatorname{L}_{*}/A_{*}}^{\bar{\theta}}(P_{\bullet},A_ {*}\ltimes M_{*})\xrightarrow{\simeq}\operatorname{Alg}_{\operatorname{K}_{*} /FA_{*}}^{\theta}(FP_{\bullet},FA_{*}\ltimes FM_{*})\]
which upon taking associated cochain complexes and cohomology yields the desired isomorphism.
### Real spectra and their Goerss-Hopkins spectral sequence (Th.E)
Our interest in real \(\psi\)-modules and \(\bar{\psi}\)-modules comes from our interest in the class of _real_ spectra.
**Definition 2.4**.: For a given nontrivial finite subgroup \(G\leq{\bf Z}_{p}^{\times}\), we say a spectrum \(X\) is _real_ if \({\rm K}^{\wedge}_{*}X\) is torsion-free and concentrated in even degrees, and the induced \(G\)-action in degree zero is trivial.
The case when \(G=\{\pm 1\}\) is our motivation for this notation--a real spectrum \(X\) with respect to \(\{\pm 1\}\) is one such that the inclusion of fixed points \({\rm KO}^{\wedge}_{0}X\to{\rm K}^{\wedge}_{0}X\) given by the complex conjugation action is an isomorphism, as we will shortly see.
This definition only depends on the \({\rm K}(1)\)-localisation of \(X\) for a fixed (odd) prime \(p\)--we will not need a potential integral definition in this article. A real spectrum should be thought of as "a nice spectrum with trivial \(\psi^{\gamma}\)-action for \(\gamma\in G\)". For example, using Ex.3.2 and Th.3.5 we will see that \({\rm tmf}\), \({\rm TMF}_{0}(N)\), \({\rm KO}\), and \({\rm KO}\llbracket q\rrbracket\) are all real. Some non-examples include \({\rm K}\), as the classical calculation
\[{\rm K}^{\wedge}_{0}{\rm K}\simeq{\rm Cont}({\bf Z}_{p}^{\times},{\bf Z}_{p})\]
shows that the \(\{\pm 1\}\)-action on the left, translated to the conjugation action on the right, is nontrivial. The eager reader can similarly show that \({\rm TMF}_{1}(N)\) is **not** real for \(N\geq 3\) and \(G=\{\pm 1\}\) using Pr.3.7 and showing the \(\{\pm 1\}\)-action there is nontrivial. Moreover, for \(G\neq\{\pm 1\}\), one can also check that \({\rm Tmf}\) itself is not real, again using Pr.3.7.
It follows rather easily that the \({\rm L}\)-homology of a real spectrum is real.
**Proposition 2.5**.: _Let \(X\) be a spectrum such that \({\rm K}^{\wedge}_{*}X\) is torsion-free and concentrated in even degrees. Then \(X\) is real if and only if the map_
\[{\rm L}^{\wedge}_{0}X\to{\rm K}^{\wedge}_{0}X \tag{2.6}\]
_induced by the inclusion of fixed points \({\rm L}\to{\rm K}\), is an isomorphism. If \(X\) is a real spectrum, then the natural maps_
\[{\rm K}_{*}\otimes{\rm K}^{\wedge}_{0}X\xrightarrow{\sim}{\rm K}^{\wedge}_{*} X\qquad\qquad{\rm L}_{*}\otimes{\rm L}^{\wedge}_{0}X\xrightarrow{\sim}{\rm L}^{ \wedge}_{*}X \tag{2.7}\]
_are isomorphisms. In particular, the \(\psi\)-modules \({\rm K}^{\wedge}_{*}X\) and the \(\bar{\psi}\)-module \({\rm L}^{\wedge}_{*}X\) are both real in their respective categories._
The condition that (2.6) is an isomorphism was Behrens' original definition of a real spectrum (what he called _Bott periodic_, for \(G=\{\pm 1\}\) and at \(p=2\)), so the above can further be seen as a reconciliation between our two definitions.
Proof.: Let us first note that the fact that \({\rm K}^{\wedge}_{*}X\) is torsion-free and concentrated in even degrees implies that the natural map
\[{\rm K}_{*}\otimes{\rm K}^{\wedge}_{0}X\xrightarrow{\sim}{\rm K}^{\wedge}_{* }X \tag{2.8}\]
is an isomorphism of graded \(\theta\)-modules. As another remark, notice that for any spectrum \(Y\), the natural map \({\rm L}\otimes Y\to{\rm K}\otimes Y\) is the inclusion of \(\{\pm 1\}\)-homotopy fixed points by
K(1)-local ambidexterity; we also used this fact in the proof of Th.1.6. In particular, we can now consider the following \(G\)-homotopy fixed point spectral sequence (HFPSS):
\[E_{2}^{s,t}\simeq H^{s}(G,\mathrm{K}_{t}^{\wedge}\!X)\simeq H^{s}(G,\mathrm{K}_{ *}\otimes\mathrm{K}_{0}^{\wedge}\!X)\Longrightarrow\mathrm{L}_{t-s}^{\wedge}X \tag{2.9}\]
The above spectral sequence collapses immediately, as the groups \(\mathrm{K}_{*}^{\wedge}\!X\) are \(p\)-complete for an odd prime \(p\), hence the order of \(G\) is invertible, and the edge map
\[\mathrm{L}_{*}^{\wedge}\!X\xrightarrow{\cong}(\mathrm{K}_{*}^{\wedge}\!X)^{G} \tag{2.10}\]
is an isomorphism. In degree zero the above map is the desired inclusion of fixed points map (2.6), and it is now clear that this map is an isomorphism if and only if the \(G\)-action on \(\mathrm{K}_{0}^{\wedge}\!X\) is trivial.
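The collapse is an instance of the usual transfer argument: for a finite group \(G\) and a \(G\)-module \(M\), the composite of restriction to the trivial subgroup followed by corestriction is multiplication by \(|G|\), so

\[|G|\cdot H^{s}(G,M)=0\quad\text{for }s>0,\qquad\text{hence}\quad H^{s}(G,M)=0\ \text{whenever }|G|\text{ acts invertibly on }M.\]

In our case \(M=\mathrm{K}_{t}^{\wedge}X\) is \(p\)-complete and \(|G|\) divides \(p-1\), as \(G\) is a finite subgroup of \(\mathbf{Z}_{p}^{\times}\), so \(|G|\) is a \(p\)-adic unit.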
The calculation of the \(E_{\infty}\)-page of the HFPSS (2.9) can be summarised by stating that there is a natural isomorphism of graded reduced \(\theta\)-modules
\[\mathrm{L}_{*}\otimes\mathrm{K}_{0}^{\wedge}\!X\xrightarrow{\cong}\mathrm{L}_ {*}^{\wedge}\!X\]
which when combined with (2.6) yields the second isomorphism of (2.7); the first is (2.8).
Essentially as a corollary of our study of "reality" above, we can prove Th.E.
Proof of Th.E.: By [1, Th.2.4.14], our hypotheses produce a spectral sequence converging to the homotopy groups of the space \(\mathrm{CAlg}(A,B)\) based at \(f\), so we are left to compute the \(E_{2}\)-page. Using Pr.2.5, we see that \(\mathrm{K}_{*}^{\wedge}\!X\) and \(\mathrm{L}_{*}^{\wedge}\!X\) are both real if \(X\) is real, so this applies to both \(X=A,B\). For \(s=t=0\), the given \(E_{2}\)-page takes the form
\[E_{2}^{0,0}=\mathrm{Alg}_{\mathrm{K}_{*}}^{\theta}(\mathrm{K}_{*}^{\wedge}\!A,\mathrm{K}_{*}^{\wedge}\!B)\simeq\mathrm{Alg}_{\mathrm{L}_{*}}^{\bar{\theta} }(\mathrm{L}_{*}^{\wedge}\!A,\mathrm{L}_{*}^{\wedge}\!B)\]
using Th.2.2, and for \(t\geq 1\)
\[E_{2}^{s,t}=H_{\theta}^{s}(\mathrm{K}_{*}^{\wedge}\!A/\,\mathrm{K}_{*}, \mathrm{K}_{*}^{\wedge}\!B)\simeq H_{\bar{\theta}}^{s}(\mathrm{L}_{*}^{\wedge }\!A/\mathrm{L}_{*},\mathrm{L}_{*}^{\wedge}\!B)\]
using Cor.2.3, as desired.
## 3 Uniqueness of maps between real \(\mathbf{E}_{\infty}\)-rings
In this section, we apply the obstruction theory of Th.E to morphisms of certain \(\mathbf{E}_{\infty}\)-rings.
### Real elliptic cohomology and \(K\)-theories
The condition that the \(G\)-action on \(\mathrm{K}_{0}^{\wedge}\!X\) be trivial can naturally be tricky to determine, so it can be difficult to show that various spectra are real. If \(X\) is an elliptic cohomology theory or a form of \(K\)-theory and \(G=\{\pm 1\}\), then we have some hope. Write \(\mathcal{M}_{\mathrm{Ell}}\) for the _compactified moduli stack of elliptic curves_, so the stack which classifies generalised elliptic curves; see [1].
**Definition 3.1**.: Let \(f\colon\mathfrak{X}\to\mathcal{M}_{\mathrm{Ell}}\) be a morphism of formal Deligne-Mumford stacks determined by a generalised elliptic curve \(C\) over \(\mathfrak{X}\). Consider \(f\) as an object in the slice \(2\)-category \(\mathrm{fDM}_{/\mathcal{M}_{\mathrm{Ell}}}\). There is an involution \(\tau\) of \(f\) in this \(2\)-category defined by the pair \((\mathrm{id}_{\mathfrak{X}},[-1])\) where \([-1]\colon C\to C\) is the inversion isomorphism on the generalised elliptic curve \(C\). We say that \(f\) is _inverse fixed_ if there exists a trivialisation of \(\tau\), so a \(2\)-morphism in \(\mathrm{fDM}_{/\mathcal{M}_{\mathrm{Ell}}}\) between \(\tau\) and the identity.
The definition above can look tautological at first, so the reader is advised to read the following example and non-example in parallel.
_Example 3.2_.: Consider \(\mathfrak{X}=\mathcal{M}_{\mathrm{Ell},\mathbf{Z}_{p}}=\mathcal{M}_{\mathrm{Ell}}\times\mathrm{Spf}\,\mathbf{Z}_{p}\). In this case, there is a \(2\)-morphism in \(\mathrm{fDM}_{/\mathcal{M}_{\mathrm{Ell}}}\) given by \([-1]\colon(\mathrm{id},\mathrm{id})\to(\mathrm{id},[-1])=\tau\). In the case of \(\mathfrak{X}=\mathcal{M}_{0}^{\mathrm{sm}}(n)_{\mathbf{Z}_{p}}\) things are a little less tautological. Here, \(\mathcal{M}_{0}^{\mathrm{sm}}(n)\) is the moduli stack of smooth elliptic curves equipped with a cyclic subgroup of order \(n\) for a positive integer \(n\) not divisible by \(p\); see [13, SS5]. In this case, we can also define a \(2\)-morphism \([-1]\colon(\mathrm{id},\mathrm{id})\to(\mathrm{id},[-1])\), as the \([-1]\)-action on an elliptic curve fixes any choice of cyclic subgroup. The sections of the sheaf \(\mathcal{O}^{\mathrm{top}}\) on these stacks are \(\mathrm{Tmf}_{p}\) and \(\mathrm{TMF}_{0}(n)_{p}\), respectively. The same \(2\)-morphism also works for \(\mathfrak{X}=\mathcal{M}_{\mathrm{Tate},\mathbf{Z}_{p}}\), where \(\mathcal{M}_{\mathrm{Tate}}\) is the Tate moduli stack, and also for the smooth Tate moduli stack \(\mathcal{M}_{\mathrm{Tate}}^{\mathrm{sm}}\), and the moduli stack \(\mathcal{M}_{\mathbf{G}_{m}}\) of forms of \(\mathbf{G}_{m}\); see [12, SS3.5] or [11, Dfs.1.1-2]. The \(\mathbf{E}_{\infty}\)-rings associated to these stacks are \(\mathrm{KO}[\![q]\!]_{p}\), \(\mathrm{KO}(\!(q)\!)_{p}\), and \(\mathrm{KO}_{p}\), respectively.
_Non-example 3.3_.: The moduli stack \(\mathfrak{X}=\mathcal{M}_{1}^{\mathrm{sm}}(n)_{\mathbf{Z}_{p}}\) is **not** inverse fixed for \(n\geq 3\). Here \(\mathcal{M}_{1}^{\mathrm{sm}}(n)\) is the moduli stack of smooth elliptic curves with a chosen point of order \(n\) for some positive integer \(n\) not divisible by \(p\). To see this is not inverse fixed, one can use the converse to Th.3.5 and explicitly check that \(\mathrm{TMF}_{1}(n)_{p}\) is not real using Pr.3.7. Also, the \(2\)-morphisms of Ex.3.2 do not work here as \([-1]\) acts nontrivially on the chosen point of order \(n\), so long as \(n\geq 3\). The same goes for the \(2\)-fold cover of \(\mathcal{M}_{\mathrm{Tate},\mathbf{Z}_{p}}\) and \(\mathcal{M}_{\mathbf{G}_{m},\mathbf{Z}_{p}}\), so \(\mathrm{Spf}\,\mathbf{Z}_{p}[\![q]\!]\) and \(\mathrm{Spf}\,\mathbf{Z}_{p}\), respectively, as their associated \(\mathbf{E}_{\infty}\)-rings are \(\mathrm{K}[\![q]\!]_{p}\) and \(\mathrm{K}_{p}\) which are not real.
Let us clarify the origins of these natural families of \(\mathbf{E}_{\infty}\)-rings above.
**Definition 3.4**.: Fix an odd prime \(p\). By [10, SS12] or [12], there is a sheaf \(\mathcal{O}^{\mathrm{top}}\) of \(\mathbf{E}_{\infty}\)-rings on the small etale site of \(\mathcal{M}_{\mathrm{Ell},\mathbf{Z}_{p}}\) which is uniquely defined (up to \(1\)-homotopy) by the fact that its affine sections come equipped with the natural structure of an elliptic cohomology theory; see [11]. In particular, given a formal Deligne-Mumford stack \(\mathfrak{X}\) with an etale map \(f\colon\mathfrak{X}\to\mathcal{M}_{\mathrm{Ell},\mathbf{Z}_{p}}\) there is an associated \(\mathbf{E}_{\infty}\)-ring \(\mathcal{O}^{\mathrm{top}}(\mathfrak{X})\). Furthermore, as discussed in [10, SSA] and [12, SS5], respectively (or in [11, Prs.1.11 & 1.18]), there are similarly defined sheaves of \(\mathbf{E}_{\infty}\)-rings \(\mathcal{O}^{\mathrm{mult}}\) and \(\mathcal{O}^{\mathrm{Tate}}\) on the moduli stacks \(\mathcal{M}_{\mathbf{G}_{m},\mathbf{Z}_{p}}\) and \(\mathcal{M}_{\mathrm{Tate},\mathbf{Z}_{p}}\).
Examples of sections of the above sheaves are the familiar (and also real, by Th. 3.5) \(\mathbf{E}_{\infty}\)-rings
\[\mathcal{O}^{\mathrm{top}}(\mathcal{M}_{\mathrm{Ell},\mathbf{Z}_{p}})= \mathrm{Tmf}_{p}\qquad\mathcal{O}^{\mathrm{top}}(\mathcal{M}_{0}^{\mathrm{sm}}( n)_{\mathbf{Z}_{p}})=\mathrm{TMF}_{0}(n)_{p}\]
\[\mathcal{O}^{\mathrm{Tate}}(\mathcal{M}_{\mathrm{Tate},\mathbf{Z}_{p}})= \mathrm{KO}[\![q]\!]_{p}\qquad\mathcal{O}^{\mathrm{Tate}}(\mathcal{M}_{ \mathrm{Tate},\mathbf{Z}_{p}}^{\mathrm{sm}})=\mathrm{KO}(\!(q)\!)_{p}\qquad \mathcal{O}^{\mathrm{mult}}(\mathcal{M}_{\mathbf{G}_{m},\mathbf{Z}_{p}})= \mathrm{KO}_{p}\]
where \(\mathcal{M}_{0}^{\mathrm{sm}}(n)\) denotes the moduli of smooth elliptic curves equipped with a cyclic subgroup of order \(n\).
**Theorem 3.5**.: _Suppose \(\mathfrak{X}\) is a formal Deligne-Mumford stack and that we are given an **affine** morphism \(f\colon\mathfrak{X}\to\mathcal{M}_{\operatorname{Ell},\mathbf{Z}_{p}}\). Make one of the following four additional assumptions:_
1. _Suppose that_ \(\mathfrak{X}=\operatorname{Spf}R\) _is affine and_ \(f\) _is flat, leading us to a Landweber exact homotopy commutative elliptic cohomology theory_ \(E\) _associated with_ \(f\)_._
2. _Suppose that_ \(f\) _is etale, which yields an_ \(\mathbf{E}_{\infty}\)_-ring_ \(E=\mathcal{O}^{\operatorname{top}}(\mathfrak{X})\)_._
3. _Suppose that_ \(f\) _admits an etale factorisation through_ \(\mathcal{M}_{\operatorname{Tate}}\)_, which yields an_ \(\mathbf{E}_{\infty}\)_-ring_ \(\mathcal{O}^{\operatorname{Tate}}(\mathfrak{X})\)_._
4. _Suppose that_ \(f\) _admits an etale factorisation through_ \(\mathcal{M}_{\mathbf{G}_{m}}\)_, which yields an_ \(\mathbf{E}_{\infty}\)_-ring_ \(\mathcal{O}^{\operatorname{mult}}(\mathfrak{X})\)_._
_If \(f\) is inverse fixed, then \(E\) is real with respect to the group \(\{\pm 1\}\)._
The above theorem is key for us to apply Th.E, which in turn comes down to a \(K\)-theory calculation. See [10, SS12.1 & 12.5] for the following definitions.
**Definition 3.6**.: Write \(\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}\) for the _moduli stack of generalised elliptic curves with ordinary reduction over \(p\)-complete rings_ and \(\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}(p^{\infty})\) for the continuous \(\mathbf{Z}_{p}^{\times}\)-torsor over \(\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}\) defined by pairs \((C,\alpha)\) where \(C\) is a generalised elliptic curve with ordinary reduction over a \(p\)-complete ring \(R\) and \(\alpha\colon\widehat{C}\simeq\widehat{\mathbf{G}}_{m}\) is an isomorphism of formal groups over \(R\). From the lemma following the proof of [10, 11], the stack \(\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}(p^{\infty})\) is affine with global sections _Katz' ring of \(p\)-adic modular forms \(V\)_.
**Proposition 3.7**.: _Let \(f\colon\mathfrak{X}\to\mathcal{M}_{\operatorname{Ell},\mathbf{Z}_{p}}\) and \(E\) be as in hypotheses 1-4 of Th. 3.5 (here we do **not** assume \(f\) to be inverse fixed). Let us write \(f^{\operatorname{ord}}\colon\mathfrak{X}^{\operatorname{ord}}\to\mathcal{M}_{ \operatorname{Ell}}^{\operatorname{ord}}\) for the base-change of \(f\) over \(\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}\to\mathcal{M}_{ \operatorname{Ell},\mathbf{Z}_{p}}\). Then the zeroth \(\operatorname{K}(1)\)-local \(\operatorname{K}\)-theory of \(E\) fits into the following Cartesian diagram of formal stacks:_
\[\begin{CD}\operatorname{Spf}\operatorname{K}_{0}^{\wedge}\!E@>{}>{}>\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}(p^{\infty})\\ @V{}V{}V@V{}V{}V\\ \mathfrak{X}^{\operatorname{ord}}@>{f^{\operatorname{ord}}}>{}>\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}\end{CD}\tag{3.8}\]
_Moreover, the natural map of graded \(\theta\)-algebras \(\operatorname{K}_{*}\otimes\!\operatorname{K}_{0}^{\wedge}\!E\to\operatorname{ K}_{*}^{\wedge}\!E\) is an isomorphism._
The above is a generalisation of [10, Pr.12.6.1] from affine schemes to general \(\mathfrak{X}\).
Proof.: Under hypothesis 1, this is precisely [10, Pr.12.6.1]. Continuing now under hypotheses 2-4, let \(\mathfrak{X}^{\operatorname{ord}}(p)\) be the formal Deligne-Mumford stack defined by the Cartesian diagram
\[\begin{CD}\mathfrak{X}^{\operatorname{ord}}(p)@>{}>{}>\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}(p)\\ @V{}V{}V@V{}V{}V\\ \mathfrak{X}^{\operatorname{ord}}@>{f^{\operatorname{ord}}}>{}>\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}\end{CD}\]
where \(\mathcal{M}^{\mathrm{ord}}_{\mathrm{Ell}}(p)\) is the \(\mathbf{F}_{p}^{\times}\)-etale torsor (see [10, 12]) over \(\mathcal{M}^{\mathrm{ord}}_{\mathrm{Ell}}\) representing ordinary generalised elliptic curves \(C\) with a choice of isomorphism \(\mu_{p}\simeq\widehat{C}[p]\) of finite group schemes. By [10, 11], the stack \(\mathcal{M}^{\mathrm{ord}}_{\mathrm{Ell}}(p)\) is affine, and \(f^{\mathrm{ord}}\) is affine by base-change, hence \(\mathfrak{X}^{\mathrm{ord}}(p)\) is an affine formal Deligne-Mumford stack \(\mathrm{Spf}\,W\). By evaluating either \(\mathcal{O}^{\mathrm{top}}_{\mathrm{K}(1)}\) or \(\mathcal{O}^{\mathrm{Tate}}_{\mathrm{K}(1)}\) on \(\mathfrak{X}^{\mathrm{ord}}(p)\), we obtain an \(\mathbf{E}_{\infty}\)-ring \(E(p)\) and, by descent for these etale sheaves, an \(\mathbf{F}_{p}^{\times}\)-Galois extension of \(\mathbf{E}_{\infty}\)-rings \(E\to E(p)\). In particular, this map induces an equivalence \(E\simeq E(p)^{h\mathbf{F}_{p}^{\times}}\). By [10, 11], this proposition holds for \(E(p)\), so in particular we now have the commutative diagram of formal Deligne-Mumford stacks
where we now know the outer rectangle is Cartesian. To see that the right square is Cartesian and finish the proof, it suffices to show that the map \(\gamma\) is the \(\mathbf{F}_{p}^{\times}\)-quotient of the map \(\gamma(p)\). We know that \(\pi\) is an \(\mathbf{F}_{p}^{\times}\)-quotient as it was constructed as the base-change of an \(\mathbf{F}_{p}^{\times}\)-torsor, so by naturality of the vertical maps, it suffices to show that \(\pi^{\prime}\) is also an \(\mathbf{F}_{p}^{\times}\)-quotient, in other words, that \(\mathrm{K}_{0}^{\wedge}E\to\mathrm{K}_{0}^{\wedge}E(p)\) is the inclusion of \(\mathbf{F}_{p}^{\times}\)-fixed points. To see this, consider the following chain of natural isomorphisms:
\[\mathrm{K}_{*}^{\wedge}E\xrightarrow{\simeq}\pi_{*}(\mathrm{K}\otimes E(p)^{h\mathbf{F}_{p}^{\times}})\stackrel{{\simeq}}{{\leftarrow}}\pi_{*}((\mathrm{K}\otimes E(p))^{h\mathbf{F}_{p}^{\times}})\xrightarrow{\simeq}(\mathrm{K}_{*}^{\wedge}E(p))^{\mathbf{F}_{p}^{\times}}\]
The first follows as \(E\to E(p)\) is an \(\mathbf{F}_{p}^{\times}\)-Galois extension, the second as \(\mathrm{K}(1)\)-local ambidexterity naturally identifies homotopy fixed points with homotopy orbits and hence homotopy fixed points commute with tensoring in one variable, and the third as the \(E_{2}\)-page of the \(\mathbf{F}_{p}^{\times}\)-HFPSS for \(\mathrm{K}\otimes E(p)\) is concentrated in the zeroth row as \(|\mathbf{F}_{p}^{\times}|=p-1\) is invertible.
For the "moreover" statement, the above shows that vertical morphisms in the diagram of \(\theta\)-algebras
are inclusions of \(\mathbf{F}_{p}^{\times}\)-fixed points. Our desired conclusion follows from the fact that the lower-horizontal morphism above is an isomorphism, a consequence of the fact that \(E(p)\) is Landweber exact.
Proof of Th.3.5.: To see that \(E\) is real, notice that Pr.3.7 shows that \(\mathrm{K}_{*}^{\wedge}E\) is concentrated in even degrees. Moreover, the facts that \(\mathrm{K}_{*}\) is torsion-free and that \(\mathrm{K}_{0}^{\wedge}E\) is flat over \(\mathrm{Spf}\,\mathbf{Z}_{p}\) (and hence also flat over \(\mathrm{Spec}\,\mathbf{Z}\)) as all of the maps in (3.8) are flat and \(\mathcal{M}^{\mathrm{ord}}_{\mathrm{Ell}}\to\mathrm{Spf}\,\mathbf{Z}_{p}\) is flat (as \(\mathrm{Spf}\,V\to\mathcal{M}^{\mathrm{ord}}_{\mathrm{Ell}}\) is a \(\mathbf{Z}_{p}^{\times}\)-torsor and \(V\) is flat over \(\mathbf{Z}_{p}\)) show that \(\mathrm{K}_{*}^{\wedge}E\) is torsion-free. We are left to show that \(\mathrm{K}_{0}^{\wedge}E\) has trivial \(\{\pm 1\}\)-action--this part of the proof is inspired by the proof
of [DFHH14, Lm.12.7.14] and [Mei22, Ex.6.12]. Pr.3.7 shows that \(\operatorname{Spf}\operatorname{K}_{0}^{\wedge}\!E\) represents pairs \((h,\alpha)\) where \(h\) is a map of stacks \(h\colon T\to\mathfrak{X}^{\operatorname{ord}}\) from our test stack \(T\) and \(\alpha\) is an isomorphism of formal groups \(\alpha\colon\widehat{\mathbf{G}}_{m}\simeq\widehat{C}_{h}\) over \(T\), where \(C_{h}\) is the elliptic curve defined by the composition \(T\to\mathfrak{X}^{\operatorname{ord}}\to\mathcal{M}_{\operatorname{Ell}}^{\operatorname{ord}}\). The \(\mathbf{Z}_{p}^{\times}\)-action on these \(T\)-points is given by sending the pair \((h,\alpha)\) to \((h,\alpha\circ[k])\) for each \(p\)-adic unit \(k\), where \([k]\colon\widehat{\mathbf{G}}_{m}\to\widehat{\mathbf{G}}_{m}\) is the \(k\)-fold multiplication map on \(\widehat{\mathbf{G}}_{m}\). This defines our \(\mathbf{Z}_{p}^{\times}\)-action on \(\operatorname{Spf}\operatorname{K}_{0}^{\wedge}\!E\) in the slice 2-category of stacks over \(\mathcal{M}_{\operatorname{Ell}}\). To finish the proof, it suffices to show that the restricted \(\{\pm 1\}\)-action on \(\operatorname{K}_{0}^{\wedge}\!E\) is trivial.
Notice that this restricted \(\{\pm 1\}\)-action on \(\operatorname{K}_{0}^{\wedge}\!E\) is isomorphic, in the slice 2-category of stacks over \(\mathcal{M}_{\operatorname{Ell}}\), to the \(\{\pm 1\}\)-action given by the identity on \(\operatorname{Spf}\operatorname{K}_{0}^{\wedge}\!E\) together with the inversion map \([-1]\) on the generalised elliptic curve \(C_{K}\) defined over \(\operatorname{Spf}\operatorname{K}_{0}^{\wedge}\!E\); this follows as the formal completion of this inversion map on \(C_{K}\) induces the inversion map on the associated formal group \(\widehat{C}_{K}\), and the isomorphism \(\alpha\) is a homomorphism of formal groups and hence commutes with this inversion map. However, we assumed that \(f\), and hence also \(\mathfrak{X}^{\operatorname{ord}}\to\mathfrak{X}\to\mathcal{M}_{\operatorname{Ell},\mathbf{Z}_{p}}\), is inverse fixed, meaning this \(\{\pm 1\}\)-action is trivial. This finishes our proof.
Although Th.3.5 allows us to apply Th.E abstractly to \(\mathbf{E}_{\infty}\)-rings coming from \(\mathcal{O}^{\operatorname{top}}\), \(\mathcal{O}^{\operatorname{Tate}}\), or \(\mathcal{O}^{\operatorname{mult}}\), to obtain workable calculations, i.e., to show certain vanishing results, we will need to know the following statement about the KO-homology of these \(\mathbf{E}_{\infty}\)-rings.
**Proposition 3.9**.: _Let \(f\), \(\mathfrak{X}\), and \(E\) be as in parts 2-4 of Th.3.5, assume \(f\) is inverse fixed, and write \(\Gamma=\mathbf{Z}_{p}^{\times}/\{\pm 1\}\) and \(A_{*}=\operatorname{KO}_{*}^{\wedge}\!E\). Then in degree zero \(A_{0}\) is ind-etale over its \(\Gamma\)-fixed points \(A_{0}^{0}\), the ring \(A_{0}^{0}\) is the \(p\)-adic completion of a formally smooth algebra over \(\mathbf{Z}_{p}\), and the continuous \(\Gamma\)-cohomology groups \(H_{c}^{s}(\Gamma,A_{*})\) vanish for \(s\geq 1\)._
Our proof will essentially boil down to complex \(K\)-theory calculations and known facts.
Proof.: For the first two statements, note that as \(E\) is real with respect to \(\{\pm 1\}\) by Th.3.5, the complexification map
\[A_{0}=\operatorname{KO}_{0}^{\wedge}\!E\xrightarrow{\simeq}\operatorname{K}_{ 0}^{\wedge}\!E=B\]
is an isomorphism by Pr.2.5. This reduces us to showing that \(B\) is ind-etale over its \(H=\mathbf{Z}_{p}^{\times}\)-fixed points \(B^{H}\), and that \(B^{H}\) is the \(p\)-completion of a formally smooth algebra over \(\mathbf{Z}_{p}\). As in the proof of Pr.3.7, let us define stacks \(\mathfrak{X}^{\operatorname{ord}}(p)\) as \(\mathbf{F}_{p}^{\times}\)-Galois extensions of \(\mathfrak{X}^{\operatorname{ord}}\) with associated \(\mathbf{E}_{\infty}\)-ring \(E(p)\). By Pr.3.7, we also obtain the zeroth complex \(K\)-theory of \(E(p)\) as an \(\mathbf{F}_{p}^{\times}\)-Galois extension of \(B\), which we will denote as \(B(p)\). Another application of Pr.3.7, this time applied to \(\mathfrak{X}\) and \(\mathfrak{Y}\), where \(\mathfrak{Y}\) is either \(\mathcal{M}_{\operatorname{Ell}}\), \(\mathcal{M}_{\operatorname{Tate}}\), or \(\mathcal{M}_{\mathbf{G}_{m}}\), depending on which part of Th.3.5 we find ourselves in, leads to the commutative diagram of formal Deligne-Mumford stacks
(3.10)
in which all squares are Cartesian; \(\mathfrak{Y}^{\operatorname{ord}}(p^{\infty})\) is defined such that the right square is Cartesian. In particular, \(\mathfrak{Y}^{\operatorname{ord}}(p^{\infty})=\operatorname{Spf}W\) is affine as \(g\) is affine. From this, we obtain the commutative
diagram of \(p\)-adic rings
where the vertical morphisms are inclusions of \(H\)-fixed points and the right horizontal morphisms are inclusions of \(\mathbf{F}_{p}^{\times}\)-fixed points. As \(\mathfrak{X}^{\mathrm{ord}}(p)\) is affine by construction and, by [12, Cor.5.8(2)], we have \(\mathfrak{X}^{\mathrm{ord}}(p)\simeq\operatorname{Spf}B(p)^{H}\), it follows by (3.10) and base-change that \(B(p)^{H}\to B\) is ind-etale. By taking \(\mathbf{F}_{p}^{\times}\)-fixed points, it follows that \(B^{H}\to B\) is also an ind-etale extension. Similarly, to see \(B^{H}\) is the \(p\)-completion of a formally smooth \(\mathbf{Z}_{p}\)-algebra, we use the fact that \(B\to B(p)\) is an etale cover and the fact that being formally smooth is etale local on the source; see [14, 061K]. To see that \(B(p)^{H}\) is a \(p\)-completion of a formally smooth \(\mathbf{Z}_{p}\)-algebra, we appeal to [12, Lm.5.5(3)].
For the cohomological vanishing, first note the isomorphism of \(\Gamma\)-modules
\[A_{4t}=\operatorname{KO}_{4t}^{\wedge}E\simeq\operatorname{KO}_{0}^{\wedge}E( \chi_{2t})=A_{0}(\chi_{2t})\]
where \(\chi_{t}\colon\Gamma\to\Gamma\) is the continuous character sending \(k\mapsto k^{t}\). This comes from the action of the stable Adams operations on \(\operatorname{KO}_{*}\) and the fact that \(E\) is real by Th.3.5. We now use a general fact that if \(R\to S\) is an ind-Galois extension with respect to \(\Gamma\), then the continuous cohomology groups \(H_{c}^{s}(\Gamma,S(\chi))\) vanish for all characters \(\chi\colon\Gamma\to\Gamma\); this is explained clearly in the proof of part 2 of [12, Lm.5.5]. This finishes the proof, as we know that \(A_{0}^{0}\to A_{0}\) is a \(\Gamma\)-Galois extension, hence so is \(A_{*}^{0}\to A_{*}\).
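For orientation, the mechanism behind such vanishing statements can be summarised as follows. As \(p\) is odd, \(\Gamma=\mathbf{Z}_{p}^{\times}/\{\pm 1\}\) decomposes as \(\Delta\times\Gamma_{1}\) with \(\Delta\) finite of order \(\tfrac{p-1}{2}\), prime to \(p\), and \(\Gamma_{1}\simeq\mathbf{Z}_{p}\) procyclic with topological generator \(\gamma\). The \(\Delta\)-cohomology of a \(p\)-complete module vanishes in positive degrees by the transfer argument, and the continuous \(\Gamma_{1}\)-cohomology of a \(p\)-complete module \(M\) is computed by a two-term complex:

\[0\to M\xrightarrow{\ \gamma-1\ }M\to 0,\qquad H_{c}^{0}(\Gamma_{1},M)=\ker(\gamma-1),\quad H_{c}^{1}(\Gamma_{1},M)=\operatorname{coker}(\gamma-1),\quad H_{c}^{s}(\Gamma_{1},M)=0\ \text{for}\ s\geq 2.\]

The content of the ind-Galois argument cited above is then precisely the surjectivity of \(\gamma-1\).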
Both Th.3.5 and Pr.3.9 have refinements to \(G\)-homotopy fixed points of various \(\mathbf{E}_{\infty}\)-rings.
**Corollary 3.11**.: _Let \(G\) be a nontrivial finite subgroup of \(\mathbf{Z}_{p}^{\times}\). Then \(\operatorname{Tmf}_{p}^{hG}\), \(\operatorname{TMF}_{p}^{hG}\), \(\operatorname{KO}\llbracket q\rrbracket_{p}^{hG}\), and \(\operatorname{KO}(\!(q)\!)_{p}^{hG}\) are all real, where all \(G\)-actions are induced by the \(p\)-adic stable Adams operations. Moreover, if \(E\) is any of these \(\mathbf{E}_{\infty}\)-rings and we write \(A_{*}=\operatorname{L}_{*}^{\wedge}\!E\) and \(\Gamma=\mathbf{Z}_{p}^{\times}/G\), then in degree zero \(A_{0}\) is ind-etale over its \(\Gamma\)-fixed points \(A_{0}^{0}\), the ring \(A_{0}^{0}\) is the \(p\)-adic completion of a formally smooth algebra over \(\mathbf{Z}_{p}\), and the continuous \(\Gamma\)-cohomology groups \(H_{c}^{s}(\Gamma,A_{*})\) vanish for \(s\geq 1\)._
Proof.: The classical calculations that \(\operatorname{K}_{0}^{\wedge}\operatorname{Tmf}\simeq V\) and \(\operatorname{K}_{0}^{\wedge}\operatorname{K}\llbracket q\rrbracket\simeq\operatorname{Cont}(\mathbf{Z}_{p}^{\times},\mathbf{Z}_{p})\), where \(V\) is the \(p\)-completion of Katz' ring of divided congruences ([11, SS1.4]), yield the following isomorphisms
\[\operatorname{K}_{0}^{\wedge}\operatorname{Tmf}^{hG}\simeq(\operatorname{K}_{0}^{\wedge}\operatorname{Tmf})^{G}\simeq V^{G}\qquad\operatorname{K}_{0}^{\wedge}\operatorname{K}\llbracket q\rrbracket^{hG}\simeq\operatorname{Cont}(\mathbf{Z}_{p}^{\times},\mathbf{Z}_{p})^{G}=\operatorname{Cont}(\Gamma,\mathbf{Z}_{p})\]
as the \(p\)-adic Adams operations on these \(\mathbf{E}_{\infty}\)-rings, so [13, Th.A] for \(\operatorname{Tmf}_{p}\), [13, SS5.5] for the rest, induce the \(\psi\)-module \(\mathbf{Z}_{p}^{\times}\)-action on the above groups. It is clear from the above descriptions that \(G\) acts trivially on the groups above, and the smooth cases, so \(\operatorname{TMF}_{p}\) and \(\operatorname{K}(\!(q)\!)_{p}\), are similar. This combined with the computations
\[\operatorname{K}_{*}^{\wedge}\!E=\operatorname{K}_{*}^{\wedge}\!R^{hG}\simeq( \operatorname{K}_{*}^{\wedge}\!R)^{G}\simeq\operatorname{K}_{*}\otimes( \operatorname{K}_{0}^{\wedge}\!R)^{G}\simeq\operatorname{K}_{*}\otimes \operatorname{K}_{0}^{\wedge}\!E\]
for any of the \(E=R^{hG}\) above, shows that these \(\mathbf{E}_{\infty}\)-rings are real with respect to \(G\). For the "moreover" statement, one simply copies the proof of Pr.3.9, replacing \(\{\pm 1\}\) with \(G\) and KO with L.
### Uniqueness of \(p\)-adic topological \(q\)-expansion map (Ths.A and B)
In this section, we prove Ths.A and B. First, we have the following \(p\)-adic generalisation of Th.A.
**Theorem 3.12**.: _Let \(p\) be an odd prime, \(f,E\) be as in parts 2-4 of Th.3.5 and \(f^{\prime},E^{\prime}\) be as in parts 3-4 of Th.3.5. Suppose that both \(f\) and \(f^{\prime}\) are inverse fixed. Then every map of \(\mathbf{E}_{\infty}\)-rings \(f\colon E\to E^{\prime}\) is uniquely determined by its effect on the zeroth \(\mathrm{K}(1)\)-local \(\mathrm{KO}\)-homology (or \(\mathrm{K}\)-homology) group up to \(3\)-homotopy._
To prove the above theorem, we will use Th.E together with the following vanishing statement of Andre-Quillen cohomology groups.
**Proposition 3.13**.: _Fix an odd prime \(p\) and a nontrivial finite subgroup \(G\leq\mathbf{Z}_{p}^{\times}\). Suppose \(A_{*}\) is a graded \(\bar{\theta}\)-algebra and that \(M_{*}\) is a graded \(\bar{\theta}\)-module. Write \(A_{*}^{0}=A_{*}^{\Gamma}\) where \(\Gamma=\mathbf{Z}_{p}^{\times}/G\). Suppose the following conditions hold:_
1. _Both_ \(A_{*}\) _and_ \(M_{*}\) _are torsion-free and real with respect to_ \(G\)_._
2. \(A_{0}^{0}\) _is formally smooth over_ \(\mathbf{Z}_{p}\)_._
3. _The continuous group cohomology groups_ \(H_{c}^{s}(\Gamma,M_{u})\) _vanish for_ \(s>0\) _and every_ \(u\in\mathbf{Z}\)_._
4. \(A_{0}\) _is ind-etale over_ \(A_{0}^{0}\)_._
_Then \(H_{\bar{\theta}}^{s}(A_{*}/\mathrm{L}_{*},M_{*}[t])=0\) for \(s\geq 2\), and for all \(s\) whenever \(t\) is not divisible by \(2g\), where \(g\) denotes the order of \(G\)._
To prove this vanishing statement, one would simply copy Lawson-Naumann's proof of [12, Lm.5.15], replacing their \(\mathbf{Z}_{p}^{\times}\) with our \(\Gamma\); also see Behrens' proof of [16, Lm.12.7.5 & Lm.12.7.13] for a similar argument. Let us avoid too much repetition and forgo the proof. The vanishing for \(t\) not divisible by \(2g\) comes from the fact that \(M_{*}\) is real, so \(\mathrm{L}_{*}\otimes M_{0}=M_{*}\), and \(\mathrm{L}_{*}\) is concentrated in degrees divisible by \(2g\).
Proof of Th.3.12.: First note that the zeroth KO- and K-homology of real spectra with respect to \(\{\pm 1\}\) are naturally isomorphic by Pr.2.5.
Now, consider the classical Goerss-Hopkins obstruction theory of [12, Cor.4.4] based on K. By Th.3.5, we see the spectra in sight are real and Cor.2.3 states it suffices to compute Andre-Quillen cohomology of their KO-homology. By Pr.3.9 and Pr.3.13 (for \(G=\{\pm 1\}\)) we see that enough obstruction groups vanish to conclude that \(f\) is uniquely determined up to \(1\)-homotopy--this fact is already well-known.
By Pr.1.2, it now suffices to show that the groups \(\pi_{k}(\mathrm{CAlg}(E,E^{\prime}),f)\) vanish for all \(0<k<3\). Note that both \(E\) and \(E^{\prime}\) are \(p\)-complete and \(E^{\prime}\) is \(\mathrm{K}(1)\)-local by assumption, and
that according to Th.3.5 both \(E\) and \(E^{\prime}\) are real. We may then use Th.E to obtain a spectral sequence that abuts to the desired homotopy groups:
\[H^{s}_{\bar{\theta}}(\operatorname{KO}_{*}^{\wedge}\!E/\operatorname{KO}_{*}, \operatorname{KO}_{*}^{\wedge}\!E^{\prime}[-t])\Longrightarrow\pi_{t-s}( \operatorname{CAlg}(E,E^{\prime}),f)\]
Combining Pr.3.9 with Pr.3.13, we see that these groups vanish for \(s\geq 2\), and for all \(s\) when \(t\) is not divisible by \(4\). The spectral sequence above is therefore concentrated in the \(s=0\) and \(s=1\) rows and in degrees \(t\) divisible by \(4\), which immediately yields the desired vanishing of \(\pi_{i}\) for \(0<i<3\).
_Remark 3.14_.: At the prime \(2\), all of the arguments above work as in the odd primary case, except that the vanishing result Pr.3.13 at the prime \(2\) (see [13, 10]) does not guarantee the desired vanishing of obstruction groups. In essence, this comes down to the fact that \(\pi_{1}\operatorname{KO}\neq 0\).
To prove Th.A, we combine the general \(p\)-adic uniqueness statement Th.3.12 with standard rational stable homotopy theory.
Proof of Th.A.: Consider the Cartesian diagram of spaces

\[\begin{CD}\operatorname{CAlg}(\operatorname{tmf},\operatorname{KO}\llbracket q\rrbracket[\tfrac{1}{2}])@>{}>{}>\operatorname{CAlg}\bigl(\operatorname{tmf},\prod_{p\neq 2}\operatorname{KO}\llbracket q\rrbracket_{p}\bigr)\\ @V{}V{}V@V{}V{}V\\ \operatorname{CAlg}(\operatorname{tmf},\operatorname{KO}\llbracket q\rrbracket_{\mathbf{Q}})@>{}>{}>\operatorname{CAlg}\bigl(\operatorname{tmf},\bigl(\prod_{p\neq 2}\operatorname{KO}\llbracket q\rrbracket_{p}\bigr)_{\mathbf{Q}}\bigr)\end{CD}\]

induced by the arithmetic fracture square for \(\operatorname{KO}\!\llbracket q\rrbracket[\tfrac{1}{2}]\), where the products are taken over all odd primes. Applying the functor \(\tau_{\leq 2}\) does not commute with limits of spaces in general; however, from the long exact sequence on homotopy groups of a fibre product, we see that the diagram
\[\begin{CD}\tau_{\leq 2}\operatorname{CAlg}(\operatorname{tmf},\operatorname{KO}\llbracket q\rrbracket[\tfrac{1}{2}])@>{}>{}>\tau_{\leq 2}\operatorname{CAlg}\bigl(\operatorname{tmf},\prod_{p\neq 2}\operatorname{KO}\llbracket q\rrbracket_{p}\bigr)\\ @V{}V{}V@V{}V{}V\\ \tau_{\leq 2}\operatorname{CAlg}(\operatorname{tmf},\operatorname{KO}\llbracket q\rrbracket_{\mathbf{Q}})@>{}>{}>\tau_{\leq 2}\operatorname{CAlg}\bigl(\operatorname{tmf},\bigl(\prod_{p\neq 2}\operatorname{KO}\llbracket q\rrbracket_{p}\bigr)_{\mathbf{Q}}\bigr)\end{CD}\tag{3.15}\]
is Cartesian if and only if the induced map
\[\pi_{3}\operatorname{CAlg}\left(\operatorname{tmf},\left(\prod_{p\neq 2} \operatorname{KO}\!\llbracket q\rrbracket_{p}\right)_{\mathbf{Q}}\right) \rightarrow\pi_{2}\operatorname{CAlg}(\operatorname{tmf},\operatorname{ KO}\!\llbracket q\rrbracket[\tfrac{1}{2}])\]
is zero. We claim that the \(\pi_{3}\) above vanishes, which would imply that (3.15) is Cartesian. To see the above \(\pi_{3}\) vanishes, note the classical fact that \(\operatorname{tmf}_{\mathbf{Q}}\) is the free formal cdga on two variables \(x\) and \(y\), where \(|x|=8\) and \(|y|=12\); see [12, Pr.4.47]. This implies that for a rational \(\mathbf{E}_{\infty}\)-ring \(R\), the mapping space \(\operatorname{CAlg}(\operatorname{tmf},R)\) is naturally equivalent to \(\Omega^{8}R\times\Omega^{12}R\). In particular, as the homotopy groups of \((\prod\operatorname{KO}\!\llbracket q\rrbracket_{p})_{\mathbf{Q}}\) are concentrated in degrees divisible
by 4, we see that the above \(\pi_{3}\) vanishes.
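Spelled out, with \(R=(\prod_{p\neq 2}\operatorname{KO}\llbracket q\rrbracket_{p})_{\mathbf{Q}}\), the claim reads

\[\pi_{3}\operatorname{CAlg}(\operatorname{tmf},R)\simeq\pi_{3}(\Omega^{8}R\times\Omega^{12}R)\simeq\pi_{11}R\oplus\pi_{15}R=0,\]

as neither \(11\) nor \(15\) is divisible by \(4\).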
We claim that all of the spaces in (3.15) are discrete. If this is true, then a morphism of \(\mathbf{E}_{\infty}\)-rings from \(\operatorname{tmf}\) to \(\operatorname{KO}\llbracket q\rrbracket[\tfrac{1}{2}]\) is uniquely determined by its combined effect upon taking \(p\)-completions for all odd primes \(p\) and rationalisation up to \(3\)-homotopy. Using the above-mentioned fact that \(\operatorname{tmf}_{\mathbf{Q}}\) is a free formal rational cdga and the fact that the homotopy groups of \(\operatorname{KO}\llbracket q\rrbracket_{\mathbf{Q}}\) and \((\prod\operatorname{KO}\llbracket q\rrbracket_{p})_{\mathbf{Q}}\) are supported in degrees divisible by 4, we see that the lower two spaces in (3.15) are discrete. Moreover, we note that the effect of a morphism of \(\mathbf{E}_{\infty}\)-rings from \(\operatorname{tmf}\) to \(\operatorname{KO}\llbracket q\rrbracket[\tfrac{1}{2}]\) on rational homotopy groups is determined by its effect on zeroth rational K-homology. Indeed, this is because rationally we have an equivalence of spectra \(\operatorname{K}_{\mathbf{Q}}\simeq\bigoplus\mathbf{S}_{\mathbf{Q}}[2n]\) and all above rational \(\mathbf{E}_{\infty}\)-rings \(R\) have homotopy groups supported in even degrees, hence there is an identification \(\pi_{*}R\simeq\operatorname{K}_{0}R\), natural in \(R\).
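Spelled out: for a rational \(\mathbf{E}_{\infty}\)-ring \(R\) whose homotopy groups are concentrated in even degrees, the splitting \(\operatorname{K}_{\mathbf{Q}}\simeq\bigoplus\mathbf{S}_{\mathbf{Q}}[2n]\) gives

\[\operatorname{K}_{0}R=\pi_{0}(\operatorname{K}_{\mathbf{Q}}\otimes R)\simeq\pi_{0}\Big(\bigoplus_{n\in\mathbf{Z}}R[2n]\Big)\simeq\bigoplus_{n\in\mathbf{Z}}\pi_{2n}R=\pi_{*}R,\]

so the effect of a morphism on zeroth rational \(\operatorname{K}\)-homology determines its effect on all rational homotopy groups.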
Noting that \(\operatorname{tmf}\to\operatorname{Tmf}\) is a \(\operatorname{K}(1)\)-local equivalence ([Beh20, p.30]), we see that the upper-right space of (3.15) is discrete by Pr.1.2 and Th.3.12 with \(\mathfrak{X}=\mathcal{M}_{\operatorname{Ell},\mathbf{Z}_{p}}\) and \(\mathfrak{Y}=\mathcal{M}_{\operatorname{Tate},\mathbf{Z}_{p}}\).
In summary, we see that all of the spaces in (3.15) are discrete, and that a morphism of \(\mathbf{E}_{\infty}\)-rings \(\operatorname{tmf}\to\operatorname{KO}\llbracket q\rrbracket[\tfrac{1}{2}]\) is uniquely determined by its effect on zeroth \(p\)-adic K-homology for all odd primes \(p\) and zeroth rational K-homology up to \(3\)-homotopy. As all of these homology groups are completions or localisations of the zeroth integral K-homology group, we see such morphisms of \(\mathbf{E}_{\infty}\)-rings are uniquely determined by their effect on zeroth K-homology up to \(3\)-homotopy.
To prove Th.B, we follow the general outline of the proof of Th.3.12.
Proof of Th.B.: As the zeroth L- and K-homology of real spectra are naturally isomorphic by Pr.2.5, the classical Goerss-Hopkins obstruction theory of [1, Cor.4.4] based on K, combined with Cor.2.3 and Cor.3.11, shows that we are left to compute André-Quillen cohomology of L-homologies as \(\bar{\theta}\)-algebras. By Cor.3.11 and Pr.3.13, enough obstruction groups vanish to show that \(f\) is uniquely determined up to \(1\)-homotopy.
By Pr.1.2, it now suffices to show that the groups \(\pi_{k}(\operatorname{CAlg}(\operatorname{tmf}_{p}^{hG},\operatorname{K} \llbracket q\rrbracket_{p}^{hG}),f)\) vanish for all \(0<k<2g-1\). By Cor.3.11, we see both \(\mathbf{E}_{\infty}\)-rings are real, so we can apply Th.E to obtain a spectral sequence that abuts to the desired homotopy groups:
\[H_{\bar{\theta}}^{s}(\operatorname{L}_{*}^{\wedge}\operatorname{tmf}^{hG}/ \operatorname{L}_{*},\operatorname{L}_{*}^{\wedge}\operatorname{K} \llbracket q\rrbracket^{hG}[-t])\Longrightarrow\pi_{t-s}(\operatorname{CAlg}( \operatorname{tmf}^{hG},\operatorname{K}\llbracket q\rrbracket^{hG}),f)\]
Combining Cor.3.11 with Pr.3.13, we see these groups vanish for \(s\geq 2\) and \(t\) not divisible by \(2g\), and we are done.
### Higher functoriality of elliptic Adams operations (Ths.C and D)
In [1], we defined morphisms of \(\mathbf{E}_{\infty}\)-rings \(\psi^{k}\colon\operatorname{tmf}[\tfrac{1}{k}]\to\operatorname{tmf}[\tfrac{1} {k}]\) for each integer \(k\) which we called _stable Adams operations_ due to either their construction, effect on homotopy groups, or relationship with the classical operations on topological \(K\)-theory. Our construction of these
operations used Goerss-Hopkins obstruction theory, and an unfortunate consequence of these techniques was that we could not prove any relationship between these operations for varying \(k\). Using the homotopical uniqueness of the \(q\)-expansion map from the previous section, we will see that these Adams operations on Tmf satisfy the usual relations \(\psi^{k\ell}\simeq\psi^{k}\psi^{\ell}\) up to 2-homotopy.
Let us start with a proof of Th.D--actually, we will first prove a slight generalisation for dualisable topological modular forms.
**Theorem 3.16**.: _Let \(p\) be an odd prime and \(G\leq\mathbf{Z}_{p}^{\times}\) be a nontrivial finite subgroup with order \(g\). Writing \(\Gamma=\mathbf{Z}_{p}^{\times}/G\), there exists a functor_
\[\Psi_{p,G}\colon B\Gamma\to\mathrm{h}_{2g-2}\operatorname{CAlg}\]
_sending the unique point of \(B\Gamma\) to \(\operatorname{Tmf}_{p}^{hG}\) and each \(p\)-adic unit \(\overline{k}\) to \(\psi^{k}\)._
The idea of the argument below is **not** that the Adams operations \(\psi^{k}\) on \(\operatorname{Tmf}_{p}\) exist and are highly unique by obstruction theory; rather, we take highly coherent homotopies between Adams operations on \(\operatorname{TMF}_{p}\), using their modular descriptions, and then glue these homotopies together using the aforementioned obstruction theory.
Proof.: To start with, let us construct a functor \(\Psi_{1}\colon B\Gamma\to\mathrm{h}_{1}\operatorname{CAlg}\) with the desired properties. This is equivalent to constructing a morphism of \(\mathbf{E}_{1}\)-monoids \(\Gamma\to\tau_{\leq 0}\operatorname{CAlg}(\operatorname{Tmf}_{p}, \operatorname{Tmf}_{p})\). To do this, consider the Cartesian diagrams of \(\mathbf{E}_{\infty}\)-rings
given by taking global sections of [16, Df.5.10] and then taking \(G\)-homotopy fixed points, which induces the following diagram of spaces:
(3.17)
As in the proof of Th.A above, the above square would be Cartesian if we could show that \(\pi_{d+1}\operatorname{CAlg}(\operatorname{Tmf}_{p}^{hG},\operatorname{KO}( \!(q)\!)_{p}^{hG})\) vanishes. Let us now restrict to those \(d\) in the range \(0\leq d\leq 2g-3\). For \(G=\{\pm 1\}\), we see that the desired \(\pi_{d+1}\) vanishes by Th.3.12 (for \(\mathfrak{X}=\mathcal{M}_{\operatorname{Ell},\mathbf{Z}_{p}}\) and \(\mathfrak{Y}=\mathcal{M}_{\operatorname{Tate},\mathbf{Z}_{p}}^{\operatorname{ sm}}\)) and Pr.1.2. For a general \(G\), we appeal to the spectral sequence of Th.E and the vanishing of much of the \(E_{2}\)-page from Cor.3.11 and Pr.3.13, which together show this group vanishes. In particular, when \(d=0\) the fact that (3.17) is Cartesian means that to construct a map of spaces \(\Gamma\to[\operatorname{Tmf}_{p}^{hG},\operatorname{Tmf}_{p}^{hG}]_{ \operatorname{CAlg}}\), it suffices to construct two maps
\[\Psi_{\mathrm{T}}\colon\Gamma\to[\operatorname{Tmf}_{p}^{hG},\operatorname{TMF }_{p}^{hG}]_{\operatorname{CAlg}}\qquad\qquad\Psi_{\mathrm{K}}\colon\Gamma \to[\operatorname{Tmf}_{p}^{hG},\operatorname{KO}[\![q]\!]_{p}^{hG}]_{ \operatorname{CAlg}}\]
which agree when restricted to \([\operatorname{Tmf}_{p}^{hG},\operatorname{KO}((q))_{p}^{hG}]_{\operatorname{CAlg}}\)--we will come back to the \(\mathbf{E}_{1}\)-structures shortly. Define the first map \(\Psi_{\operatorname{T}}\) as the composition
\[\mathbf{Z}_{p}^{\times}/\{\pm 1\}\stackrel{{\Psi_{p}^{\operatorname{sm} }}}{{\longrightarrow}}\operatorname{CAlg}(\operatorname{TMF}_{p}^{hG}, \operatorname{TMF}_{p}^{hG})\to\operatorname{CAlg}(\operatorname{Tmf}_{p}^{hG },\operatorname{TMF}_{p}^{hG})\to\tau_{\leq 0}\operatorname{CAlg}( \operatorname{Tmf}_{p}^{hG},\operatorname{TMF}_{p}^{hG})\]
where \(\Psi_{p}^{\operatorname{sm}}\) is a morphism of \(\mathbf{E}_{1}\)-monoids, a \(p\)-adic version of [10, Pr.2.37], itself a corollary of the fact that \(\operatorname{TMF}_{p}\) is the global sections of the sheaf \(\mathcal{O}_{\operatorname{BT}_{2}^{p}}^{\operatorname{top}}\) on the moduli stack of elliptic curves; see [10, Th.5.17]. The map \(\Psi_{\operatorname{K}}\) can either be constructed similarly to \(\Psi_{p}^{\operatorname{sm}}\) using a moduli construction, as in [10, Pr.1.18], or one can use the uniqueness statement Th.3.12 to see that morphisms from \(\operatorname{Tmf}_{p}^{hG}\) to \(\operatorname{KO}[\![q]\!]_{p}^{hG}\) are uniquely determined by their effect on \(p\)-adic K-homology up to \((2g-1)\)-homotopy, hence there exists a homotopy between \(\psi^{k}\psi^{\ell}\) and \(\psi^{k\ell}\), as these morphisms have the same modular description on \(p\)-adic K-homology before taking \(G\)-homotopy fixed points; see Pr.3.7 for the K-theory calculations and the proof of Th.3.5 to see the modular description of these Adams operations. This second approach, using uniqueness up to homotopy and the modular description of Adams operations, shows that these two maps \(\Psi_{\operatorname{T}}\) and \(\Psi_{\operatorname{K}}\) agree when restricted to \(\operatorname{KO}(\!(q)\!)_{p}^{hG}\). We then obtain a morphism of spaces
\[\Psi_{1}\colon\Gamma\to[\operatorname{Tmf}_{p}^{hG},\operatorname{Tmf}_{p}^{ hG}]_{\operatorname{CAlg}} \tag{3.18}\]
and to check this is a morphism of discrete \(\mathbf{E}_{1}\)-monoids, so equivalently monoids of sets, we reuse the arguments from above: to check the relation between \(\psi^{k}\psi^{\ell}\) and \(\psi^{k\ell}\) holds inside the discrete monoid \([\operatorname{Tmf}_{p}^{hG},\operatorname{Tmf}_{p}^{hG}]_{\operatorname{ CAlg}}\), we need to produce a homotopy between these morphisms. Using (3.17) now for \(d=1\), we obtain the desired homotopy restricted to \(\operatorname{TMF}_{p}^{hG}\) from \(\Psi_{p}^{\operatorname{sm}}\) in the construction of \(\Psi_{\operatorname{T}}\), and also when restricted to \(\operatorname{KO}[\![q]\!]_{p}^{hG}\) using the homotopy uniqueness of morphisms from \(\operatorname{Tmf}_{p}^{hG}\) to \(\operatorname{KO}[\![q]\!]_{p}^{hG}\). These homotopies agree on \(\operatorname{KO}(\!(q)\!)_{p}\) as such morphisms are unique up to \((2g-1)\)-homotopy, hence we obtain a homotopy \(\psi^{k}\psi^{\ell}\simeq\psi^{k\ell}\) on \(\operatorname{Tmf}_{p}^{hG}\), and we see that (3.18) can be upgraded to a morphism of discrete \(\mathbf{E}_{1}\)-monoids.
To construct the functor promised in the theorem, let us first note that the natural map \(\operatorname{Tmf}_{p}\to\operatorname{TMF}_{p}\) is a localisation at an element \(\Delta^{r}\in\pi_{24r}\operatorname{Tmf}_{p}\), where for \(p=3\) we have \(r=3\), and else \(r=1\). This persists to the \(G\)-homotopy fixed points. Indeed, it is clear from the effect on homotopy groups that the natural map \(\operatorname{Tmf}_{p}^{hG}\to\operatorname{TMF}_{p}^{hG}\) is a localisation at \(\Delta^{s}\in\pi_{24s}\operatorname{Tmf}_{p}^{hG}\), where \(s\) is divisible by \(2p-2\); smaller choices of \(s\) are possible, but this does not play a role in our argument. This implies that the natural map of spaces (whose truncation appears in (3.17))
\[\operatorname{CAlg}(\operatorname{Tmf}_{p}^{hG},\operatorname{Tmf}_{p}^{hG}) \to\operatorname{CAlg}(\operatorname{Tmf}_{p}^{hG},\operatorname{TMF}_{p}^{ hG})\simeq\operatorname{CAlg}(\operatorname{TMF}_{p}^{hG}, \operatorname{TMF}_{p}^{hG})\]
can be identified with the natural map of \(\mathbf{E}_{1}\)-monoids
\[\operatorname{CAlg}(\operatorname{Tmf}_{p}^{hG},\operatorname{Tmf}_{p}^{hG}) \to\operatorname{CAlg}(\operatorname{TMF}_{p}^{hG},\operatorname{TMF}_{p}^{ hG}) \tag{3.19}\]
defined by inverting this \(\Delta^{s}\). We now claim that for \(0\leq d\leq 2g-3\), the following natural
diagram of \(\mathbf{E}_{1}\)-monoids
(3.20)
is Cartesian. Indeed, this follows from the Cartesian diagram (3.17) and Th.B which shows that the spaces in the right of (3.17) are discrete, hence the natural map (3.19) is an isomorphism on homotopy groups in positive degrees. Therefore, to construct our desired map of \(\mathbf{E}_{1}\)-monoids \(\Gamma\to\tau_{\leq 2g-3}\operatorname{CAlg}(\operatorname{Tmf}_{p}^{hG}, \operatorname{Tmf}_{p}^{hG})\) it suffices to construct maps of \(\mathbf{E}_{1}\)-monoids into the other factors of the Cartesian diagram (3.20). This is achieved using our map \(\Psi_{1}\) and \(\Psi_{p}^{\operatorname{sm}}\) discussed above, which agree on \(\operatorname{TMF}_{p}^{hG}\) as the construction of \(\Psi_{1}\) also used \(\Psi_{p}^{\operatorname{sm}}\).
Proof of Th.D.: Post-compose the functor of Th.3.16 with the connective cover functor.
Finally, we can prove Th.C by combining Th.D with standard rational information.
Proof of Th.C.: Fix an integer \(n\) and write \(M=\prod_{p|n}\mathbf{N}\), the product indexed by the primes \(p\) dividing \(n\). For each odd prime \(\ell\nmid n\), the inclusion of monoids
\[M\xrightarrow{(p_{i}^{\varepsilon_{i}})\mapsto\prod p_{i}^{\varepsilon_{i}}} \mathbf{N}\to\mathbf{Z}\to\mathbf{Z}_{\ell}\]
factors as an injection \(M\to\mathbf{Z}_{\ell}^{\times}\), and remains injective after the further quotient \(M\to\mathbf{Z}_{\ell}^{\times}/\{\pm 1\}\), as \(M\) contains no elements with additive inverses other than \(0\). For all such odd \(\ell\), we then have a morphism of \(\mathbf{E}_{1}\)-monoids
\[M\to\tau_{\leq 1}\operatorname{CAlg}(\operatorname{tmf}_{\ell},\operatorname{ tmf}_{\ell})\]
by Th.D with respect to \(G=\{\pm 1\}\). There is an analogous rational construction
\[\Psi_{\mathbf{Q}}^{n}\colon M\to\tau_{\leq 1}\operatorname{CAlg}(\operatorname{ tmf}_{\mathbf{Q}},\operatorname{tmf}_{\mathbf{Q}})\]
which we define as follows: First, note that the rational Goerss-Hopkins obstruction theory in Step 1 of [11, SS12.9] shows that the codomain of \(\Psi_{\mathbf{Q}}^{n}\) is discrete and equivalent to the set of graded \(\mathbf{Q}\)-algebra endomorphisms of \(\pi_{*}\operatorname{tmf}_{\mathbf{Q}}\). Next, recall from the proof of Th.A that \(\operatorname{tmf}_{\mathbf{Q}}\) is the free formal cdga on two variables \(x\) and \(y\), where \(|x|=8\) and \(|y|=12\). We then define \(\Psi_{\mathbf{Q}}^{n}\) by sending \(p_{i}\) to the endomorphism of \(\operatorname{tmf}_{\mathbf{Q}}\) sending \(x\) to \(p_{i}^{4}x\) and \(y\) to \(p_{i}^{6}y\) and extending multiplicatively.
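On the free generators, the required multiplicativity can be checked directly; writing \(\Psi_{\mathbf{Q}}^{n}(k)\) (our shorthand) for the image of \(k\in M\), we have

\[\Psi_{\mathbf{Q}}^{n}(k\ell)(x)=(k\ell)^{4}x=k^{4}\ell^{4}x=\Psi_{\mathbf{Q}}^{n}(k)\big(\Psi_{\mathbf{Q}}^{n}(\ell)(x)\big),\qquad\Psi_{\mathbf{Q}}^{n}(k\ell)(y)=(k\ell)^{6}y=\Psi_{\mathbf{Q}}^{n}(k)\big(\Psi_{\mathbf{Q}}^{n}(\ell)(y)\big),\]

so \(\Psi_{\mathbf{Q}}^{n}\) indeed defines a map of monoids into the discrete endomorphism monoid.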
To glue together \(\Psi_{\ell}^{n}\) with \(\Psi_{\mathbf{Q}}^{n}\), use the arithmetic fracture square for \(\operatorname{tmf}[\frac{1}{2n}]\), which yields the diagram of spaces
where the left vertical map and upper horizontal map are morphisms of \(\mathbf{E}_{1}\)-monoids. Again, as in the proof of Th.A, for any rational \(\mathbf{E}_{\infty}\)-ring \(R\), the space \(\operatorname{CAlg}(\operatorname{tmf},R)\) is equivalent to \(\Omega^{\infty+8}R\times\Omega^{\infty+12}R\), so the above diagram is Cartesian, as \(\pi_{2}\operatorname{CAlg}(\operatorname{tmf}_{\mathbf{Q}},(\prod\operatorname {tmf}_{\ell})_{\mathbf{Q}})\) vanishes. The construction of \(\Psi^{n}\) now comes down to showing that \(\prod\Psi^{n}_{\ell}\) and \(\Psi^{n}_{\mathbf{Q}}\) agree when restricted to the space in the lower-right corner. Again appealing to the rational Goerss-Hopkins obstruction theory in Step 1 of [11, SS12.9] shows that this lower-right space is discrete and equivalent to the associated mapping set of graded \(\mathbf{Q}\)-algebras after taking homotopy groups. In particular, the calculations of \(\psi^{k}\) found in [13, Th.C] show that both \(\prod\Psi^{n}_{\ell}\) and \(\Psi^{n}_{\mathbf{Q}}\) agree when restricted to this space in the lower-right corner, leading to our desired morphism of spaces
\[\Psi^{n}\colon M\to\tau_{\leq 1}\operatorname{CAlg}(\operatorname{tmf}[\frac{1}{ 2n}],\operatorname{tmf}[\frac{1}{2n}]).\]
Again, to see that this rectifies to a morphism of \(\mathbf{E}_{1}\)-monoids, we want to check that there exist homotopies recognising this fact. As in the proof of Th.3.16 above, one should do this by referring to the uniqueness of morphisms of \(\mathbf{E}_{\infty}\)-rings from \(\operatorname{tmf}_{\mathbf{Q}}\) to \((\prod\operatorname{tmf}_{\ell})_{\mathbf{Q}}\), and this is clear from the above characterisation of \(\operatorname{tmf}_{\mathbf{Q}}\) and the fact that \((\prod\operatorname{tmf}_{\ell})_{\mathbf{Q}}\) has trivial \(\pi_{i}\) for \(1\leq i\leq 7\).
Similarly, we can lift the homotopies \(\psi^{-1}\simeq\operatorname{id}\) on \(\operatorname{tmf}_{\ell}\) using Th.3.16 and on \(\operatorname{tmf}_{\mathbf{Q}}\) (by construction), to a homotopy on \(\operatorname{tmf}[\frac{1}{2}]\). This finishes the proof.
_Remark 3.21_.: The only reason we restricted from \(\operatorname{Tmf}\) to \(\operatorname{tmf}\) to prove Th.C is that, rationally, \(\operatorname{tmf}\) has a simple universal property. Perhaps one can use the Cartesian square
to prove a version of Th.C for \(\operatorname{Tmf}\).
|
2306.14241 | Impact of Network Delay and Decision Imperfections in IoT Assisted Cruise Ship Evacuation | Yuting Ma, Erol Gelenbe, Kezhong Liu | 2023-06-25T13:27:45Z | http://arxiv.org/abs/2306.14241v1 |

# Impact of Network Delay and Decision Imperfections in IoT Assisted Cruise Ship Evacuation
###### Abstract
Major challenges of assisting passengers to safely and quickly escape from ships when an emergency occurs, include complex realistic features such as human behavior uncertainty, dynamic human traversal times, and the computation and communication delays in the systems that offer advice to users during an emergency. In this paper, we present simulations that examine the influence of these key features on evacuation performance in terms of evacuation time. The approach is based on our previously proposed lookup table-based ship passenger evacuation method, i.e., ANT. The simulation results we present show that delays in the users' reception of instructions significantly impair the effectiveness of the evacuation service. In contrast, behavior uncertainty has a weaker influence on the performance of the navigation method. In addition, these effects also vary with the extent of the behavior uncertainty, the dynamics of the traversal time distributions, and the delay in receiving directions. These findings demonstrate the importance of carefully designing evacuation systems for passenger ships in a way that takes into account all realistic features of the ship's indoor evacuation environment, including the crucial role of information technology.
Ship evacuation, Emergency navigation, WSN-assisted, Behavior uncertainty, Dynamics, Communication delays
## I Introduction
One of the main safety requirements for passenger ships is offering a reliable and efficient emergency evacuation service to passengers. Therefore, evacuation analysis is of primary importance even in the first stages of the design of a vessel [1, 2, 3]. The International Maritime Organization (IMO) has issued many Circulars about evacuation analysis and procedures for passenger and ro-ro ships. For example, in 2016, IMO approved a new version of the guidelines for evacuation analysis for new and existing passenger ships to enhance the ability to evacuate passenger vessels in case of accidents [4].
However, much of the research in this area does not consider many realistic features which may have a huge impact on both user safety and navigation efficiency [5, 6, 7, 8], such as variations in traversal times across passages and staircases due to vessel motion conditions, human behavior uncertainty caused by missing or misunderstanding instructions due to panic, noise, and overcrowding as illustrated in Figure 1, and delays in the arrival of correct instructions due to computation and communication delays in heavily loaded communication networks that monitor and interact with the physical world. As a result, evacuation metrics such as evacuation time and passenger congestion calculated by these methods are not necessarily in line with those in actual situations. Thus the design of an evacuation system based on such analyses may not ensure the needed safe and timely navigation of evacuees. Hence, this paper investigates the influence of such key features on the performance of the navigation method proposed in [9] under simulated emergency conditions.
In this paper, we make some model simplifications and assume that the inclination angle of the damaged ship changes at regular intervals, so that the traversal times across each individual corridor and staircase change accordingly with the inclination status. We also significantly extend our previous work [9] by introducing an error probability _PoE_, and by varying the delay probability _PoD_ and its extent _SoD_ to study the resulting variation in evacuation performance. In addition,
Fig. 1: Schematic representation of behavior uncertainty of a human user during an emergency.
we explore the combined effects of both behavior uncertainty and service delay on evacuation times.
## II Related work
**Global path-planning** algorithms (e.g., Dijkstra, A*, RRT, Ant Colony Optimization, Genetic algorithm, etc.,) have often been successfully used in evacuation routing to minimize path length or to maximize the distance from the hazard nodes to the exit node, based on 2D/3D maps of a built environment [10, 11]. However, the resulting navigation paths provided by those methods are not necessarily useful since evacuees are likely to encounter unexpected hazards due to changing emergency conditions, such as the spread of a fire, flooding of parts of the ship, or cascaded failures in emergency management systems themselves. Some work has attempted to address the dynamics of danger through the Expected Number of Oscillations (ENO) concept [12], that quantifies the dynamics of the emergency and explores navigation paths that have the smallest probability of changing repeatedly.
Furthermore, such global path planning methods require that the system be completely specified regarding the parameters needed to choose optimal paths, in contradiction with realistic situations where emergency guidance is needed. Therefore, follow-up studies relax the requirement that all the information about the hazards and exits is precisely known prior to path computation, which enhances their applicability to address practical scenarios [5, 13].
Using **local path-planning** methods such as Artificial Potential Fields [14], which choose the best direction for users based on conditions in their local neighborhood [15, 16], together with techniques such as partial reversal, users can avoid entering hazardous areas. However, these methods neglect the fact that the evacuation system for passenger ships must offer safety guarantees: the emergency navigation system must offer directions that guarantee a time to the exit that does not exceed a specified upper bound (i.e., the survival or capsizing time), even under worst-case circumstances (e.g., when the roll angle reaches 30\({}^{\circ}\)) in the presence of dynamic hazards and ship motion.
However, our earlier work [9] proposed an emergency navigation system that neglects the effect of uncertainty regarding the evacuees' behavior due to panic or errors, as well as the possible lack of up-to-date information caused by computational or communication overload in the information processing system. Thus in the present paper we specifically address the effect of delays in providing accurate guidance to end users, as well as the possibility that human evacuees may make mistakes in applying the guidance instructions due to panic or lack of attention.
## III Impact of delay in user navigation
The lookup tables in a static scenario guarantee that users at different locations can identify the hazard-free direction along which they can arrive at the exit with minimum delay, while not exceeding the worst-case time bound from the current location to the exit. However, walking conditions are dynamic in practice due to individual differences among the passengers, as well as other factors such as lighting, possible smoke conditions, panic, and the ship's motion. These lookup tables then need to be recomputed due to the overall system dynamics over time. Once there is a significant change in the tuples contained in the lookup table at each node in the navigation graph, the corresponding navigation direction needs to be updated for each of the users. However, wireless sensor network (WSN) and other network and computational server congestion effects may occur, so that updates of the navigation direction can be delayed, or even occur with errors due to the use of incomplete data in the decision algorithms [17, 18, 19]. Thus we analyze and discuss in the following section the impact of these dynamics on the effective delays that may be experienced by the users.
### _Simulation setup_
The simulated indoor environment is the second, third, and fourth floors of the Yangtze Gold 7 Cruise, as shown in Figure 2, where 346 navigation nodes are deployed in the simulated floors with one exit node. The number of passageway segments and staircase segments is 600 and 5, respectively. In our simulation settings, the worst-case traversal time \(T_{\mathrm{W}}(\overline{v_{i}v_{j}^{*}})\) across each segment \(\overline{v_{i}v_{j}^{*}}\) is calculated according to the worst-case traversal speed, which is set to 0.067 m/s, and the typical traversal time experienced in traversing each segment is set to a random value in the interval \([T_{\mathrm{N}}(\overline{v_{i}v_{j}^{*}}),T_{\mathrm{W}}(\overline{v_{i}v_{j}^{*}})]\), where \(T_{\mathrm{N}}(\overline{v_{i}v_{j}^{*}})\) is computed from the typical traversal speed of 0.67 m/s, set according to the average movement speed of passengers on passenger ships when the walking condition remains horizontal. In addition, the deadline \(T_{\mathrm{D}}\) before ship capsizing is calculated as follows:
\[T_{\mathrm{D}}=T_{\mathrm{S}}-T_{\mathrm{A}}-T_{\mathrm{EL}}. \tag{1}\]
where \(T_{\mathrm{S}}\) denotes the total survival time until the ship capsizes, and \(T_{\mathrm{A}}\) is the awareness time beginning upon initial notification of an emergency and ending when passengers accept the situation and start to move based on the provided navigation direction. \(T_{\mathrm{EL}}\) denotes the sum of the embarkation time and the launching time, i.e., the time required to provide for abandonment by the total number of persons on board. According to the guidelines approved by the Maritime Safety Committee (MSC), \(T_{\mathrm{S}}\) equals 60 minutes, \(T_{\mathrm{A}}\) equals 5 minutes, and \(T_{\mathrm{EL}}\) equals 25 minutes in our simulations. That is, \(T_{\mathrm{D}}\) equals 30 minutes unless stated otherwise. In addition, the change interval of the typical traversal time across each segment is set to 5 s.
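As a quick check of Eq. (1) with the MSC values just quoted (a minimal sketch; the variable names are ours):

```python
# Evacuation deadline T_D from Eq. (1), using the MSC values in the text.
T_S = 60   # total survival time until the ship capsizes (minutes)
T_A = 5    # awareness time before passengers start to move (minutes)
T_EL = 25  # embarkation + launching time (minutes)

T_D = T_S - T_A - T_EL
print(T_D)  # 30 minutes available for movement to the exit
```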
### _Simulation results_
We first examine the manner in which performance is affected by the delay in the computing and communication system which advises the users, and combine a probabilistic and temporal representation of the imperfection with which the system provides advice.
We assume that a delay in the communication and computing system can occur with some probability _PoD_ at any of the
user nodes. Data about the system as a whole is assumed to be updated in unit time steps, and we define _SoD_ as being the number of unit time steps regarding the delay with which the data is used. Thus, for instance, \(\textit{SoD}=5\) means that at some time \(t\) the computational system will provide decisions for the users based on the data that was available at time \(t-5\).
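The _SoD_ convention can be sketched as a small helper operating on a history of data snapshots (hypothetical names and structure; the paper does not publish its implementation):

```python
import random

def advice_snapshot(history, t, PoD, SoD, rng):
    """Return the data snapshot used to advise a user at time t.

    With probability PoD the computing/communication system is delayed,
    so the decision is based on the snapshot from SoD unit time steps
    earlier; otherwise the current snapshot at time t is used.
    """
    if rng.random() < PoD:
        return history[max(0, t - SoD)]
    return history[t]

history = ["data@0", "data@1", "data@2", "data@3", "data@4", "data@5"]
# PoD = 1: always delayed; SoD = 5 means decisions at t = 5 use data from t = 0.
print(advice_snapshot(history, 5, 1.0, 5, random.Random(0)))  # data@0
```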
Thus in Figure 3 we consider results for the following parameters \(\textit{PoD}=0.1\) and \(\textit{SoD}=1,~{}2,~{}3,~{}4,~{}5\). More particularly, we show the relative difference in evacuation time for each user with these five different delay values for all of the unique Node identifiers \(0\) to \(345\) (assigned when the navigation network is constructed) in order to visualize the simulation results. The relative difference in evacuation times for each node is defined as follows:
\[\Delta(i,\textit{PoD},\textit{SoD})=\frac{T(i,\textit{PoD},\textit{SoD})-T(i) }{T(i)}, \tag{2}\]
where \(T(i)\) denotes the "ideal" evacuation time for a user at location (node) \(i\) all the way to the exit, when there is no delay, i.e. \(\textit{PoD}=0\), while \(T(i,\textit{PoD},\textit{SoD})\) is the corresponding value with probability _PoD_ that there will be a delay of value _SoD_. We clearly see that guiding evacuees with up-to-date navigation suggestions performs better with respect to the evacuation time, and therefore the observed relative differences are positive in most cases. It is worth noticing, however, that there are still a few cases where guiding with up-to-date navigation directions performs even worse than guiding with outdated navigation suggestions. This is mainly because navigation based on up-to-date suggestions is not necessarily the optimal scheme when the walking conditions themselves are changing.
In addition, we also simulate the average evacuation time with different delay values for all the users located in the 346 different user nodes. Figure 4 plots the relative difference in average evacuation time defined as:
\[\Delta_{AVG}(\textit{PoD},\textit{SoD})\] \[=\frac{\sum_{i=0}^{345}T(i,\textit{PoD},\textit{SoD})-\sum_{i=0}^{ 345}T(i)}{\sum_{i=0}^{345}T(i)}. \tag{3}\]
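Eqs. (2) and (3) amount to the following elementary computations (a sketch with illustrative numbers, not the paper's measured data):

```python
def rel_diff(T_delayed, T_ideal):
    # Eq. (2): per-node relative difference in evacuation time.
    return (T_delayed - T_ideal) / T_ideal

def avg_rel_diff(T_delayed_all, T_ideal_all):
    # Eq. (3): relative difference of the summed (equivalently, averaged)
    # evacuation times over all user nodes.
    return (sum(T_delayed_all) - sum(T_ideal_all)) / sum(T_ideal_all)

print(rel_diff(120.0, 100.0))               # 0.2, i.e. 20% slower with delay
print(avg_rel_diff([120, 80], [100, 100]))  # 0.0: per-node effects can cancel
```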
It is observed that the relative difference in average evacuation time initially increases with the delay _SoD_, but when _SoD_ exceeds \(3\), the relative difference does not continue to increase. Thus it appears that for the parameters we have chosen there is a criticality point for the worst evacuation performance, and in our simulation it is at \(\textit{SoD}=3\).
We also carry out a group of simulations, where \(150\) users are randomly deployed at the user nodes, and Figure 5 shows the average evacuation time and the relative difference of the \(150\) users from \(53\) runs of the simulation. The observation from Figure 5 aligns with the findings presented in Figures 3 and 4.
We further evaluate the influence of different probabilities of delay in the navigation service on evacuation time. Specifically, we conduct a group of simulations, where the probability of delay in the navigation service at each node is set to \(0.0,~{}0.1,~{}0.2,~{}0.3,~{}0.4\), and \(0.5\). The delay value \(\textit{SoD}\) is set to \(1,~{}2\), and \(3\). Figure 6 shows the relative difference in average evacuation time for 346 users at different user nodes. We can see that the relative difference in average evacuation time increases with the probability of delay in the navigation service and reaches its maximum when \(\textit{PoD}=0.4\) for \(\textit{SoD}=1,~{}2,~{}3\).
## IV Impact of behavior uncertainty
When an emergency occurs, the evacuees may move due to panic or difficulties in reading instructions, along a random direction instead of the direction provided by the evacuation system. In order to explore the resulting performance, we represent this effect by an error probability and measure the evacuation time accordingly. 346 users are inserted into the simulated network in Section III. In addition, in this group of simulations, the typical traversal time across each segment is static in order to focus on the impact of the behavior uncertainty. The simulation parameters used in this section are the same as those in Section III.
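The error probability _PoE_ can be modelled per movement decision as follows (hypothetical helper names; the paper does not publish code):

```python
import random

def chosen_direction(advised, neighbors, PoE, rng):
    """With probability PoE the evacuee, due to panic or a misread
    instruction, moves toward a random neighboring node instead of
    following the advised navigation direction."""
    if rng.random() < PoE:
        return rng.choice(neighbors)
    return advised

rng = random.Random(0)
# PoE = 0 -> always the advised direction.
print(chosen_direction("exit_corridor", ["cabin", "stairs"], 0.0, rng))  # exit_corridor
```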
Figure 7 plots the relative difference in the evacuation time between escaping according to the navigation directions totally and escaping with a certain uncertainty probability. The relative difference is defined as follows:
\[\delta(i,PoE)=\frac{T(i,PoE)-T(i)}{T(i)}. \tag{4}\]
We can see that the relative differences are positive in all cases, which means failing to escape according to the provided navigation direction impairs the capability of the evacuation scheme.
Furthermore, we measure the relative difference in average evacuation time for 346 users, which is calculated as follows:
\[\delta_{AVG}(PoE)=\frac{\sum_{i=0}^{345}T(i,PoE)-\sum_{i=0}^{345}T(i)}{\sum_{i =0}^{345}T(i)}. \tag{5}\]
Figure 8 plots the relative difference in average evacuation time when escaping with different probabilities of behavior uncertainty (i.e., the probabilities of escaping along an erroneous direction). While behavior uncertainty does affect evacuation time, the effect is small. Moreover, we can see that the performance deteriorates as the uncertainty probability increases until it reaches 0.4 (i.e., \(PoE=0.4\)); in our simulation, a probability greater than 0.4 does not further jeopardize the evacuation performance. Compared with the impact of delay in the navigation service, the impact of behavior uncertainty is relatively slight, which means the lookup table-based method is more resilient to behavior uncertainty than to navigation delay.
## V Impact of both behavior uncertainty and delay in navigation service
In this experiment, we evaluate the combined influence of both the navigation delay and behavior uncertainty on evacuation performance measured by evacuation time.
The experiment is done in a scenario where the refreshing period is set to 5 s, the delay probability and delay value are set to 0.4 and 3, respectively, and the uncertainty probability is fixed at 0.4. Figure 9 plots the relative difference in the evacuation time taken for all users to arrive at the specified exit. We observe a dramatic increase in the evacuation time when these realistic features are taken into consideration. Therefore, from a practical perspective, it is of significant value to account for behavior uncertainty and navigation delay when doing path planning for passengers on damaged ships, so as to assist users with a successful escape.
Fig. 4: Relative difference in average evacuation time between guiding without delay and guiding with different extents of delay.
Fig. 5: Average evacuation time and the relative difference of the 150 users
## VI Conclusions and Further Research
In passenger ships and other large vehicles and aircraft, reliable emergency evacuation is required for ensuring passenger trust in the means of travel, and for their safety and wellbeing. Thus, over the last decade, substantial research has been conducted on designing technology-assisted means to provide passengers with the best advice regarding evacuation procedures during emergencies [20], and many of the proposed approaches use some form of optimization [21].
However, during emergencies, especially if the vessel is damaged, it becomes very challenging both for the underlying ICT (Information and Communication Technologies) infrastructure and for the passengers and staff to implement and follow instructions that are based on prior optimization and established routines. The ICT infrastructure may also be damaged and disconnected, and the recommended evacuation routes and the communication network may be congested. The need to support user safety and evacuation in such complex and dynamic environments with many rapidly changing variables can also overwhelm the ICT-based emergency management system itself [22]. In addition, path planning based on the updated emergency situation may be accompanied by messages about environmental dynamics which flood the communication system, aggravate the congestion, and result in packet losses and longer end-to-end delays in communicating decisions.
Fig. 8: Relative difference in average evacuation time between escaping without behavior uncertainty and with different uncertainty probabilities.
Fig. 6: Relative difference of average evacuation time of the 346 users for PoD = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5 and SoD = 1, 2, and 3, respectively.
Fig. 7: Evacuation time with different uncertainty probabilities and the relative difference in evacuation time between escaping without behavior uncertainty and with different uncertainty probabilities.
Fig. 9: Evacuation time and the relative difference in evacuation time.
Thus this work is the first study of the influence of such key side effects of emergency evacuation in a practical ship evacuation scenario. Through extensive simulations, this paper analyzes the effects of realistic features on passenger evacuation performance in ship emergencies, including behavior uncertainty represented by different probabilities, the dynamics of traversal time with different change frequencies, and different delays in the arrival of navigation instructions to passengers. Furthermore, these simulations use real-world parameters from the actual passenger cruise ship "Yangtze Gold 7" and its passenger evacuation system, and evaluate the effect of delays in the information that reaches the human users during passenger evacuation.
Assuming that ongoing situational information is gathered by wireless sensor networks [23, 24, 25], we have considered the effect of delayed decisions which are computed and forwarded by a central decision ICT system, to all end users as they pass through pre-determined "nodes" that guide them towards safe exit points. As these delays increase, we see that the emergency exit times of passengers are often substantially increased.
In future research we plan to take into account the delaying factors in advance, so as to design novel decentralized emergency navigation systems for guiding passengers to safety, that pre-locate advisory data to passengers in key system intermediate nodes, and also combine centralized decisions together with individual user decision aids with hand-held mobile devices [16].
---

# Benchmarking Data Efficiency and Computational Efficiency of Temporal Action Localization Models

Jan Warchocki, Teodor Oprescu, Yunhan Wang, Alexandru Damacus, Paul Misterka, Robert-Jan Bruintjes, Attila Lengyel, Ombretta Strafforello, Jan van Gemert

arXiv:2308.13082v1 · 2023-08-24 · http://arxiv.org/abs/2308.13082v1
###### Abstract
In temporal action localization, given an input video, the goal is to predict which actions it contains, where they begin, and where they end. Training and testing current state-of-the-art deep learning models requires access to large amounts of data and computational power. However, gathering such data is challenging and computational resources might be limited. This work explores and measures how current deep temporal action localization models perform in settings constrained by the amount of data or computational power. We measure data efficiency by training each model on a subset of the training set. We find that TemporalMaxer outperforms other models in data-limited settings. Furthermore, we recommend TriDet when training time is limited. To test the efficiency of the models during inference, we pass videos of different lengths through each model. We find that TemporalMaxer requires the least computational resources, likely due to its simple architecture.
## 1 Introduction
Temporal action localization (TAL) is concerned with automatically recognizing an action and its start and end in a video [29]. TAL has found potential use in domains such as video summarization [14] and public video surveillance [27, 29]. Various algorithms have been proposed for TAL, and deep learning models such as TriDet [23], TemporalMaxer [24], and ActionFormer [33] outperform models based on hand-crafted features [29]. These deep learning models require large datasets to train on, such as THUMOS'14 [11] or ActivityNet [9]. However, curating, annotating, and storing datasets of such scale is difficult, expensive, and time-consuming [20, 29, 30]. To save data, in this work we explore the data efficiency of deep learning-based TAL models.
In addition to data efficiency, we also evaluate compute efficiency. Compute efficiency is particularly relevant now that Transformers [26], following their success in natural language processing (NLP) [12, 26], are employed in TAL [18, 33]. Transformers are known to be computationally expensive [13, 25]. To save computing resources, in this work we explore how computationally efficient deep learning-based TAL methods are.
Our analysis of data- and compute-efficiency focuses on ActionFormer [33], STALE [19], TemporalMaxer [24], and TriDet [23], as they represent the current state-of-the-art in temporal action localization. The contributions of this paper are four-fold, as detailed below.
First, we test the data efficiency of the TAL models. Inspired by Ding [5] and Henaff [10], we train each model multiple times on a percentage of the training set and report the average mean average precision (mAP). By applying this method on both THUMOS'14 [11] and ActivityNet [9] datasets, we find that the TemporalMaxer [24] performs the best in a data-limited setting.
Second, we evaluate the effect of score fusion [28, 32, 33] on data efficiency. Score fusion combines the outputs of an evaluated model with the outputs of an auxiliary model, often UntrimmedNet [28, 33]. We find that score fusion can significantly increase the performance of the models. We thus recommend that when choosing a model for a custom dataset, the options both with and without score fusion should be considered.
Third, we test the computational efficiency of each model during training. We measure training performance by analyzing the trade-off between training time and obtained average mAP. We find that the TriDet model [23] is the best choice in training time-limited settings, because it requires the least amount of training time but still obtains the best average mAP.
Fourth, we test the computational efficiency of each model during inference. We expand on the approach of measuring the computational complexity of a model by passing to it a video of a specific size [23, 24, 33]. We evaluate each model on videos of increasing lengths and report the number of floating point operations, the memory consumed, and the inference time. We find that TemporalMaxer requires the least computational resources, while STALE [19] requires the most.
## 2 Related work
**Action recognition.** The survey by Xia and Zhan [29] identifies five different tasks in video understanding: untrimmed video classification, trimmed action recognition, temporal action proposals, temporal action localization, and dense-captioning events in videos. This work focuses on temporal action localization (TAL) for its potential uses in video summarization [14] and public surveillance [27]. In TAL, the goal is to predict which actions happen in a video stream, where they begin, and where they end. The deep learning models created for this problem can be divided into two categories [29]: two-stage and one-stage. Two-stage models [17, 7, 16] attempt to first locate the actions and then classify them. One-stage models [23, 24, 33, 18] locate and classify the actions at the same time. This work analyzes ActionFormer [33], STALE [19], TemporalMaxer [24], and TriDet [23], all of which are one-stage models.
**Testing for data efficiency.** This problem involves assessing how well a given model performs with limited training data available. A common approach is to use \(n\)-shot learning [20, 30], which involves training the model on only \(n\) samples per class. However, since a single class can be represented multiple times in a single video [11, 9], it is unclear whether \(n\) should refer to the number of videos the given class appears in or whether it is the total number of instances of the class. Furthermore, representing each class equally would be difficult as the number of instances of a class per video varies. An alternative approach involves training on a given percentage \(p\) of the samples from the training dataset [10, 5]. In this work, we use this approach.
**Optimizing for data efficiency.** As collecting and annotating datasets is expensive [29], related works have proposed few-shot TAL methods [20, 30]. These models use meta-learning and require all of the support videos to be input into the model at once. This makes their architecture incompatible with the architecture of current state-of-the-art models, which only expect a single video as input [23, 24, 33]. This work, therefore, analyzes the data efficiency of some of the current state-of-the-art models.
**Testing for computational efficiency.** The term 'computational efficiency' is often used to mean the number of floating point operations [23, 24, 25, 33, 8], the memory used [13, 25], or the training [15] or inference time [23, 24, 33]. In the task of temporal action localization, TriDet [23], TemporalMaxer [24], and ActionFormer [33] all report the number of floating point operations as the number of multiply-accumulate (MAC) operations, together with the time it takes to forward a single video of a fixed length through the model. However, no experiments have been performed that show how these models scale with an increase in video length. This is relevant, as models that scale linearly will asymptotically outperform models that scale, e.g., quadratically. Hence, even if a quadratic model outperforms a linear model on short videos, it will perform worse on longer videos. Thus, in this work, the inference performance of each of the tested models is measured on videos of increasing lengths.
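As a toy illustration of this scaling argument (with hypothetical cost constants, not measured values): a model whose inference cost grows quadratically in video length can beat a linear one on short videos yet lose on long ones.

```python
def linear_cost(n, a=5.0):
    """Hypothetical linear-scaling inference cost for a video of n features."""
    return a * n

def quadratic_cost(n, b=0.01):
    """Hypothetical quadratic-scaling inference cost for a video of n features."""
    return b * n * n

# Crossover at n = a / b = 500 features:
print(quadratic_cost(200) < linear_cost(200))    # True: quadratic model cheaper on short videos
print(quadratic_cost(3000) < linear_cost(3000))  # False: linear model cheaper on long videos
```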
Furthermore, motivated by [15], this work reports the training time and the achieved mean average precision of each of the TAL models. This is done to better understand the suitability of each model for settings where the training time is limited.
**Optimizing for computational efficiency.** Both TriDet [23] and TemporalMaxer [24] aim to lower the required computational cost of ActionFormer [33]. In TriDet, this is achieved by replacing the multi-head self-attention module with an efficient Scalable-Granularity Perception layer [23]. TemporalMaxer, on the other hand, replaces the entire transformer module with a max-pooling block [24]. This work compares the computational efficiencies of ActionFormer, STALE [19], TemporalMaxer, and TriDet.
## 3 Models
**ActionFormer.** ActionFormer [33] was one of the first models that showed a successful use of Transformers [26] in temporal action localization. The model uses an encoder-decoder architecture with a Transformer encoder and a convolutional decoder. At the time of its proposal, the model reached state-of-the-art performance on the THUMOS'14 dataset obtaining an average mAP of 66.8%. The model also showed promising results on both the ActivityNet [9] and EPIC-Kitchens 100 [4] datasets. We also selected this model for evaluation, as the architectures of newer models, TriDet [23] and TemporalMaxer [24], are inspired by the architecture of the ActionFormer.
**STALE.**_Zero-Shot Temporal Action Detection via Vision-Language Prompting_ (STALE) [19] is the most recent, state-of-the-art method in zero-shot temporal action localization. Inspired by CLIP [21], STALE uses a temporal vision transformer [6] to encode videos into video embeddings and a text transformer [26] to encode class prompts into text embeddings. STALE attempts to learn the inter-relationship between vision and language via cross-attention [26]. The model achieved an average mAP of 52.9% and 36.4% on the THUMOS'14 and ActivityNet datasets, respectively, outperforming similar models. We selected this model for evaluation to compare it against methods that were not designed for a zero-shot learning scenario.
**TemporalMaxer.** The TemporalMaxer [24] model was constructed to require a low computational cost without sacrificing localization performance. Instead of employing a computationally-heavy backbone, such as a Transformer [26, 33], the model uses a basic, parameter-free max pooling block on top of a pre-trained 3D CNN. This model currently represents the state-of-the-art on the MultiTHUMOS dataset [31], obtaining an average mAP of 29.9%. Importantly, the model also has a lower computational complexity compared to other models. On a video of a length of around 5 minutes from the THUMOS'14 dataset, the inference time of the TemporalMaxer was observed to be 3x shorter than that of the ActionFormer.
**TriDet.** The TriDet model [23] bases its architecture on the ActionFormer. Instead of using a multi-head self-attention mechanism, the model replaces it with an efficient Scalable-Granularity Perception (SGP) layer. The resulting model improves on the performance of the ActionFormer, obtaining an average mAP of 69.3% on the THUMOS'14 dataset. Furthermore, the TriDet model represents the current state-of-the-art for the EPIC-Kitchens 100 dataset. Finally, the model was also shown to require less time and fewer floating point operations than the ActionFormer when performing inference on a 5 minute video from the THUMOS'14 dataset.
## 4 Evaluation setup
### Data efficiency
**Evaluation metrics.** Following common practice [1, 23, 24, 33], the models were evaluated by reporting the achieved mean average precision (mAP) at different tIoU thresholds. Temporal intersection over union (tIoU) is a 1-dimensional Jaccard similarity metric: it is computed as the ratio of the intersection of the predicted and actual duration of an action to their union. Given a tIoU threshold \(\mu\) and a class \(c\), correct predictions are those whose tIoU \(\geq\mu\) and whose predicted class is \(c\). Precision is then the ratio of the number of correct predictions to the total number of predictions made for the class \(c\). As there can be multiple videos for each class \(c\), average precision is the average of the precisions obtained in each of those videos. Finally, mean average precision is the average AP over all classes \(c\). Thus, in general, at a fixed tIoU threshold \(\mu\), the higher the mAP, the better the model performs.
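As a concrete illustration of the metric described above (an illustrative sketch, not code from any of the cited models), the tIoU of two 1-D temporal segments can be computed as:

```python
def temporal_iou(pred, gt):
    """tIoU of two 1-D segments given as (start, end) tuples."""
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))      # length of the overlap
    union = (pe - ps) + (ge - gs) - inter            # inclusion-exclusion
    return inter / union if union > 0 else 0.0

# A prediction overlapping the ground truth by half its span:
print(temporal_iou((0.0, 10.0), (5.0, 15.0)))  # 0.333...
# At a threshold of mu = 0.5 this prediction would not count as correct.
```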
**Testing procedure.** In this setup, it is assumed that a dataset \(\mathcal{D}\) has a predefined split into a training set \(\mathcal{D}_{\text{train}}\) and a testing set \(\mathcal{D}_{\text{test}}\). Following works by Ding _et al._[5] and Henaff [10], a percentage \(p\) of the training set \(\mathcal{D}_{\text{train}}\) was randomly and uniformly sampled to create a subset \(\mathcal{D}_{\text{s}}\). The models were then trained on the set \(\mathcal{D}_{\text{s}}\) and evaluated on the set \(\mathcal{D}_{\text{test}}\). During the evaluation, mean average precision was calculated at different tIoU thresholds. The sampling, training, and testing procedure was repeated 5 times [2, 5] with different random splits. The mAP for each threshold was then averaged and the standard deviation was reported. The entire procedure was repeated for multiple percentages \(p\). Algorithm 1 describes the exact testing procedure in the form of pseudocode.
In the pseudocode, the function \(\operatorname*{sample}\) randomly samples videos from the training set, such that:
\[|\mathcal{D}_{\text{s}}|=\operatorname{round}\left(|\mathcal{D}_{\text{train} }|\cdot\frac{p}{100\%}\right) \tag{1}\]
with \(\operatorname{round}\) rounding the value to the nearest integer. Additionally, the function \(\operatorname*{sample}\) needs to ensure that each action class is represented at least once in the resulting set \(\mathcal{D}_{\text{s}}\). In practice, this was realized by repeatedly sampling from the set \(\mathcal{D}_{\text{train}}\) until a split, where all classes are represented, was found. The function \(\operatorname*{calculate-mAP}\) evaluates the model, that is, it calculates the mean average precision at different tIoU thresholds the model achieved on the test set \(\mathcal{D}_{\text{test}}\).
```
Input:  D_train = {(X_i, Y_i)} for i = 1..N
        D_test  = {(X_i, Y_i)} for i = 1..M
for p = 10%, 20%, ..., 100% do
    mAPs ← empty list
    for i = 1, ..., 5 do
        D_s ← sample(D_train, p)
        Train on D_s
        mAP ← calculate-mAP(D_test)
        mAPs.append(mAP)
    Report avg(mAPs) and std(mAPs)
```
**Algorithm 1** Data efficiency testing procedure
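A minimal Python sketch of the sampling step, following our reading of Algorithm 1 and Equation (1); the class-coverage check uses the rejection-sampling approach described above, and the tiny example dataset is hypothetical:

```python
import random

def sample(train_videos, p, max_tries=1000):
    """Sample round(|D_train| * p/100) videos such that every action class
    is represented at least once (rejection sampling).

    train_videos: list of (video_id, set_of_classes) pairs.
    """
    all_classes = set().union(*(cls for _, cls in train_videos))
    k = round(len(train_videos) * p / 100.0)         # Eq. (1)
    for _ in range(max_tries):
        subset = random.sample(train_videos, k)
        if set().union(*(cls for _, cls in subset)) == all_classes:
            return subset
    raise RuntimeError("no class-covering split found")

videos = [("v1", {"jump"}), ("v2", {"run"}),
          ("v3", {"jump", "run"}), ("v4", {"run"})]
subset = sample(videos, 50)  # 2 of 4 videos, covering both classes
print(len(subset))  # 2
```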
To understand the results between different datasets, for each percentage \(p\) the expected number of instances per class is reported. This will help in investigating how many instances per class each model requires. Given a dataset \(\mathcal{D}_{\text{train}}\) containing \(N\) samples, having \(M\) action instances in total, and \(C\) action classes, the expected number of instances per class for each percentage \(p\) is calculated as:
\[\text{\#/class}=\frac{p}{100\%}\cdot\frac{N}{C}\cdot\frac{M}{N}=\frac{p}{100 \%}\cdot\frac{M}{C} \tag{2}\]
It should be noted that the value computed in Equation (2) is an estimate. The exact values would depend on the splits \(\mathcal{D}_{\text{s}}\) used in the testing procedure. Nonetheless, this approximation was found to be useful in practice when comparing the models on different datasets.
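For example, plugging the THUMOS'14 validation-set statistics reported in Section 5 (M = 3,007 action instances over C = 20 classes) into Equation (2):

```python
def instances_per_class(p, M, C):
    """Eq. (2): expected number of action instances per class
    when training on p percent of a set with M instances and C classes."""
    return (p / 100.0) * (M / C)

# THUMOS'14 validation set: 3,007 instances over 20 classes.
for p in (10, 60, 100):
    print(p, instances_per_class(p, 3007, 20))
# p = 60% gives roughly 90 instances per class, matching the
# saturation point discussed in Section 5.1.
```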
**Score fusion.** Score fusion is a commonly used technique in TAL [19, 23, 33] to improve the performance of a model. Although the exact implementations vary between models, the general rule is that the final predictions made by a model are combined with the output of UntrimmedNet [19, 23, 28, 33]. UntrimmedNet [28] is a weakly-supervised action recognition model which only predicts video-level classes without temporal localization. It should be noted that UntrimmedNet is trained on the full ActivityNet and THUMOS datasets, while in practice limited training data would also apply to UntrimmedNet. Thus, in this work, the setups that use score fusion by default are also evaluated without score fusion.
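The exact fusion rule differs per model; one common form combines the detector's per-segment class scores with external video-level class scores, e.g. via a weighted geometric mean. The following is a hypothetical sketch of that idea, not the implementation of any of the cited models:

```python
def fuse_scores(segment_scores, video_scores, alpha=0.5):
    """Combine per-segment class scores with external video-level class
    scores via a weighted geometric mean (hypothetical fusion rule).

    segment_scores: {class: score} from the localization model.
    video_scores:   {class: score} from an UntrimmedNet-style
                    video-level classifier.
    alpha: weight given to the segment-level score.
    """
    return {c: (segment_scores[c] ** alpha) * (video_scores[c] ** (1 - alpha))
            for c in segment_scores}

seg = {"jump": 0.9, "run": 0.2}   # detector scores for one segment
vid = {"jump": 0.4, "run": 0.8}   # video-level classifier scores
print(fuse_scores(seg, vid))      # jump ≈ 0.6, run ≈ 0.4
```

With `alpha=1.0` the fusion degenerates to the detector's own scores, which corresponds to the "without score fusion" setting evaluated in Section 5.1.1.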
### Computational efficiency
#### 4.2.1 Training performance
Inspired by Li _et al_. [15], the training time of each of the models is reported alongside the average mAP achieved on the test set. The training time is measured without initialization, that is, only the time spent in the training loop is measured. In this way, only the model performance is measured, without the time taken by external methods such as PyTorch data loaders. The training and testing procedure is repeated 5 times using different random seeds, each time measuring the time spent and the average mAP achieved.
#### 4.2.2 Inference performance
**Evaluation metrics.** Following the works on TriDet [23], TemporalMaxer [24], and ActionFormer [33] the models were evaluated by reporting the total number of multiply-accumulate (MAC) floating point operations, memory consumed, and the inference time when fed an input video. To count the number of multiply-accumulate operations, the fvcore library [22] was used. As Transformer-based methods are known to require large amounts of memory [13, 25], we additionally report the total GPU memory (VRAM) footprint of each model, which is measured using the max_memory_allocated method from PyTorch.
**Testing procedure.** The models were evaluated on randomly generated tensors, whose shapes correspond to videos of differing lengths. To guarantee the independence of results, the experiments for inference time, memory consumption, and number of MACs were run independently. Before each inference time measurement, the random tensor was passed through the model once as a warm-up procedure. Without this procedure, it was observed for the ActionFormer model that the inference time would be constant for all lengths of the input tensor. This was most likely caused by memory allocations happening as the tensor was being passed through the model. As such, the warm-up procedure was applied to all models to ensure a fair evaluation for all input sizes. Furthermore, the experiments for inference time were repeated 5 times [23] with different random tensors.
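The warm-up-then-measure procedure described above can be sketched in plain Python; `dummy_model` here is a stand-in callable, not one of the evaluated networks:

```python
import time

def time_inference(model, inputs, n_repeats=5, warmup=True):
    """Return n_repeats wall-clock timings of a full pass over `inputs`,
    optionally after one warm-up call so that one-off allocations are
    excluded from the measurement."""
    if warmup:
        model(inputs[0])  # warm-up pass; result discarded
    times = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        for x in inputs:
            model(x)
        times.append(time.perf_counter() - start)
    return times

# Stand-in "model": sum over a feature vector represented as a list.
dummy_model = sum
tensors = [[1.0] * length for length in (200, 400, 600)]
print(len(time_inference(dummy_model, tensors)))  # 5
```

When timing a model on a GPU, one would additionally synchronize the device (e.g., `torch.cuda.synchronize()`) before reading the clock, since CUDA kernels launch asynchronously.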
Additional setup was also required by the ActionFormer model. Given a dataset \(\mathcal{D}\), the ActionFormer model is parameterized by a value max_seq_len indicating the maximum length of a video in \(\mathcal{D}\) expressed in the number of features [33]. During inference, all videos are padded with zeros to the max_seq_len length, which results in the same amount of computation done regardless of video length. To alleviate this issue, the value max_seq_len was configured to the lowest allowable value, which would be found through an inspection of the code. It should be noted that the value max_seq_len is only used during training and changing it during inference does not influence the output of the model, which was verified with one of the authors of the ActionFormer.
## 5 Experiments
**Datasets.** The models were evaluated on two datasets, commonly used to assess temporal action localization algorithms [19, 23, 33]: THUMOS'14 [11] and ActivityNet [9]. THUMOS'14 contains 413 untrimmed videos and 20 action classes. This dataset is further split into a validation set, containing 213 videos and a test set containing 200 videos. In total, the validation set contains 3,007 action instances. We follow the configuration from the authors of the tested models and hence train the models on the validation set and test on the test set [19, 23, 24, 33]. ActivityNet contains around 20,000 videos with 200 action classes. The dataset is further split into a training set (10,024 videos), a validation set (4,926 videos), and a test set (5,044 videos). Using the approach from [19, 23, 33], the models are trained on the training set and evaluated on the validation set. As some of the videos from the ActivityNet dataset have become unavailable over time, it should be noted that the exact size of the dataset varies when using different models or features.
**Features.** We take into consideration all features that were made available by the authors of a model for the given dataset. Hence, on the THUMOS'14 dataset, ActionFormer [33], TemporalMaxer [24], and TriDet [23] are all evaluated using the Inflated 3D (I3D) features [3]. On the ActivityNet dataset, the ActionFormer was evaluated using both I3D and TSP [1] features. STALE [19] was tested with the CLIP [21] features. The STALE model was not evaluated using I3D features due to limited compute availability. Finally, the performance of the TriDet model on the ActivityNet dataset was measured using the TSP features.
**Experimental setup.** All of the models were trained and tested using a single NVIDIA Tesla V100S 32GB located on an HPC cluster. All of the training and testing hyperparameters were left unchanged for the models unless otherwise stated in the subsequent sections. During data efficiency experiments, we therefore also use the score fusion implemented by the ActionFormer, STALE, and TriDet models on the ActivityNet dataset [19, 23, 33]. We reflect on the impact of score fusion on the performance of the models in Section 5.1.1.
### Data efficiency
**Results on THUMOS'14.** The results on the THUMOS'14 dataset can be seen in Figure 1(a). Firstly, we note that at \(p=100\%\), the average performance of each of the models is slightly lower than in the original works. We find an average mAP of \(66.57\pm 0.22\)\([\%]\) compared to \(66.8\%\) for the ActionFormer, \(66.79\pm 0.16\)\([\%]\) versus \(67.7\%\) for TemporalMaxer, and \(68.07\pm 0.42\)\([\%]\) instead of \(69.3\%\) for TriDet. As noticed by [33], however, different hardware setups may lead to different results, which might explain the differences observed in this work. Furthermore, we see that all models follow a similar learning curve. This is most likely caused by the fact that the models share a similar architecture, inspired by that of the ActionFormer [23, 24, 33]. Moreover, at low percentages \(p\), the TemporalMaxer [24] model performs the best. This can be explained by its simpler architecture, which requires less training data than the other models. We also note that for all models, the increase in performance noticeably slows down above \(p=60\%\), which corresponds to around \(90\)-\(100\) action instances per class. Each model thus saturates at around \(100\) action instances per class and does not gain much from additional data. We also see that the TriDet model begins to outperform the TemporalMaxer around the same mark.
**Results on ActivityNet.** As can be seen in Figure 1(b), both the ActionFormer and the TriDet models outperform the STALE model at all tested percentages \(p\). Furthermore, we observe that ActionFormer and TriDet saturate around the 40-60% mark and do not gain from additional training data. This corresponds to around 30-40 action instances per class. We also notice that the STALE model does not visibly gain from an increase in training data: it achieves an average mAP of \(19.06\pm 0.22\)\([\%]\) at \(p=10\%\) compared to \(19.53\pm 0.22\)\([\%]\) at \(p=100\%\). This flat learning curve is caused by score fusion, as is shown experimentally in Section 5.1.1.
**Discussion.** From Figure 1(a), we can observe that the TemporalMaxer should likely be chosen in settings where the amount of data is limited: its simple architecture gives it the best data efficiency of the tested models. Figure 1(b) suggests that the ActionFormer or TriDet models should be chosen in favor of STALE. Based on the combined results in Figure 1, it is difficult to put an exact bound on the number of action instances per class required by the models. On both datasets, however, the models reach their near-best performance with less than or around 100 action instances per class. This suggests that making the datasets larger will not further improve the performance of the tested models.
Figure 1: Reported average mAP@tIoU for the tested models on the THUMOS'14 [11] and ActivityNet [9] datasets. For each model, only the average mAP is shown. The width of each line corresponds to two standard deviations obtained by repeating the procedure 5 times for each \(p\). Additionally, the expected average number of instances per class (#/class) is reported as a secondary x-axis. We find that all of the models reach their near-best performance with less than or around 100 action instances per class.
#### 5.1.1 Score fusion
In the default configuration, score fusion is used by the ActionFormer [33], STALE [19], and TriDet [23] on the ActivityNet dataset [9]. We repeat the data-efficiency experiments without score fusion for these models on the ActivityNet dataset and report the results. We use the default features for these experiments: ActionFormer and TriDet use the TSP features [1], and STALE uses CLIP [21]. The results can be seen in Figure 2. Score fusion improves the performance of the models for all tested values of \(p\), with the largest impact at low percentages \(p\), i.e., in the small-data regime.
**Discussion.** Unsurprisingly, score fusion has a significant influence on the performances of the models. It should be noted that score fusion is based on the assumption that UntrimmedNet class predictions are readily available, which in practice may not be the case. One should therefore be aware that performance on ActivityNet or Thumos does not always directly translate to true performance on a custom dataset, which may be lower. Alternatively, employing score fusion on custom data requires additional compute for retrieving the UntrimmedNet predictions. We argue that when choosing a model on a custom dataset, it is important to decide on the applicability of score fusion and evaluate the model both with and without score fusion.
### Computational efficiency
**Concurrent jobs on the HPC cluster.** By default, the GPU nodes are shared between users in our compute cluster. This setup could lead to a dependence of the training or inference time on the other jobs running on the cluster. To alleviate this issue, the training and inference time experiments were performed five times sequentially, such that the experiment jobs did not overlap. Therefore, the results for training and inference times are averaged over a total of 25 runs. We measure the remaining variance in training time, assuming that a low variance means that there are no important unmeasured confounding factors stemming from the concurrent use of the cluster.
#### 5.2.1 Training efficiency
**Results.** We present the results in Table 1. On THUMOS'14, TriDet achieves the best performance while requiring the least amount of training time on average. Interestingly, we find that the training time of the TemporalMaxer varies greatly between runs: from 1216.56 to 6829.95 seconds. This variance might come from the early stopping criterion employed in the training script of the model. Nonetheless, even in its fastest training run, TemporalMaxer is still the slowest of the tested models. On ActivityNet, ActionFormer and TriDet train for around five times as long as STALE, but also achieve much better performance. Finally, we note that the variance in training times was low for all models, except for the TemporalMaxer, thus the concurrent jobs on the cluster likely did not interfere with the experiment jobs.
**Discussion.** From the results in Table 1 we conclude that TriDet should be chosen in settings where training time is limited: it not only trains for the least amount of time on THUMOS'14 but also achieves the best performance on both datasets. If the choice is between ActionFormer and STALE, the latter could be used in limited-training-time settings; choosing STALE over ActionFormer would, however, likely come with a decrease in TAL performance.
#### 5.2.2 Inference efficiency
**Additional experimental setup.** For the ActionFormer [33], TemporalMaxer [24], and TriDet [23] models, we obtained inference efficiency results by creating random tensors corresponding to I3D features [3] extracted from videos from the THUMOS'14 dataset [11]. We obtained results for the STALE model by creating random tensors corresponding to CLIP features [21] on the ActivityNet dataset [9]. The lengths of the tensors vary from 200 to 3000 in increments of 200. This range is dictated by the ActionFormer model, whose lowest allowable value of max_seq_len is 576, so videos of lengths longer than 3456 cannot be passed through the model without further changes to the configuration.
Figure 2: Performance of ActionFormer (AF), STALE, and TriDet with score fusion (SF) and without score fusion (w/o SF). As we can observe, the performance of the models drops when score fusion is not used.
**Results.** As can be seen in Figure 3, the TemporalMaxer model consistently achieves the lowest inference time, number of floating point operations, and memory consumption. This is because of the simple architecture of the model, which contains fewer parameters than the other models [24]. Conversely, we find that the STALE model is the most computationally expensive in all three tested metrics and on all tested lengths of the input video. Furthermore, we observe that the number of floating point operations and the memory consumption increase in steps for the ActionFormer model. This is because the model architecture requires padding input videos to multiples of 576. Nonetheless, the model scales linearly with respect to the input size, which matches the claims of the original work [33]. We see that TriDet and TemporalMaxer also scale linearly with respect to the input size. As can be seen in Figure 3(b), the computational complexity of the STALE model does not increase linearly. A similar pattern is observable for the memory consumption of STALE in Figure 3(c). Interestingly, we find a linear pattern in the inference time of STALE in Figure 3(a).
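The step-wise growth in FLOPs and memory for ActionFormer follows directly from padding inputs up to the next multiple of 576; a minimal sketch of that padding rule (the cost model itself is not shown, only why the curve is piecewise constant):

```python
import math

def padded_length(T, multiple=576):
    """ActionFormer-style padding: an input of length T is padded up to
    the next multiple of `multiple`, so compute cost grows in steps
    rather than smoothly with T."""
    return math.ceil(T / multiple) * multiple

for T in (200, 576, 600, 1152, 1200):
    print(T, "->", padded_length(T))
# 200 and 576 both cost as much as a length-576 input;
# 600 and 1152 both cost as much as a length-1152 input.
```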
**Discussion.** In case of limited compute resources, TemporalMaxer should be chosen: it requires the least computational power on all tested video lengths. STALE should not be chosen in such settings, not only due to its higher computational cost but also because it scales non-linearly with respect to the input video length. Hence, even if a configuration were found that made STALE more efficient on short videos, asymptotically it would still be worse than any of the linear models.
## 6 Conclusion
In this work, we ask how well state-of-the-art temporal action localization models perform in settings limited by the amount of training data or computational resources available. We find that in a data-deficient setting the TemporalMaxer model [24] works best, likely due to its simple architecture, which consists of fewer parameters compared to other models and does not use a Transformer backbone. Additionally, we find that performance barely improves when adding data beyond 100 action instances per class. This suggests that making datasets larger will not further improve the performance of the tested models. The use of score fusion was shown to improve the performance of the models; hence, when training a model on a custom dataset, options with and without score fusion should be considered. Furthermore, we test computational efficiency during training and inference. We find that TriDet [23] offers the lowest training time as well as the best performance. Additionally, we find that TemporalMaxer requires the least computational resources at inference time, again likely due to its simple architecture without a Transformer backbone.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** & **Time [s]** & **Avg. mAP [\%]** \\ \hline \multicolumn{3}{c}{**THUMOS'14**} \\ \hline ActionFormer & 887 \(\pm\) 54 & 65.89 \(\pm\) 0.09 \\ TemporalMaxer & 2957 \(\pm\) 1660 & 66.96 \(\pm\) 0.37 \\ TriDet & 646 \(\pm\) 26 & 68.07 \(\pm\) 0.42 \\ \hline \multicolumn{3}{c}{**ActivityNet**} \\ \hline ActionFormer (I3D) & 1945 \(\pm\) 61 & 35.9 \(\pm\) 0.14 \\ ActionFormer (TSP) & 1932 \(\pm\) 232 & 36.4 \(\pm\) 0.05 \\ STALE & 401 \(\pm\) 6 & 19.37 \(\pm\) 0.16 \\ TriDet & 2236 \(\pm\) 224 & 36.57 \(\pm\) 0.18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Training performance of the compared models on the ActivityNet [9] and THUMOS'14 [11] datasets. Both the average training time and the obtained average mAP are reported. On THUMOS'14, TriDet is the fastest and performs the best. On ActivityNet, the ActionFormer and TriDet models take longer to train than STALE but also achieve better performance.
Figure 3: Inference time, number of floating point operations, and memory consumption for ActionFormer [33], TemporalMaxer [24], TriDet [23], and STALE [19]. Most notably, we find that the TemporalMaxer model requires the least computational resources during inference, while STALE requires the most.
**Limitations.** It should be noted that the method for measuring data efficiency is limited, as ActionFormer and TriDet are the only models that were evaluated on both datasets. The procedures for testing training and inference efficiency also have limitations. So far, the models have only been evaluated on the THUMOS'14 and ActivityNet datasets; results on other datasets could lead to different conclusions. Furthermore, the timing experiments were performed on a shared HPC cluster. It was, however, observed that the variance in training and inference times was small, which indicates that the concurrent jobs did not interfere with the experimental jobs.
**Future work.** This work provides insights that will help in developing future data- or computationally-efficient TAL models. Based on the results of ActionFormer [33] and STALE [19], we see that self-attention should not be the mechanism of choice if the training data or computational resources are limited. Replacing such modules with custom layers, such as SGP [23], or replacing Transformer modules with max pooling [24] improves the efficiency of the model. Finally, current models could be evaluated further in terms of data or computational efficiency: more models could be included, or the models could be evaluated on additional datasets.
**Acknowledgements.** This project is (partly) financed by the Dutch Research Council (NWO) (project VI.Vidi.192.100).
---

# Spontaneous vacuum decay in low-energy collisions of heavy nuclei beyond the monopole approximation

Popov R. V., Shabaev V. M., Maltsev I. A., Telnov D. A., Dulaev N. K., Tumakov D. A.

arXiv:2303.16288v2 (http://arxiv.org/abs/2303.16288v2), submitted 2023-03-28
###### Abstract
The problem of spontaneous vacuum decay in low-energy collisions of heavy nuclei is considered beyond the scope of the monopole approximation. The time-dependent Dirac equation is solved in a rotating coordinate system with \(z\)-axis directed along the internuclear line and the origin placed at the center of mass. The probabilities of electron-positron pair creation and the positron energy spectra are calculated in the approximation neglecting the rotational coupling. The two-center potential is expanded over spherical harmonics and the convergence with respect to the number of terms in this expansion is studied. The results show that taking into account the two-center potential instead of its spherically symmetric part preserves all the signatures of the transition to the supercritical regime that have been found in the framework of the monopole approximation and even enhances some of them.
## I Introduction
Quantum electrodynamics (QED) in the presence of superstrong electromagnetic fields predicts a number of nonlinear and nonperturbative effects such as light-by-light scattering, vacuum birefringence, and the production of electron-positron pairs (see, e.g., reviews [1; 2; 3; 4]). Experimental observation of these effects is complicated by the extremely high field strengths needed for their manifestation. One of the ways to attain such fields relies on ever-evolving laser technologies. Although laser facilities in the near future might meet the requirements for some of the effects, vacuum pair production is still far from being experimentally accessible. An alternative approach suggests using heavy nuclei as a source of a strong electric field.
In a pioneering work [5] it was shown that the \(1s\) level of a hydrogen-like ion with an extended nucleus continuously goes down with increasing nuclear charge until, at a certain value \(Z_{\rm cr}\), it reaches the border of the negative-energy continuum. This raised the question of what happens to a bound state when it joins the positron continuum. In works of Soviet and German physicists [6; 7] it was conjectured that the diving of an initially empty bound state into the negative-energy continuum can result in a spontaneous reconstruction of the QED vacuum accompanied by the creation of electron-positron pairs (for details see, e.g., Refs. [8; 9; 10; 11; 12; 13; 14; 15; 16]). A realistic scenario for the observation of this process can be realized in low-energy collisions of two heavy nuclei with the total charge exceeding the critical value, \(Z_{1}+Z_{2}>Z_{\rm cr}\)[6]. When during such collisions the nuclei get sufficiently close to each other, the \(1s\sigma\) state of the quasimolecule formed by them enters the negative-energy continuum as a resonance. As a result, if the \(1s\sigma\) state was unoccupied, an additional hole enters the lower continuum. Initially localized near the nuclei, this hole can escape to infinity as a free positron, and the initially neutral vacuum becomes charged. This process is known as spontaneous vacuum decay.
Spontaneous vacuum decay in heavy-ion collisions was a subject of intense theoretical and experimental investigations (see, e.g., reviews [17; 18; 19; 20; 21; 22] and references therein). The first theoretical calculations of pair creation in the supercritical collisions were carried out in the static approximation, according to which the pair-creation probability is proportional to the time integral of the resonance width \(\Gamma(R)\) taken along the nuclear trajectory \(R(t)\)[23; 24; 25]. Within this approximation, the total probability of spontaneous pair creation, associated with the resonance decay, energy spectra of the emitted positrons as well as their angular distributions were obtained. In Ref. [25], a correction for the nonadiabaticity of the tunneling process was also considered. However, the static approach does not take into account the dynamical pair creation induced by the time-dependent potential of the moving nuclei. It turns out that the supercritical resonance has a rather long lifetime, compared to the duration of the supercritical regime \(\tau_{\rm cr}\). For example, in collisions of uranium nuclei at the energies near the Coulomb barrier (when the nuclei touch each other) the resonance lifetime is about two orders of magnitude larger than \(\tau_{\rm cr}\). This makes the probability of spontaneous pair creation quite small. Moreover, the additional width \(\Gamma_{\rm dyn}\sim\hbar/\tau_{\rm cr}\), caused by the uncertainty principle, prevents appearance of narrow resonance structures in the energy distribution of the emitted positrons, predicted in the static approximation. Therefore, in order to verify the possibility to observe the signal from the vacuum decay, one needs to take into account the dynamical pair production.
Both the spontaneous and the dynamical mechanisms were investigated by the Frankfurt group (see, e.g., [26; 27; 28]). From the obtained results it was eventually concluded that experimental observation of spontaneous vacuum decay is possible only if the colliding nuclei would stick to each other for some time due to nuclear forces [21; 22]. However, since no evidence of such sticking has been registered to date, this scenario also does not seem promising.
In view of the upcoming experimental facilities in Germany (GSI/FAIR) [29; 30], China (HIAF) [31], and Russia (NICA) [32], interest in this problem has been renewed. New investigations concerned both static and dynamic aspects of spontaneous positron emission. The properties of the supercritical resonance were addressed for spherically symmetric [33; 34; 35; 36]1 and non-symmetric [37; 38; 39] field configurations. The behaviour of the vacuum polarization energy for supercritical Coulomb fields was examined in a series of papers, see, e.g., [40; 41] and references therein. Dynamic consideration of pair creation in heavy-nuclei collisions was targeted in the framework of the monopole approximation [42; 43; 44] and beyond [45; 46; 47]. A relativistic semiclassical approach was applied to the vacuum instability problem in Ref. [48].
Footnote 1: Although we acknowledge calculations of supercritical resonances in Refs. [34; 36], we disagree with the conclusion made by the authors about absence of spontaneous pair creation.
Recently, a new way was proposed to see the signs indicating the transition to the supercritical regime, where spontaneous electron-positron pair creation becomes possible [49; 50]. The method suggests considering collisions along trajectories corresponding to different energies but having the same distance of closest approach, \(R_{\rm min}\). As the parameters that define the specific trajectory, it is convenient to use \(R_{\rm min}\) and the ratio \(\eta=E/E_{0}\in[1,\infty)\) of the collision energy \(E\) to the energy of the head-on collision with the same \(R_{\rm min}\). The idea behind this is the opposite dependence of the dynamic and spontaneous contributions to the pair-creation probability on the nuclear velocity, characterized here by the parameter \(\eta\). Indeed, it is clear that the contribution of the spontaneous mechanism is determined by the time \(\tau_{\rm cr}\) the nuclei spend in the region \(R_{\rm min}\leq R(t)<R_{\rm cr}\), where \(R(t)\) is the internuclear distance and \(R_{\rm cr}\) is the distance at which the \(1s\sigma\) state of the quasimolecule reaches the negative-energy continuum, i.e., \(E_{1s\sigma}(R_{\rm cr})=-m_{e}c^{2}\) with \(m_{e}\) being the electron mass. This time monotonically decreases with the increase of the collision energy, i.e., of \(\eta\), and so does the contribution of the spontaneous mechanism. On the contrary, the dynamical pair production should increase with increasing \(\eta\). Therefore, the rise of the pair-creation probability as \(\eta\to 1\) is to be attributed to the transition to the supercritical regime and activation of the spontaneous mechanism. More details can be found in Ref. [50].
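For point-like charges, the head-on energy \(E_{0}\) corresponding to a given \(R_{\rm min}\) follows from the Rutherford distance of closest approach, \(R_{\rm min}=Z_{1}Z_{2}\alpha\hbar c/E_{\rm cm}\). A minimal numerical sketch, assuming nonrelativistic point-charge kinematics and \(E_{\rm cm}=E_{\rm lab}/2\) for the symmetric case (the paper's trajectories use the same Coulomb hyperbolae):

```python
ALPHA_HBARC = 1.43996  # Coulomb constant e^2/(4*pi*eps0) in MeV*fm

def head_on_energy(Z1, Z2, R_min_fm):
    """Center-of-mass energy E0 (MeV) of the head-on Rutherford collision
    whose distance of closest approach equals R_min (point charges)."""
    return Z1 * Z2 * ALPHA_HBARC / R_min_fm

# U-U collision at 6.218 MeV/u (lab frame, A = 238): for equal masses the
# center-of-mass energy is half the lab kinetic energy.
E_cm = 238 * 6.218 / 2.0
print(head_on_energy(92, 92, 17.5))   # E0 for R_min = 17.5 fm, ~696.4 MeV
print(92 * 92 * ALPHA_HBARC / E_cm)   # head-on R_min at 6.218 MeV/u, ~16.47 fm
```

The second printed value reproduces the 16.47 fm quoted in Sec. III for 6.218 MeV/u collisions; any trajectory with energy \(E=\eta E_{0}\) and a suitably chosen impact parameter shares the same \(R_{\rm min}\).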
By employing the aforementioned approach, a detailed investigation of the \(\eta\)-dependence of the pair-production probabilities and positron energy spectra was carried out in Ref. [50] and later independently confirmed in Ref. [51]. The calculations were conducted within the monopole approximation, where only the spherically symmetric part of the two-center nuclear potential is taken into account. Evidence of the transition to the supercritical regime has been found in both the pair-creation probabilities and the positron spectra. Although it has been shown that the monopole approximation works rather well for the description of the pair-creation process [45; 46; 47], it is important to study how consideration of the two-center potential would affect the signs of the transition to the supercritical regime mentioned above. Also, calculations beyond the monopole approximation are necessary to get access to other important aspects of nuclear collisions, e.g., the angle-resolved positron spectra. To this end, in this work we performed calculations taking into account higher-order terms in the decomposition of the nuclear potential over spherical harmonics. The calculations are performed in the coordinate system with the \(z\)-axis directed along the internuclear line and the origin located at the center of mass. The rotational-coupling term that appears in the time-dependent Dirac equation due to the transition to this noninertial reference frame (see, e.g., Ref. [52]) as well as the magnetic field of the nuclei were not taken into account. As was shown in Refs. [53; 54; 55; 56], the influence of these effects on the total probability and positron energy spectra is negligible. It should be noted, however, that the rotational and magnetic terms can have some impact on the positron angular distributions, which are not the subject of study of the present work.
The relativistic units (\(\hbar=c=1\)) and the Heaviside charge unit (\(\alpha=e^{2}/(4\pi)\), \(e<0\)) are used throughout the paper.
## II Theory
The calculations are based on the formalism of quantum electrodynamics with unstable vacuum developed in Ref. [57]. The nuclei are treated classically as finite-size particles moving along the hyperbolic Rutherford trajectories. The vector part of the 4-potential created by the nuclei is neglected.
The pair-creation probabilities and positron energy spectra can be expressed in terms of one-electron transition amplitudes. To calculate the amplitudes, one has to solve the time-dependent Dirac equation,
\[i\partial_{t}\psi_{i}(\mathbf{r},t)=H(t)\psi_{i}(\mathbf{r},t) \tag{1}\]
with
\[H(t)=\mathbf{\alpha}\cdot\mathbf{p}+\beta m_{e}+V(\mathbf{r},t), \tag{2}\]
where \(\mathbf{\alpha}\), \(\beta\) are the Dirac matrices, the subscript \(i\) specifies the initial condition, and \(V(\mathbf{r},t)\) is the total two-center potential generated by the colliding nuclei,
\[V(\mathbf{r},t)=V_{\rm A}\left(\left|\mathbf{r}-\mathbf{R}_{\rm A}(t)\right|\right)+V_{ \rm B}\left(\left|\mathbf{r}-\mathbf{R}_{\rm B}(t)\right|\right). \tag{3}\]
Here \(\mathbf{R}_{\rm A/B}(t)\) denotes the nuclear coordinates. In our calculations we utilize an expansion of the time-dependent wave function over a finite static basis set \(\{u_{j}(\mathbf{r})\}_{j=1}^{N}\):
\[\psi_{i}(\mathbf{r},t)=\sum_{j}a_{ji}(t)u_{j}(\mathbf{r}). \tag{4}\]
The basis set \(\{u_{j}(\mathbf{r})\}_{j=1}^{N}\) consists of a number of subsets \(\{u_{j}^{\kappa}(\mathbf{r})\}_{j=1}^{n}\) containing functions of certain angular symmetry described by the angular-momentum-parity quantum number \(\kappa\). The functions \(u_{j}^{\kappa}\) are bispinors with radial parts represented by B-splines in accordance with the dual kinetic balance (DKB) approach [58]. Each subset of \(u_{j}^{\kappa}\), pertaining to a certain \(\kappa\), is split into two parts. The first part with \(1\leq j\leq n/2\) is defined as
\[u_{j}^{\kappa}(\mathbf{r})=\frac{1}{r}\left(\begin{array}{c}B_{j}(r)\Omega_{\kappa\mu}(\mathbf{n})\\ \frac{1}{2m_{e}}\left(\frac{d}{dr}+\frac{\kappa}{r}\right)B_{j}(r)\Omega_{-\kappa\mu}(\mathbf{n})\end{array}\right) \tag{5}\]

and the second one with \(n/2<j\leq n\) reads

\[u_{j}^{\kappa}(\mathbf{r})=\frac{1}{r}\left(\begin{array}{c}\frac{1}{2m_{e}}\left(\frac{d}{dr}-\frac{\kappa}{r}\right)B_{j}(r)\Omega_{\kappa\mu}(\mathbf{n})\\ B_{j}(r)\Omega_{-\kappa\mu}(\mathbf{n})\end{array}\right). \tag{6}\]
Here \(B_{j}(r)\) is the \(j\)th B-spline, \(\Omega_{\kappa\mu}(\mathbf{n})\) is the spherical spinor, and \(\mathbf{n}=\mathbf{r}/r\). This choice of basis functions is highly advantageous in the case of symmetric collisions, where the odd harmonics in the multipole expansion of the two-center potential,
\[V(\mathbf{r},t)=\sum_{L=0}^{\infty}\sum_{M=-L}^{L}\sum_{\alpha=\mathrm{A,B}}V_{LM} ^{\alpha}\left(\mathbf{r},\mathbf{R}_{\alpha}(t)\right)Y_{LM}(\mathbf{n}), \tag{7}\]
where
\[V_{LM}^{\alpha}\left(\mathbf{r},\mathbf{R}_{\alpha}(t)\right)=\int d\mathbf{n}\ Y_{LM}^{ \ast}(\mathbf{n})\ V_{\alpha}\left(|\mathbf{r}-\mathbf{R}_{\alpha}(t)|\right), \tag{8}\]
cancel out in the center-of-mass frame. Thus, the states with opposite spatial parity become decoupled and can be propagated independently. This, in turn, reduces the size of matrices describing the discretized version of Eq. (1) (see below) by almost a half, which significantly facilitates the computations.
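For a point charge displaced along the \(z\)-axis, the expansion (7) reduces to the familiar Legendre series \(1/|\mathbf{r}-R\hat{\mathbf{z}}|=\sum_{L}r_{<}^{L}/r_{>}^{L+1}\,P_{L}(\cos\theta)\). The toy sketch below illustrates the convergence of such a truncated expansion in \(L_{\rm max}\), which is the convergence studied in this work (extended nuclear charge distributions modify only the short-range terms, and are not modeled here):

```python
import numpy as np

def legendre_P(L, x):
    """P_L(x) via the Bonnet recurrence (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    p0, p1 = np.ones_like(x), x
    if L == 0:
        return p0
    for n in range(1, L):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def point_charge_partial_sum(r, cos_t, R, Lmax):
    """Multipole expansion of 1/|r - R z_hat| truncated at Lmax."""
    lo, hi = min(r, R), max(r, R)
    return sum(lo**L / hi**(L + 1) * legendre_P(L, np.asarray(cos_t))
               for L in range(Lmax + 1))

r, cos_t, R = 3.0, 0.4, 1.0
exact = 1.0 / np.sqrt(r**2 + R**2 - 2 * r * R * cos_t)
for Lmax in (0, 2, 8):
    print(Lmax, float(point_charge_partial_sum(r, cos_t, R, Lmax)), float(exact))
```

The \(L_{\rm max}=0\) term is the monopole approximation; the partial sums converge geometrically at rate \(r_{<}/r_{>}\), which is why a modest number of harmonics suffices away from the nuclei.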
When using a finite basis set, the initial Eq. (1) is transformed to a system of ordinary differential equations:
\[iS\frac{\partial\mathbf{a}_{i}(t)}{\partial t}=H(t)\mathbf{a}_{i}(t), \tag{9}\]
where \(\mathbf{a}_{i}=\{a_{1i},\dots,a_{Ni}\}\) denotes the array of expansion coefficients, \(S_{jk}=\langle u_{j}|u_{k}\rangle\) is the overlap matrix, and \(H_{jk}(t)=\langle u_{j}|H(t)|u_{k}\rangle\) is the Hamiltonian matrix. The set of equations (9) is subsequently solved with the aid of the Crank-Nicolson scheme [59]. This scheme imposes the following relation on the coefficients \(\mathbf{a}_{i}(t)\) taken at adjacent time steps separated by interval \(\Delta t\):
\[\left[S+\frac{i\Delta t}{2}H(t+\Delta t/2)\right]\mathbf{a}_{i}(t+ \Delta t)=\] \[\left[S-\frac{i\Delta t}{2}H(t+\Delta t/2)\right]\mathbf{a}_{i}(t). \tag{10}\]
To further simplify the calculations we use the coordinate system whose \(z\)-axis is tied to the internuclear line and rotates together with it. Meanwhile, the rotational-coupling term \(-\mathbf{j}\cdot\mathbf{\omega}\) (\(\mathbf{j}\) is the electronic angular momentum and \(\mathbf{\omega}\) is the angular velocity vector) that appears in the Hamiltonian upon the transformation [52] is neglected. In this coordinate system we can use the eigenfunctions \(\varphi_{i}\) of \(H(t_{\mathrm{in}})=H(t_{\mathrm{out}})\) as the initial and final states. These eigenfunctions are found from the matrix version of the stationary Dirac equation. Using the expansion of \(\varphi_{i}\) similar to Eq. (4) with the coefficients \(c_{k}\), one arrives at the following generalized eigenvalue problem:
\[H\mathbf{c}=\epsilon S\mathbf{c}, \tag{11}\]
where \(H_{jk}=\langle u_{j}|H(t_{\mathrm{in}})|u_{k}\rangle\) and \(\mathbf{c}=\{c_{1},\dots,c_{N}\}\). Solving Eq. (11) yields a set of eigenvalues \(\epsilon_{i}\) and eigenvectors \(\mathbf{c}_{i}\) (\(i=1,\dots,N\)) which represent a discretized version of the \(H(t_{\mathrm{in}})\) spectrum. The initial conditions for Eq. (9) are then set as
\[\mathbf{a}_{i}(t_{\mathrm{in}})=\mathbf{c}_{i}. \tag{12}\]
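The generalized eigenvalue problem (11) can be reduced to a standard Hermitian one via a Cholesky factorization of the overlap matrix; a minimal numpy-only sketch of that reduction (library routines such as `scipy.linalg.eigh(H, S)` perform the same steps internally):

```python
import numpy as np

def generalized_eigh(H, S):
    """Solve H c = eps S c for Hermitian H and positive-definite S.
    With S = L L^H, solve the standard problem (L^-1 H L^-H) y = eps y,
    then map the eigenvectors back via c = L^-H y."""
    L = np.linalg.cholesky(S)
    Linv = np.linalg.inv(L)
    Ht = Linv @ H @ Linv.conj().T        # standard Hermitian problem
    eps, Y = np.linalg.eigh(Ht)          # eps ascending, Y unitary
    C = Linv.conj().T @ Y                # columns C[:, i] are eigenvectors c_i
    return eps, C

rng = np.random.default_rng(1)
N = 5
H = rng.normal(size=(N, N)); H = (H + H.T) / 2
B = rng.normal(size=(N, N)); S = B @ B.T + N * np.eye(N)
eps, C = generalized_eigh(H, S)
# residual of H c = eps S c for the lowest eigenpair
print(np.linalg.norm(H @ C[:, 0] - eps[0] * (S @ C[:, 0])))
```

The discrete spectrum \(\{\epsilon_{i},\mathbf{c}_{i}\}\) obtained this way supplies both the initial conditions (12) and the final states entering the amplitudes (13).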
The one-electron transition amplitudes attain the form
\[A_{fi} =\langle\varphi_{f}|\psi_{i}(t_{\mathrm{out}})\rangle\] \[=\mathbf{c}_{f}^{\dagger}S\mathbf{a}_{i}(t_{\mathrm{out}}). \tag{13}\]
Finally, the mean number of positrons created in the \(m\)th energy state is [57; 18]
\[\overline{n}_{m}=\sum_{\epsilon_{j}>-1}|A_{mj}|^{2}. \tag{14}\]
The calculations of positron energy spectra were performed with the modified Stieltjes procedure [60; 46]:
\[\frac{dP}{d\varepsilon}\Big{(}\frac{\varepsilon_{p}+\varepsilon_{p +N_{s}-1}}{2}\Big{)}\] \[=\frac{1}{\varepsilon_{p+N_{s}-1}-\varepsilon_{p}}\left(\frac{ \overline{n}_{p}+\overline{n}_{p+N_{s}-1}}{2}+\sum_{i=1}^{N_{s}-2}\overline{n}_ {p+i}\right)\,. \tag{15}\]
Here \(N_{s}\) determines the number of energy eigenvalues involved in the calculation of one point in the spectrum. With \(N_{s}=2\) Eq. (15) turns into the regular Stieltjes formula. We used \(N_{s}\) equal to a multiple of the number of the utilized \(\kappa\) channels.
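Eq. (15) is a direct function of the discrete eigenvalues \(\varepsilon_{m}\) and occupations \(\overline{n}_{m}\); a transcription with a flat-density sanity check (the grid and occupations below are illustrative, not computed data):

```python
import numpy as np

def stieltjes_spectrum(eps, nbar, Ns=2):
    """Modified Stieltjes density of Eq. (15): each output point uses Ns
    consecutive eigenvalues; Ns=2 recovers the regular Stieltjes formula."""
    pts, dens = [], []
    for p in range(len(eps) - Ns + 1):
        q = p + Ns - 1
        # endpoint terms enter with weight 1/2, interior terms with weight 1
        weight = (nbar[p] + nbar[q]) / 2 + nbar[p + 1:q].sum()
        pts.append((eps[p] + eps[q]) / 2)
        dens.append(weight / (eps[q] - eps[p]))
    return np.array(pts), np.array(dens)

# Sanity check: occupations sampled from a flat density dP/de = 1
eps = np.linspace(0.0, 1.0, 21)     # uniform eigenvalue grid, spacing 0.05
nbar = np.full_like(eps, 0.05)      # n_m ~ density * local spacing
e_pts, dens = stieltjes_spectrum(eps, nbar, Ns=4)
print(dens)  # recovers the unit density everywhere
```

Larger \(N_{s}\) smooths the reconstructed spectrum at the expense of energy resolution, which is why it is tied to the number of \(\kappa\) channels in the basis.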
## III Results
Following the method described above, we performed calculations of the pair-creation probabilities and positron energy spectra for collisions of bare nuclei with various charge numbers. The nuclei were treated classically as homogeneously charged spheres of radius \(R_{\mathrm{n}}=1.2A^{1/3}\) fm, where \(A\) is the atomic mass number. Their motion was described by the hyperbolic trajectories. As was demonstrated in Ref. [46], when the rotation of the
internuclear axis is neglected, the dominant contribution to the probability comes from states with angular momentum projections \(|\mu|=\frac{1}{2}\). Therefore, only states with \(\mu=\frac{1}{2}\) were included into the basis set and the results were doubled. The basis functions (5), (6) were constructed with B-splines of the 9th order generated on the grid of size \(R_{\rm box}=68.5\) r.u. The nodes were distributed polynomially with \(r_{i}=R_{\rm box}\left(i/(N-1)\right)^{4}\). The initial and final internuclear distance was taken to be \(R(t_{\rm in})=R(t_{\rm out})\equiv R_{0}=5000\) fm. The number of propagated electron states was reduced by introducing a cutoff energy \(\varepsilon_{\rm c}=6\) r.u. Only states with energy \(\varepsilon\in(-1,\ \varepsilon_{\rm c}]\) were taken into account in Eq. (14), providing the relative inaccuracy of the sum on the level of \(10^{-4}\).
### Pair-creation probabilities
First, we studied the dependence of the pair-creation probability on the number of \(\kappa\) channels included in the expansion (4) of the time-dependent wave function. For this purpose we considered collisions of bare uranium nuclei at the energy of 6.218 MeV/u. Table 1 contains the total pair-creation probability \(P_{\rm t}\) and the contributions of the ground state (\(P_{\rm g}\)) and of all bound states (\(P_{\rm b}\)) obtained for several impact parameters in the range from 0 to 30 fm. For comparison, the values calculated in Ref. [46] are also presented. The table shows a rather fast convergence of the total probability with respect to the number of \(\kappa\) channels. For example, the basis with \(|\kappa|_{\rm max}=3\) already provides a deviation from the converged results of less than 1%. Thus, in all further calculations only functions with \(|\kappa|\leq 3\) were included in the basis.
Henceforth we consider the total pair-creation probability and denote it with \(P\) omitting the subscript. It was shown in Refs. [49; 50] that in the scope of the monopole approximation the pair-creation probability as a function of \(\eta\) increases as \(\eta\to 1\), when \(R_{\rm min}\) and \(Z_{\rm t}=Z_{1}+Z_{2}\) enter deeply enough into the supercritical domain of collision parameters. This increase can serve as an indication of the transition to the supercritical regime. In this work we studied how the dependence of the probability \(P\) on \(\eta\) changes when higher-order terms in the potential decomposition are brought into consideration. For \(R_{\rm min}=17.5\) fm, the results obtained in the basis with \(|\kappa|_{\rm max}=3\) for symmetric collisions of bare nuclei with subcritical (\(Z=84\)) and supercritical (\(Z=88,\ 92,\ 96\)) charge numbers are displayed in Fig. 1 in comparison with the monopole-approximation results. The comparison shows that the effects associated with higher-order terms somewhat enhance the manifestation of the increase of \(P\) as \(\eta\to 1\) for supercritical charge numbers. For instance, in the case of the U\({}^{92+}\)-U\({}^{92+}\) collisions, the probability obtained with \(|\kappa|_{\rm max}=3\) exhibits a shallow minimum near \(\eta=1\), which is absent in the monopole approximation.
The influence of the nonmonopole terms becomes more apparent when considering the derivative of the pair-creation probability with respect to the parameter \(\eta\), \(dP/d\eta\), at \(\eta=1\). Figure 2 represents the contributions of odd (\(\mathcal{P}=-1\)) and even (\(\mathcal{P}=1\)) states to \(dP/d\eta\big{|}_{\eta=1}\) as functions of \(Z\). As can be seen from Fig. 2, the deviation from the monopole results is hardly visible until the corresponding channel becomes supercritical, which happens at \(Z\approx 87.3\) for \(\mathcal{P}=1\) and \(Z\approx 94.8\) for \(\mathcal{P}=-1\). In the supercritical region the values of \(dP/d\eta\) obtained with \(|\kappa|_{\rm max}=3\) lie lower than the monopole ones. This behavior of \(dP/d\eta\) aligns with the findings of Refs. [38; 39], where the supercritical-resonance parameters were examined beyond the monopole approximation. According to Refs. [38; 39], inclusion of higher-order terms in the potential decomposition results in about a 20% increase in the resonance width of the U\({}^{183+}_{2}\) quasimolecule at the internuclear distance of 16 fm. Furthermore, this increase in width turns out to be larger for larger internuclear separations. Note that the supercritical-resonance width is exclusively due to spontaneous pair creation, while in collisions of heavy nuclei both the spontaneous and dynamic mechanisms contribute to the total pair-creation probability. As seen in Table 1, the overall increase in the pair-creation probability for head-on collisions of uranium nuclei at the energy of 6.218 MeV/u (which corresponds to the internuclear distance of 16.47 fm) amounts to approximately 5%. This may indicate that the relative contribution of the spontaneous mechanism to the total pair production became larger, although the electron-positron pairs are predominantly created by the dynamic mechanism.
As a result one may observe an enhancement of the signal indicating the transition to the supercritical regime found in \(dP/d\eta\), namely the sign change from positive to negative. Another factor that can play a role is the extended duration of the supercritical regime, \(\tau_{\rm cr}\), due to the increase in the critical internuclear distance \(R_{\rm cr}\).
Figure 1: Total pair-creation probability as a function of \(\eta\) with \(R_{\rm min}=17.5\) fm. Solid blue lines depict results obtained with \(|\kappa|_{\rm max}=3\), dashed orange curves correspond to the monopole-approximation results.
### Positron spectra
Another signature of the transition to the supercritical regime found in Ref. [50] concerns the \(\eta\)-dependence of the maximum of the positron energy spectra obtained in collisions with fixed \(R_{\rm min}\). It was shown in Ref. [50] within the monopole approximation that in the case of subcritical collisions the spectra corresponding to larger \(\eta\) possess higher peak values, whereas for supercritical collisions the dependence is inverted and the peak values decrease with increasing \(\eta\). In this work we examined whether this behavior remains valid beyond the monopole approximation. First, we considered collisions of bare uranium
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
 & \(|\kappa|_{\rm max}\) & \multicolumn{7}{c}{Impact parameter (fm)} \\ \cline{3-9}
 & & 0 & 5 & 10 & 15 & 20 & 25 & 30 \\ \hline
\(P_{\rm g}\) & 1 & \(1.04\times 10^{-2}\) & \(8.80\times 10^{-3}\) & \(6.02\times 10^{-3}\) & \(3.84\times 10^{-3}\) & \(2.41\times 10^{-3}\) & \(1.51\times 10^{-3}\) & \(9.50\times 10^{-4}\) \\
 & 3 & \(1.09\times 10^{-2}\) & \(9.24\times 10^{-3}\) & \(6.41\times 10^{-3}\) & \(4.15\times 10^{-3}\) & \(2.64\times 10^{-3}\) & \(1.68\times 10^{-3}\) & \(1.07\times 10^{-3}\) \\
 & 5 & \(1.11\times 10^{-2}\) & \(9.46\times 10^{-3}\) & \(6.58\times 10^{-3}\) & \(4.27\times 10^{-3}\) & \(2.73\times 10^{-3}\) & \(1.74\times 10^{-3}\) & \(1.11\times 10^{-3}\) \\
 & 7 & \(1.10\times 10^{-2}\) & \(9.34\times 10^{-3}\) & \(6.50\times 10^{-3}\) & \(4.23\times 10^{-3}\) & \(2.70\times 10^{-3}\) & \(1.73\times 10^{-3}\) & \(1.11\times 10^{-3}\) \\
 & 9 & \(1.08\times 10^{-2}\) & \(9.24\times 10^{-3}\) & \(6.42\times 10^{-3}\) & \(4.18\times 10^{-3}\) & \(2.67\times 10^{-3}\) & \(1.71\times 10^{-3}\) & \(1.10\times 10^{-3}\) \\
 & 11 & \(1.08\times 10^{-2}\) & \(9.19\times 10^{-3}\) & \(6.39\times 10^{-3}\) & \(4.16\times 10^{-3}\) & \(2.66\times 10^{-3}\) & \(1.70\times 10^{-3}\) & \(1.09\times 10^{-3}\) \\
 & Ref. [46] & \(1.09\times 10^{-2}\) & \(9.30\times 10^{-3}\) & \(6.47\times 10^{-3}\) & \(4.21\times 10^{-3}\) & \(2.73\times 10^{-3}\) & \(1.72\times 10^{-3}\) & \(1.11\times 10^{-3}\) \\ \hline
\(P_{\rm b}\) & 1 & \(1.25\times 10^{-2}\) & \(1.05\times 10^{-2}\) & \(7.03\times 10^{-3}\) & \(4.39\times 10^{-3}\) & \(2.70\times 10^{-3}\) & \(1.66\times 10^{-3}\) & \(1.03\times 10^{-3}\) \\
 & 3 & \(1.32\times 10^{-2}\) & \(1.12\times 10^{-2}\) & \(7.63\times 10^{-3}\) & \(4.85\times 10^{-3}\) & \(3.03\times 10^{-3}\) & \(1.89\times 10^{-3}\) & \(1.19\times 10^{-3}\) \\
 & 5 & \(1.32\times 10^{-2}\) & \(1.11\times 10^{-2}\) & \(7.62\times 10^{-3}\) & \(4.86\times 10^{-3}\) & \(3.05\times 10^{-3}\) & \(1.91\times 10^{-3}\) & \(1.21\times 10^{-3}\) \\
 & 7 & \(1.31\times 10^{-2}\) & \(1.11\times 10^{-2}\) & \(7.59\times 10^{-3}\) & \(4.84\times 10^{-3}\) & \(3.04\times 10^{-3}\) & \(1.91\times 10^{-3}\) & \(1.21\times 10^{-3}\) \\
 & 9 & \(1.31\times 10^{-2}\) & \(1.11\times 10^{-2}\) & \(7.58\times 10^{-3}\) & \(4.83\times 10^{-3}\) & \(3.03\times 10^{-3}\) & \(1.90\times 10^{-3}\) & \(1.21\times 10^{-3}\) \\
 & 11 & \(1.31\times 10^{-2}\) & \(1.11\times 10^{-2}\) & \(7.58\times 10^{-3}\) & \(4.83\times 10^{-3}\) & \(3.03\times 10^{-3}\) & \(1.90\times 10^{-3}\) & \(1.20\times 10^{-3}\) \\
 & Ref. [46] & \(1.32\times 10^{-2}\) & \(1.12\times 10^{-2}\) & \(7.64\times 10^{-3}\) & \(4.87\times 10^{-3}\) & \(3.07\times 10^{-3}\) & \(1.93\times 10^{-3}\) & \(1.23\times 10^{-3}\) \\ \hline
\(P_{\rm t}\) & 1 & \(1.29\times 10^{-2}\) & \(1.08\times 10^{-2}\) & \(7.26\times 10^{-3}\) & \(4.51\times 10^{-3}\) & \(2.75\times 10^{-3}\) & \(1.69\times 10^{-3}\) & \(1.04\times 10^{-3}\) \\
 & 3 & \(1.36\times 10^{-2}\) & \(1.15\times 10^{-2}\) & \(7.83\times 10^{-3}\) & \(4.95\times 10^{-3}\) & \(3.08\times 10^{-3}\) & \(1.92\times 10^{-3}\) & \(1.20\times 10^{-3}\) \\
 & 5 & \(1.36\times 10^{-2}\) & \(1.15\times 10^{-2}\) & \(7.81\times 10^{-3}\) & \(4.96\times 10^{-3}\) & \(3.10\times 10^{-3}\) & \(1.94\times 10^{-3}\) & \(1.22\times 10^{-3}\) \\
 & 7 & \(1.35\times 10^{-2}\) & \(1.14\times 10^{-2}\) & \(7.79\times 10^{-3}\) & \(4.95\times 10^{-3}\) & \(3.09\times 10^{-3}\) & \(1.94\times 10^{-3}\) & \(1.22\times 10^{-3}\) \\
 & 9 & \(1.35\times 10^{-2}\) & \(1.14\times 10^{-2}\) & \(7.78\times 10^{-3}\) & \(4.94\times 10^{-3}\) & \(3.09\times 10^{-3}\) & \(1.93\times 10^{-3}\) & \(1.22\times 10^{-3}\) \\
 & 11 & \(1.35\times 10^{-2}\) & \(1.14\times 10^{-2}\) & \(7.78\times 10^{-3}\) & \(4.94\times 10^{-3}\) & \(3.09\times 10^{-3}\) & \(1.93\times 10^{-3}\) & \(1.22\times 10^{-3}\) \\
 & Ref. [46] & \(1.38\times 10^{-2}\) & \(1.16\times 10^{-2}\) & \(8.01\times 10^{-3}\) & \(5.15\times 10^{-3}\) & \(3.\) & & \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Pair-creation probabilities for collisions of bare uranium nuclei at 6.218 MeV/u as functions of the impact parameter and of the basis size \(|\kappa|_{\rm max}\), compared with Ref. [46].
nuclei at the energy of 6.218 MeV/u. The positron spectra calculated for the head-on collision in the framework of the monopole approximation and beyond it are depicted in Fig. 3. The spectrum obtained in the basis with \(|\kappa|_{\rm max}=3\) is in perfect agreement with the one given in Ref. [46]. The inclusion of higher-order harmonics in the calculations leads to a rise of the spectrum near the peak, leaving the tail almost unchanged.
After that, we studied the dependence of the positron spectra on \(\eta\) for symmetric collisions with a fixed distance of closest approach, \(R_{\rm min}\). In Fig. 4 we present the spectra obtained for collisions of nuclei with charge numbers \(Z=84,\ 88,\ 92,\ 96\), \(R_{\rm min}=17.5\) fm, and \(\eta=1,\ 1.1,\ 1.2\). The results show that once the total charge number \(2Z\) exceeds the critical value, the order of the curves near the peak is reversed. In full accordance with Ref. [50], subcritical collisions yield higher peak values of the positron spectrum for larger \(\eta\), while for supercritical collisions the opposite relation between the peak height and \(\eta\) is established. The same behavior of the spectra with respect to \(\eta\) is found when the supercritical domain of the collision parameters is approached from a different direction, namely when \(Z\) is fixed and \(R_{\rm min}\) is decreased.
## IV Conclusion
We have examined the possibility of probing QED in the supercritical Coulomb field that can be attained in low-energy collisions of heavy nuclei. The procedure for solving the time-dependent Dirac equation, previously restricted to the monopole approximation, was extended to take into account higher-order terms in the decomposition of the two-center nuclear potential over spherical harmonics. Using this modified procedure, we calculated the pair-creation probabilities and positron energy spectra for collisions of bare nuclei. The results obtained for collisions with a fixed distance of closest approach exhibit the same signatures of the transition to the supercritical regime as in the monopole approximation [49; 50]. The inclusion of nonmonopole terms enhances the manifestation of these signatures in the behavior of the pair-creation probability as a function of the parameter \(\eta=E/E_{0}\) near \(\eta=1\).
###### Acknowledgements.
The development of the calculation method, the calculations of the total pair-production probabilities and positron energy spectra were supported by the Russian Science Foundation (Grant No. 22-62-00004). The results for the bound-free production probability were independently verified using a different approach by I. A. Maltsev supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS".
# SynJax: Structured Probability Distributions for JAX

Miloš Stanojević, Laurent Sartran

arXiv:2308.03291 (2023-08-07), [http://arxiv.org/abs/2308.03291v3](http://arxiv.org/abs/2308.03291v3)
###### Abstract
The development of deep learning software libraries enabled significant progress in the field by allowing users to focus on modeling, while letting the library take care of the tedious and time-consuming task of optimizing execution for modern hardware accelerators. However, this has benefited only particular types of deep learning models, such as Transformers, whose primitives map easily to vectorized computation. Models that explicitly account for structured objects, such as trees and segmentations, did not benefit equally because they require custom algorithms that are difficult to implement in a vectorized form.
SynJax directly addresses this problem by providing an efficient vectorized implementation of inference algorithms for structured distributions covering alignment, tagging, segmentation, constituency trees and spanning trees. This is done by exploiting the connection between algorithms for automatic differentiation and probabilistic inference. With SynJax we can build large-scale differentiable models that explicitly model structure in the data. The code is available at [https://github.com/google-deepmind/synjax](https://github.com/google-deepmind/synjax).
## 1 Introduction
In many domains, data can be seen as having some structure explaining how its parts fit into a larger whole. This structure is often latent, and it varies depending on the task. For examples of discrete structures in natural language consider Figure 1. The words together form a sequence. Each word in a sequence is assigned a part-of-speech tag. These tags are dependent on each other, forming a linear-chain marked in red. The words in the sentence can be grouped together into small disjoint contiguous groups by sentence segmentation, shown with bubbles. A deeper analysis of language would show that the groupings can be done recursively and thereby produce a syntactic tree structure. Structures can also relate two languages. For instance, in the same figure, a Japanese translation can be mapped to an English source by an alignment.
These structures are not specific to language. Similar structures appear in biology as well. Nucleotides of any two RNA sequences are matched with monotone alignment Needleman and Wunsch (1970); Wang and Xu (2011), genomic data is segmented into contiguous groups Day et al. (2007) and tree-based models of RNA capture the hierarchical nature of the protein folding process Sakakibara et al. (1994); Hockenmaier et al. (2007); Huang et al. (2019).
Most contemporary deep learning models attempt to predict output variables directly from the input without any explicit modeling of the intermediate structure. Modeling structure explicitly could improve these models in multiple ways. First, it could allow for better generalization through the right inductive biases (Dyer et al., 2016; Sartran et al., 2022). This would improve not only sample efficiency but also downstream performance (Bastings et al., 2017; Nadejde et al., 2017; Bisk and Tran, 2018). Explicit modeling of structure can also enable incorporation of problem-specific algorithms (e.g. finding shortest paths; Pogancic et al., 2020; Niepert et al., 2021) or constraints (e.g. enforcing alignment (Mena et al., 2018) or enforcing compositional calculation (Havrylov et al., 2019)). Discrete structure also allows for better interpretability of the model's decisions (Bastings et al., 2019). Finally, sometimes structure is the end goal of learning itself - for example we may know that there is a hidden structure of a particular form explaining the data, but its specifics are not known and need to be discovered (Kim et al., 2019; Paulus et al., 2020).

Figure 1: Examples of natural language structures.
Auto-regressive models are the main approach used for modeling sequences. Non-sequential structures are sometimes linearized and approximated with a sequential structure (Choe and Charniak, 2016). These models are powerful as they do not make any independence assumptions and can be trained on large amounts of data. While sampling from auto-regressive models is typically tractable, other common inference problems like finding the optimal structure or marginalizing over hidden variables are not tractable. Approximately solving these tasks with auto-regressive models requires using biased or high-variance approximations that are often computationally expensive, making them difficult to deploy in large-scale models.
An alternative to auto-regressive models is given by models over factor graphs that factorize in the same way as the target structure. These models can solve all inference problems of interest exactly and efficiently by using specialized algorithms. Although each structure needs a different algorithm, we do not need a specialized algorithm for each inference task (argmax, sampling, marginals, entropy etc.). As we will show later, SynJax uses automatic differentiation to derive many quantities from just a single function per structure type.
Large-scale deep learning has been enabled by easy to use libraries that run on hardware accelerators. Research into structured distributions for deep learning has been held back by the lack of ergonomic libraries that would provide accelerator-friendly implementations of structure components - especially since these components depend on algorithms that often do not map directly onto available deep learning primitives, unlike Transformer models. This is the problem that SynJax addresses by providing easy to use structure primitives that compose within JAX machine learning framework.
To see how easy it is to use SynJax, consider the example in Figure 2. This code implements a policy gradient loss that requires computing multiple quantities - sampling, argmax, entropy, log-probability - each requiring a different algorithm. In this concrete code snippet, the structure is a non-projective directed spanning tree with a single root edge constraint. Because of that, SynJax will:
* compute argmax with Tarjan's (1977) maximum spanning tree algorithm adapted for single root edge trees (Stanojevic and Cohen, 2021),
* sample with Wilson's (1996) sampling algorithm for single root trees (Stanojevic, 2022),
* compute entropy with Matrix-Tree Theorem (Tutte, 1984) adapted for single root edge trees (Koo et al., 2007; Zmigrod et al., 2021).
If the user wants to slightly change the tree requirements to follow the _projectivity constraint_, they only need to change one flag, and SynJax will in the background use completely different algorithms that are appropriate for that structure: it will use Kuhlmann's (2011) algorithm for argmax and variations of Eisner's (1996) algorithm for other quantities. The user does not need to implement any of those algorithms or even be aware of their specifics, and can focus on the modeling side of the problem.
## 2 Structured Distributions
Distributions over most structures can be expressed with factor graphs - bipartite graphs that have random variables and factors between them. We associate to each factor a non-negative scalar, called potential, for each possible assignment of the random variables that are in its neighbourhood. The potential of the structure is a product of its factors:
\[\phi(t)=\prod_{e\in t}\phi(e) \tag{1}\]
where \(t\) is a structure, \(e\) is a factor/part, and \(\phi(\cdot)\) is the potential function. The probability of a structure can be found by normalizing its potential:

\[p(t)=\frac{\prod_{e\in t}\phi(e)}{\sum_{t^{\prime}\in T}\prod_{e^{\prime}\in t^{ \prime}}\phi(e^{\prime})}=\frac{\phi(t)}{Z} \tag{2}\]

where \(T\) is the set of all possible structures and \(Z\) is a normalization constant often called the partition function. This equation can be thought of as a _softmax_ equivalent over an extremely large set of structured outputs that share sub-structures (Sutton and McCallum, 2007; Mihaylova et al., 2020).

Figure 2: Example of implementing policy gradient with self-critical baseline and entropy regularization for spanning trees.
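For a toy chain-factored distribution, Eqs. (1) and (2) can be checked directly by brute-force enumeration. The potentials and sizes below are invented for illustration; the point of the algorithms in the following sections is to replace this exponential sum with dynamic programming.

```python
import itertools
import numpy as np

# A toy factor graph: length-4 tag sequences over 2 tags, whose only factors
# are the transition potentials phi[s, s'] between adjacent tags (Eq. 1).
rng = np.random.default_rng(0)
phi = rng.uniform(0.5, 2.0, size=(2, 2))  # positive potentials

def potential(t):
    # Product of the structure's factor potentials (Eq. 1).
    return np.prod([phi[a, b] for a, b in zip(t, t[1:])])

structures = list(itertools.product(range(2), repeat=4))
Z = sum(potential(t) for t in structures)          # partition function
probs = {t: potential(t) / Z for t in structures}  # Eq. (2): structured softmax

assert abs(sum(probs.values()) - 1.0) < 1e-12
```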
## 3 Computing Probability of a Structure and Partition Function
Equation 2 shows the definition of the probability of a structure in a factor graph. Computing the numerator is often trivial. However, computing the denominator, the partition function, is the complicated and computationally demanding part, because the set of valid structures \(T\) is usually exponentially large and requires specialized algorithms for each type of structure. As we will see later, the algorithm implementing the partition function accounts for the majority of the code needed to add support for a structured distribution, as most of the other properties can be derived from it. Here we document the algorithms for each structure.
### Sequence Tagging
Sequence tagging can be modelled with Linear-Chain CRF (Lafferty et al., 2001). The partition function for linear-chain models is computed with the forward algorithm (Rabiner, 1990). The computational complexity is \(\mathcal{O}(m^{2}n)\) for \(m\) tags and sequence of length \(n\). Sarkka and Garcia-Fernandez (2021) have proposed a parallel version of this algorithm that has parallel computational complexity \(\mathcal{O}(m^{3}\log n)\) which is efficient for \(m\!\ll\!n\). Rush (2020) reports a speedup using this parallel method for Torch-Struct, however in our case the original forward algorithm gave better performance both in terms of speed and memory.
The SynJax implementation of Linear-Chain CRF supports having a different transition matrix for each time step which gives greater flexibility needed for implementing models like LSTM-CNN-CRF (Ma and Hovy, 2016) and Neural Hidden Markov Model (Tran et al., 2016).
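A minimal sketch of the forward algorithm in JAX, assuming the per-step transition log-potential parametrization described above (this is illustrative code, not the library's actual implementation), checked against brute-force enumeration:

```python
import itertools
import jax
import jax.numpy as jnp
import numpy as np

def chain_log_partition(log_phi):
    """Forward algorithm for a linear-chain CRF in O(m^2 n).

    log_phi: [n-1, m, m] log-potentials, one transition matrix per step.
    """
    def step(alpha, logT):
        # alpha[s] = log-sum of potentials over all prefixes ending in tag s.
        return jax.nn.logsumexp(alpha[:, None] + logT, axis=0), None
    alpha, _ = jax.lax.scan(step, jnp.zeros(log_phi.shape[1]), log_phi)
    return jax.nn.logsumexp(alpha)

# Check against brute-force enumeration of all 2^4 tag sequences.
rng = np.random.default_rng(0)
log_phi = jnp.asarray(rng.normal(size=(3, 2, 2)))
brute = np.logaddexp.reduce([
    sum(float(log_phi[i, a, b]) for i, (a, b) in enumerate(zip(t, t[1:])))
    for t in itertools.product(range(2), repeat=4)
])
assert np.allclose(chain_log_partition(log_phi), brute, atol=1e-5)
```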
### Segmentation with Semi-Markov CRF
Joint segmentation and tagging can be done with a generalization of linear-chain called Semi-Markov CRF (Sarawagi and Cohen, 2004; Abdel-Hamid et al., 2013; Lu et al., 2016). It has a similar parametrization with transition matrices except that here transitions can jump over multiple tokens. The partition function is computed with an adjusted version of the forward algorithm that runs in \(\mathcal{O}(sm^{2}n)\) where \(s\) is the maximal size of a segment.
### Alignment Distributions
Alignment distributions are used in time series analysis (Cuturi and Blondel, 2017), RNA sequence alignment (Wang and Xu, 2011), semantic parsing (Lyu and Titov, 2018) and many other areas.
#### 3.3.1 Monotone Alignment
Monotone alignment between two sequences of lengths \(n\) and \(m\) allows for a tractable partition function that can be computed in \(\mathcal{O}(nm)\) time using the Needleman-Wunsch (1970) algorithm.
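A log-space sketch of this Needleman-Wunsch-style recursion (illustrative, not SynJax's implementation). With all log-potentials set to zero, every monotone alignment has potential 1, so the partition function simply counts alignments, recovering the central Delannoy numbers:

```python
import numpy as np

def monotone_alignment_log_partition(log_phi):
    """Sum over monotone alignments of two sequences in O(nm).

    log_phi: [n, m] log-potential of matching element i with element j.
    A path goes from (0, 0) to (n, m) by skipping an element of either
    sequence or matching a pair of elements.
    """
    n, m = log_phi.shape
    D = np.full((n + 1, m + 1), -np.inf)
    D[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            moves = []
            if i > 0: moves.append(D[i - 1, j])            # skip in sequence 1
            if j > 0: moves.append(D[i, j - 1])            # skip in sequence 2
            if i > 0 and j > 0:                            # match pair (i, j)
                moves.append(D[i - 1, j - 1] + log_phi[i - 1, j - 1])
            if moves:
                D[i, j] = np.logaddexp.reduce(moves + [D[i, j]])
    return D[n, m]

# Zero log-potentials => count of monotone alignments (Delannoy numbers).
assert np.isclose(np.exp(monotone_alignment_log_partition(np.zeros((2, 2)))), 13)
assert np.isclose(np.exp(monotone_alignment_log_partition(np.zeros((3, 3)))), 63)
```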
#### 3.3.2 Ctc
Connectionist Temporal Classification (CTC, Graves et al., 2006; Hannun, 2017) is a monotone alignment model widely used for speech recognition and non-auto-regressive machine translation models. It is distinct from the standard monotone alignment because it requires special treatment of the _blank symbol_ that provides jumps in the alignment table. It is implemented with an adjusted version of Needleman-Wunsch algorithm.
#### 3.3.3 Non-Monotone 1-on-1 Alignment
This is a bijective alignment that directly maps elements between two sets given their matching scores. Computing the partition function for this distribution is intractable (Valiant, 1979), but we can compute some other useful quantities (see Section 5).
### Constituency Trees
#### 3.4.1 Tree-CRF
Today's most popular constituency parser by Kitaev et al. (2019) uses a global model with factors defined over labelled spans. Stern et al. (2017) have shown that inference in this model can be done efficiently with a custom version of the CKY algorithm in \(\mathcal{O}(mn^{2}+n^{3})\) where \(m\) is number of non-terminals and \(n\) is the sentence length.
#### 3.4.2 Pcfg
Probabilistic Context-Free Grammars (PCFG) are a generative model over constituency trees where each grammar rule is associated with a locally normalized probability. These rules serve as a template
which, when expanded, jointly generates a constituency tree together with the words as its leaves.
SynJax computes the partition function using a vectorized form of the CKY algorithm that runs in cubic time. Computing the probability of a tree is in principle simple: just enumerate the rules of the tree, look up their probabilities in the grammar, and multiply them. However, extracting rules from the set of labelled spans requires many sparse operations that are non-trivial to vectorize. We use an alternative approach based on _sticky_ span log-potentials that serve as a mask for each constituent: constituents that are part of the tree have sticky log-potentials \(0\) while those that are not have \(-\infty\). With the sticky log-potentials set in this way, computing the log-partition function provides the log-probability of the tree of interest.
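The sticky-potential trick can be illustrated on a linear chain as a stand-in for the PCFG case (illustrative code, not SynJax's implementation): masking all parts not used by a chosen structure with \(-\infty\) leaves exactly one structure with finite score, so the masked log-partition equals that structure's log-potential. For a locally normalized PCFG \(Z=1\), so this is the log-probability directly; for an unnormalized model one additionally subtracts \(\log Z\).

```python
import numpy as np

def chain_log_partition(log_phi):
    # Forward algorithm in log space; log_phi is [n-1, m, m].
    alpha = np.zeros(log_phi.shape[1])
    for logT in log_phi:
        alpha = np.logaddexp.reduce(alpha[:, None] + logT, axis=0)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(0)
log_phi = rng.normal(size=(3, 2, 2))
tags = (0, 1, 1, 0)  # the structure whose log-probability we want

# "Sticky" mask: 0 for the parts used by `tags`, -inf for everything else,
# so the masked log-partition sums over the single surviving structure.
mask = np.full_like(log_phi, -np.inf)
for i, (a, b) in enumerate(zip(tags, tags[1:])):
    mask[i, a, b] = 0.0

direct = sum(log_phi[i, a, b] for i, (a, b) in enumerate(zip(tags, tags[1:])))
assert np.isclose(chain_log_partition(log_phi + mask), direct)

# For this unnormalized chain the log-probability needs an extra -log Z term.
log_prob = chain_log_partition(log_phi + mask) - chain_log_partition(log_phi)
```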
#### 3.4.3 Td-Pcfg
Tensor-Decomposition PCFG (TD-PCFG, Cohen et al., 2013; Yang et al., 2022) uses a lower rank tensor approximation of PCFG that makes inference with much larger number of non-terminals feasible.
### Spanning Trees
Spanning trees appear in the literature in many different forms and definitions. We take a spanning tree to be any subgraph that connects all nodes and does not have cycles. We divide spanning tree CRF distributions by the following three properties:
**directed or undirected**: Undirected spanning trees are defined over symmetric weighted adjacency matrices i.e. over undirected graphs. Directed spanning trees are defined over directed graphs with special root node.
**projective or non-projective**: Projectivity is a constraint that appears often in NLP. It requires that the spanning tree over words has no crossing edges. A non-projective spanning tree is just a regular spanning tree, i.e., one that need not satisfy the projectivity constraint.
**single root edge or multi root edges**: NLP applications usually require that there can be only one edge coming out of the root (Zmigrod et al., 2020). Single root edge spanning trees satisfy that constraint.
Each of these choices has direct consequences on which algorithm should be used for probabilistic inference. SynJax abstracts away this from the user and offers a unified interface where the user only needs to provide the weighted adjacency matrix and set the three mentioned boolean values. Given the three booleans SynJax can pick the correct and most optimal algorithm. In total, these parameters define distributions over 8 different types of spanning tree structures all unified in the same interface. We are not aware of any other library providing this set of unified features for spanning trees.
We reduce the undirected case to the rooted directed case via a bijection. For projective rooted directed spanning trees we use Eisner's algorithm to compute the partition function (Eisner, 1996). The partition function of non-projective spanning trees is computed using the Matrix-Tree Theorem (Tutte, 1984; Koo et al., 2007; Smith and Smith, 2007).
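A sketch of the Matrix-Tree computation for rooted directed spanning trees (illustrative code, not SynJax's implementation): the weighted sum over arborescences is a determinant of a Laplacian minor, checked here against brute-force enumeration on a small graph.

```python
import itertools
import numpy as np

def arborescence_partition(w, root=0):
    """Matrix-Tree theorem (Tutte, 1984): the weighted count of spanning
    arborescences rooted at `root` is the determinant of a Laplacian minor.

    w[i, j] is the weight of the directed edge i -> j.
    """
    n = w.shape[0]
    L = -w.copy()
    np.fill_diagonal(L, 0.0)
    L[np.arange(n), np.arange(n)] = w.sum(axis=0)  # weighted in-degrees
    keep = [i for i in range(n) if i != root]
    return np.linalg.det(L[np.ix_(keep, keep)])

rng = np.random.default_rng(0)
n = 4
w = rng.uniform(0.1, 1.0, size=(n, n))
np.fill_diagonal(w, 0.0)

def is_arborescence(parent, root=0):
    # Follow parent pointers from every node; a valid tree reaches the root.
    for c in parent:
        seen, node = set(), c
        while node != root:
            if node in seen:
                return False  # cycle
            seen.add(node)
            node = parent[node]
    return True

# Brute force: every non-root node picks one parent; keep acyclic choices.
total = 0.0
for choice in itertools.product(range(n), repeat=n - 1):
    parent = {c: p for c, p in zip(range(1, n), choice)}
    if any(p == c for c, p in parent.items()):
        continue
    if is_arborescence(parent):
        total += np.prod([w[p, c] for c, p in parent.items()])

assert np.isclose(arborescence_partition(w), total)
```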
## 4 Computing Marginals
In many cases we would like to know the probability of a particular part of structure appearing, regardless of the structure that contains it. In other words, we want to marginalize (i.e. sum) the probability of all the structures that contain that part:
\[p(e)=\sum_{t\in T}\mathbb{I}[e\in t]\;\,p(t)=\sum_{t^{\prime}\in T_{e}}p(t^{ \prime}) \tag{3}\]
where \(\mathbb{I}[\cdot]\) is the indicator function, \(T\) is the set of all structures and \(T_{e}\) is the set of structures that contain factor/part \(e\).
Computing these marginals was usually done using specialized algorithms such as Inside-Outside or Forward-Backward. However, those solutions do not work on distributions that cannot use belief propagation, like non-projective spanning trees. A more general solution is to use an identity that relates the marginal probability of a factor to the gradient of the log-partition function with respect to its potential:
\[p(e)=\frac{\partial\log Z}{\partial\phi(e)} \tag{4}\]
This means that we can use any differentiable implementation of the log-partition function as a forward pass and apply backpropagation to compute the marginal probabilities (Darwiche, 2003). Eisner (2016) has made the connection explicit: "Inside-Outside and Forward-Backward algorithms are just backprop". This approach also works for non-projective spanning trees, which do not fit the belief propagation framework (Zmigrod et al., 2021).
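This identity is easy to verify on a small linear chain: differentiating a log-partition implementation with jax.grad (here with respect to the log-potentials, so the gradient is the marginal itself) reproduces the brute-force marginals.

```python
import itertools
import jax
import jax.numpy as jnp
import numpy as np

def chain_log_partition(log_phi):
    # Forward algorithm; log_phi is [n-1, m, m].
    alpha = jnp.zeros(log_phi.shape[1])
    for logT in log_phi:
        alpha = jax.nn.logsumexp(alpha[:, None] + logT, axis=0)
    return jax.nn.logsumexp(alpha)

rng = np.random.default_rng(0)
log_phi = jnp.asarray(rng.normal(size=(3, 2, 2)))

# Marginals as the gradient of log Z w.r.t. the log-potentials (Eq. 4).
marginals = jax.grad(chain_log_partition)(log_phi)

# Brute-force check: p(e) as a sum over all structures containing part e.
logZ = float(chain_log_partition(log_phi))
brute = np.zeros((3, 2, 2))
for t in itertools.product(range(2), repeat=4):
    score = sum(float(log_phi[i, a, b]) for i, (a, b) in enumerate(zip(t, t[1:])))
    for i, (a, b) in enumerate(zip(t, t[1:])):
        brute[i, a, b] += np.exp(score - logZ)

assert np.allclose(marginals, brute, atol=1e-5)
```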
For template models like PCFG, we again use the _sticky_ log-potentials, because usually we are not interested in the marginal probabilities of the rules but in the marginal probabilities of the instantiated constituents. The derivative of the log-partition function with respect to a constituent's _sticky_ log-potential gives the marginal probability of that constituent.
## 5 Computing Most Probable Structure
For finding the score of the highest scoring structure, we can run the same belief propagation algorithm as for the log-partition function, but with the _max-plus semiring_ instead of the log semiring (Goodman, 1999). To get the most probable structure, and not just its potential, we can compute the gradient of the Viterbi structure potential with respect to the part potentials (Rush, 2020).
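The same pattern on a chain (illustrative, not SynJax's implementation): swapping logsumexp for max gives the Viterbi score, and its gradient is a one-hot encoding of the best structure's parts, assuming the argmax is unique.

```python
import itertools
import jax
import jax.numpy as jnp
import numpy as np

def chain_max_score(log_phi):
    # The forward recursion with the max-plus semiring; log_phi is [n-1, m, m].
    alpha = jnp.zeros(log_phi.shape[1])
    for logT in log_phi:
        alpha = jnp.max(alpha[:, None] + logT, axis=0)
    return jnp.max(alpha)

rng = np.random.default_rng(0)
log_phi = jnp.asarray(rng.normal(size=(3, 2, 2)))

# Gradient of the Viterbi score = indicator of the best structure's parts.
indicator = jax.grad(chain_max_score)(log_phi)

# Brute-force argmax over all 2^4 tag sequences for comparison.
best = max(
    itertools.product(range(2), repeat=4),
    key=lambda t: sum(float(log_phi[i, a, b])
                      for i, (a, b) in enumerate(zip(t, t[1:]))),
)
expected = np.zeros((3, 2, 2))
for i, (a, b) in enumerate(zip(best, best[1:])):
    expected[i, a, b] = 1.0

assert np.allclose(indicator, expected)
```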
The only exceptions to this process are non-monotone alignments and spanning trees, because they do not fit easily into the belief propagation framework. For the highest scoring non-monotone alignment, we use the Jonker-Volgenant algorithm as implemented in SciPy (Crouse, 2016; Virtanen et al., 2020). The maximal _projective_ spanning tree can be found by combining Eisner's algorithm with the max-plus semiring, but we have found Kuhlmann's tabulated arc-hybrid algorithm to be much faster (Kuhlmann et al., 2011) (see Figure 4 in the appendix). This algorithm cannot be used for any inference task other than argmax because it allows for spurious derivations. To enforce the single-root constraint with Kuhlmann's algorithm we use the reweighting trick from Stanojevic and Cohen (2021). For _non-projective_ spanning trees SynJax uses a combination of the reweighting trick and Tarjan's algorithm, as proposed in Stanojevic and Cohen (2021).
## 6 Sampling a Structure
Strictly speaking, there is no proper sampling semiring because semirings cannot have non-deterministic output. However, we can still use the semiring framework and make some aspects of it non-deterministic. Aziz (2015) and Rush (2020) use a semiring that in the forward pass behaves like a log-semiring, but in the backward pass performs sampling instead of computing the gradient. This is in line with how the forward-filtering backward-sampling algorithm works (Murphy, 2012, §17.4.5).
Non-Projective Spanning Trees do not support the semiring framework so we use custom algorithms for them described in Stanojevic (2022). It contains Colbourn's algorithm that has a fixed runtime of \(\mathcal{O}(n^{3})\) but is prone to numerical issues because it requires matrix-inversion (Colbourn et al., 1996), and Wilson's algorithm that is more numerically stable but has a runtime that depends on concrete values of log-potentials (Wilson, 1996). SynJax also supports vectorized sampling without replacement (SWOR) from Stanojevic (2022).
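A sketch of forward-filtering backward-sampling for a linear chain (illustrative code, not SynJax's implementation): the forward pass computes the usual filtering messages, and the backward pass samples tags one at a time conditioned on the already-sampled suffix.

```python
import numpy as np

def ffbs_sample(log_phi, rng):
    """Forward-filtering backward-sampling for a linear-chain model.

    log_phi: [n-1, m, m] per-step transition log-potentials.
    """
    n1, m, _ = log_phi.shape
    alphas = [np.zeros(m)]
    for logT in log_phi:  # forward pass: filtering messages
        alphas.append(np.logaddexp.reduce(alphas[-1][:, None] + logT, axis=0))

    def sample(logits):
        p = np.exp(logits - np.logaddexp.reduce(logits))
        return rng.choice(len(p), p=p / p.sum())

    tags = [sample(alphas[-1])]           # sample the last tag
    for i in range(n1 - 1, -1, -1):       # backward pass, given the suffix
        tags.append(sample(alphas[i] + log_phi[i][:, tags[-1]]))
    return tags[::-1]

# Sanity check: with one overwhelmingly dominant path the distribution is
# essentially a point mass, so the sampler must return that path.
log_phi = np.zeros((3, 2, 2))
for i, (a, b) in enumerate([(0, 1), (1, 1), (1, 0)]):
    log_phi[i, a, b] = 50.0
assert ffbs_sample(log_phi, np.random.default_rng(0)) == [0, 1, 1, 0]
```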
## 7 Differentiable Sampling
The mentioned sampling algorithms provide unbiased samples of structures useful for many inference tasks, but they are not differentiable because the gradient of sampling from discrete distributions is zero almost everywhere. This problem can be addressed with the log-derivative trick from the REINFORCE algorithm (Williams, 1992), but that provides high-variance estimates of gradients. To address this, there have been different proposals for differentiable sampling algorithms that are biased but provide low-variance estimates of gradients. SynJax implements the majority of the main approaches in the literature, including structured attention (Kim et al., 2017), relaxed dynamic programming (Mensch and Blondel, 2018), Perturb-and-MAP (Corro and Titov, 2019), Gumbel-CRF (Fu et al., 2020), Stochastic Softmax-Tricks (Paulus et al., 2020), and Implicit Maximum-Likelihood estimation (Niepert et al., 2021). It also includes different noise distributions for perturbation models, including Sum-of-Gamma noise (Niepert et al., 2021), which is particularly suited for structured distributions.
## 8 Entropy and KL Divergence
To compute the cross-entropy and KL divergence, we will assume that the two distributions factorize in exactly the same way. Like some other properties, cross-entropy can also be computed with the appropriate semirings (Hwa, 2000; Eisner, 2002; Cortes et al., 2008; Chang et al., 2023), but those approaches would not work on Non-Projective Spanning Tree distributions. There is a surprisingly simple solution that works across all distributions that factorize in the same way and has appeared in a couple of works in the past (Li and Eisner, 2009; Martins et al., 2010; Zmigrod et al., 2021). Here
we give a full derivation for cross-entropy:
\[H(p,q) =-\sum_{t\in T}p(t)\log q(t)\] \[=\log Z_{q}-\sum_{t\in T}p(t)\sum_{e\in t}\log\phi_{q}(e)\] \[=\log Z_{q}-\sum_{t\in T}p(t)\sum_{e\in E}\mathbb{1}[e\!\in\!t] \log\phi_{q}(e)\] \[=\log Z_{q}-\sum_{e\in E}p(e)\log\phi_{q}(e) \tag{5}\]
This reduces the computation of cross-entropy to finding marginal probabilities of one distribution, and finding log-partition of the other - both of which can be computed efficiently for all distributions in SynJax. Given the method for computing cross-entropy, finding entropy is trivial:
\[H(p)=H(p,p) \tag{6}\]
KL divergence is easy to compute too:
\[D_{KL}(p||q)=H(p,q)-H(p) \tag{7}\]
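Equation (5) can be checked on a small chain by comparing against the brute-force definition of cross-entropy (illustrative code; the marginals of \(p\) are obtained by differentiating its log-partition function with respect to the log-potentials):

```python
import itertools
import jax
import jax.numpy as jnp
import numpy as np

def logZ(log_phi):
    # Forward algorithm; log_phi is [n-1, m, m].
    alpha = jnp.zeros(log_phi.shape[1])
    for logT in log_phi:
        alpha = jax.nn.logsumexp(alpha[:, None] + logT, axis=0)
    return jax.nn.logsumexp(alpha)

rng = np.random.default_rng(0)
log_p = jnp.asarray(rng.normal(size=(3, 2, 2)))  # log-potentials of p
log_q = jnp.asarray(rng.normal(size=(3, 2, 2)))  # log-potentials of q

# Eq. (5): H(p, q) = log Z_q - sum_e p(e) log phi_q(e).
marg_p = jax.grad(logZ)(log_p)
H_pq = float(logZ(log_q)) - float(jnp.sum(marg_p * log_q))

# Brute force: -sum_t p(t) log q(t) over all 2^4 structures.
def log_prob(t, log_phi):
    s = sum(float(log_phi[i, a, b]) for i, (a, b) in enumerate(zip(t, t[1:])))
    return s - float(logZ(log_phi))

brute = -sum(
    np.exp(log_prob(t, log_p)) * log_prob(t, log_q)
    for t in itertools.product(range(2), repeat=4)
)
assert np.isclose(H_pq, brute, atol=1e-4)
```

Entropy and KL divergence then follow from Eqs. (6) and (7) with no further machinery.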
## 9 Library Design
Each distribution has different complex shape constraints which makes it complicated to document and implement all the checks that verify that the user has provided the right arguments. The _jaxtyping_ library1 was very valuable in making SynJax code concise, documented and automatically checked.
Footnote 1: [https://github.com/google/jaxtyping](https://github.com/google/jaxtyping)
Structured algorithms require complex broadcasting, reshaping operations and application of semirings. To make this code simple, we took the _einsum_ implementation from the core JAX code and modified it to support arbitrary semirings. This made the code shorter and easier to read.
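The idea can be illustrated with a matrix product in the log semiring, where logsumexp plays the role of addition and + the role of multiplication (a toy stand-in for the generalized einsum; not the library's actual code):

```python
import numpy as np

def log_matmul(A, B):
    """Matrix product in the log (sum-product) semiring: logsumexp replaces
    the sum and + replaces the product of the ordinary matrix product."""
    return np.logaddexp.reduce(A[:, :, None] + B[None, :, :], axis=1)

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 5))

# Exponentiating recovers the ordinary matrix product of exp(A) and exp(B).
assert np.allclose(np.exp(log_matmul(A, B)), np.exp(A) @ np.exp(B))
```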
Most inference algorithms apply a large number of elementwise and reshaping operations that are in general fast but create a large number of intermediate tensors that occupy memory. To speed this up we use checkpointing (Griewank, 1992) to avoid storing tensors that can be recomputed quickly. This improved memory usage _and_ speed, especially on TPUs.
All functions that could be vectorized are written in pure JAX. Those that cannot, like Wilson sampling (1996) and Tarjan's algorithm (1977), are implemented with Numba (Lam et al., 2015).
All SynJax distributions inherit from Equinox modules (Kidger and Garcia, 2021) which makes them simultaneously PyTrees and dataclasses. Thereby all SynJax distributions can be transformed with jax.vmap and are compatible with any JAX neural framework, e.g. Haiku and Flax.
## 10 Comparison to alternative libraries
JAX has a couple of libraries for probabilistic modeling. Distrax (Babuschkin et al., 2020) and Tensorflow-Probability JAX substrate (Dillon et al., 2017) provide continuous distributions. NumPyro (Phan et al., 2019) and Oryx provide probabilistic programming. DynaMax (Chang et al., 2022) brings state space models to JAX and includes an implementation of HMMs.
PGMax (Zhou et al., 2023) is a JAX library that supports inference over arbitrary factor graphs by using loopy belief propagation. After the user builds the desired factor graph, PGMax can do automatic inference over it. For many structured distributions building a factor graph is the difficult part of implementation because it may require a custom algorithm (e.g. CKY or Needleman-Wunsch). SynJax implements those custom algorithms for each of the supported structures. With SynJax the user only needs to provide the parameters of the distribution and SynJax will handle _both_ building of the factor graph and inference over it. For all the included distributions, SynJax also provides some features not covered by PGMax, such as unbiased sampling, computation of entropy, cross-entropy and KL divergence.
Optax (Babuschkin et al., 2020) provides a CTC loss implementation for JAX, but without support for computing the optimal alignment, marginals over alignment links, sampling of alignments, etc.
All the mentioned JAX libraries focus on continuous or categorical distributions and, with the exception of HMMs and CTC loss, do not contain distributions provided by SynJax. SynJax fills
\begin{table}
\begin{tabular}{l|r|r|r|r} & \multicolumn{1}{c|}{Torch-Struct} & \multicolumn{2}{c|}{SynJax} & \multicolumn{1}{c}{Speedup} \\ \hline Distribution & LoC & LoC & (relative \%) & \\ \hline Linear-Chain-CRF & \(32\) & \(15\) & \((46\%)\) & \(13\times\) \\ Semi-Markov CRF & \(54\) & \(15\) & \((27\%)\) & \(84\times\) \\ Tree-CRF & \(21\) & \(14\) & \((66\%)\) & \(5\times\) \\ PCFG & \(51\) & \(36\) & \((70\%)\) & \(1\times\) \\ Projective CRF & \(70\) & \(54\) & \((77\%)\) & \(3\times\) \\ Non-Projective CRF & \(60\) & \(8\) & \((16\%)\) & \(71\times\) \\ \end{tabular}
\end{table}
Table 1: Comparison against Torch-Struct with respect to lines of code for log-partition and relative speedup in the computation of marginal probabilities.
this gap in the JAX ecosystem and enables easier construction of structured probability models.
The most comparable library in terms of features is Torch-Struct (Rush, 2020), which targets PyTorch as its underlying framework. Torch-Struct, just like SynJax, uses automatic differentiation for efficient inference. We will point out here only the main differences that would be of relevance to users. SynJax supports a larger number of distributions and inference algorithms and provides a unified interface to all of them. It also provides reproducible sampling through controlled random seeds. SynJax has a more general approach to the computation of entropy that does not depend on semirings and therefore applies to all distributions. SynJax is fully implemented in Python and compiled with jax.jit and numba.jit, while Torch-Struct does not use any compiler optimizations except a custom CUDA kernel for semiring matrix multiplication. If we compare lines of code and speed (Table 1), we can see that SynJax is much more concise and faster than Torch-Struct (see Appendix A for details).
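One standard semiring-free route to entropy (a sketch of the underlying exponential-family identity, not SynJax's actual implementation) is \(H = \log Z - \mathbb{E}_{p}[\mathrm{score}]\), shown here for an explicitly enumerated set of structures:

```python
import numpy as np

def entropy_from_scores(scores):
    """Entropy of a Gibbs distribution p(y) ∝ exp(score(y)) over an
    enumerated set of structures, via H = log Z - E_p[score]."""
    m = scores.max()                       # max-shift for numerical stability
    log_z = m + np.log(np.exp(scores - m).sum())
    p = np.exp(scores - log_z)             # normalized probabilities
    return log_z - (p * scores).sum()

# uniform scores give the maximum entropy log(N)
print(entropy_from_scores(np.zeros(4)))    # = log 4 ≈ 1.386
```

For structured distributions the enumeration is implicit: \(\log Z\) comes from the dynamic program and \(\mathbb{E}_{p}[\mathrm{score}]\) from the marginals, so no entropy semiring is needed.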
SynJax also provides the fastest and most feature-rich implementation of spanning tree algorithms. So far the most competitive libraries for spanning trees were those by Zmigrod et al. and Stanojevic and Cohen. SynJax builds on the code of Stanojevic and Cohen and annotates it with Numba instructions, which makes it many times faster than any other alternative (see Figure 3 in the appendix).
## 11 Conclusion
One of the main challenges in creating deep neural models over structured distributions is the difficulty of their implementation on modern hardware accelerators. SynJax addresses this problem and makes large scale training of structured models feasible and easy in JAX. We hope that this will encourage research into finding alternatives to auto-regressive modeling of structured data.
### Limitations
SynJax is quite fast, but there are still some areas where improvements could be made. One of the main speed and memory bottlenecks is the use of large temporary tensors in the dynamic programming algorithms needed for the computation of the log-partition function. This could be optimized with custom kernels written in Pallas.2 Some speed gains would be conceptually simple but depend on specialized hardware. For instance, matrix multiplication with semirings currently does not use hardware acceleration for matrix multiplication, such as TensorCores on GPU, but instead does the calculation with regular CUDA cores. We have tried to address this with the log-einsum-exp trick (Peharz et al., 2020), but the resulting computation was less numerically precise than using a regular log-semiring with broadcasting. The maximum spanning tree algorithm would be much faster if it could be vectorized; currently it executes as optimized Numba CPU code.
Footnote 2: [https://jax.readthedocs.io/en/latest/pallas](https://jax.readthedocs.io/en/latest/pallas)
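To illustrate the trade-off (a hedged NumPy sketch, not the SynJax implementation): a log-semiring matrix product can be computed stably with broadcasting and log-sum-exp, or via the log-einsum-exp trick, which shifts into linear space so a hardware matmul can be used at some cost in precision:

```python
import numpy as np

def log_matmul(A, B):
    """Log-semiring product C[i, j] = logsumexp_k(A[i, k] + B[k, j]),
    computed with broadcasting and a max-shift for numerical stability."""
    X = A[:, :, None] + B[None, :, :]            # shape (I, K, J)
    m = X.max(axis=1)                            # per-(i, j) shift
    return m + np.log(np.exp(X - m[:, None, :]).sum(axis=1))

def log_einsum_exp(A, B):
    """Log-einsum-exp variant: shift, exponentiate, do a real matrix
    multiplication (hardware friendly), then return to log space."""
    a = A.max(axis=1, keepdims=True)             # per-row shift of A
    b = B.max(axis=0, keepdims=True)             # per-column shift of B
    return np.log(np.exp(A - a) @ np.exp(B - b)) + a + b
```

Both agree up to floating-point error on well-scaled inputs; the divergence appears when rows or columns mix very different magnitudes, which is the precision issue noted in the text.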
### Acknowledgements
We are grateful to Chris Dyer, Aida Nematzadeh and other members of the language team in Google DeepMind for early comments on the draft of this work. We appreciate Patrick Kidger's work on Equinox and Jaxtyping that made the development of SynJax much easier. We also appreciate that Sasha Rush open-sourced Torch-Struct, a library that influenced many aspects of SynJax.
# Nuclear Spin-Depleted, Isotopically Enriched \({}^{70}\)Ge/\({}^{28}\)Si\({}^{70}\)Ge Quantum Wells

O. Moutanabbir, S. Assali, A. Attiaoui, G. Daligou, P. Daoust, P. Del Vecchio, S. Koelling, L. Luo, N. Rotaru

arXiv:2306.04052v2 (2023-06-06), http://arxiv.org/abs/2306.04052v2
###### Abstract
The p-symmetry of the hole wavefunction is associated with a weaker hyperfine interaction as compared to electrons, thus making hole spin qubits attractive candidates to implement long coherence quantum processors. However, recent studies demonstrated that hole qubits in planar germanium (Ge) heterostructures are still very sensitive to nuclear spin bath. These observations highlight the need to develop nuclear spin-free Ge qubits to suppress this decoherence channel and evaluate its impact. With this perspective, this work demonstrates the epitaxial growth of \({}^{73}\)Ge-depleted isotopically enriched \({}^{70}\)Ge/SiGe quantum wells. The growth was achieved by reduced pressure chemical vapor deposition using isotopically purified monogermane \({}^{70}\)GeH\({}_{4}\) and monosilane \({}^{28}\)SiH\({}_{4}\) with an isotopic purity higher than 99.9 % and 99.99 %, respectively. The quantum wells consist of a series of \({}^{70}\)Ge/SiGe heterostructures grown on Si wafers using a Ge virtual substrate and a graded SiGe buffer layer. The isotopic purity is investigated using atom probe tomography following an analytical procedure addressing the discrepancies in the isotopic content caused by the overlap of isotope peaks in mass spectra. The nuclear spin background in the quantum wells was found to be sensitive to the growth conditions. The lowest concentration of nuclear spin-full isotopes \({}^{73}\)Ge and \({}^{29}\)Si in the heterostructure was established at 0.01 % in the Ge quantum well and SiGe barriers. The measured average distance between nuclear spins reaches 3-4 nm in \({}^{70}\)Ge/\({}^{28}\)Si\({}^{70}\)Ge, which is an order of magnitude larger than in natural Ge/SiGe heterostructures.
## I Introduction
Although it was quickly relegated behind silicon (Si) because of its relatively low bandgap energy, its lack of a stable oxide, and its large surface state densities, germanium (Ge) is inarguably the material that catalyzed the transition from what W. Pauli and I. Rabi called the 'Physics of Dirt' [1; 2] to modern-day semiconductor physics and technology [3]. Indeed, the ease with which Ge could be purified and processed led to the demonstration of point contact diode mixers for radar reception [4] and of the point contact and junction transistors [5]. These inventions contributed to laying the groundwork for what was later coined as the first quantum revolution. In recent years, there has been a revived interest in Ge-based materials for integrated photonic circuits [6; 7], sensing [6], high-mobility electronics [8], and solid-state quantum computing [9]. The latter, for instance, aims at capitalizing on the advantageous quantum environment of holes in Ge, their inherently large and tunable spin-orbit interaction (SOI), and their reduced hyperfine coupling with nuclear spins to implement increasingly robust and reliable spin qubits [10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Indeed, these quantum devices are now considered forefront candidates for scalable quantum processors [9]. This recent surge in developing Ge qubits makes one think that Ge may also be a key material in shaping the anticipated second quantum revolution.
From a fundamental standpoint, the hyperfine interaction is expected to be weaker for holes than for electrons due to the p-symmetry of the hole wavefunction. However, theoretical investigations suggested a hyperfine coupling that is only one order of magnitude smaller than that of electrons [20; 21] or of a strength equal to that in Si [21]. Moreover, the p-symmetry and d-orbital hybridization of the hole wavefunction lead to an anisotropic hyperfine coupling that is non-existent for electron spins [20]. Interestingly, recent experimental studies hint at the sensitivity of hole spin qubits in planar Ge/SiGe heterostructures to the nuclear spin bath, reporting an amplitude of the fluctuating Overhauser field of 34.4 kHz, which is suggested to limit spin dephasing times [22]. Although charge noise is believed to be the dominant decohering process, these observations call for the development of nuclear spin-free Ge qubits to elucidate their sensitivity to hyperfine coupling. Undertaking this research direction requires Ge-based quantum devices that are depleted of \({}^{73}\)Ge, which is the only nuclear spin-full stable Ge isotope. This work addresses this very issue and provides a demonstration of the epitaxial growth of isotopically purified \({}^{70}\)Ge quantum wells (QWs). Note that enriched \({}^{70}\)Ge, \({}^{74}\)Ge, and \({}^{76}\)Ge isotopes were employed in the past to grow superlattices and self-assembled quantum dots by solid-source molecular beam epitaxy [23; 24; 25]. Herein, the growth of \({}^{73}\)Ge-depleted QWs is achieved from hydride precursors using the chemical vapor deposition (CVD) method, which is broadly adopted in Ge device research besides being compatible with the processing standards in the semiconductor industry [9].
## II Experimental
The epitaxial growth of isotopically engineered Ge/SiGe QW heterostructures was carried out on
hydrogen-passivated 4-inch (001)-oriented Si wafers in a reduced-pressure CVD reactor using isotopically purified monogermane \({}^{70}\)GeH\({}_{4}\) (isotopic purity \(>\)99.9 %) and monosilane \({}^{28}\)SiH\({}_{4}\) (isotopic purity \(>\)99.99 %). The precursors were enriched in a centrifugal setup using natural monogermane (\({}^{nat}\)GeH\({}_{4}\)) and SiF\({}_{4}\) as starting gases [26]. After purification, \({}^{70}\)GeH\({}_{4}\) contains traces (\(<\)0.006 at.%) of the other Ge isotopes: \({}^{72}\)Ge, \({}^{73}\)Ge, \({}^{74}\)Ge, and \({}^{76}\)Ge. Moreover, chemical contaminants including other hydrides are also negligible, with an average content \(<\)0.06 umol/mol. Reference Ge/SiGe QW heterostructures were also prepared following the same growth protocol using conventional precursors with natural isotopic abundance (\({}^{nat}\)GeH\({}_{4}\) and disilane \({}^{nat}\)Si\({}_{2}\)H\({}_{6}\)). After annealing in hydrogen, a 3 um-thick Ge interlayer, commonly known as a Ge virtual substrate (Ge-VS), was grown on Si using \({}^{nat}\)GeH\({}_{4}\) and a two-step growth process in the 450-600 \({}^{\circ}\)C temperature range. This was followed by a thermal cyclic annealing step (725-875 \({}^{\circ}\)C) to improve the Ge-VS quality. A reverse-graded 1 um-thick Si\({}_{1-x}\)Ge\({}_{x}\) layer was then grown at 600 \({}^{\circ}\)C using \({}^{nat}\)GeH\({}_{4}\) and \({}^{nat}\)Si\({}_{2}\)H\({}_{6}\) until a uniform Si content of 18 at.% was reached. Without interrupting the growth, the \({}^{nat}\)GeH\({}_{4}\) supply was switched to the purified \({}^{70}\)GeH\({}_{4}\) to grow the first Si\({}_{1-x}\)Ge\({}_{x}\) barrier layer (BR1), while keeping all the other growth parameters unchanged. The thickness and composition of BR1 were varied in the 0.3-1 um and x = 0.15-0.18 ranges to investigate the effect of the growth time on the isotopic purity of the epilayers.
After the growth of BR1 was completed, the reactor was purged in hydrogen for 90 s before growing the \({}^{70}\)Ge QW layer using the \({}^{70}\)GeH\({}_{4}\) supply for a variable growth time of up to 40 s. Next, the reactor was purged in hydrogen for 90 s prior to the growth of the Si\({}_{1-x}\)Ge\({}_{x}\) BR2 layer under growth conditions identical to those of BR1. Lastly, a few-nm-thick Si capping layer was grown. Fig. 1a illustrates the grown stacks. Ge/Si\({}_{0.18}\)Ge\({}_{0.82}\) (A), \({}^{70}\)Ge/Si\({}_{0.18}\)Ge\({}_{0.82}\) (B), and \({}^{70}\)Ge/Si\({}_{0.15}\)Ge\({}_{0.85}\) (C) QWs were grown using this protocol. The \({}^{70}\)Ge/\({}^{28}\)Si\({}_{0.15}\)\({}^{70}\)Ge\({}_{0.85}\) (D) QW was grown following a similar protocol, except that the growth of BR1-2 was performed by changing from \({}^{nat}\)Si\({}_{2}\)H\({}_{6}\) to \({}^{28}\)SiH\({}_{4}\) and adjusting the growth conditions to accommodate the change in precursor decomposition.

Figure 1: **Epitaxial growth of isotopically engineered Ge/SiGe heterostructures**. (a) Schematic illustration of the four sets of grown Ge/SiGe heterostructures. (b) A cross-sectional STEM image of an as-grown Ge/SiGe QW showing the entire stack, consisting of the Ge-VS and SiGe buffer layers grown on a Si substrate. (c) A close-up cross-sectional STEM image of an isotopically enriched \({}^{70}\)Ge/SiGe QW. (d) A cross-sectional TEM image showing that the QW region is defect-free and that the extended defects are confined at the Ge-VS/SiGe interface. (e) The (224) XRD-RSM of a representative CVD-grown isotopically purified \({}^{70}\)Ge QW heterostructure.
Several characterization techniques were employed to elucidate the basic properties of the as-grown heterostructures and investigate their isotopic content. Lattice strain and average content in Ge/SiGe heterostructures were evaluated from X-ray diffraction (XRD) measurements including reciprocal space map (RSM) analysis. The microstructure of the grown materials was investigated by transmission electron microscopy (TEM) and scanning TEM (STEM). The quality of interfaces, the atomic-level composition, and the isotopic purity were investigated using atom probe tomography (APT). Additional insights into the chemical and isotopic compositions are also obtained using secondary ion mass spectrometry (SIMS). Raman scattering spectroscopy was employed to evaluate the effects of the isotopic content on phonon scattering in Ge QWs. Additionally, the uniformity of the growth thickness as well as the optical signature of quantum confinement were investigated using spectroscopic ellipsometry (SE).
## III Results and Discussion
A cross-sectional STEM image of a representative isotopically-engineered Ge QW heterostructure is shown in Fig. 1b, while the enlarged view of the \({}^{70}\)Ge/Si\({}_{0.15}\)\({}^{70}\)Ge\({}_{0.85}\) QW region is displayed in Fig. 1c. The figure shows an 18 nm-thick \({}^{70}\)Ge QW together with BR1 and BR2 layers with thicknesses of 290 nm and 28 nm, respectively. The transition between the SiGe barrier layers and the \({}^{70}\)Ge QW is of the order of 1-2 nm. To evaluate the structural quality of the heterostructures, cross-sectional TEM images were acquired (Fig. 1d). The extended defects are confined to the Si/Ge-VS and Ge-VS/Si\({}_{1-x}\)Ge\({}_{x}\) interfaces, with no defects being detected in the QW region at the TEM imaging scale. XRD-RSM (224) analysis of the as-grown heterostructures demonstrates sharp peaks for the SiGe/Ge substrate and barriers, as well as the signature of the strained 18 nm-thick \({}^{70}\)Ge layer, thus suggesting an excellent degree of crystallinity across the structure (Fig. 1(e)). Here, the variation in composition between natural and purified SiGe layers (Si\({}_{0.18}\)Ge\({}_{0.82}\) vs. Si\({}_{0.15}\)Ge\({}_{0.85}\), as determined by APT) is related to the difference in composition between the germane precursor supplies.
A first glimpse into the isotopic content of the as-grown QWs was obtained from Raman spectroscopy studies. Fig. 2(a) shows Raman spectra around the Ge-Ge LO mode recorded for a set of QWs grown with variable growth times between 4 and 40 s, corresponding to a 3-30 nm thickness range. The spectra indicate the presence of two distinct modes. The first is centered around 293.7 cm\({}^{-1}\), corresponding to the Ge-Ge LO mode in the SiGe barrier, whereas the second peak at 305.3 cm\({}^{-1}\) is attributed to the same mode but in the \({}^{70}\)Ge QW. This assessment is consistent with the observed increase in the second peak intensity as the QW thickness increases. Note that the Ge-Ge mode in the \({}^{\text{nat}}\)Ge QW is detected at 300.1 cm\({}^{-1}\), as demonstrated in Fig. 2(b), which compares two identical \({}^{\text{nat}}\)Ge and \({}^{70}\)Ge QW samples. The observed shift between the two samples is analyzed based on the quasi-harmonic approximation, which is valid for semiconductors at room temperature [27; 28]. According to the virtual crystal approximation, a simple harmonic analysis predicts that the energy of a phonon mode is inversely proportional to the square root of the average isotopic mass. The average isotopic mass is given by \(\langle m\rangle=\sum_{i}c_{i}m_{i}\), with \(c_{i}\) being the fractional composition of an isotope of mass \(m_{i}\). Knowing that the atomic mass of \({}^{\text{nat}}\)Ge is 72.63 amu, the measured wavenumbers of the Ge-Ge LO mode in the sets of QWs yield an average atomic mass in the \({}^{70}\)Ge QW lattice of 70.17 amu, corresponding to at least 99.6% enrichment in \({}^{70}\)Ge isotopes. As discussed below, the growth protocol has a strong effect on the isotopic purity of the QW.
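The harmonic relation above can be checked numerically (a consistency check on the quoted values, not part of the original analysis): with \(\omega\propto 1/\sqrt{\langle m\rangle}\), the two measured LO wavenumbers give the average isotopic mass of the QW lattice directly:

```python
w_nat, w_70 = 300.1, 305.3   # measured Ge-Ge LO modes (cm^-1)
m_nat = 72.63                # average atomic mass of natural Ge (amu)

# omega ∝ 1/sqrt(<m>)  =>  <m>_QW = m_nat * (w_nat / w_70)**2
m_qw = m_nat * (w_nat / w_70) ** 2
print(f"{m_qw:.2f} amu")     # ≈ 70.18 amu, matching the ~70.17 amu quoted
```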
It is important to mention that the limited spectral resolution (\(\sim\)1 cm\({}^{-1}\)) of the Raman setup used does not allow addressing the effect of isotopic purification on lattice disorder [28]. Nevertheless, it is reasonable to conclude that the similarity observed in the full width at half maximum of the Ge-Ge peaks in \({}^{\text{nat}}\)Ge and \({}^{70}\)Ge is indicative of a similar crystalline quality, which is consistent with the XRD and TEM studies. To further assess the quality of the grown QWs, SE studies were carried out on \({}^{70}\)Ge QW samples. For these studies, reference samples consisting of the same grown layers but without BR2 were also prepared and investigated. Fig. 2(c) displays the measured spectra for the 18 nm \({}^{70}\)Ge QW and the associated reference material. The figure shows the imaginary dielectric function (left) and the critical point (CP) analysis of the measured dielectric function. The lineshape of the dielectric function of both heterostructures conceals insights into the quantum confinement in the \({}^{70}\)Ge QW. Note that the penetration depth of the incident excitation near the E\({}_{1}\) CP is around 20-35 nm for bulk Ge. If one considers a limited spectral range between 1.5-3 eV, the effect of the underlying materials (SiGe buffer, Ge-VS, and Si substrate) can be neglected as the incident light will not reach and excite them. Consequently, only the top three layers (\({}^{70}\)Ge QW, BR2, and Si cap) should in principle contribute to the measured dielectric function (Fig. 2(c)). Moreover, the contribution of the 3-5 nm-thick Si cap should be excluded in the analysis as the E\({}_{1}\) CP of Si is located around 3.4 eV [29], which is outside the measured spectral range.
The second derivative of the dielectric function of the two samples (with and without the top barrier BR2) is displayed in Fig. 2(d). To unravel the electronic structure of the analyzed heterostructure, the measured data were fitted using a generic critical point parabolic band model [29; 30; 31; 32]. The CP energy of the \({}^{70}\)Ge layer without a barrier is evaluated at 2.156 eV, which is close to the Ge bulk CP of 2.134 eV [31], whereas for \({}^{70}\)Ge QW sample a blueshift is noted yielding a CP energy of 2.233 eV. More importantly, the qualitative difference between
both dielectric functions at 2.17 eV is clear. Indeed, the CP lineshape changes drastically from 2D Van Hove singularities in the reference structure (green dots) to a discrete excitonic lineshape in \({}^{70}\)Ge QW (blue dots). This observed change in CP lineshape and energy is indicative of quantum confinement and its associated narrowing of the optical transition in Ge [32].
In the following, the isotopic content of the grown QWs is discussed based on APT studies. Fig. 3(a) shows a representative 3D 30 \(\times\) 30 \(\times\) 30 nm\({}^{3}\) atom-by-atom APT map of a \({}^{70}\)Ge QW. The map indicates that the QW region contains mainly the \({}^{70}\)Ge isotope, but traces of other isotopes can also be seen. Before quantifying and discussing the level of these contaminants, the recorded mass spectra are described first, as shown in Fig. 3(b,c). The figures exhibit the mass spectra recorded for a set of four QW samples labeled A, B, C, and D, as illustrated in Fig. 1(a). These samples were grown under different conditions. In sample A, the QW was grown using \({}^{70}\)GeH\({}_{4}\), whereas the SiGe barriers were grown using \({}^{\mathrm{nat}}\)GeH\({}_{4}\). In the other three samples, the growth of both barriers and QWs was conducted using \({}^{70}\)GeH\({}_{4}\). However, the change from \({}^{\mathrm{nat}}\)GeH\({}_{4}\) to \({}^{70}\)GeH\({}_{4}\) occurred during the growth of the underlying SiGe layer at a variable thickness from the interface with the QW: 290 nm (B), 1000 nm (C), and 1890 nm (D). This means that the changes from \({}^{\mathrm{nat}}\)GeH\({}_{4}\) to \({}^{70}\)GeH\({}_{4}\) took place at different times during the growth of SiGe buffer layer prior to the QW growth in these samples (B: 8 min, C: 24 min, and D: 29 min). In the case of sample D, the growth of SiGe barriers was conducted using the isotopically purified precursor \({}^{28}\)SiH\({}_{4}\) instead of \({}^{\mathrm{nat}}\)Si\({}_{2}\)H\({}_{6}\). The growth rate was higher for this sample due to a higher GeH\({}_{4}\) supply required for the growth optimization using \({}^{28}\)SiH\({}_{4}\) precursor. The obtained APT mass spectra are compared in Fig. 3(b,c) showing the spectra of doubly charged Ge ions (Fig. 3(b)) and doubly charged Si ions (Fig. 3(c)). 
Each spectrum contains 10 million atoms from the selected region which includes most of the top barrier, the full QW and its interfaces, and a part of the bottom barrier. Note that this includes the QW interfaces and the local fluctuations in the isotopic purity observed near these interfaces, as shown in Fig. 4.
The mass spectrum of sample A shows peaks associated with all five Ge isotopes at intensities close to the natural abundance of each isotope, as most of the signal originates from the barriers grown with \({}^{\mathrm{nat}}\)GeH\({}_{4}\) (Fig. 3(b)). However, in samples B, C, and D, the APT spectra clearly show enrichment in the \({}^{70}\)Ge isotope, as the peaks related to the other isotopes have significantly diminished. Interestingly, the level of this contamination from other isotopes is intimately related to the growth protocol. Indeed, the earlier the transition from \({}^{\mathrm{nat}}\)GeH\({}_{4}\) to \({}^{70}\)GeH\({}_{4}\) occurred relative to the moment of the QW growth, the lower the level of Ge isotope cross-contamination. This indicates that the detection of \({}^{72}\)Ge\({}^{++}\), \({}^{73}\)Ge\({}^{++}\), \({}^{74}\)Ge\({}^{++}\), and \({}^{76}\)Ge\({}^{++}\) peaks is a manifestation of the reservoir effect, meaning that the \({}^{\mathrm{nat}}\)GeH\({}_{4}\) used to grow the much thicker Ge-VS and SiGe-VS still resides in the growth reactor for an extended period of time. This leads to the undesired incorporation of the nuclear spin-full \({}^{73}\)Ge isotope into the growing QW structure. Herein, it is shown that an early introduction of \({}^{70}\)GeH\({}_{4}\) can eliminate this contamination to a great extent. Ideally, the growth of the entire Ge-VS/SiGe-VS/BR1/Ge/BR2 stack should be done using \({}^{70}\)GeH\({}_{4}\), but the process can be costly. Similarly, Fig. 3(c) shows that the use of \({}^{28}\)SiH\({}_{4}\) to grow the SiGe barriers leads to a significant, more than 30-fold, reduction of the amount of the \({}^{29}\)Si isotope in the heterostructure. Since the hole wavefunction in the Ge QW is expected to leak into the SiGe barriers, it is also important to suppress the hyperfine interactions that may result from the presence of the \({}^{29}\)Si isotope.
Figure 2: **Vibrational and optical properties of \({}^{70}\)Ge QWs**. (a) Ge-Ge LO vibrational mode recorded for a set of \({}^{70}\)Ge QWs grown with a variable growth time between 4 and 40 s, corresponding to a 3-30 nm thickness range. (b) Ge-Ge LO vibrational mode in two identical \({}^{\mathrm{nat}}\)Ge and \({}^{70}\)Ge QW samples. (c) The imaginary dielectric function of an 18 nm-thick \({}^{70}\)Ge QW structure (green) and of a reference material consisting of the same layers but without BR2 (blue). (d) CP analysis of the corresponding dielectric function. The CP energy position is extracted by simultaneously fitting the real and imaginary parts of the second derivative of the dielectric function.

The local isotopic purity and 3D distribution of isotopes can be obtained from APT. However, since the peaks of the heavier isotopes are embedded in the tails of the lighter ones (Fig. 3(b)), it is important to carefully analyze and model the mass spectra to separate the tails from the peaks and accurately quantify the isotopic content of the heterostructures. Herein, SIMS analyses were carried out to validate the APT isotope mapping method. Since all non-\({}^{70}\)Ge isotopes originate from a natural Ge source, one can use the content measured for each isotope to estimate the \({}^{70}\)Ge purity by projecting the overall contamination based on the natural distribution of isotopes. As shown in Fig. 4(a), SIMS data provide
Figure 3: **Atom probe tomography of \({}^{70}\)Ge QWs**. (a) 3D atom-by-atom APT map of a \({}^{70}\)Ge/SiGe heterostructure. The 30 \(\times\) 30 \(\times\) 30 nm\({}^{3}\) map contains approximately 900,000 atoms. Mass spectra of 10 million atoms from the investigated QW samples showing doubly charged Ge ions (b) and Si ions (c).
estimates derived from the \({}^{72}\)Ge, \({}^{74}\)Ge, and \({}^{76}\)Ge signals that coincide almost perfectly with each other and with the estimate of the \({}^{70}\)Ge purity obtained by considering the signal from all Ge isotopes. For APT, however, a difference was observed (data not shown) when estimating based on doubly charged \({}^{72}\)Ge, \({}^{74}\)Ge, or all of the isotopes. This discrepancy is caused by the aforementioned overlap of isotope peaks in the mass spectra (Fig. 3(b)). To address this issue, a Monte-Carlo approach is implemented in which the tails are fitted locally around the peak region, and the peak and tail are decomposed hundreds or thousands of times to find the average content of the peak and the error introduced by the decomposition. The resulting estimates using the tail-corrected data for doubly charged \({}^{72}\)Ge, \({}^{74}\)Ge, and \({}^{76}\)Ge match the SIMS data, as shown in Fig. 4a (solid line).
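The projection described above can be sketched as follows (illustrative only; the tracer fraction below is a made-up number, and the actual analysis uses the Monte-Carlo tail-corrected peak contents): if the contamination carries the natural Ge isotope distribution, any single non-\({}^{70}\)Ge signal fixes the total contamination, and hence the \({}^{70}\)Ge purity:

```python
# Natural abundances of the stable Ge isotopes (atomic fractions)
NAT_GE = {70: 0.2057, 72: 0.2745, 73: 0.0775, 74: 0.3650, 76: 0.0773}

def purity_from_tracer(tracer_fraction, tracer_isotope):
    """Infer the 70Ge purity from one measured non-70 isotope fraction,
    assuming the contamination follows the natural Ge distribution."""
    nat_contamination = tracer_fraction / NAT_GE[tracer_isotope]
    # the contaminating natural Ge itself contains 20.57% 70Ge
    return 1.0 - nat_contamination * (1.0 - NAT_GE[70])

# hypothetical example: a measured 74Ge fraction of 0.00146 implies
# ~0.4% natural Ge in the layer, i.e. ~99.7% 70Ge purity
print(purity_from_tracer(0.00146, 74))
```

Because each of \({}^{72}\)Ge, \({}^{74}\)Ge, and \({}^{76}\)Ge yields an independent estimate of the same contamination level, their agreement is the consistency check applied to the SIMS and tail-corrected APT data.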
Using the same Monte-Carlo approach, we can quantify the \({}^{70}\)Ge purity in all samples. The result is shown in Fig. 4(b), highlighting once more the differences between the samples in isotopic purity near the QW, caused by the difference in the time elapsed between the onset of \({}^{70}\)GeH\({}_{4}\) growth and the QW growth. Furthermore, both SIMS and APT data consistently show that the 90 s growth interruption at the QW interfaces, introduced to promote the growth of sharper interfaces, leads to an accumulation of \({}^{\mathrm{nat}}\)Ge at the interface. For the growth of sample D, the top barrier was grown without interruption, thus suppressing the isotopic cross-contamination at the interface. Maintaining the \({}^{70}\)Ge purity is important to achieve a nuclear spin-depleted interface and BR1.
A more accurate evaluation of the nuclear spin background is obtained from APT analyses displayed in Fig. 4(c). The figure outlines the total concentration profiles of nuclear spin-full isotopes \({}^{73}\)Ge and \({}^{29}\)Si across the investigated heterostructures. It is noticeable that in \({}^{\mathrm{nat}}\)Si\({}^{\mathrm{nat}}\)Ge/\({}^{70}\)Ge/\({}^{\mathrm{nat}}\)Si\({}^{\mathrm{nat}}\)Ge (sample A) the nuclear spin concentration drops from 6 at.% in the SiGe barriers down to 0.1 at.% in the QW. This background is further reduced to 0.02 at.% in QWs of samples B and C and even below 0.01 at.% in sample D consisting of \({}^{28}\)Si\({}^{70}\)Ge/\({}^{70}\)Ge/\({}^{28}\)Si\({}^{70}\)Ge. Besides providing the isotopic composition profiles, APT also allows extracting
Figure 4: **Isotopic purity and nuclear spin background in \({}^{70}\)Ge QWs**. (a) The evolution of the \({}^{70}\)Ge content across sample B as measured by SIMS (dotted lines) and APT with tail-corrected analysis (solid lines) from the \({}^{72}\)Ge, \({}^{74}\)Ge, and \({}^{76}\)Ge signals. (b) The depth profile of the isotopic purity across the investigated heterostructures. (c) The concentration profiles of the nuclear spin-full species \({}^{73}\)Ge and \({}^{29}\)Si in the investigated heterostructures. (d) The evolution, as a function of depth, of the average distance between nuclear spins in the investigated heterostructures.
the atomic-level spatial distribution of the individual nuclear spin-full species \({}^{73}\)Ge and \({}^{29}\)Si, as displayed in Fig. 4(d). The figure shows the depth evolution of the average distance between neighboring nuclear spins across the investigated heterostructures. To obtain these profiles, a model of the SiGe lattice was generated from APT maps [33], on which the distribution of each isotope was imprinted, thus allowing the calculation of the distance between nuclear spins in a lattice plane-by-lattice plane fashion. The uncertainty in these calculations was assessed by sampling 10 different models. The obtained result demonstrates that the average distance between nuclear spins is the largest in the QW for all samples, but it remains sensitive to the growth conditions. For instance, in the \({}^{\mathrm{nat}}\)Si\({}^{\mathrm{nat}}\)Ge barriers (A) the obtained average distance is 0.3-0.4 nm, whereas it increases by one order of magnitude to 3-4 nm in the isotopically pure \({}^{28}\)Si\({}^{70}\)Ge/\({}^{70}\)Ge/\({}^{28}\)Si\({}^{70}\)Ge heterostructure (D).
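As a back-of-envelope cross-check of these numbers (an approximation, not the APT lattice-model calculation used here), the mean nearest-neighbour distance between randomly placed spins at number density \(n\) is \(\approx 0.554\,n^{-1/3}\) (Chandrasekhar):

```python
def mean_spin_spacing_nm(spin_fraction, atoms_per_cm3=4.42e22):
    """Mean nearest-neighbour distance (in nm) between nuclear spins
    randomly occupying a given fraction of the Ge lattice sites."""
    n = spin_fraction * atoms_per_cm3       # spins per cm^3
    return 0.554 * n ** (-1.0 / 3.0) * 1e7  # cm -> nm

# ~6 at.% spin-full isotopes in the natural SiGe barriers
print(round(mean_spin_spacing_nm(0.06), 2))   # ~0.4 nm
# ~0.01 at.% in the 70Ge/28Si70Ge heterostructure
print(round(mean_spin_spacing_nm(1e-4), 2))   # ~3.4 nm
```

Both estimates fall within the 0.3-0.4 nm and 3-4 nm ranges extracted from the APT lattice model, consistent with the order-of-magnitude increase reported above.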
## IV Conclusion
In summary, this work demonstrates the epitaxial growth of nuclear spin-depleted, isotopically enriched \({}^{70}\)Ge QWs. The growth was achieved on Si wafers using enriched precursors \({}^{70}\)GeH\({}_{4}\) and \({}^{28}\)SiH\({}_{4}\) in a reduced-pressure CVD system. The crystalline quality of the grown heterostructures was confirmed by XRD and electron microscopy studies. The critical point of the grown QWs exhibits a discrete excitonic lineshape at 2.233 eV indicative of quantum confinement. The isotopic purity and the distribution of the nuclear spin background were investigated using APT. In this regard, a Monte Carlo approach was introduced to solve the discrepancies in APT analyses caused by the overlap of isotope peaks in the recorded mass spectra. These analyses demonstrate that the isotopic content is very sensitive to the growth conditions including any growth interruption. The latter was found to induce an accumulation of natural Ge isotopes at the growth interface leading to lower \({}^{70}\)Ge content. To evaluate the distribution of the residual nuclear spin background, a lattice model was constructed to map the average distance between the two nuclear spin-full isotopes \({}^{73}\)Ge and \({}^{29}\)Si. These studies showed that the distance between nuclear spins reaches 3-4 nm in \({}^{70}\)Ge/\({}^{28}\)Si\({}^{70}\)Ge, which is an order of magnitude higher than in natural Ge/SiGe heterostructure. Additionally, the lowest concentration of \({}^{73}\)Ge and \({}^{29}\)Si contaminants in the heterostructure was established at 0.01% in both QW and barriers of \({}^{70}\)Ge/\({}^{28}\)Si\({}^{70}\)Ge heterostructure. These insights constitute a valuable input to improve the design and theoretical modeling of spin qubits by providing quantitative, atomic-level details on nuclear spin distribution.
**METHODS**.
X-ray diffraction (XRD) measurements were performed using a Bruker Discover D8. A three-bounce Ge(220) two-crystal analyzer was placed in front of the XRD detector during the XRD (004) and (224) reciprocal space map (RSM) analyses. The microstructure of the grown materials was investigated by transmission electron microscopy (TEM). TEM specimens were prepared in a Thermo Fisher Helios Nanolab 660 dual-beam scanning electron microscope using a gallium-focused ion beam (FIB) at 30, 16, and 5 kV. Electron beam-induced carbon and platinum were locally deposited on the sample to protect the imaged region from being damaged by the ion-beam milling during the thinning of the TEM lamella. TEM and scanning TEM (STEM) analyses were carried out on a Thermo Scientific Talos F200X S/TEM system with an acceleration voltage of 200 kV.
Insights into the quality of interfaces, the atomic-level composition, and the isotopic purity were obtained using atom probe tomography (APT). APT specimens were prepared in a FEI Helios Nanolab 660 dual-beam scanning electron microscope using a gallium-focused ion beam (FIB) at 30, 16, and 5 kV. A 120-150 nm-thick chromium capping layer was deposited on the samples before FIB irradiation to minimize the implantation of gallium ions into the imaged region. APT studies were performed in a LEAP 5000XS tool. The LEAP 5000XS utilizes a picosecond laser to generate pulses at a wavelength of 355 nm. For the analysis, all samples were cooled to a temperature of 25 K. The experimental data were collected at laser powers of 3-6 pJ. Additional insights into the chemical and isotopic compositions are also obtained using secondary ion mass spectrometry (SIMS).
Raman scattering analyses were performed at room temperature using a 633 nm excitation laser. Additionally, the uniformity of the growth thickness as well as the optical signature of quantum confinement were investigated using spectroscopic ellipsometry (SE). SE measurements were carried out at room temperature, using a variable angle spectroscopic RC2-XI ellipsometer manufactured by J. A. Woollam Co. The variable angle spectroscopic ellipsometer system covers the 0.5-6 eV range. All heterostructures were measured between 70\({}^{\circ}\) and 80\({}^{\circ}\) angles of incidence with a 1\({}^{\circ}\) step. A noticeable increase in the sensitivity of the SE parameters (\(\Psi\) and \(\Delta\)) was observed around 76-77\({}^{\circ}\), which is very close to the Brewster angle for Si and Ge. Thus, during the optical modeling, special care was accorded to the modeling near this angle.
**ACKNOWLEDGEMENTS**. The authors thank J. Bouchard for the technical support with the CVD system. O.M. acknowledges support from NSERC Canada (Discovery Grants, Alliance International Quantum, and CQS2Q Consortium), Canada Research Chairs, Canada Foundation for Innovation, Mitacs, PRIMA Quebec, and Defense Canada (Innovation for Defense Excellence and Security, IDEaS), the European Union's Horizon Europe research and innovation programme under grant agreement No 101070700 (MIRAQLS), and the US Army Research Office Grant No. W911NF-22-1-0277. |
# Taking apart squeezed light

C. Drago, J. E. Sipe

arXiv:2310.10919v1, 2023-10-17, http://arxiv.org/abs/2310.10919v1
###### Abstract
We develop a formalism to describe squeezed light with large spectral-temporal correlations. This description is valid in all regimes, but is especially applicable in the long pulse to continuous-wave limit where the photon density at any particular time is small, although the total number of photons can be quite large. Our method relies on the Whittaker-Shannon interpolation formula applied to the joint temporal amplitude of squeezed light, which allows us to "take apart" the squeezed state. This provides a local description of the state and its photon statistics, making the underlying physics more transparent than does the use of the Schmidt decomposition. The formalism can easily be extended to more exotic nonclassical states where a Schmidt decomposition is not possible.
## I Introduction
Squeezed light is of interest for applications in quantum sensing and imaging [1; 2], and as a resource for quantum computing [3]. For light propagating in one direction in a quasi-1D structure, such as an optical fiber or a channel waveguide in an integrated photonic structure [4], or even light propagating in free space under the approximation that diffraction is negligible, a squeezed state can be written as
\[\ket{\Psi}=e^{\frac{\beta}{2}\int d\omega_{1}d\omega_{2}\gamma(\omega_{1}, \omega_{2})a^{\dagger}(\omega_{1})a^{\dagger}(\omega_{2})-\text{h.c.}}\ket{ \text{vac}},\] (I.1)
where for simplicity only one polarization and one transverse mode is considered. We label the lowering operator at a frequency shifted by \(\omega\) from a center reference frequency \(\omega_{o}\) by \(a(\omega)\) (see Appendix A),
\[[a(\omega_{1}),a^{\dagger}(\omega_{2})]=\delta(\omega_{1}-\omega_{2}),\] (I.2)
and \(\ket{\text{vac}}\) is the vacuum state. Here \(\gamma(\omega_{1},\omega_{2})\) is the joint spectral amplitude,
\[\int|\gamma(\omega_{1},\omega_{2})|^{2}\,d\omega_{1}d\omega_{2}=1,\] (I.3)
and \(\beta\) is the squeezing amplitude; unless otherwise indicated, we take integrals to range from \(-\infty\) to \(\infty\).
Often the properties of interest can be captured by simple functions of frequencies, such as correlation functions of the form
\[G^{(1)}(\omega)=\big{\langle}\Psi|a^{\dagger}(\omega)a(\omega)|\Psi \big{\rangle}\] (I.4a) \[G^{(2)}(\omega_{1},\omega_{2})=\big{\langle}\Psi|a^{\dagger}( \omega_{1})a^{\dagger}(\omega_{2})a(\omega_{2})a(\omega_{1})|\Psi\big{\rangle}\,,\] (I.4b)
etc. For a pulse of light where \(|\beta|\ll 1\), the state is only slightly different from the vacuum state,
\[\ket{\Psi}\approx\ket{\text{vac}}+\frac{\beta}{2}\int d\omega_{1}d\omega_{2} \gamma(\omega_{1},\omega_{2})a^{\dagger}(\omega_{1})a^{\dagger}(\omega_{2}) \ket{\text{vac}}\] (I.5)
where higher order terms in \(\beta\) have been neglected, and there is only a small probability amplitude for a two-photon state. Then
\[G^{(1)}(\omega)\to\left|\beta\right|^{2}\int\left|\gamma( \omega,\omega^{\prime})\right|^{2}d\omega^{\prime},\] (I.6a) \[G^{(2)}(\omega_{1},\omega_{2})\to\left|\beta\right|^{2}\left| \gamma(\omega_{1},\omega_{2})\right|^{2}.\] (I.6b)
In this paper we consider the evaluation of quantities such as these, even when \(|\beta|\) is not much less than one. A standard strategy in such a situation is to decompose the joint spectral amplitude in terms of Schmidt modes. If there is only one or a few Schmidt modes, as might occur for squeezed light generated by a short pump pulse, the expressions for correlation functions of the squeezed light in terms of Schmidt modes can be easily evaluated even if \(|\beta|\) is large, and they immediately identify much of the physics. But large values of \(|\beta|\) can also arise for squeezed light generated by pump pulses that are very long, and even if their intensities are very weak. Here, although the rate at which pairs of photons are generated may be quite small, the total number of pairs of photons generated diverges as the pump pulse approaches CW excitation, and thus both \(|\beta|\) and the Schmidt number diverge. Our goal is to identify strategies that allow for the calculation of quantities such as correlation functions to be done quickly for such states, and in a way that makes the physics of the squeezed light clear.
We begin by introducing the temporal representation of the joint spectral amplitude \(\overline{\gamma}(t_{1},t_{2})\) and some of its general features in Section II. Then in Section III we consider a natural first approach, which is to use the Schmidt decomposition of \(\overline{\gamma}(t_{1},t_{2})\) even if the Schmidt number is very large. We find that this approach does not directly make the physics of the state \(\ket{\Psi}\) apparent, and this motivates our search for other ways to "take apart" the joint amplitude of the squeezed light that better elucidate the physics. In section IV we introduce a new approach suggested by the time correlation functions of the light; it works well if there is significant degeneracy in the amplitudes of the Schmidt decomposition. In section V we generalize this, based on the Whittaker-Shannon interpolation formula, in a scheme that is applicable even if there is no such degeneracy. We argue that this new
way of "taking apart" the joint amplitude does make the physics of the state more apparent, and in section VI we compare it to the Schmidt decomposition. Then using our formalism we provide a "local decomposition" of the squeezed state, and demonstrate its use in calculations in section VII. We give a discussion of the "strongly squeezed limit" within our framework in section VIII, and end in section IX with a realistic example of a joint spectral amplitude for a squeezed state generated in a ring resonator structure. Our conclusions and suggestions for future work are presented in section X.
## II Joint Temporal Amplitude
Besides the expression (I.1) for a squeezed state that is based on an integral over frequencies (or wavenumbers), it will be useful to have an expression based on integrals over time (or position). For simplicity we assume group velocity dispersion can be neglected and that light propagates with a velocity \(v\); then putting
\[\overline{a}(t)=\int\frac{d\omega}{\sqrt{2\pi}}a(\omega)e^{-i\omega t},\] (II.1)
and with
\[\overline{\gamma}(t_{1},t_{2})\equiv\int\frac{d\omega_{1}d\omega_{2}}{2\pi} \gamma(\omega_{1},\omega_{2})e^{-i\omega_{1}t_{1}}e^{-i\omega_{2}t_{2}}\] (II.2)
identifying the "joint temporal amplitude," we can write Eq. (I.1) as
\[\ket{\Psi}=e^{\frac{\beta}{2}\int dt_{1}dt_{2}\overline{\gamma}(t_{1},t_ {2})\overline{a}^{\dagger}(t_{1})\overline{a}^{\dagger}(t_{2})-\mathrm{h.c.} }\ket{\mathrm{vac}}.\] (II.3)
We take \(\gamma(\omega_{1},\omega_{2})\) and \(\overline{\gamma}(t_{1},t_{2})\) to identify the spectral and temporal representations of a "joint amplitude" and refer to their absolute squares \(|\gamma(\omega_{1},\omega_{2})|^{2}\) and \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) as the spectral and temporal representations of a "joint intensity." While due to time-ordering corrections we would expect the joint amplitude to change as the pump intensity and thus \(\beta\) is increased [5], here we neglect such effects for simplicity, and take the joint amplitude to be fixed when we consider varying \(\beta\) below.
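The transform in Eq. (II.2) is straightforward to discretize. The sketch below is our own illustration (not part of the paper; the grid parameters and the separable Gaussian amplitude are arbitrary choices): it builds a joint spectral amplitude, obtains the joint temporal amplitude with a 2D FFT, and confirms that the normalization of Eq. (I.3) is preserved in the temporal representation, as Parseval's theorem guarantees.

```python
# Sketch: joint spectral -> joint temporal amplitude via Eq. (II.2).
import numpy as np

N, W = 256, 40.0                          # grid points, frequency window
w = np.linspace(-W / 2, W / 2, N, endpoint=False)
dw = w[1] - w[0]

# Separable Gaussian joint spectral amplitude, normalized as in Eq. (I.3)
g = np.exp(-w**2 / 2.0)
gamma = np.outer(g, g)
gamma /= np.sqrt(np.sum(np.abs(gamma)**2) * dw**2)

# Discretized Eq. (II.2): gamma_bar = (1/2pi) iint gamma e^{-i w1 t1 - i w2 t2}
gamma_bar = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(gamma))) * dw**2 / (2 * np.pi)
t = np.fft.fftshift(np.fft.fftfreq(N, d=dw / (2 * np.pi)))
dt = t[1] - t[0]

norm_t = np.sum(np.abs(gamma_bar)**2) * dt**2
print(norm_t)   # = 1.0 up to rounding (Parseval)
```

The `ifftshift`/`fftshift` pair keeps the frequency and time origins at the center of the grids, so the discrete phases match the continuous kernel.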
Of course, the ket \(\ket{\Psi}\) (Eq. (I.1) or (II.3)) is a Schrodinger ket at a particular time, say \(t=0.\) The variables \(t_{1},t_{2}\) can be thought of as surrogates for position, where \(\overline{a}(t_{1})\) is identified with the electric field at \(z=-vt_{1}\); equivalently, if the ket \(\ket{\Psi}\) were allowed to evolve in time, \(\overline{a}(t)\) would identify the field operator at \(z=0\) at time \(t\) (see Appendix A).
Corresponding to the frequency correlation functions (Eq. (I.4)) we can also introduce time dependent first- and second-order correlation functions [6]
\[\overline{G}^{(1)}(t_{1},t_{2})=\left\langle\Psi|\overline{a}^{ \dagger}(t_{1})\overline{a}(t_{2})|\Psi\right\rangle,\] (II.4a) \[\overline{G}^{(2)}(t_{1},t_{2})=\left\langle\Psi|\overline{a}^{ \dagger}(t_{1})\overline{a}^{\dagger}(t_{2})\overline{a}(t_{2})\overline{a}(t _{1})|\Psi\right\rangle.\] (II.4b)
The "equal-time" first-order correlation function, given by \(\overline{G}^{(1)}(t)\equiv\overline{G}^{(1)}(t,t)\), is the "photon-density" of the pulse of light, and is used to predict the counting rate of an ideal photo-detector; indeed, if we integrate over all time, then
\[N_{\mathrm{pulse}}=\int\overline{G}^{(1)}(t)dt\] (II.5)
is the expected photon number in the pulse. The second-order correlation function \(\overline{G}^{(2)}(t_{1},t_{2})\) has a similar interpretation, and is used to predict the probability of detection coincidences at the indicated times.
We will primarily be interested in joint intensities of the form shown schematically in Fig. 1, where there are large spectral-temporal correlations; however, the formalism we introduce is valid for a general joint amplitude. We characterize the joint intensity by two quantities, \(\mathcal{T}_{p}\) and \(\mathcal{B}_{c}\), indicated in Fig. 1.
The quantity \(\mathcal{T}_{p}\) is an effective measure of the duration of the generated pulse of squeezed light; in practice this will depend on the pump pulse duration if a nonresonant structure is used to generate the light, or by the resonance time if a resonant structure is used.
The second quantity, \(\mathcal{B}_{c}\), is an effective measure of the bandwidth of generated photons; in practice this is set by phase-matching constraints if a nonresonant structure is used to generate the light, or by the resonator linewidth if a resonant structure is used. Associated with this bandwidth we define a time \(\mathcal{T}_{c}=1/\mathcal{B}_{c}\), which is on the order of the narrow width of the joint temporal intensity; see Fig. 1. The time \(\mathcal{T}_{c}\) identifies the "coherence time," the typical range of \(\abs{t_{2}-t_{1}}\) over which \(\abs{\overline{\gamma}(t_{1},t_{2})}^{2}\) is nonnegligible.
## III Schmidt Modes
A natural approach to try to elucidate the physics of a squeezed state and calculate the correlation functions
Figure 1: Schematic of a joint intensity represented in time on the left and frequency on the right. The horizontal width of the joint temporal (spectral) amplitude is denoted by \(\mathcal{T}_{p}\) (\(\mathcal{B}_{c}\)) and is the effective pulse duration (bandwidth). The narrow horizontal width at \(t_{2}=0\) is denoted by \(\mathcal{T}_{c}=1/\mathcal{B}_{c}\) and is the coherence time of photon pairs.
given by Eq. (II.4) is to employ a Schmidt decomposition of the joint amplitude, since the squeezed state can then be written as a direct product of squeezed states associated with the supermodes introduced via the Schmidt decomposition. That is the strategy we explore in this section.
In writing Eq. (I.1) for the squeezed state we took the origin of \(\gamma(\omega_{1},\omega_{2})\) to indicate the same frequency \(\omega_{o}\) with respect to which both \(\omega_{1}\) and \(\omega_{2}\) are referenced - this is the case of so-called "degenerate" squeezing - and here without loss of generality the joint amplitude can be taken as symmetric in its variables (\(\gamma(\omega_{2},\omega_{1})=\gamma(\omega_{1},\omega_{2})\), or equivalently \(\overline{\gamma}(t_{2},t_{1})=\overline{\gamma}(t_{1},t_{2})\)). From a Takagi factorization [7] we can then construct the Schmidt decomposition,
\[\begin{split}\gamma(\omega_{1},\omega_{2})&=\sum_{n }\sqrt{p_{n}}f_{n}(\omega_{1})f_{n}(\omega_{2}),\\ \overline{\gamma}(t_{1},t_{2})&=\sum_{n}\sqrt{p_{n} }\overline{f}_{n}(t_{1})\overline{f}_{n}(t_{2}),\end{split}\] (III.1)
where the Schmidt weights \(p_{n}\geq 0\) sum to unity; the elements of the sets \(\{f_{n}(\omega)\}\) and \(\{\overline{f}_{n}(t)\}\) of mutually orthogonal and normalized functions are related by
\[\overline{f}_{n}(t)=\int\frac{d\omega}{\sqrt{2\pi}}f_{n}(\omega)e^{-i\omega t}.\] (III.2)
As usual, we take the set \(\{f_{n}(\omega)\}\) of elements with \(p_{n}\neq 0\) and expand it into a complete set of functions, assigning \(p_{n}=0\) to the added elements, and correspondingly for the \(\{\overline{f}_{n}(t)\}\).
The Schmidt number \(K\) of the expansion (III.1), which characterizes the effective number of spectral or temporal modes in the sum (III.1), is given by
\[K=\left(\sum_{n}p_{n}^{2}\right)^{-1}.\] (III.3)
It serves as an effective measure of the correlation in the dependence of the joint amplitude on its two variables; in the limit of a Schmidt number of unity the joint amplitude is simply a product of the same function of each variable, and there is no correlation at all in its dependence on those variables.
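In practice the Schmidt weights and \(K\) can be obtained numerically from the singular value decomposition of the discretized joint amplitude; for a real symmetric kernel the Takagi factorization coincides with the SVD up to signs. A minimal sketch (our own, not from the paper; the bandwidths are illustrative and anticipate the double-Gaussian of Example 1 below):

```python
# Sketch: Schmidt weights p_n and Schmidt number K (Eq. (III.3)) via SVD.
import numpy as np

N, W = 200, 20.0
w = np.linspace(-W / 2, W / 2, N)
dw = w[1] - w[0]
w1, w2 = np.meshgrid(w, w, indexing="ij")

sigma_p, sigma_c = 0.5, 2.5               # illustrative bandwidths
gamma = np.exp(-(w1 - w2)**2 / (4 * sigma_c**2)
               - (w1 + w2)**2 / (4 * sigma_p**2))
gamma /= np.sqrt(np.sum(gamma**2) * dw**2)   # enforce Eq. (I.3)

# Singular values of the discretized kernel approximate sqrt(p_n)
s = np.linalg.svd(gamma * dw, compute_uv=False)
p = s**2 / np.sum(s**2)
K = 1.0 / np.sum(p**2)                        # Eq. (III.3)
print(K)   # close to (sigma_c^2 + sigma_p^2)/(2 sigma_c sigma_p) = 2.6
```

The normalization of `p` absorbs residual quadrature error, so `K` converges quickly with grid refinement.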
We can introduce another measure of the correlation of the dependence of the joint amplitude on its two variables by
\[\mathcal{K}=\mathcal{T}_{p}\mathcal{B}_{c}=\frac{\mathcal{T}_{p}}{\mathcal{T} _{c}}.\] (III.4)
Clearly when \(\mathcal{T}_{p}\) is much greater than \(\mathcal{T}_{c}\) then the dependence of the joint amplitude on its two variables is highly correlated (see Fig. 1), and then \(\mathcal{K}\gg 1\). Of course, at the moment we have only defined \(\mathcal{T}_{p}\) and \(\mathcal{B}_{c}\) in a "rough-and-ready" way, and such then is our definition of \(\mathcal{K}\); we will make the definition more precise below. Often similarly defined quantities are introduced and referred to as the "time-bandwidth product." So to avoid confusion we henceforth refer to \(\mathcal{K}\) as an "effective Schmidt number."
Confusions with the phrase "time-bandwidth product" can arise because of different definitions used for "time" and "bandwidth." For example, Fedorov et al. calculate a time-bandwidth product by taking the "unconditional" ("single-particle") and "conditional" ("coincidence") widths of the joint intensity; for a general double-Gaussian joint amplitude (considered below), they show that the time-bandwidth product is exactly equal to the Schmidt number [8; 9; 10]. Alternatively, Brecht and Silberhorn calculate the time-bandwidth product taking the conditional and unconditional widths of the "chronocyclic Wigner function," which for their double-Gaussian model is equal to the Schmidt number, even when a chirp is included [11]. The agreement - either exact or approximate - between the "time-bandwidth product" and the Schmidt number suggests a deeper connection between the two quantities.
However, there is an older and more rigorous meaning of the term "time-bandwidth product" from classical information theory: For a one-dimensional bandlimited signal, it is the number of orthogonal functions optimally concentrated within a given timewidth needed to describe the signal [12], and in this context is referred to as the "Shannon number" [13; 14]. Generalizations of the Shannon number exist for higher dimensional signals, where the time-bandwidth product is calculated from the corresponding bandwidth and timewidth areas, volumes, etc. [13; 14; 15; 16]. Recent work has made the connection between the Shannon number from classical information theory to the Schmidt number in quantum information theory; see for example Pires et al. [16] or Pors et al. [17] for both a theoretical and experimental investigation.
In this spirit we define \(\mathcal{K}\) more precisely by defining \(\mathcal{T}_{p}\) and \(\mathcal{B}_{c}\) more precisely. We assume that those quantities are chosen as small as possible but subject to the condition that, to within the level of approximation adopted in calculations, they cover the range of the joint amplitude in time and frequency respectively. This means that the typical ways used to measure bandwidth or timewidth - such as the standard deviation, the full width at half maximum, etc. - are too narrow. In particular, we choose \(\mathcal{B}_{c}\) to be large enough that frequencies larger than \(2\pi\mathcal{B}_{c}/2\) can be completely neglected, and we can treat the joint spectral amplitude as effectively bandwidth limited. For this reason the effective Schmidt number \(\mathcal{K}\) will generally be larger than, but typically on the order of, other conventions used for the "time-bandwidth product."
In a later section we argue that generally the Schmidt number \(K\) and effective Schmidt number \(\mathcal{K}\) satisfy the inequality
\[K\leq\mathcal{K}.\] (III.5)
One might generally expect this to be true based on a physical argument: Suppose we have squeezed light propagating with an arbitrary joint amplitude with some Schmidt number \(K\) and effective Schmidt number \(\mathcal{K}\). Now
if the squeezed light is sent through a dispersive medium, the joint spectral amplitude is multiplied by a complex but _separable_ phase that does _not_ change the Schmidt number. However, in a dispersive medium the bandwidth remains constant while the pulse duration broadens, and so \(\mathcal{K}\) will generally increase. Thus, in general one might indeed expect that \(K\leq\mathcal{K}\). Below we will discuss when near equality holds.
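This dispersive argument is easy to check numerically. The sketch below (our own, with arbitrary parameters) applies a separable quadratic spectral phase \(e^{i\lambda\omega^{2}}\) to a double-Gaussian joint amplitude: the singular values, and hence \(K\), are unchanged under this unitary congruence, while the RMS duration of the single-time marginal of \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) grows.

```python
# Sketch: a separable chirp leaves K fixed but stretches the pulse.
import numpy as np

N, W = 256, 20.0
w = np.linspace(-W / 2, W / 2, N, endpoint=False)
dw = w[1] - w[0]
w1, w2 = np.meshgrid(w, w, indexing="ij")

sigma_p, sigma_c = 0.5, 2.5
gamma = np.exp(-(w1 - w2)**2 / (4 * sigma_c**2)
               - (w1 + w2)**2 / (4 * sigma_p**2)).astype(complex)
gamma /= np.sqrt(np.sum(np.abs(gamma)**2) * dw**2)

def schmidt_K(g):
    s = np.linalg.svd(g, compute_uv=False)
    p = s**2 / np.sum(s**2)
    return 1.0 / np.sum(p**2)

def rms_duration(g):
    # RMS width of the single-time marginal of |gamma_bar(t1,t2)|^2
    gb = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))
    t = np.fft.fftshift(np.fft.fftfreq(N, d=dw / (2 * np.pi)))
    P = np.sum(np.abs(gb)**2, axis=1)
    P /= np.sum(P)
    tbar = np.sum(P * t)
    return np.sqrt(np.sum(P * (t - tbar)**2))

lam = 0.5                                  # quadratic spectral phase (chirp)
chirp = np.exp(1j * lam * w**2)
gamma_chirped = chirp[:, None] * gamma * chirp[None, :]

K0, K1 = schmidt_K(gamma), schmidt_K(gamma_chirped)
T0, T1 = rms_duration(gamma), rms_duration(gamma_chirped)
print(K0, K1)   # equal: a separable phase cannot change K
print(T0, T1)   # T1 > T0: the pulse broadens in time
```

Writing the chirped amplitude as \(D\gamma D\) with \(D\) diagonal unitary makes the invariance of the singular values manifest.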
We will find that we can use the effective Schmidt number \(\mathcal{K}\) to introduce a "weak squeezing" regime characterized by the condition
\[\frac{\left|\beta\right|}{\sqrt{\mathcal{K}}}\ll 1,\] (III.6)
and a "strong squeezing" regime characterized by the condition
\[\frac{\left|\beta\right|}{\sqrt{\mathcal{K}}}\gg 1.\] (III.7)
If neither of these conditions are satisfied we refer to the squeezing as "moderate."
Now as a first example of the use of the Schmidt decomposition to evaluate the correlation functions, consider a general normalized two-photon state,
\[\left|\Pi\right\rangle =\frac{1}{\sqrt{2}}\int dt_{1}dt_{2}\overline{\gamma}(t_{1},t_{2} )\overline{a}^{\dagger}(t_{1})\overline{a}^{\dagger}(t_{2})\left|\text{vac}\right\rangle\] (III.8) \[=\frac{1}{\sqrt{2}}\sum_{n}\sqrt{p_{n}}A_{n}^{\dagger}A_{n}^{ \dagger}\left|\text{vac}\right\rangle,\]
where we defined operators associated with the supermodes as
\[A_{n}^{\dagger}\equiv\int dt\overline{f}_{n}(t)\overline{a}^{\dagger}(t),\] (III.9)
and so
\[\overline{a}(t)=\sum_{n}\overline{f}_{n}(t)A_{n}.\] (III.10)
To write the inverted form (Eq. (III.10)) we have taken the adjoint of Eq. (III.9) and used the completeness relation of the set of functions \(\left\{\overline{f}_{n}(t)\right\}\). The set of operators \(\left\{A_{n}\right\}\) and their adjoints satisfy the usual harmonic oscillator commutation relations. Just as the expressions (III.1) can be written in time or frequency form, so the first of (III.8) and (III.9) can also be written involving integrals over frequency of the corresponding quantities. We find
\[\begin{split}\overline{G}^{(1)}(t)\Big{|}_{\left|\Pi\right\rangle }&\equiv\left\langle\Pi|\overline{a}^{\dagger}(t)\overline{a}(t) |\Pi\right\rangle\\ &=2\sum_{n}p_{n}\left|\overline{f}_{n}(t)\right|^{2},\end{split}\] (III.11)
and
\[\begin{split}\overline{G}^{(2)}(t_{1},t_{2})\Big{|}_{\left|\Pi \right\rangle}&\equiv\left\langle\Pi|\overline{a}^{\dagger}(t_{1} )\overline{a}^{\dagger}(t_{2})\overline{a}(t_{2})\overline{a}(t_{1})|\Pi \right\rangle\\ &=2\left|\sum_{n}\sqrt{p_{n}}\overline{f}_{n}(t_{1})\overline{f }_{n}(t_{2})\right|^{2}.\end{split}\] (III.12)
In the first we have a contribution of \(\left|\overline{f}_{n}(t)\right|^{2}\) from each Schmidt mode with a weight \(p_{n}\), while in the second the amplitudes associated with each of the Schmidt modes add; the "2" of course arises because we have pairs of photons.
Moving to a squeezed state, in the limit \(\left|\beta\right|\ll 1\) we can write (II.3) as
\[\left|\Psi\right\rangle\rightarrow\left|\text{vac}\right\rangle+\frac{\beta}{ \sqrt{2}}\left|\Pi\right\rangle+\cdots,\] (III.13)
(cf. (I.5)), and we find
\[\begin{split}\overline{G}^{(1)}(t)\rightarrow\sum_{n}\left| \beta_{n}\right|^{2}\left|\overline{f}_{n}(t)\right|^{2}=\left|\beta\right|^{2 }\int\left|\overline{\gamma}(t,t^{\prime})\right|^{2}dt^{\prime}\\ \overline{G}^{(2)}(t_{1},t_{2})\rightarrow\left|\sum_{n}\beta_{n} \overline{f}_{n}(t_{1})\overline{f}_{n}(t_{2})\right|^{2}=\left|\beta\right|^{ 2}\left|\overline{\gamma}(t_{1},t_{2})\right|^{2},\end{split}\] (III.14)
where \(\beta_{n}=\beta\sqrt{p_{n}}\) (cf. (I.6)). Treating \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) as a normalized probability distribution, for \(N_{\text{pulse}}=|\beta|^{2}\ll 1\) we find \(\overline{G}^{(1)}(t)\) is the probability distribution reduced by integrating over the second time variable, and \(\overline{G}^{(2)}(t_{1},t_{2})\) is proportional to \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) itself, the norm squared of the joint temporal amplitude at the two corresponding times.
More generally, using the Schmidt decomposition (III.1) and the supermode operators (III.9), which are all independent, we can write the squeezed ket (II.3) as
\[\left|\Psi\right\rangle=\bigotimes_{n}S_{n}\left|\text{vac}\right\rangle_{n},\] (III.15)
where \(\left|\text{vac}\right\rangle_{n}\) is the vacuum state for the corresponding supermode and
\[S_{n}=e^{\frac{\beta_{n}}{2}A_{n}^{\dagger}A_{n}^{\dagger}-\text{h.c.}}.\] (III.16)
With the standard result [18]
\[S_{n}^{\dagger}A_{n}S_{n}=c_{n}A_{n}+e^{i\theta}s_{n}A_{n}^{\dagger},\] (III.17)
where we have put \(\beta=\left|\beta\right|e^{i\theta}\) and
\[c_{n} \equiv\cosh\left(\left|\beta_{n}\right|\right),\] (III.18) \[s_{n} \equiv\sinh\left(\left|\beta_{n}\right|\right),\] (III.19)
using the expression (III.10) to write \(\overline{a}(t)\) in terms of the \(\left\{\overline{f}_{n}(t)\right\}\) - and using (III.17) to evaluate \(\left\langle\Psi|A_{n}^{\dagger}A_{m}^{\dagger}A_{p}A_{q}|\Psi\right\rangle\) - from (II.4) we have
\[\overline{G}^{(1)}(t)=\sum_{n}\left|\overline{f}_{n}(t)\right|^{2}s _{n}^{2},\] (III.20) \[\overline{G}^{(2)}(t_{1},t_{2})=\overline{G}^{(2)}_{\text{coh}}(t_{ 1},t_{2})+\overline{G}^{(2)}_{\text{incoh}}(t_{1},t_{2}),\]
where
\[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})=\left|\sum_{n}s_{n}c_{n} \overline{f}_{n}(t_{2})\overline{f}_{n}(t_{1})\right|^{2}\] (III.21a) \[\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\] (III.21b) \[\qquad=\frac{1}{2}\sum_{n,m}\left|s_{n}s_{m}(\overline{f}_{n}(t_ {1})\overline{f}_{m}(t_{2})+\overline{f}_{n}(t_{2})\overline{f}_{m}(t_{1})) \right|^{2},\]
and the average photon number is
\[N_{\rm pulse}=\sum_{n}s_{n}^{2}.\] (III.22)
Of the two contributions to \(\overline{G}^{(2)}(t_{1},t_{2})\), the "coherent" term \(\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\)[19], which involves the square of a sum, is the generalization to a squeezed state of the term \(\overline{G}^{(2)}(t_{1},t_{2})\) for the two-photon state (III.12), and the only term that survives in \(\overline{G}^{(2)}(t_{1},t_{2})\) in the limit \(|\beta|\ll 1\), cf. (III.14). The "incoherent" term \(\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\)[19] involves the sum of squares, and will only be significant at larger values of \(|\beta|\). Note that the corresponding expressions for \(G^{(1)}(\omega)\) and \(G^{(2)}(\omega_{1},\omega_{2})\) take the same form as Eq. (III.20), with \(\overline{f}_{n}\) replaced by \(f_{n}\), \(t_{1}\) by \(\omega_{1}\), etc.
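As a numerical consistency check of Eqs. (III.20) and (III.22) - our own sketch, with an arbitrary illustrative set of Schmidt weights and harmonic-oscillator functions standing in for the supermodes - one can evaluate \(\overline{G}^{(1)}(t)\) and confirm that it integrates to \(\sum_{n}s_{n}^{2}\):

```python
# Sketch: G1(t) from Eq. (III.20) integrates to sum_n sinh^2|beta_n|.
import numpy as np

t = np.linspace(-12, 12, 4001)
dt = t[1] - t[0]

def hermite_functions(x, nmax):
    """Normalized oscillator eigenfunctions psi_0..psi_{nmax-1} via the
    stable three-term recurrence (avoids 2^n n! overflow)."""
    psi = np.empty((nmax, x.size))
    psi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
    if nmax > 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(2, nmax):
        psi[n] = (x * np.sqrt(2.0 / n) * psi[n - 1]
                  - np.sqrt((n - 1) / n) * psi[n - 2])
    return psi

nmax = 40
p = 0.8 * 0.2**np.arange(nmax)        # illustrative geometric weights
p /= p.sum()                          # sum_n p_n = 1
beta = 5.0
s2 = np.sinh(beta * np.sqrt(p))**2    # sinh^2 |beta_n|

psi = hermite_functions(t, nmax)
G1 = np.sum(s2[:, None] * psi**2, axis=0)   # Eq. (III.20)

N_pulse = np.sum(G1) * dt
print(N_pulse, s2.sum())              # agree: Eq. (III.22)
```

The agreement follows from the orthonormality of the modes; the quadrature grid only needs to resolve the highest retained mode.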
### Example 1: The double-Gaussian
A simple model for the joint amplitude is a double-Gaussian function,
\[\gamma(\omega_{1},\omega_{2})=\sqrt{\frac{1}{\pi\sigma_{p}\sigma_ {c}}}e^{-\frac{(\omega_{1}-\omega_{2})^{2}}{4\sigma_{c}^{2}}}e^{-\frac{( \omega_{1}+\omega_{2})^{2}}{4\sigma_{p}^{2}}},\] (III.23a) \[\overline{\gamma}(t_{1},t_{2})=\sqrt{\frac{\sigma_{p}\sigma_{c}} {\pi}}e^{-\frac{\sigma_{c}^{2}(t_{1}-t_{2})^{2}}{4}}e^{-\frac{\sigma_{p}^{2}( t_{1}+t_{2})^{2}}{4}},\] (III.23b)
where \(\sigma_{p}\) and \(\sigma_{c}\) are the two parameters. The Schmidt modes are harmonic oscillator wave functions. This can be seen by noting that the reduced density operator of "particle 1" is equal to the density operator of a harmonic oscillator in thermal equilibrium; its eigenfunctions are the Schmidt modes, and they are obviously the harmonic oscillator wave functions. The details can be worked out from this, or more mathematically from the Mehler kernal [20; 21]. For \(\sigma_{c}\geq\sigma_{p}\), the Schmidt decompositions are given by (III.1), with
\[\gamma(\omega_{1},\omega_{2})=\sum_{n\geq 0}\sqrt{p_{n}} \mathcal{H}_{n}(\omega_{1})\mathcal{H}_{n}(\omega_{2}),\] (III.24a) \[\overline{\gamma}(t_{1},t_{2})=\sum_{n\geq 0}\sqrt{p_{n}} \overline{\mathcal{H}}_{n}(t_{1})\overline{\mathcal{H}}_{n}(t_{2}),\] (III.24b)
and
\[p_{n}=\frac{4\sigma_{c}\sigma_{p}}{(\sigma_{c}+\sigma_{p})^{2}} \left(\frac{\sigma_{c}-\sigma_{p}}{\sigma_{c}+\sigma_{p}}\right)^{2n},\] (III.25)
with a Schmidt number
\[K=\frac{\sigma_{c}^{2}+\sigma_{p}^{2}}{2\sigma_{c}\sigma_{p}}.\] (III.26)
Here \(\overline{f}_{n}(t)=\overline{\mathcal{H}}_{n}(t)\), where
\[\overline{\mathcal{H}}_{n}(t)=\frac{H_{n}(\frac{t}{t_{0}})e^{-t^{2}/(2t_{0}^{2 })}}{\sqrt{2^{n}n!\pi^{1/2}t_{0}}},\] (III.27)
with \(H_{n}(x)\) the Hermite polynomials, is the standard coordinate-representation harmonic oscillator energy eigenfunction, but with \(t\) playing the role of \(x\) and
\[t_{0}\equiv\sqrt{\frac{1}{\sigma_{p}\sigma_{c}}}\] (III.28)
playing the role of a reference length \(x_{0}\) that is often introduced [22]; \(f_{n}(\omega)=\mathcal{H}_{n}(\omega)\), where
\[\mathcal{H}_{n}(\omega)=(-i)^{n}\sqrt{\frac{t_{0}}{2^{n}n!\pi^{1/2}}}H_{n}( \omega t_{0})e^{-\omega^{2}t_{0}^{2}/2}\] (III.29)
is the standard momentum-representation harmonic oscillator energy eigenfunction [22], with \(\omega\) playing the role of \(p\).
In Fig. 2 we plot the joint intensities \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) and \(\left|\gamma(\omega_{1},\omega_{2})\right|^{2}\) for \(\sigma_{c}/\sigma_{p}=50\), as well as the Schmidt weights \(p_{n}\) as a function of \(n\). For \(\sigma_{c}\gg\sigma_{p}\), the Schmidt number (III.26) is approximately given by \(K\approx\sigma_{c}/(2\sigma_{p})=25\), indeed we find numerically that \(K_{\rm DG}=25.01\). The joint intensities in Fig. 2 vary over the widths \(\mathcal{T}_{p}^{\rm DG}\) and \(\mathcal{B}_{c}^{\rm DG}\) which we set to be
\[\mathcal{T}_{p}^{\rm DG}=\frac{a}{\sqrt{2}\sigma_{p}},\hskip 14.226378pt \mathcal{B}_{c}^{\rm DG}=\frac{a}{2\pi}\frac{\sigma_{c}}{\sqrt{2}},\hskip 14.226378pt \mathcal{T}_{c}^{\rm DG}=\frac{2\pi\sqrt{2}}{a\sigma_{c}},\] (III.30)
and choose \(a=2\sqrt{2\pi}\) for convenience. This choice of \(a\) is large enough that the joint temporal and spectral amplitudes can essentially be taken to be confined within the ranges of \(\mathcal{T}_{p}^{\rm DG}\) and \(\mathcal{B}_{c}^{\rm DG}\) respectively, as can be gleaned from Fig. 2 and will in fact be confirmed by our calculations in later sections. Then the effective Schmidt number is
\[\mathcal{K}_{\rm DG}=\frac{a^{2}}{2\pi}\frac{\sigma_{c}}{2\sigma_{p}}\approx 4K_{ \rm DG}=100.\] (III.31)
To illustrate the behaviour of the photon statistics for a large range of photon numbers, we consider the three values of \(\beta=0.1,5\), and \(10\), corresponding to \(|\beta|/\sqrt{\mathcal{K}_{\rm DG}}=0.01,0.5,1\), and so ranging from weak to moderate squeezing; we dedicate section VIII to the discussion of the strongly squeezed limit.
In Fig. 3 we plot \(\overline{G}^{(1)}(t)\) for the three values of \(\beta\). The expectation value of the number of photons is determined by (III.22), and for \(\beta=0.1\), \(5\), and \(10\) we have respectively \(N_{\rm pulse}\approx 0.01,35\), and \(383\); we take an effective photon flux
(photons per unit time) to be given by \(\Phi=N_{\rm pulse}/\mathcal{T}_{p}^{\rm DG}\). We also plot the contribution to each \(\overline{G}^{(1)}(t)\) from a number of the Schmidt modes (see (III.20)). For each value of \(\beta\), the photon density at any particular time \(t\) involves contributions from many Schmidt modes, and we cannot associate it with one or even a few Schmidt modes. Clearly as \(\beta\) increases the contribution from the \(n=0\) Schmidt mode increases and the shape of the photon density narrows. This occurs because the scaling of each contribution with \(|\beta_{n}|\) is nonlinear and depends on the quantities \(c_{n}\) and \(s_{n}\) (III.18); since for the double-Gaussian the Schmidt amplitudes \(p_{n}\) decrease as \(n\) increases, \(|\beta_{0}|\) has the largest contribution. This behavior suggests that in the strongly squeezed limit the photon statistics can be well approximated by the first few Schmidt modes, a point to which we return in section VIII.
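The quoted photon numbers follow from summing the per-mode expectations. A small Python check, assuming (as in the degenerate case (IV.21)) that mode \(n\) carries squeezing parameter \(|\beta|\sqrt{p_{n}}\), reproduces \(N_{\rm pulse}\approx 0.01\), \(35\), and \(383\):

```python
import numpy as np

# Assumption: each Schmidt mode n is squeezed by |beta|*sqrt(p_n), so that
# N_pulse = sum_n sinh^2(|beta|*sqrt(p_n)); p_n are the weights (III.25)
# of the double-Gaussian with sigma_c/sigma_p = 50.
sp, sc = 1.0, 50.0
r = (sc - sp) / (sc + sp)
n = np.arange(4000)
p = 4 * sc * sp / (sc + sp) ** 2 * r ** (2 * n)

def n_pulse(beta):
    return float(np.sum(np.sinh(beta * np.sqrt(p)) ** 2))

print([n_pulse(b) for b in (0.1, 5.0, 10.0)])
```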
In Fig. 4 we turn to \(\overline{G}^{(2)}(t_{1},t_{2})\) with again the three values of \(\beta\) considered above. In the top row we show this function for the different values of \(\beta\); at low \(\beta\) the result is proportional to the square of the absolute value of the joint temporal amplitude (see Fig. 2, Eq. (III.14)), while for larger \(\beta\) there are significant corrections to this. In the bottom panel we plot \(\overline{G}^{(2)}(t/2,-t/2)\), which corresponds to moving along a diagonal that runs from the upper-left to the lower-right of the plots in the first row; we give the coherent and incoherent contributions separately.
The situation here is of course more complicated than that for \(\overline{G}^{(1)}(t)\), because the expression (III.20) for \(\overline{G}^{(2)}(t_{1},t_{2})\) is more complicated than a simple sum over contributions from the individual Schmidt modes. But we see that at least in the weak squeezing regime the coherent contribution to \(\overline{G}^{(2)}(t/2,-t/2)\) dominates, and it is nonzero only over a range of \(t\) much less than the range of \(t\) over which the individual Schmidt modes are nonzero; clearly the interference terms between the different Schmidt modes in (III.20) play a critical role in the
Figure 3: For the double-Gaussian, from left to right we plot \(\overline{G}^{(1)}(t)/\Phi\) and a few contributions from different Schmidt modes in Eq. (III.20) with the horizontal axis normalized by \(\mathcal{T}_{p}^{\rm DG}\) for \(\beta=0.1,5,\) and \(10\).
Figure 2: For the double-Gaussian, from left to right we plot the: joint temporal intensity divided by its maximum value with the axes normalized by \(\mathcal{T}_{p}^{\rm DG}\); joint spectral intensity divided by its maximum value with the axes normalized by \(2\pi\mathcal{B}_{c}^{\rm DG}\); Schmidt amplitudes \(p_{n}\) up to \(n=34\).
result for \(\overline{G}^{(2)}(t_{1},t_{2})\).
Further, the incoherent contribution at \(\beta=5\) has a structure that consists partly of a broad background and partly of a contribution that mirrors the coherent contribution. Now note that the expression (III.21) for \(\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\) can also be very generally written as
\[\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})=\overline{G}^{(1)}(t_{1})\overline{ G}^{(1)}(t_{2})+|\overline{G}^{(1)}(t_{1},t_{2})|^{2},\] (III.32)
and the broad background in \(\overline{G}^{(2)}_{\rm incoh}(t/2,-t/2)\) can be understood as arising from the first term on the right-hand-side. The contribution that mirrors the coherent contribution can be understood as arising from the second term, and in fact it is absent if we consider nondegenerate squeezed light, where signal and idler frequencies are well separated with different center frequencies [23]. The corresponding contribution to \(G^{(2)}_{\rm incoh}(\omega_{1},\omega_{2})\) is often discussed and referred to as the "autocorrelation" [24]. In any case, the different features of \(\overline{G}^{(2)}(t/2,-t/2)\) clearly do not follow in any simple way from the features of individual Schmidt modes, but arise from the interference of many of these modes.
As \(\beta\) increases, the range of \(\overline{G}^{(2)}(t,t)\) narrows, as does the range of the photon density. However, the coherent and incoherent contributions along \(\overline{G}^{(2)}(t/2,-t/2)\) broaden. Thus as the squeezing parameter is increased, the photon statistics begin to look uncorrelated. This behavior matches that of the photon density, in that as \(\beta\) increases fewer Schmidt modes are important in calculating the correlation functions. We also note that the incoherent contribution is approximately twice the coherent contribution, a point to which we return in section VIII.
While there are many features of interest here, we emphasize that both the value of \(\overline{G}^{(1)}(t)\) at a particular \(t\), and that of \(\overline{G}^{(2)}(t_{1},t_{2})\) at a particular \(t_{1}\) and \(t_{2}\), receive contributions from _many_ of the Schmidt modes. The same holds for the corresponding functions \(G^{(1)}(\omega)\) and \(G^{(2)}(\omega_{1},\omega_{2})\). The structure of the Schmidt modes themselves, in and of itself, does not help us understand the structure of the correlation functions.
### Example 2: The sinc-hat
This difference between the features of the Schmidt modes and the features of \(\overline{G}^{(1)}(t)\) and \(\overline{G}^{(2)}(t_{1},t_{2})\) is not specific to the double-Gaussian joint amplitude. Consider another form,

\[\gamma(\omega_{1},\omega_{2})=\alpha(\omega_{1}+\omega_{2})\phi\left(\frac{\omega_{1}-\omega_{2}}{2}\right),\] (III.33)

with the corresponding joint temporal amplitude

\[\overline{\gamma}(t_{1},t_{2})=\overline{\alpha}\left(\frac{t_{1}+t_{2}}{2}\right)\overline{\phi}(t_{1}-t_{2}),\] (III.34)

where
\[\alpha(\omega)=\frac{1}{\sqrt{\Omega_{p}}}\text{sinc}\left(\frac{\pi\omega}{\Omega_{p}}\right),\] (III.35) \[\phi(\omega)=\frac{1}{\sqrt{\Omega_{c}}}\text{ for }-\frac{\Omega_{c}}{2}\leq\omega\leq\frac{\Omega_{c}}{2},\] (III.36) \[=0,\text{ otherwise},\] \[\overline{\alpha}(t)=\frac{1}{\sqrt{T_{p}}}\text{ for }-\frac{T_{p}}{2}\leq t\leq\frac{T_{p}}{2},\] (III.37) \[=0,\text{ otherwise},\] \[\overline{\phi}(t)=\frac{1}{\sqrt{T_{c}}}\text{sinc}\left(\frac{\pi t}{T_{c}}\right)\] (III.38)
with \(T_{p}=2\pi/\Omega_{p}\) and \(T_{c}=2\pi/\Omega_{c}\), and as usual \(\alpha(\omega)\) and \(\overline{\alpha}(t)\), and \(\phi(\omega)\) and \(\overline{\phi}(t)\), are Fourier transform pairs (cf. (III.2)). Since both \(\gamma(\omega_{1},\omega_{2})\) and \(\overline{\gamma}(t_{1},t_{2})\) are products of a "sinc" function and "top-hat" function we refer to this example as the "sinc-hat" joint amplitude.
Figure 5: For the sinc-hat, from left to right we plot the: joint temporal intensity divided by its maximum value with the axes normalized by \(\mathcal{T}_{p}^{\text{SH}}\); joint spectral intensity divided by its maximum value with the axes normalized by \(2\pi\mathcal{B}_{c}^{\text{SH}}\); Schmidt amplitudes \(p_{n}\) up to \(n=34\).
For the sinc-hat joint amplitude, the Schmidt modes must be found numerically. In Fig. 5 we plot the joint intensities \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) and \(|\gamma(\omega_{1},\omega_{2})|^{2}\), as well as the Schmidt weights \(p_{n}\) as a function of \(n\), for \(T_{p}/T_{c}=24\). We will see below that for large \(T_{p}/T_{c}\) we have \(K\approx T_{p}/T_{c}\), and indeed here we numerically find \(K_{\rm SH}=24.78\).
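The quoted value \(K_{\rm SH}=24.78\) can be reproduced by discretizing the joint temporal amplitude and computing its singular values; a rough Python sketch (the grid spacing and padding are illustrative choices, not taken from the text):

```python
import numpy as np

# Discretize the sinc-hat joint temporal amplitude (III.34) and estimate the
# Schmidt number K = 1/sum p_n^2 from the singular values of the grid matrix.
Tc, Tp = 1.0, 24.0
dt = 0.05
t = np.arange(-Tp / 2 - 6 * Tc, Tp / 2 + 6 * Tc, dt)  # pad for the sinc tails

def alpha_bar(x):  # top-hat pump envelope of duration Tp, Eq. (III.37)
    return np.where(np.abs(x) <= Tp / 2, 1.0 / np.sqrt(Tp), 0.0)

def phi_bar(x):    # sinc of width ~Tc, Eq. (III.38); np.sinc(u) = sin(pi u)/(pi u)
    return np.sinc(x / Tc) / np.sqrt(Tc)

T1, T2 = np.meshgrid(t, t, indexing="ij")
gamma = alpha_bar((T1 + T2) / 2) * phi_bar(T1 - T2)

s = np.linalg.svd(gamma * dt, compute_uv=False)  # ~ sqrt(p_n) on the grid
p = s ** 2
p /= p.sum()          # renormalize away the small truncation loss
K = 1.0 / np.sum(p ** 2)
print(K)              # should land near the quoted K_SH = 24.78
```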
When \(\overline{\gamma}(t_{1},t_{2})\) is evaluated along the line \(t_{1}=t_{2}\), those variables range from \(-T_{p}/2\) to \(T_{p}/2\); similarly, along \(\omega_{1}=-\omega_{2}\), the arguments of \(\gamma(\omega_{1},\omega_{2})\) range from \(-\Omega_{c}/2\) to \(\Omega_{c}/2\). Naively one would guess that we should set \(\mathcal{T}_{p}^{\rm SH}\to T_{p}\) and \(2\pi\mathcal{B}_{c}^{\rm SH}\to\Omega_{c}\), or equivalently \(\mathcal{T}_{c}^{\rm SH}\to T_{c}\). However, this is only the range of \(t\) along the _diagonal_ (or anti-diagonal in frequency), and the joint amplitude extends beyond it. In Appendix B we show that
\[\mathcal{T}_{p}^{\rm SH}=T_{p}+\frac{T_{c}}{2},\] (III.39)
and
\[\mathcal{B}_{c}^{\rm SH}=\frac{\Omega_{c}}{2\pi}+\frac{1}{2}\frac{\Omega_{p}} {2\pi},\ \ \ \ \mathcal{T}_{c}^{\rm SH}=\frac{T_{c}T_{p}}{T_{p}+T_{c}/2}.\] (III.40)
This leads to an effective Schmidt number
\[\mathcal{K}_{\rm SH}=\frac{(T_{p}+\frac{T_{c}}{2})^{2}}{T_{p}T_{c}}=1+\frac{T_ {p}}{T_{c}}+\frac{T_{c}}{4T_{p}},\] (III.41)
and for \(T_{p}/T_{c}=24\), \(\mathcal{K}_{\rm SH}\approx 25\) to very good approximation. In section V we discuss this near equality.
In Fig. 6 we plot \(\overline{G}^{(1)}(t)\) for the same three values of \(\beta\) used in the example above, corresponding here to photon numbers \(N_{\rm pulse}\approx 0.01,35\) and \(335\); we take an effective photon flux to be given by \(\Phi=N_{\rm pulse}/\mathcal{T}_{p}^{\rm SH}\). Then for the three values of \(\beta\) we have \(|\beta|/\sqrt{\mathcal{K}_{\rm SH}}\approx 0.02,1,2\), which again corresponds to weak to moderate squeezing. In Fig. 6 we also include a few of the contributions from the Schmidt modes; we see that typically those contributions range over the whole duration of the pulse, analogous to what we saw for the double-Gaussian example. In the top row of Fig. 7 we plot \(\overline{G}^{(2)}(t_{1},t_{2})\) for the indicated values of \(|\beta|\), and in the bottom row the coherent and incoherent contributions to \(\overline{G}^{(2)}(t/2,-t/2).\) Again, the range over which the individual Schmidt modes extend is much larger than that of these contributions, and so the contributions must be understood as arising from a number of interfering Schmidt modes.
So just as for squeezed states described by the double-Gaussian joint amplitude, the behavior of correlation functions of squeezed states described by the sinc-hat joint amplitude cannot be linked in a simple way to the behavior of the individual Schmidt modes. Quantitatively there are differences between the correlation functions resulting from those two joint amplitudes: The relative amplitudes of the Schmidt modes of a given \(n\) in Fig. 6 (sinc-hat joint amplitude) are roughly independent of \(|\beta|\), while the relative amplitudes of those in Fig. 3 (double-Gaussian) are not, and the two parts of the structure of \(\overline{G}^{(2)}_{\rm incoh}(t/2,-t/2)\) we noticed for the double-Gaussian joint amplitude are even more pronounced, and persist to larger \(\beta\) than they did for that amplitude. These differences arise because the Schmidt weights of the sinc-hat joint amplitude are nearly identical in the \(T_{p}/T_{c}\gg 1\) limit (see Fig. 5), and thus each Schmidt mode contributes roughly equally to the resulting correlation functions even in the large \(\beta\) limit.
To see how this near-degeneracy in the Schmidt weights arises, note that in general the Schmidt modes \(\overline{f}_{n}(t)\) of a joint temporal amplitude \(\overline{\gamma}(t_{1},t_{2})\) are eigenfunctions of the operator
\[M(t_{1},t_{2})\equiv\int\overline{\gamma}(t_{1},t)\overline{\gamma}^{*}(t,t_{ 2})dt,\] (III.42)
with eigenvalue \(p_{n}\),
\[\int M(t_{1},t_{2})\overline{f}_{n}(t_{2})dt_{2}=p_{n}\overline{f}_{n}(t_{1}),\] (III.43)
which follows immediately from constructing \(M(t_{1},t_{2})\) using (III.1). Now for \(T_{p}/T_{c}\gg 1\) we can approximate the joint temporal amplitude (III.34) as
\[\overline{\gamma}(t_{1},t_{2}) =\overline{\alpha}\left(\frac{t_{1}+t_{2}}{2}\right)\overline{ \phi}(t_{1}-t_{2})\] (III.44) \[\approx\overline{\alpha}(t_{1})\overline{\phi}(t_{1}-t_{2}),\] (III.45)
and so
\[M(t_{1},t_{2}) \approx\int\overline{\alpha}(t_{1})\overline{\alpha}(t_{2}) \overline{\phi}(t_{1}-t)\overline{\phi}(t-t_{2})dt\] (III.46) \[=\sqrt{T_{c}}\overline{\alpha}(t_{1})\overline{\alpha}(t_{2}) \overline{\phi}(t_{1}-t_{2})\] \[=\frac{T_{c}}{T_{p}}\frac{\sin\left(\frac{\Omega_{c}}{2}(t_{1}-t_{ 2})\right)}{\pi(t_{1}-t_{2})},\ {\rm for}\ -\frac{T_{p}}{2}\leq t_{1},t_{2}\leq\frac{T_{p}}{2},\] \[=0,\ {\rm otherwise}.\]
For specified \(\Omega_{c}T_{p}/4\), the functions \(\overline{\psi}_{n}(t^{\prime})\) satisfying the eigenvalue equation
\[\int\limits_{-T_{p}/2}^{T_{p}/2}\frac{\sin\left(\frac{\Omega_{c}}{2}(t-t^{\prime})\right)}{\pi(t-t^{\prime})}\overline{\psi}_{n}(t^{\prime})dt^{\prime}=\lambda_{n}\overline{\psi}_{n}(t),\] (III.47)
with the label \(n=0,1,\ldots\), are related to the angular prolate spheroidal functions [12; 13; 14; 25; 26], and are defined with the normalization
\[\int\limits_{-\infty}^{\infty}\left|\overline{\psi}_{n}(t)\right|^{2}dt=1.\] (III.48)
The \(\lambda_{n}\) are close to unity for small \(n\), and fall off quickly to zero for \(n>T_{p}/T_{c}\equiv K_{\rm app}\); we approximate them as equal to unity for \(n<K_{\rm app}\) and zero for \(n\geq K_{\rm app}\). So within the approximation (III.46) we have
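The stated behavior of the \(\lambda_{n}\) can be confirmed by discretizing the eigenvalue problem (III.47) directly; a sketch, with the grid resolution an illustrative choice:

```python
import numpy as np

# Nystrom discretization of the eigenvalue problem (III.47) for the sinc
# kernel on [-Tp/2, Tp/2], using a midpoint-rule grid.
Tc, Tp = 1.0, 24.0
N = 600
dt = Tp / N
t = -Tp / 2 + (np.arange(N) + 0.5) * dt
T1, T2 = np.meshgrid(t, t, indexing="ij")
# sin(Omega_c (t-t')/2)/(pi (t-t')) = sinc((t-t')/Tc)/Tc for Omega_c = 2 pi/Tc
kernel = np.sinc((T1 - T2) / Tc) / Tc
lam = np.linalg.eigvalsh(kernel * dt)[::-1]      # eigenvalues, descending

n_large = int(np.sum(lam > 0.5))
print(lam[0], n_large)   # lam_0 near 1; about Tp/Tc = 24 eigenvalues above 1/2
```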
\[\int M(t_{1},t_{2})\overline{\psi}_{n}(t_{2})dt_{2}\to\frac{T_{c}}{T_{p}}\lambda _{n}\overline{\psi}_{n}(t_{1}),\] (III.49)
and comparing with (III.43) we can identify
\[\overline{f}_{n}(t)\to\overline{\psi}_{n}(t),\] (III.50) \[\sqrt{p_{n}}\to\sqrt{\frac{T_{c}}{T_{p}}}=\frac{1}{\sqrt{K_{\rm app}}},\]
for \(n<K_{\rm app}\). That is, the Schmidt modes are approximately given by the angular prolate spheroidal functions, and so
\[\overline{\gamma}(t_{1},t_{2})\to\sum_{n=0}^{K_{\rm app}-1}\sqrt{\frac{T_{c}}{ T_{p}}}\overline{\psi}_{n}(t_{1})\overline{\psi}_{n}(t_{2}),\] (III.51)
exhibiting a huge degeneracy of Schmidt mode amplitudes, with an approximate Schmidt number (III.3) \(K_{\rm app}=T_{p}/T_{c}\approx K_{\rm SH}\), as expected.
While (III.50,III.51) are only approximate (cf. Fig. 5), they do indicate that in the limit \(T_{p}/T_{c}\gg 1\), which for the sinc-hat function we can characterize as the "long pulse" limit, a large near-degeneracy of Schmidt mode amplitudes can be expected. Like the exact Schmidt modes, the angular prolate spheroidal functions range over the whole duration \(T_{p}\) associated with the joint temporal amplitude. However, were the degeneracy exact it would imply that the Schmidt modes are not unique and that various superpositions of them could be constructed. While this freedom is only approximate for near degeneracy, we will see below that we can use it to "take apart" the joint amplitude in a different way by constructing approximate Schmidt modes that more explicitly reflect the properties of the light.
## IV An approximate Schmidt decomposition
Focusing on the sinc-hat model (III.33,III.35) in the limit where \(T_{p}\gg T_{c}\) and \(\mathcal{T}_{p}^{\rm SH}\to T_{p}\) and \(\mathcal{T}_{c}^{\rm SH}\to T_{c}\), note that while in the second row of Fig. 7 we have plotted \(\overline{G}^{(2)}(\overline{t}+t/2,\overline{t}-t/2)\) for \(\overline{t}=0\), we would expect such plots to be similar for values of \(\overline{t}\equiv(t_{1}+t_{2})/2\) ranging over the pulse duration, especially in the limit of small \(|\beta|\). In that limit \(\overline{G}^{(2)}(t_{1},t_{2})\) reflects the behavior of the joint temporal amplitude itself (see (III.14)), and generally \(\overline{G}^{(2)}(\overline{t}+t/2,\overline{t}-t/2)\) will be nonzero for \(t\) ranging on a time scale of the order of \(T_{c}\); in our example of Fig. 7 that is \(T_{p}/24\). This suggests that if we want to capture the behavior of the joint temporal amplitude as a function of \((t_{1}-t_{2})\) in each of the terms of an approximate Schmidt decomposition, rather than just when they are all used together, we should look for approximate Schmidt modes that vary over a range of \(T_{c}\). Since, roughly speaking, frequency components between \(-\Omega_{c}/2\) and \(\Omega_{c}/2\) are then available, one such function can easily be constructed by
taking
\[\begin{split}\overline{\eta}(t)&\equiv\frac{1}{\sqrt{ \Omega_{c}}}\int\limits_{-\Omega_{c}/2}^{\Omega_{c}/2}\frac{d\omega}{\sqrt{2 \pi}}e^{-i\omega t}\\ &=\frac{1}{\sqrt{T_{c}}}\text{sinc}\left(\frac{\pi t}{T_{c}} \right),\end{split}\] (IV.1)
where the prefactor is chosen so the function is normalized (see (IV.3) below). However, we need a set of such functions that are orthonormal to serve as approximate Schmidt functions; the way to do that is to take the set of functions
\[\overline{\eta}_{n}(t)=\overline{\eta}(t-nT_{c}),\] (IV.2)
where \(n\) is an integer, for then we have
\[\int\overline{\eta}_{n}^{*}(t)\overline{\eta}_{m}(t)dt=\delta_{nm}.\] (IV.3)
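The orthonormality (IV.3) of the shifted sinc functions is easy to verify numerically; a short sketch (the integration window and step are illustrative choices):

```python
import numpy as np

# Shifted sinc functions eta_n(t) = sinc(pi(t - n Tc)/Tc)/sqrt(Tc) are
# orthonormal, Eq. (IV.3); check a few inner products on a truncated grid.
Tc = 1.0
dt = 0.02
t = np.arange(-400.0, 400.0, dt)

def eta(n):
    return np.sinc((t - n * Tc) / Tc) / np.sqrt(Tc)

def overlap(n, m):
    return float(np.sum(eta(n) * eta(m)) * dt)

print(overlap(0, 0), overlap(0, 1), overlap(0, 5))
```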
Note that were time variables replaced by position variables, then the \(\{\overline{\eta}_{n}(t)\}\) would correspond to a set of Wannier functions of the lowest band in a one-dimensional crystal of lattice spacing corresponding to \(T_{c}\), when the potential of the lattice is neglected ("empty-lattice approximation") [27]. We can then seek an approximate expression \(\overline{\gamma}_{\text{app}}(t_{1},t_{2})\) for the sinc-hat joint temporal amplitude \(\overline{\gamma}(t_{1},t_{2})\) of (III.33) by writing
\[\overline{\gamma}_{\text{app}}(t_{1},t_{2})=T_{c}\sum_{n}u(nT_{c})\overline{ \eta}_{n}(t_{1})\overline{\eta}_{n}(t_{2}),\] (IV.4)
which clearly takes the form of a Schmidt decomposition, with \(u(nT_{c})\) playing the role of an "envelope function"; \(\overline{\gamma}_{\text{app}}(t_{1},t_{2})\) is normalized in the same way as the exact function (III.33) as long as
\[T_{c}^{2}\sum_{n}|u(nT_{c})|^{2}=1.\] (IV.5)
The introduction of the functions \(\overline{\eta}_{n}(t)\) allows us to work with "pseudo-Schmidt" modes that are mutually orthogonal (like the real Schmidt modes), but are localized in time and range over different center times. We refer to the approximate Schmidt decomposition (IV.4) we construct as the "pseudo-Schmidt decomposition."
Taking the Fourier transform of the approximate joint temporal amplitude (IV.4) gives the approximate joint spectral amplitude
\[\gamma_{\text{app}}(\omega_{1},\omega_{2}) \equiv\int\frac{dt_{1}dt_{2}}{2\pi}\overline{\gamma}_{\text{app }}(t_{1},t_{2})e^{i\omega_{1}t_{1}}e^{i\omega_{2}t_{2}}\] (IV.6) \[=\frac{T_{c}^{2}}{2\pi}\widehat{u}(\omega_{1}+\omega_{2})s( \omega_{1})s\left(\omega_{2}\right),\]
where
\[\widehat{u}(\omega)\equiv\sum_{n}u(nT_{c})e^{in\omega T_{c}}\] (IV.7)
and
\[s(\omega) \equiv\frac{1}{\sqrt{T_{c}}}\int_{-\infty}^{\infty}\overline{ \eta}(t)e^{i\omega t}dt\] (IV.8) \[=1,\text{ for }-\frac{\Omega_{c}}{2}<\omega<\frac{\Omega_{c}}{2},\] \[=0,\text{ otherwise.}\]
We need to set \(\widehat{u}(\omega)\), which satisfies \(\widehat{u}(\omega+2\pi m/T_{c})=\widehat{u}(\omega)\) for any integer \(m\). To ensure that \(\overline{\gamma}_{\text{app}}(t_{1},t_{2})\) is a good approximation to \(\overline{\gamma}(t_{1},t_{2})\) for the sinc-hat model, we will want \(u(nT_{c})\) to be independent of \(n\) over an appropriate range. Choosing \(\mathcal{N}\) to be a large odd integer, we put
\[\begin{split} u(nT_{c})&=\frac{1}{T_{c}\sqrt{ \mathcal{N}}},\text{ for }-\left(\frac{\mathcal{N}-1}{2}\right)\leq n\leq\left(\frac{\mathcal{N}-1}{2} \right),\\ &=0,\text{ otherwise.}\end{split}\] (IV.9)
Then from (IV.7) this leads to
\[\widehat{u}(\omega)=\frac{1}{T_{c}\sqrt{\mathcal{N}}}\frac{\sin\left(\frac{ \mathcal{N}\omega T_{c}}{2}\right)}{\sin\left(\frac{\omega T_{c}}{2}\right)}.\] (IV.10)
Now for the sinc-hat model we want the joint temporal amplitude \(\overline{\gamma}(t_{1},t_{2})\) to be nonvanishing for \(t_{1},t_{2}\) varying from \(-T_{p}/2\) to \(T_{p}/2\). From the approximate form \(\overline{\gamma}_{\text{app}}(t_{1},t_{2})\) that we are trying to construct (IV.4), this implies that we should set
\[T_{c}\left(\frac{\mathcal{N}-1}{2}\right)=\frac{T_{p}}{2},\] (IV.11)
Figure 8: Plot of the approximate joint spectral intensity divided by its maximum value with the axes normalized by \(2\pi\mathcal{B}_{c}^{\text{SH}}\). The “false” contributions highlighted by the red dashed circles are due to the periodicity of the function \(\widehat{u}(\omega)\) with period \(2\pi/T_{c}\); see the discussion in the paragraph above Eq. (IV.14).
or
\[\mathcal{N}=1+\frac{T_{p}}{T_{c}}=\mathcal{K}_{\text{SH}},\] (IV.12)
where the last equality holds to very good approximation when \(T_{p}\gg T_{c}\). For \(T_{p}\gg T_{c}\), which is the limit we consider here, we can take \(\mathcal{N}\) to be either this or an odd integer close to it. The motivation for calling \(\mathcal{K}\) the "effective Schmidt number" is now clear, at least for this joint amplitude; \(\mathcal{K}_{\text{SH}}\) is the effective number of pseudo-Schmidt modes required for the decomposition. We can then write (IV.10) as
\[\widehat{u}(\omega)=\frac{1}{\sqrt{T_{c}T_{p}}}\frac{\sin\left(\frac{\omega}{2}(T_{p}+T_{c})\right)}{\sin\left(\frac{\omega}{2}T_{c}\right)}.\] (IV.13)
Note that with this choice we find \(\gamma_{\text{app}}(0,0)=\sqrt{T_{c}(T_{p}+T_{c})}/(2\pi)\), while for the sinc-hat model we have exactly \(\gamma(0,0)=\sqrt{T_{c}T_{p}}/(2\pi)\); so for \(T_{p}/T_{c}\gg 1\), \(\gamma_{\text{app}}(0,0)\) agrees with \(\gamma(0,0)\) to good approximation.
In Fig. 8 we plot \(\left|\gamma_{\text{app}}(\omega_{1},\omega_{2})\right|^{2}\) for \(T_{p}/T_{c}=24\), taking \(\mathcal{N}=\mathcal{K}_{\text{SH}}=25\). Comparing to the exact plot (middle plot in Fig. 5) we see that there is indeed generally good agreement, with two main differences: First, \(\gamma_{\text{app}}(\omega_{1},\omega_{2})\) is only nonzero for \((\omega_{1},\omega_{2})\) satisfying the bandwidth limiting conditions \(-\Omega_{c}/2\leq\omega_{1}\leq\Omega_{c}/2\) and \(-\Omega_{c}/2\leq\omega_{2}\leq\Omega_{c}/2\), while \(\gamma(\omega_{1},\omega_{2})\) extends beyond that; this is not so apparent in Fig. 5, because outside the bandwidth limiting conditions the true \(\gamma(\omega_{1},\omega_{2})\) is very small. Second, \(\gamma_{\text{app}}(\omega_{1},\omega_{2})\) contains "false contributions" near \((\omega_{1},\omega_{2})=(\Omega_{c}/2,\Omega_{c}/2)\) and \((-\Omega_{c}/2,-\Omega_{c}/2)\), see the contributions highlighted by the red dashed circles in Fig. 8. This is related to the first difference, and arises because the postulated approximate form (IV.4) of \(\overline{\gamma}_{\text{app}}(t_{1},t_{2})\) involves the function \(\widehat{u}(\omega)\) that is periodic in \(\omega\) with a period \(2\pi/T_{c}\).
We now look at the correlation functions that follow for a squeezed state with the approximate joint temporal amplitude identified here,
\[\overline{\gamma}_{\text{app}}(t_{1},t_{2})=\sum_{n=-\frac{\mathcal{N}-1}{2}} ^{\frac{\mathcal{N}-1}{2}}\sqrt{p_{n}}\overline{\eta}_{n}(t_{1})\overline{ \eta}_{n}(t_{2}),\] (IV.14)
where
\[p_{n} =\frac{1}{\mathcal{N}},\text{ for }-\left(\frac{\mathcal{N}-1}{2} \right)\leq n\leq\left(\frac{\mathcal{N}-1}{2}\right)\] \[=0\text{ otherwise}.\] (IV.15)
From this point forward we write sums over \(n\) with the bounds left implicit. We plot \(|\overline{\gamma}_{\text{app}}(t_{1},t_{2})|^{2}\) for the sinc-hat model with \(T_{p}/T_{c}=24\) in the middle diagram of Fig. 9, repeating the exact \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) for this model in the left-most diagram; we can see the general level of agreement that might be expected from the results shown in Fig. 5 and Fig. 8. In the right-most plot we show \(\overline{\eta}_{n}(t_{1})\overline{\eta}_{n}(t_{2})\) for \(n=0\) and \(n=5\); all such functions are of course well-localized, and from (IV.14) we see that the contribution to \(\overline{\gamma}_{\text{app}}(t_{1},t_{2})\) from each of these is their product with the (pseudo-) Schmidt weight \(p_{n}\).
Using the general expressions (III.20,III.21) for \(\overline{G}^{(1)}(t)\) and \(\overline{G}^{(2)}(t_{1},t_{2})\), we find that for our approximate model (IV.14) we have
\[\overline{G}^{(1)}(t)=s^{2}\sum_{n}\left|\overline{\eta}_{n}(t) \right|^{2},\] (IV.16) \[\overline{G}^{(2)}(t_{1},t_{2})=\overline{G}^{(2)}_{\text{coh}}(t _{1},t_{2})+\overline{G}^{(2)}_{\text{incoh}}(t_{1},t_{2}).\]
Figure 9: From left to right we plot the: sinc-hat joint temporal intensity divided by its maximum value; approximate joint temporal intensity divided by its maximum value; two contributions to the approximate joint temporal intensity divided by its maximum value when \(n=0\) and \(n=5\) with the axes all normalized by \(\mathcal{T}^{\text{SH}}_{p}\).
where
\[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2}) = s^{2}c^{2}\left|\sum_{p}\overline{\eta}_{p}(t_{2})\overline{\eta}_ {p}(t_{1})\right|^{2},\] (IV.17) \[\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2}) = \frac{1}{2}s^{4}\sum_{n,m}\left|\overline{\eta}_{n}(t_{1})\overline {\eta}_{m}(t_{2})+\overline{\eta}_{n}(t_{2})\overline{\eta}_{m}(t_{1})\right|^ {2}.\]
with
\[s=\sinh\left(\frac{|\beta|}{\sqrt{\mathcal{N}}}\right),\ \ \ \ c=\cosh\left(\frac{|\beta|}{\sqrt{ \mathcal{N}}}\right).\] (IV.18)
These are of course the same formulas as for the exact Schmidt modes, except that here \(s\) and \(c\) are the same for each pseudo-Schmidt mode. But since the pseudo-Schmidt modes are localized over a time of order \(T_{c}\), we see that this decomposition of \(\overline{G}^{(1)}(t)\) identifies contributions from each pseudo-Schmidt mode that are localized in time windows much less than the width of \(\overline{G}^{(1)}(t)\), which is on the order of \(T_{p}\). We show some of these contributions in Fig. 10. Here the range over which \(n\) varies indicates the overall range of \(\overline{G}^{(1)}(t)\) to very good approximation, and at any given time \(t\) the contributions to \(\overline{G}^{(1)}(t)\) come from at most a very few of the functions \(\overline{\eta}_{n}(t)\) with \(nT_{c}\) close to \(t\).
At least within the approximate pseudo-Schmidt decomposition we can now justify the use of the condition (III.6) to identify the weak squeezing regime. Since in our example here \(\mathcal{N}=\mathcal{K}_{\rm SH}\), the weak squeezing limit can be written as
\[\frac{|\beta|}{\sqrt{\mathcal{N}}}\ll 1;\] (IV.19)
the expectation value of the number of photons in any pseudo-Schmidt mode is then given by
\[N_{\rm mode}=s^{2}=\sinh^{2}\left(\frac{|\beta|}{\sqrt{\mathcal{N}}}\right),\] (IV.20)
while the expectation value of the number of photons in the full pulse is
\[N_{\rm pulse}=\mathcal{N}\sinh^{2}\left(\frac{|\beta|}{\sqrt{\mathcal{N}}} \right).\] (IV.21)
So in the weak squeezing limit (IV.19) we have \(N_{\rm mode}\ll 1\); the expected number of photons in any pseudo-Schmidt mode is much less than unity. However, the expected number of photons in the pulse, \(N_{\rm pulse}\) can be arbitrarily large; indeed, in the limit of weak squeezing we have \(N_{\rm pulse}\to|\beta|^{2}.\) Very generally, regardless of the level of squeezing, we can understand the CW limit as \(|\beta|^{2}\to\infty\) and \(\mathcal{N}\to\infty\) such that \(|\beta|^{2}/\mathcal{N}\) is a constant.
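This CW limit is easy to illustrate numerically from (IV.21); a short Python sketch:

```python
import math

# N_pulse = N sinh^2(|beta|/sqrt(N)), Eq. (IV.21): at fixed |beta|, as the
# number of pseudo-Schmidt modes N grows (weak squeezing per mode),
# N_pulse decreases monotonically toward |beta|^2.
beta = 5.0
values = [N * math.sinh(beta / math.sqrt(N)) ** 2 for N in (10, 1000, 10 ** 6)]
print(values)  # tends to beta**2 = 25
```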
In Fig. 11 we plot the approximate \(\overline{G}^{(2)}(t_{1},t_{2})\) given by (IV.16,IV.17), and compare with the exact \(\overline{G}^{(2)}(t_{1},t_{2})\) in Fig. 7 for the sinc-hat model; we see very good agreement. Importantly, the decomposition (IV.16,IV.17) of \(\overline{G}^{(2)}(t_{1},t_{2})\) into pseudo-Schmidt modes immediately illustrates the behavior of that function in a way that the decomposition into the exact Schmidt modes does not.
Considering the first plot when \(\beta=0.1\) (\(|\beta|/\sqrt{\mathcal{N}}\) = 0.02), the dominance of \(\overline{G}^{(2)}_{\rm coh}(t/2,-t/2)\) over \(\overline{G}^{(2)}_{\rm incoh}(t/2,-t/2)\) follows immediately from the prefactors \(s^{2}c^{2}\) and \(s^{4}\) in their expressions (IV.17). And in the sum over \(p\) in the expression for \(\overline{G}^{(2)}_{\rm coh}(t/2,-t/2)\), both \(\overline{\eta}_{p}(t/2)\) and \(\overline{\eta}_{p}(-t/2)\) in the terms \(\overline{\eta}_{p}(t/2)\overline{\eta}_{p}(-t/2)\) must be significant for a contribution to be made, which requires \(p\) to be close to \(0\) and \(|t|\lesssim T_{c}\). The same behavior extends to the larger values of \(\beta\), as in the exact sinc-hat Schmidt decomposition. Thus the largest contribution to \(\overline{G}^{(2)}(t/2,-t/2)\) when \(|t|\lesssim T_{c}\) comes from only a few terms in the pseudo-Schmidt decomposition, while it involves many terms in the Schmidt decomposition, and their interference.
The expression for \(\overline{G}^{(2)}_{\rm incoh}(t/2,-t/2)\), which in the pseudo-Schmidt decomposition (IV.17) involves sums over two indices \(n\) and \(m\), contains two types of contributions. In the first, with \(m=n\), we get terms that will only be significant if \(m=n\) is close to \(0\) and \(|t|\lesssim T_{c}\), as in the expression for \(\overline{G}^{(2)}_{\rm coh}(t/2,-t/2)\); this gives the contribution to \(\overline{G}^{(2)}_{\rm incoh}(t/2,-t/2)\) that mirrors the form of \(\overline{G}^{(2)}_{\rm coh}(t/2,-t/2)\). This contribution to \(\overline{G}^{(2)}_{\rm incoh}(t/2,-t/2)\), and the term \(\overline{G}^{(2)}_{\rm coh}(t/2,-t/2)\), can thus be seen to arise from pairs of photons, each photon in a pair associated with the same pseudo-Schmidt mode. But the terms with \(m\neq n\) can give contributions for \(t\) on the order of \(T_{p}\); they give rise to the broad background, which can be understood as arising from pairs of photons, with the photons in a pair associated with different pseudo-Schmidt modes.
In a similar way one can understand the behavior of \(\overline{G}^{(2)}_{\rm coh}(\overline{t}+t/2,\overline{t}-t/2)\) and \(\overline{G}^{(2)}_{\rm incoh}(\overline{t}+t/2,\overline{t}-t/2)\) for \(\overline{t}\neq 0\). Unlike in the decomposition of the joint temporal amplitude in terms of Schmidt modes, the decomposition in terms of pseudo-Schmidt modes immediately reveals the structure of those correlation functions.
And in fact, we can derive an analytic expression for the approximate (IV.16,IV.17) \(\overline{G}^{(1)}(t)\) and \(\overline{G}^{(2)}(t_{1},t_{2})\) in the CW limit. Noting that
\[\sum_{n=-\infty}^{\infty}\overline{\eta}_{n}(t_{2})\overline{\eta}_{n}(t_{1})= \frac{1}{T_{c}}{\rm sinc}\left(\frac{\pi\,\Delta t}{T_{c}}\right),\] (IV.22)
where \(\Delta t\equiv t_{2}-t_{1}\), for \(t_{1}\) and \(t_{2}\) in the center of a pulse of duration \(T_{p}\), when \(T_{p}\to\infty\) we can write
\[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\to\frac{s^{2}c^{2}}{T_{c}^{2}}{\rm sinc }^{2}\left(\frac{\pi\,\Delta t}{T_{c}}\right),\] (IV.23)
while
\[\overline{G}^{(1)}(t)\to\frac{s^{2}}{T_{c}}.\] (IV.24)
Finally, noting that
\[\begin{split}\overline{G}^{(1)}(t_{1},t_{2})&=s^{2}\sum_ {n}\overline{\eta}_{n}(t_{2})\overline{\eta}_{n}(t_{1})\\ &\rightarrow\frac{s^{2}}{T_{c}}\text{sinc}\left(\frac{\pi\,\Delta t }{T_{c}}\right),\end{split}\] (IV.25)
then with the alternate form of \(\overline{G}^{(2)}_{\text{incoh}}\) (see Eq. (III.32)) we have
\[\overline{G}^{(2)}_{\text{incoh}}(t_{1},t_{2})\rightarrow\frac{s^{4}}{T_{c}^{2}}\left(1+\text{sinc}^{2}\left(\frac{\pi\,\Delta t}{T_{c}}\right)\right),\] (IV.26)
and so
\[\overline{G}^{(2)}(t_{1},t_{2})\to\overline{G}^{(2)}(\Delta t)=\frac{s^{2}c^{2}}{T_{c}^{2}}\text{sinc}^{2}\left(\frac{\pi\,\Delta t}{T_{c}}\right)+\frac{s^{4}}{T_{c}^{2}}\left(1+\text{sinc}^{2}\left(\frac{\pi\,\Delta t}{T_{c}}\right)\right).\] (IV.27)
We plot this in Fig. 12 (cf. Figs. 7 and 11).
Figure 10: For the sinc-hat calculated from the pseudo-Schmidt decomposition, from left to right we plot \(\overline{G}^{(1)}(t)/\Phi\) and a few contributions from different pseudo-Schmidt modes in Eq. (IV.16) with the horizontal axis normalized by \(\mathcal{T}^{\text{SH}}_{p}\) for \(\beta=0.1,5,\) and \(10\).
Figure 11: For the sinc-hat calculated from the pseudo-Schmidt decomposition, from left to right we plot \(\overline{G}^{(2)}(t_{1},t_{2})/\Phi^{2}\) (top) and the coherent and incoherent contribution to \(\overline{G}^{(2)}(t/2,-t/2)/\Phi^{2}\) (bottom) with the axes normalized by \(\mathcal{T}^{\text{SH}}_{p}\) for \(\beta=0.1,5,\) and \(10\).
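As a quick numerical sanity check of the CW-limit expressions (IV.23), (IV.24) and (IV.26), the sketch below (a toy transcription, not part of the derivation; it assumes, as is standard, \(s=\sinh r\) and \(c=\cosh r\) for a squeezing parameter \(r\), and sets \(T_{c}=1\) in arbitrary units) evaluates the normalized correlation \(g^{(2)}(\Delta t)=\overline{G}^{(2)}(\Delta t)/[\overline{G}^{(1)}]^{2}\):

```python
import numpy as np

def g2_cw(dt, r, Tc=1.0):
    """Normalized CW-limit g2 built from Eqs. (IV.23), (IV.24), (IV.26)."""
    s, c = np.sinh(r), np.cosh(r)
    sinc2 = np.sinc(dt / Tc) ** 2        # np.sinc(x) = sin(pi x)/(pi x)
    G1 = s**2 / Tc                       # (IV.24)
    G2 = (s * c / Tc)**2 * sinc2 + (s**2 / Tc)**2 * (1 + sinc2)  # (IV.23) + (IV.26)
    return G2 / G1**2

r = 0.8
g2_zero = g2_cw(0.0, r)   # algebraically equals 3 + 1/sinh(r)^2 at dt = 0
g2_far = g2_cw(50.0, r)   # approaches 1 for |dt| >> Tc (uncorrelated background)
```

At \(\Delta t=0\) one finds \(g^{(2)}(0)=3+1/\sinh^{2}r\), while for \(|\Delta t|\gg T_{c}\) the sinc terms die away and \(g^{(2)}\to 1\), consistent with the broad background discussed above.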
From the above discussion, we are motivated to think of \(\overline{G}^{(2)}(\overline{t}+t/2,\overline{t}-t/2)\) for times \(|t|\lesssim T_{c}\) when \(n=m\) as arising from one pseudo-Schmidt mode at a time, as it were, on a "mode-by-mode" basis. Since the pseudo-Schmidt decomposition is valid only for joint amplitudes such as the sinc-hat joint amplitude, where there is a high degeneracy in the Schmidt weights, we give this calculation in Appendix F.
## V The Whittaker-Shannon decomposition
The ability to construct approximate pseudo-Schmidt modes for the sinc-hat joint amplitude that are "localized" in time (\(\overline{\eta}(t)\), see (IV.1)) relied on the near-degeneracy of the Schmidt modes. But this cannot be generally expected; see, e.g., Fig. 2 for the Schmidt amplitudes of the double-Gaussian joint amplitude, where there is no such near-degeneracy. Nonetheless, the recognition used above that only a finite frequency range - there from \(-\Omega_{c}/2\) to \(\Omega_{c}/2\) - is important can be employed to construct an extension of a Schmidt decomposition more generally; it shares the feature of the pseudo-Schmidt decomposition above that the functions involved are localized in time.
We consider a joint spectral amplitude \(\gamma(\omega_{1},\omega_{2})\) to be bandwidth limited, in that at least approximately it can be taken to be nonzero only for \(-\Omega/2\leq\omega_{1},\omega_{2}\leq\Omega/2\), where \(\Omega\) is a positive frequency. Then we can construct a Whittaker-Shannon decomposition of the joint amplitude based on the Whittaker-Shannon Sampling Theorem [28; 29; 30]. That theorem states that for a function \(\overline{g}(t)\) that is bandwidth limited in the sense defined above - i.e., involving only constituent frequencies in the range \(-\Omega/2\leq\omega\leq\Omega/2\) - we can write
\[\overline{g}(t)=\sum_{n}\overline{g}(n\tau)\text{sinc}\left(\frac{(t-n\tau) \pi}{\tau}\right),\] (V.1)
where \(n\) ranges over the integers and
\[\tau=\frac{2\pi}{\Omega}.\] (V.2)
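The sampling formula (V.1) is easy to verify numerically. The sketch below uses a toy bandlimited function of our own choosing (not one of the paper's examples): \(\overline{g}(t)={\rm sinc}^{2}(\pi t)\), whose spectrum is a triangle supported on \(|\omega|\leq 2\pi\), so that \(\Omega=4\pi\) and \(\tau=1/2\):

```python
import numpy as np

# Reconstruct g(t) = [sin(pi t)/(pi t)]^2, bandlimited to |omega| <= 2*pi,
# from its samples g(n*tau) with tau = 2*pi/Omega = 1/2, via Eq. (V.1).
tau = 0.5
n = np.arange(-200, 201)        # truncation of the infinite sum over n
g = lambda t: np.sinc(t)**2     # np.sinc(x) = sin(pi x)/(pi x)

t = np.linspace(-3, 3, 601)
# The paper's sinc((t - n*tau)*pi/tau) is np.sinc((t - n*tau)/tau):
recon = sum(g(k * tau) * np.sinc((t - k * tau) / tau) for k in n)
max_err = np.max(np.abs(recon - g(t)))   # should be tiny (truncation only)
```

The truncated sum reproduces \(\overline{g}(t)\) to high accuracy on the plotted interval; the only error comes from cutting off the sum at \(|n|=200\).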
Defining
\[\overline{\chi}(t)=\frac{1}{\sqrt{\tau}}\text{sinc}\left(\frac{\pi t}{\tau}\right)\] (V.3)
and putting
\[\overline{\chi}_{n}(t)=\overline{\chi}(t-n\tau),\] (V.4)
we have
\[\int\overline{\chi}_{n}^{*}(t)\overline{\chi}_{m}(t)dt=\delta_{nm},\] (V.5)
and we can write (V.1) as
\[\overline{g}(t)=\sqrt{\tau}\sum_{n}\overline{g}(n\tau)\overline{\chi}_{n}(t).\] (V.6)
The Fourier transform of \(\overline{g}(t)\) is
\[g(\omega)=\int\frac{dt}{\sqrt{2\pi}}\overline{g}(t)e^{i\omega t}=\sqrt{\tau} \sum_{n}\overline{g}(n\tau)\chi_{n}(\omega),\] (V.7)
where
\[\chi_{n}(\omega) =\frac{e^{i\omega n\tau}}{\sqrt{\Omega}}\text{ for }-\frac{\Omega}{2} \leq\omega\leq\frac{\Omega}{2}\] (V.8) \[=0\text{ otherwise},\] (V.9)
is the Fourier transform of \(\overline{\chi}_{n}(t)\); we have
\[\int\chi_{n}^{*}(\omega)\chi_{m}(\omega)d\omega=\delta_{nm}.\] (V.10)
For frequencies within the bandlimited region the sets of functions \(\{\chi_{n}(\omega)\}\) and \(\{\overline{\chi}_{n}(t)\}\) form approximately complete sets, and we refer to them as the "Whittaker-Shannon modes."
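The orthonormality (V.5) of the shifted sinc modes can be confirmed directly; a minimal numerical sketch (our own check, with \(\tau=1\) in arbitrary units and a wide integration window, since the sinc tails decay only as \(1/t\)):

```python
import numpy as np

# Check Eq. (V.5): the shifted, normalized sincs (V.3)-(V.4) are orthonormal.
tau = 1.0
t = np.linspace(-400, 400, 800001)
dt = t[1] - t[0]
chibar = lambda k: np.sinc((t - k * tau) / tau) / np.sqrt(tau)  # chibar_k(t)

overlap_00 = dt * np.sum(chibar(0) * chibar(0))   # close to 1 (norm)
overlap_01 = dt * np.sum(chibar(0) * chibar(1))   # close to 0 (orthogonality)
```

Because the integrand is bandlimited and sampled far above the Nyquist rate, the discretization error is negligible; the residual comes only from truncating the integration window.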
The "Whittaker-Shannon decomposition" of the joint temporal amplitude \(\overline{\gamma}(t_{1},t_{2})\), and its Fourier transform \(\gamma(\omega_{1},\omega_{2})\), follow immediately from this if \(\gamma(\omega_{1},\omega_{2})\) is bandwidth limited, with frequency bandwidth \(\Omega\). Using the sampling theorem for both variables, we have
\[\overline{\gamma}(t_{1},t_{2}) =\tau\sum_{n,m}\overline{\gamma}(n\tau,m\tau)\overline{\chi}_{n}( t_{1})\overline{\chi}_{m}(t_{2}),\] (V.11) \[\gamma(\omega_{1},\omega_{2}) =\tau\sum_{n,m}\overline{\gamma}(n\tau,m\tau)\chi_{n}(\omega_{1}) \chi_{m}(\omega_{2}),\]
Unlike a Schmidt (or pseudo-Schmidt) decomposition, this involves a double sum. But the Whittaker-Shannon supermodes involved, with raising operators given by
\[B_{n}^{\dagger}=\int d\omega\chi_{n}(\omega)a^{\dagger}(\omega)=\int dt\overline {\chi}_{n}(t)\overline{a}^{\dagger}(t),\] (V.12)
are associated with functions that are localized in time (\(\overline{\chi}_{n}(t)\)) and can be inverted so that
\[\overline{a}(t)=\sum_{n}\overline{\chi}_{n}(t)B_{n}.\] (V.13)
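Because each factor in (V.11) is a shifted sinc, the double sum can be evaluated as matrix products. The sketch below demonstrates this for a toy separable joint amplitude of our own choosing, \(\overline{\gamma}(t_{1},t_{2})={\rm sinc}^{2}(\pi t_{1})\,{\rm sinc}^{2}(\pi t_{2})\) (not one of the paper's examples), bandlimited with \(\Omega=4\pi\), \(\tau=1/2\):

```python
import numpy as np

# Two-variable sampling reconstruction, Eq. (V.11), as matrix products.
tau = 0.5
n = np.arange(-100, 101)
t = np.linspace(-2, 2, 81)

gam = lambda t1, t2: np.sinc(t1)**2 * np.sinc(t2)**2   # toy joint amplitude

# S[i, k] = sinc((t_i - k*tau)*pi/tau) in the paper's convention:
S = np.sinc((t[:, None] - n[None, :] * tau) / tau)
G = gam(n[:, None] * tau, n[None, :] * tau)            # samples gamma(n tau, m tau)
# tau * gamma(n tau, m tau) * chibar_n(t1) chibar_m(t2) reduces to S @ G @ S.T:
recon = S @ G @ S.T
exact = gam(t[:, None], t[None, :])
err2d = np.max(np.abs(recon - exact))
```

Writing the double sum this way makes the cost of evaluating (V.11) on a grid two matrix multiplications rather than an explicit loop over \(n\) and \(m\).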
It will be useful to define
\[\beta_{nm}=\beta\tau\overline{\gamma}(n\tau,m\tau),\] (V.14)
which is generally complex but symmetric, \(\beta_{nm}=\beta_{mn}\); then we can write the squeezed state \(|\Psi\rangle\) (I.1,II.3) as
\[|\Psi\rangle=e^{\frac{1}{2}\sum\limits_{n,m}\beta_{nm}B_{n}^{\dagger}B_{m}^{ \dagger}-\text{h.c.}}\left|\text{vac}\right\rangle.\] (V.15)
Although non-diagonal terms of \(\beta_{nm}\) will be important, since \(\overline{\gamma}(t_{1},t_{2})\) is only significant for \(|t_{1}-t_{2}|\) on the order of the coherence time, we can expect \(\beta_{nm}\) to be significant only for \(m\) "reasonably close" to \(n\); we will examine this in more detail below. Note that our examples of the double-Gaussian and sinc-hat joint spectral amplitudes illustrate that imposing the approximation that those amplitudes are bandwidth limited requires a choice of \(\Omega\geq 2\pi\mathcal{B}_{c}\) that corresponds to \(\tau\leq\mathcal{T}_{c}\), that is, \(\tau\) is smaller than, but on the order of, the coherence time.
As a first example, we apply the Whittaker-Shannon decomposition to the sinc-hat joint amplitude (III.33,III.35), and show how the approximate pseudo-Schmidt decomposition (IV.14,IV.15) arises as the limiting case when \(T_{p}\gg T_{c}\). To apply the Whittaker-Shannon decomposition we must choose a bandlimit, and for the sinc-hat example the joint spectral amplitude ranges mostly within the "box" set by the width \(\Omega_{c}\); a natural choice for the bandlimit \(\Omega\) is then to set \(\Omega\to\Omega_{c}\) and it follows that \(\tau\to T_{c}\). However, the sinc-hat joint spectral amplitude will never fit exactly inside a box of width \(\Omega_{c}\) because it is only exactly bandlimited in the diagonal direction set by the range of \(\phi((\omega_{1}-\omega_{2})/2)\), see Eq. (III.33), (III.35), so the corners will always exist outside the box, see the discussion surrounding Eq. (III.40) and Appendix B. When we work in the limit that \(T_{p}\gg T_{c}\), where the contributions that exist outside the boundary of the box are very small, then to good approximation we can treat the sinc-hat joint spectral amplitude as bandlimited with bandwidth \(\Omega=\Omega_{c}\) (\(\tau=T_{c}\)).
With this choice of \(\Omega\) and the sinc-hat joint temporal amplitude (Eq. (III.33)), we evaluate
\[\begin{split}\tau\;\overline{\gamma}(n\tau,m\tau)&=\delta_{nm}\sqrt{\frac{T_{c}}{T_{p}}},\quad\text{for }-\frac{T_{p}}{2}\leq nT_{c}\leq\frac{T_{p}}{2}\\ &=0,\quad\text{otherwise},\end{split}\] (V.16)
where we have used \(\text{sinc}((n-m)\pi)=\delta_{nm}\). The Whittaker-Shannon expansion is then diagonal, with modes ranging only over the pulse duration set by \(\overline{\alpha}(nT_{c})\); identifying \(T_{p}/T_{c}=\mathcal{N}-1\) (IV.12) and \(p_{n}\) (IV.15) as in section IV, it immediately follows that
\[\overline{\gamma}(t_{1},t_{2})\to\sum_{n=-\left(\frac{\mathcal{N}-1}{2} \right)}^{\left(\frac{\mathcal{N}-1}{2}\right)}\sqrt{p_{n}}\overline{\chi}_{n }(t_{1})\overline{\chi}_{n}(t_{2}),\] (V.17)
where \(\overline{\chi}_{n}(t)=\overline{\eta}_{n}(t)\).
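This diagonality can be verified numerically. The sketch below assumes a sinc-hat joint temporal amplitude of the form \(\overline{\gamma}(t_{1},t_{2})=(T_{p}T_{c})^{-1/2}\,{\rm sinc}(\pi(t_{1}-t_{2})/T_{c})\) for \(|t_{1}+t_{2}|/2\leq T_{p}/2\) and zero otherwise (our reading of the sinc-hat form, chosen to be consistent with (V.16)); it samples on the grid \(t=nT_{c}\) and confirms that \(\tau\,\overline{\gamma}(n\tau,m\tau)\) is diagonal with entries \(\sqrt{T_{c}/T_{p}}\):

```python
import numpy as np

Tp, Tc = 20.0, 1.0
tau = Tc

def gamma_sh(t1, t2):
    """Assumed sinc-hat joint temporal amplitude (flat over the pulse in
    (t1+t2)/2, sinc of width Tc in t1-t2); consistent with Eq. (V.16)."""
    tbar = (t1 + t2) / 2
    inside = np.abs(tbar) <= Tp / 2
    return inside * np.sinc((t1 - t2) / Tc) / np.sqrt(Tp * Tc)

n = np.arange(-15, 16)
M = tau * gamma_sh(n[:, None] * tau, n[None, :] * tau)  # tau * gamma(n tau, m tau)
off_diag_max = np.max(np.abs(M - np.diag(np.diag(M))))  # sinc(n-m) kills n != m
diag_vals = np.diag(M)   # sqrt(Tc/Tp) for |n Tc| <= Tp/2, else 0
```

The off-diagonal entries vanish (to floating-point precision) because \({\rm sinc}((n-m)\pi)=0\) for \(n\neq m\), exactly as in (V.16).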
For the sinc-hat model (see Fig. 5), the reduction of the double sum to its diagonal contributions is only possible because the joint spectrum is exactly bandlimited in the \((\omega_{1}-\omega_{2})\) direction (see (III.33) and (III.35)); however, the resulting Whittaker-Shannon decomposition will only be accurate as a single sum in the \(T_{p}\gg T_{c}\) limit for which the contributions that exist outside the box become very small and can safely be neglected.
In general, this reduction will not be possible and we will need to include some - but typically only a few - off diagonal contributions to properly approximate the joint amplitude with the Whittaker-Shannon decomposition. As we will see below, the single sum and product state that follows if only diagonal terms are included (see Eq. (III.15)) is much easier and more intuitive to work with, so we always want to be as close to that limit as we can be. This means that for a not-exactly-bandlimited joint spectral amplitude, the choice of a bandwidth \(\Omega=2\pi/\tau\) always involves a trade-off: We want \(\tau\) as large as possible so that each \(\overline{\gamma}(n\tau,n\tau)\) covers most of the \(|t_{1}-t_{2}|\) behavior; however, if we choose \(\tau\) too large then \(\Omega\) becomes too small to even approximately cover the bandwidth of the joint spectral amplitude.
To see how this plays out in practice, consider the double-Gaussian joint amplitude (III.23). Although it is not strictly bandlimited, if we set \(\Omega/2\pi=\mathcal{B}_{c}^{\text{DG}}=a\sigma_{c}/(2\pi\sqrt{2})\) (see Eq. (III.30)), to very good approximation we can neglect the high frequency contributions for a reasonable choice of \(a\). Then \(\tau=2\pi\sqrt{2}/(a\sigma_{c})\), and
\[\overline{\gamma}(n\tau,m\tau)\propto e^{-\frac{\sigma_{c}^{2}\tau^{2}(n-m)^{2} }{4}}\] (V.18)
sets the off diagonal range; for our choice of \(\tau\) we have \(\sigma_{c}^{2}\tau^{2}=8\pi^{2}/a^{2}\), and as \(a\) increases the bandlimit gets larger, but so does the range over which \(|n-m|\) is significant. Setting \(a=2\sqrt{2\pi}\) as in Section III, we have \(\sigma_{c}^{2}\tau^{2}/4=\pi/4\) and the \(1/e\) drop-off in Eq. (V.18) occurs when \(|n-m|=2/\sqrt{\pi}\approx 1.1\). In this example, and more generally for only approximately bandlimited joint spectral amplitudes, one could investigate optimizing \(\tau\) constrained by a specified error tolerance on the Whittaker-Shannon interpolation.
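The arithmetic can be checked directly from the stated \(\tau\) (a toy sketch with \(\sigma_{c}=1\); only the dimensionless combination \(\sigma_{c}\tau\) matters):

```python
import numpy as np

# Off-diagonal decay (V.18) of the sampled double-Gaussian for a = 2*sqrt(2*pi).
sigma_c = 1.0                                 # arbitrary units
a = 2 * np.sqrt(2 * np.pi)
tau = 2 * np.pi * np.sqrt(2) / (a * sigma_c)  # tau = 2*pi/Omega
x = (sigma_c * tau)**2                        # = 8*pi^2/a^2 = pi for this a
decay = lambda k: np.exp(-x * k**2 / 4)       # relative size vs k = n - m
k_1e = 2 / np.sqrt(x)                         # |n - m| at the 1/e point
few_neighbours = decay(np.arange(5))          # drops off within a few modes
```

The decay reaches \(10^{-3}\) by \(|n-m|=3\), so only a handful of neighbouring Whittaker-Shannon modes carry significant weight, as claimed.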
Clearly the Whittaker-Shannon decomposition depends on the two index parameter \(\beta_{nm}\), but to make a comparison to the pseudo-Schmidt decomposition we focus on \(\hat{\beta}\), which we define to be the value of \(\beta_{nm}\) at the \(n\) and \(m\) for which \(|\overline{\gamma}(n\tau,m\tau)|\) takes its maximum value. Denoting the value of \(|\overline{\gamma}(n\tau,m\tau)|\) at this \(n\) and \(m\) by \(\overline{\gamma}_{\max}\), we have
\[\hat{\beta}=\tau\beta\overline{\gamma}_{\max},\] (V.19)
and
\[\beta_{nm}=\hat{\beta}r_{nm},\] (V.20)
with \(r_{nm}=\overline{\gamma}(n\tau,m\tau)/\overline{\gamma}_{\max}\). The range of \(|n-m|\) over which \(r_{nm}\) is significant identifies the range over which \(\beta_{nm}\) varies.
Again using the double-Gaussian joint amplitude as an example, which achieves its maximum at \(t_{1}=t_{2}=0\), we find
\[|\hat{\beta}|=\sqrt{2}|\beta|\frac{\tau}{\sqrt{\mathcal{T}_{p}^{\rm DG}} \mathcal{T}_{c}^{\rm DG}},\] (V.21)
where we have used \(\mathcal{T}_{p}^{\rm DG}\mathcal{T}_{c}^{\rm DG}=2\pi/\sigma_{c}\sigma_{p}\). From the discussion above, we set \(\tau=\mathcal{T}_{c}^{\rm DG}\), and obtain
\[|\hat{\beta}|=\sqrt{2}|\beta|\sqrt{\frac{\mathcal{T}_{c}^{\rm DG}}{\mathcal{T }_{p}^{\rm DG}}}=\sqrt{2}\frac{|\beta|}{\sqrt{\mathcal{K}_{\rm DG}}},\] (V.22)
so the effective Schmidt number naturally arises and - aside from the benign factor of \(\sqrt{2}\) - identifies the weakly or strongly squeezed limit as \(|\hat{\beta}|\ll 1\) or \(|\hat{\beta}|\gg 1\) respectively, justifying the definition in section III. Since each \(\overline{\chi}_{n}(t)\) has a width \(\tau=\mathcal{T}_{c}^{\rm DG}\), the effective Schmidt number \(\mathcal{K}_{\rm DG}\) roughly identifies the number of Whittaker-Shannon modes that are relevant along \(r_{nn}\). Further, following the discussion around Eq. (IV.19), for a very long pulse such that \(\mathcal{K}_{\rm DG}\gg 1\) the squeezing parameter \(|\beta|\) can be quite large but \(|\hat{\beta}|\) remains finite.
In Fig. 13 we plot \(|\overline{\gamma}(t_{1},t_{2})|^{2}\) and \(|\gamma(\omega_{1},\omega_{2})|^{2}\), reconstructed using the Whittaker-Shannon decomposition for the double-Gaussian joint amplitude of Fig. 2, where we have chosen \(\Omega=2\pi\mathcal{B}_{c}\), and we also plot \(r_{nm}\). Comparing to Fig. 2, we see very good agreement, and from the zoomed-in plot of \(r_{nm}\) in the lower right corner it is clear that only a few neighbouring Whittaker-Shannon modes in the \(|n-m|\) direction are relevant.
The argument that even though \(|\beta|\) can be quite large \(|\hat{\beta}|\) remains finite holds true for _any_ joint amplitude, because as long as it is square normalized (I.3) it will carry pre-factors on its behavior in the two directions in the plane (of either \((t_{1},t_{2})\) or \((\omega_{1},\omega_{2})\)) over which it is defined. In Appendix C, we show that given a joint temporal amplitude characterized by \(\mathcal{T}_{p}\) and \(\mathcal{T}_{c}\), its maximum value \(\overline{\gamma}_{\max}\) is on the order of
\[\overline{\gamma}_{\max}\sim\frac{1}{\sqrt{\mathcal{T}_{p}\mathcal{T}_{c}}}\] (V.23)
To apply the Whittaker-Shannon decomposition we set \(\Omega=2\pi\mathcal{B}_{c}\) so that \(\tau=\mathcal{T}_{c}\). Then one immediately finds
\[|\hat{\beta}|=\tau|\beta|\overline{\gamma}_{\max}\sim\frac{|\beta|}{\sqrt{ \mathcal{K}}}.\] (V.24)
What remains to be shown is the relation between the Schmidt number \(K\) and the effective Schmidt number \(\mathcal{K}\). In Appendix D, we show that for a general joint temporal amplitude characterized by widths \(\mathcal{T}_{p}\), \(\mathcal{T}_{c}\) indicated schematically in Fig. 1, and a given \(\tau\) which we set to be equal to \(\mathcal{T}_{c}\), the inequality
\[K\leq\frac{\mathcal{T}_{p}}{\tau}=\frac{\mathcal{T}_{p}}{\mathcal{T}_{c}}= \mathcal{K},\] (V.25)
generally holds; of course this is conditioned on \(\tau\) being sufficiently small that the Whittaker-Shannon decomposition can be accurately used. Thus by applying the Whittaker-Shannon interpolation formula - which previously has not been linked to squeezed light or the Schmidt decomposition - we are able to show that the Schmidt number and effective Schmidt number are intimately related. Further, the Whittaker-Shannon decomposition may be an important stepping stone in formally linking the Schmidt number to other measures of the correlation, like the time-bandwidth product and its generalizations, in the same way that the Whittaker-Shannon interpolation formula is intimately linked to the Shannon number for classical signals and the information content of images [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 25; 26; 28; 29; 30; 31].
Finally, we are now in a position to explain why the effective Schmidt number for the sinc-hat example is nearly equal to the exact Schmidt number. First we emphasize that the sinc-hat is an idealization with a sharp cutoff in time (frequency) and is constant along \(t_{1}=t_{2}\) (\(\omega_{1}=-\omega_{2}\)). This means that in the limit when \(T_{p}\gg T_{c}\), to very good approximation it is exactly bandlimited with \(\Omega=\Omega_{c}\) (\(\tau=T_{c}\)). Mathematically, the sharp cutoff in frequency results in an exact diagonalization of \(\overline{\gamma}(n\tau,m\tau)\), and since \(\overline{\gamma}(n\tau,n\tau)\) is identical for all \(n\) where it is not zero (see Eq. (IV.14)) the sum in Eq. (III.3) can be carried out exactly resulting in \(K\approx T_{p}/T_{c}\), as we found in section III.2; we then expect
\[\frac{\mathcal{K}_{\text{SH}}}{K_{\text{SH}}}=1+\frac{T_{c}}{T_{p}}\to 1,\] (V.26)
for \(T_{p}/T_{c}\to\infty\). Physically, the Schmidt amplitudes are near-degenerate, so there is no unique set of Schmidt modes.
## VI Employing the Whittaker-Shannon decomposition
In using the pseudo-Schmidt decomposition to "take apart" the sinc-hat joint temporal amplitude, we showed that the expressions for the correlation functions \(\overline{G}^{(1)}(t)\) and \(\overline{G}^{(2)}(t_{1},t_{2})\) could be easily understood in terms of the properties of the supermodes involved in the decomposition. In this section we look at the corresponding expressions for the correlation functions when we "take apart" the joint amplitudes using the Whittaker-Shannon decomposition instead. The results are more complicated, but again the behavior of the correlation functions can be understood in terms of the properties of the Whittaker-Shannon supermodes in an intuitive way. And since the Whittaker-Shannon decomposition can be much more widely applied than a pseudo-Schmidt decomposition, the results here are much more general.
To simplify the notation we write the squeezed state in Eq. (V.15) as \(|\Psi\rangle=\tilde{S}\ket{\text{vac}}\), where
\[\tilde{S}=e^{\frac{1}{2}\sum\limits_{n,m}\beta_{nm}B_{n}^{\dagger}B_{m}^{ \dagger}-\text{h.c.}},\] (VI.1)
is the squeezing operator. To calculate the correlation functions analogously to what was done with the Schmidt and pseudo-Schmidt decompositions, we use the inverse relation (Eq. (V.13)) and the transformation [32],
\[\tilde{S}^{\dagger}B_{r}\tilde{S} =\mu_{rs}B_{s}+\nu_{rs}B_{s}^{\dagger},\] (VI.2a) \[\tilde{S}^{\dagger}B_{r}^{\dagger}\tilde{S} =\mu_{rs}^{*}B_{s}^{\dagger}+\nu_{rs}^{*}B_{s},\] (VI.2b)
where we adopt the convention that repeated indices are to be summed over, and
\[\mu_{rs} =\delta_{rs}+\frac{1}{2!}\beta_{ra}\beta_{as}^{*}+\frac{1}{4!} \beta_{ra}\beta_{ab}^{*}\beta_{bc}\beta_{cs}^{*}+...,\] (VI.3a) \[\nu_{rs} =\beta_{rs}+\frac{1}{3!}\beta_{ra}\beta_{ab}^{*}\beta_{bs}+....\] (VI.3b)
Note that from the symmetric property of \(\beta_{nm}\) it follows that \(\mu_{rs}\) is Hermitian and \(\nu_{rs}\) is symmetric.
For the Schmidt or pseudo-Schmidt decomposition the transformation used always involves a single supermode (see Eq. (III.17)); for the general Whittaker-Shannon decomposition the squeezing transformation is more complicated. The structure of the squeezing transformation (VI.2), and the form of \(\mu_{rs}\) and \(\nu_{rs}\), motivates the use of matrix multiplication; \(\beta_{nm}\) is now treated as a complex square symmetric matrix which we denote by \(\mathbf{\beta}\) (\(\mathbf{\beta}^{*}=\mathbf{\beta}^{\dagger}\)). To implement the squeezing transformation it is convenient to use the "left" polar decomposition of \(\mathbf{\beta}\) (valid for _any_ complex square matrix), given by
\[\mathbf{\beta}=\mathbf{UP},\] (VI.4)
where \(\mathbf{P}=\sqrt{\mathbf{\beta}^{\dagger}\mathbf{\beta}}\) is Hermitian and \(\mathbf{U}\) is unitary. Equivalently, if we set \(\mathbf{Q}=\mathbf{UPU}^{\dagger}\) then we have the "right" polar decomposition \(\mathbf{\beta}=\mathbf{QU}\); since \(\mathbf{\beta}\) is symmetric, \(\mathbf{P}^{T}=\mathbf{Q}\) and \(\mathbf{U}^{T}=\mathbf{U}\). Using the polar decomposition and its properties, the matrices \(\mathbf{\mu}\) and \(\mathbf{\nu}\) are given by [33]
\[\mathbf{\mu} =\cosh(\mathbf{UPU}^{\dagger})=\cosh\mathbf{Q},\] (VI.5a) \[\mathbf{\nu} =\mathbf{U}\text{sinh}\mathbf{P}=(\text{sinh}\mathbf{Q})\mathbf{U}.\] (VI.5b)
The form of \(\mathbf{\mu}\) and \(\mathbf{\nu}\) guarantees that the transformation in Eq. (VI.2) preserves the commutation relations of the \(B_{n}\) operators, as expected since the transformation is unitary.
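These relations are straightforward to check numerically. The sketch below (our own verification, using only numpy; \(\mathbf{P}\) and functions of it are built from the eigendecomposition of \(\mathbf{\beta}^{\dagger}\mathbf{\beta}\)) constructs a random complex symmetric \(\mathbf{\beta}\), forms \(\mathbf{\mu}\) and \(\mathbf{\nu}\) via (VI.5), and verifies \(\mathbf{\mu}\mathbf{\mu}^{\dagger}-\mathbf{\nu}\mathbf{\nu}^{\dagger}=\mathbf{1}\) and the symmetry of \(\mathbf{\mu}\mathbf{\nu}^{T}\), the two conditions for preserving the commutation relations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
beta = 0.3 * (A + A.T) / 2                    # complex symmetric: beta_nm = beta_mn

# "Left" polar decomposition beta = U P (VI.4): P = sqrt(beta^dag beta) via eigh.
w, V = np.linalg.eigh(beta.conj().T @ beta)   # Hermitian, positive semidefinite
sqw = np.sqrt(np.clip(w, 0, None))
P = (V * sqw) @ V.conj().T
U = beta @ np.linalg.inv(P)                   # unitary (beta invertible here)

# mu = cosh(Q) with Q = U P U^dag, and nu = U sinh(P), Eq. (VI.5):
coshP = (V * np.cosh(sqw)) @ V.conj().T
sinhP = (V * np.sinh(sqw)) @ V.conj().T
mu = U @ coshP @ U.conj().T
nu = U @ sinhP

I = np.eye(N)
err_unitary = np.max(np.abs(U @ U.conj().T - I))
err_comm = np.max(np.abs(mu @ mu.conj().T - nu @ nu.conj().T - I))
err_sym = np.max(np.abs(mu @ nu.T - (mu @ nu.T).T))
```

All three residuals vanish to machine precision, confirming that (VI.5) yields a bona fide Bogoliubov transformation.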
In Appendix E we calculate the correlation functions (Eq. (II.4)); we find
\[\overline{G}^{(1)}(t_{1},t_{2})=\overline{\mathbf{\chi}}^{\dagger}(t_{1})(\text{ sinh}^{2}\mathbf{P})\overline{\mathbf{\chi}}(t_{2}),\] (VI.6)
and
\[\overline{G}^{(2)}(t_{1},t_{2}) =|\overline{\mathbf{\chi}}^{T}(t_{1})\mathbf{U}(\text{sinh}\mathbf{P}) (\text{cosh}\mathbf{P})\overline{\mathbf{\chi}}(t_{2})|^{2}\] (VI.7) \[+\overline{G}^{(1)}(t_{1})\overline{G}^{(1)}(t_{2})+|\overline{G} ^{(1)}(t_{1},t_{2})|^{2}\]
where \(\overline{\mathbf{\chi}}(t)=(...,\overline{\chi}_{-1}(t),\overline{\chi}_{0}(t), \overline{\chi}_{1}(t),...)^{T}\) is the column vector formed from the set \(\{\overline{\chi}_{n}(t)\}\) for a given \(t\).
Since the functions \(\overline{\chi}_{n}(t)\) are similar to the pseudo-Schmidt modes \(\overline{\eta}_{n}(t)\), Eqs. VI.6 and VI.7 for the correlation functions calculated using the Whittaker-Shannon decomposition are the generalization of the pseudo-Schmidt results (IV.16, IV.17), valid for an approximately bandlimited, but otherwise general joint amplitude. Both the pseudo-Schmidt and Whittaker-Shannon mode functions are localized, so for short time differences the structure of Eq. VI.7 reduces to that of the pseudo-Schmidt decomposition, and we can again think of the correlation function on a "mode-by-mode" basis; however, since this correspondence is only approximate, we discuss it in Appendix F.
### Packet expansion
From Eq. (VI.6), we can immediately write the photon density as
\[\overline{G}^{(1)}(t)=\sum_{n}\Gamma_{n}^{2}|\overline{\rho}_{n}(t)|^{2},\] (VI.8)
where \(\overline{\rho}_{n}(t)\) is a normalized function set by
\[\overline{\rho}_{n}(t)=\frac{1}{\Gamma_{n}}\sum_{m}(\text{sinh}\mathbf{P})_{nm }\overline{\chi}_{m}(t),\] (VI.9)
and
\[\Gamma_{n}=\sqrt{(\sinh^{2}\!{\bf P})_{nn}},\] (VI.10)
which is real. The expression (VI.8) for \(\overline{G}^{(1)}(t)\) is the generalization of Eq. (IV.16) in the pseudo-Schmidt decomposition, and clearly has the same form; indeed, if we were to set \(\beta_{nm}\) to be diagonal and independent of \(n\) for the \(n\) for which it does not vanish, we would have \(\overline{\rho}_{n}(t)\to\overline{\chi}_{n}(t)\). Even more generally, the expression (VI.8) mirrors the form of the expansion (III.20) of \(\overline{G}^{(1)}(t)\) in terms of Schmidt modes, with \(\Gamma_{n}\) here taking the role of \(s_{n}\) there. But the \(\overline{\rho}_{n}(t)\) cannot be identified as a supermode; while the functions in the set \(\{\overline{\chi}_{n}(t)\}\) are mutually orthogonal, the functions in the set \(\{\overline{\rho}_{n}(t)\}\) are not, because in general \(\sinh\!{\bf P}\) is not unitary. Nonetheless, the functions in the latter set are generally localized compared to the duration of the pulse, especially for weak squeezing. We refer to the \(\overline{\rho}_{n}(t)\) simply as "packets," and to the expansion (VI.8) for \(\overline{G}^{(1)}(t)\) as its "packet expansion;" we will see packet expansions of other correlation functions below.
The expected number of photons in the pulse is given by integrating \(\overline{G}^{(1)}(t)\) over all time; we find
\[N_{\text{pulse}}=\text{Tr}(\sinh^{2}\!{\bf P})=\sum_{n}\Gamma_{n}^{2},\] (VI.11)
where \(\text{Tr}(\cdot)\) denotes the trace. This is reminiscent of the corresponding expressions (III.22) and (IV.21) for the Schmidt and pseudo-Schmidt expansions respectively. From Eq. (VI.11) it is clear that \(\Gamma_{n}^{2}\) is the number of photons in each packet, and summing over all packets gives the total number of photons.
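A small numerical sketch (with a toy tridiagonal \(\mathbf{\beta}\) of our own choosing) confirms that the \(\Gamma_{n}^{2}\) of (VI.10) sum to \(N_{\text{pulse}}\) (VI.11), and that each packet (VI.9) is normalized, as follows from the orthonormality of the \(\overline{\chi}_{m}(t)\):

```python
import numpy as np

# Toy real symmetric tridiagonal beta (an assumption for illustration).
N = 9
beta = 0.4 * np.eye(N) + 0.1 * (np.eye(N, k=1) + np.eye(N, k=-1))
w, V = np.linalg.eigh(beta.T @ beta)
sqw = np.sqrt(np.clip(w, 0, None))
sinhP = (V * np.sinh(sqw)) @ V.T              # sinh of P = sqrt(beta^T beta)

Gamma = np.sqrt(np.diag(sinhP @ sinhP))       # Eq. (VI.10)
N_pulse = np.trace(sinhP @ sinhP)             # Eq. (VI.11)

# With orthonormal chibar_m, ||rho_n||^2 = sum_m (sinh P)_{nm}^2 / Gamma_n^2 = 1:
rho_norms = np.sqrt(np.sum(sinhP**2, axis=1)) / Gamma
```

The trace identity holds by construction, and the unit packet norms show that the \(1/\Gamma_{n}\) prefactor in (VI.9) is exactly the right normalization.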
For the double-Gaussian and three values of \(\beta\) chosen in section III.1, we have \(|\hat{\beta}|\approx 0.014,0.7,\) and \(1.4\) which is on the order of the three values \(|\beta|/\sqrt{\mathcal{K}}\), in agreement with the discussion surrounding Eq. (V.24). In Fig. 14 we plot \(G^{(1)}(t)\) calculated using Eq. (VI.6), as well as the contributions given by Eq. (VI.8) for a few values of \(n\), together with the exact \(\overline{G}^{(1)}(t)\) for the three chosen values of \(\beta\), which correspond to \(N_{\text{pulse}}\approx 0.01,35,383\); we find excellent agreement between the exact and Whittaker-Shannon decomposition. From Fig. 14 we see that each \(\overline{\rho}_{n}(t)\) is clearly localized compared to the duration of the pulse, and so using the packets we can "take apart" the squeezed light and provide a simple description of the photon density (Eq. (VI.8)); this extends our understanding from the pseudo-Schmidt decomposition valid for the sinc-hat joint amplitude to more general joint amplitudes, such as the double-Gaussian, where a Whittaker-Shannon decomposition is necessary.
Notice that as \(|\beta|\) (\(|\hat{\beta}|\)) increases so does the width of each \(\overline{\rho}_{n}(t)\). Referring back to Eq. (VI.9), this occurs because elements of \(\sinh\!{\bf P}\) that are further off-diagonal become more important as \(|\beta|\) increases. And this is a consequence of the fact that more powers of \({\bf P}\) become important in the expansion of \(\sinh\!{\bf P}\) as \(|\beta|\) increases, since \({\bf P}\) depends on \(\mathbf{\beta}\) (see Eq. (VI.4)). Thus the off-diagonality of \(\sinh\!{\bf P}\) is extended beyond that of \(\mathbf{\beta}\), and elements of \(\sinh\!{\bf P}\) further from the diagonal become larger as \(|\beta|\) increases; see Fig. 15 for plots of \(\sinh\!{\bf P}\) with increasing \(|\beta|\), which demonstrates this effect.
Figure 13: For the double-Gaussian, from left to right we plot the: joint temporal intensity divided by its maximum value with the axes normalized by \(\mathcal{T}^{\text{DG}}_{p}\); joint spectral intensity divided by its maximum value with the axes normalized by \(2\pi\mathcal{B}^{\text{DG}}_{c}\); amplitudes \(r_{nm}\).

We can now construct the expansions for \(\overline{G}^{(2)}_{\text{coh}}(t_{1},t_{2})\) and \(\overline{G}^{(2)}_{\text{incoh}}(t_{1},t_{2})\). For the first of these, comparing the expression (VI.7) for the full \(\overline{G}^{(2)}(t_{1},t_{2})\) with our earlier general expression (III.32) for \(\overline{G}^{(2)}_{\text{incoh}}(t_{1},t_{2})\), we can identify
\[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2}) =|\overline{\chi}^{T}(t_{1})\mathbf{U}(\sinh\mathbf{P})(\cosh\! \mathbf{P})\overline{\chi}(t_{2})|^{2}\] \[=\left|\sum_{n,m}(\mathbf{U}(\sinh\!\mathbf{P})(\cosh\!\mathbf{P} ))_{nm}\overline{\chi}_{n}(t_{1})\overline{\chi}_{m}(t_{2})\right|^{2}.\] (VI.12)
This can be compared with the corresponding expressions (III.21) and (IV.17) for the Schmidt and pseudo-Schmidt decompositions respectively. The appearance of terms with \(m\neq n\) here, as opposed to the single summation that appears in the Schmidt and pseudo-Schmidt expansions, is expected given that the squeezed state written using the Whittaker-Shannon decomposition involves a double sum (VI.1).
We show in Appendix G that the expression (VI.12) for \(\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\) can also be written using the set of packet functions \(\{\overline{\rho}_{n}(t)\}\),
\[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})=\left|\sum_{n,m}\Gamma_{n}\Gamma_{m }(\mathbf{U}{\rm coth}\mathbf{P})_{mn}\overline{\rho}_{m}(t_{1})\overline{ \rho}_{n}(t_{2})\right|^{2},\] (VI.13)
but this is not a convenient expression to use in practice: if \(\mathbf{\beta}\) is close to diagonal, some functions of \(\mathbf{\beta}\), such as \(\tanh\!\mathbf{P}\), will be as well, but not \({\rm coth}\mathbf{P}\). Further, the weakly squeezed limit is not directly apparent from the form of Eq. (VI.13), so it seems preferable to write \(\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\) using \(\{\overline{\chi}_{n}(t)\}\) instead of \(\{\overline{\rho}_{n}(t)\}\). Perhaps this is not surprising, for earlier we found that when \(|\beta|\ll 1\), \(\overline{G}^{(2)}(t_{1},t_{2})\to|\beta|^{2}|\overline{\gamma}(t_{1},t_{2})|^{2}\) (III.14), which directly depends on the correlations contained in the joint temporal amplitude; in the same limit, using the Whittaker-Shannon decomposition, we expect it to depend on the analogous quantity \(\beta_{nm}\). So unlike \(\overline{G}^{(1)}(t)\), where the photon density at a particular time involves contributions from all possible pairs and is written in terms of \(\{\overline{\rho}_{n}(t)\}\), \(\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\) - at least in the weakly squeezed limit - should directly depend on the temporal correlations, and so it is more suitable to write \(\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\) in terms of \(\{\overline{\chi}_{n}(t)\}\), as is done in Eq. (VI.12).
Turning finally to the general expression (III.32) for \(\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\), using the definition of \(\overline{\rho}_{n}(t)\) and \(\Gamma_{n}\) (VI.9, VI.10) we can write a packet expansion for the incoherent contribution as
\[\begin{split}\overline{G}^{(2)}_{\rm incoh}&(t_{1},t_{2})\\ &=\frac{1}{2}\sum_{n,m}\left|\Gamma_{n}\Gamma_{m}(\overline{\rho}_{n}(t_{1})\overline{\rho}_{m}(t_{2})+\overline{\rho}_{n}(t_{2})\overline{\rho}_{m}(t_{1}))\right|^{2},\end{split}\] (VI.14)
which has the same form as the Schmidt (III.21) and pseudo-Schmidt (IV.17) decompositions, but is in terms of the set of packets \(\{\overline{\rho}_{n}(t)\}\).
In Fig. 16 we plot \(G^{(2)}(t_{1},t_{2})\) calculated using Eq. (VI.7) for the double-Gaussian joint amplitude, as well as the coherent and incoherent contributions; comparing to Fig. 4 for the exact calculation we find excellent agreement between the two. Although the Whittaker-Shannon decomposition does not allow the simple factorization of the ket into product kets associated with each Schmidt or pseudo-Schmidt mode, correlation functions can still be evaluated. And, since the off-diagonal elements of the squeezing matrix \(\mathbf{\beta}\) typically drop off quickly away from the diagonal, the correlation functions can be written in a form involving either the set of functions \(\{\overline{\chi}_{n}(t)\}\) or the set of functions \(\{\overline{\rho}_{n}(t)\}\); all these functions are localized compared to the Schmidt modes.
### Correlation functions in the weakly squeezed limit
In this section we identify approximate expressions for the correlation functions valid in the weakly squeezed limit when \(|\hat{\beta}|\ll 1\). The correlation functions for the Whittaker-Shannon decomposition involve the matrix \(\mathbf{P}\), which using the form of Eq. (V.20) is given by \(\mathbf{P}=|\hat{\beta}|\sqrt{\mathbf{r}^{\dagger}\mathbf{r}}\). Taking the weakly squeezed limit we approximate
\[\begin{split}\text{sinh}\mathbf{P}&\to\mathbf{P}\\ \text{cosh}\mathbf{P}&\to\mathbf{1},\end{split}\] (VI.15)
where \(\mathbf{1}\) is the identity matrix. Then \(\Gamma_{n}\to\sqrt{(\mathbf{P}^{2})_{nn}}\),
\[\overline{\rho}_{n}(t)\to\frac{1}{\Gamma_{n}}\sum_{m}P_{nm}\overline{\chi}_{m }(t),\] (VI.16)
and for \(r_{nm}\) nonzero only for \(|n-m|\) less than a small integer, the set \(\{\overline{\rho}_{n}(t)\}\) will be localized. The expression (VI.8) then gives
\[\overline{G}^{(1)}(t)\to\sum_{n,m}|P_{nm}\overline{\chi}_{m}(t)|^{2}.\] (VI.17)
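The quality of the approximation (VI.15) is easy to quantify: the leading corrections are \(O(\mathbf{P}^{3}/6)\) for \(\sinh\mathbf{P}\) and \(O(\mathbf{P}^{2}/2)\) for \(\cosh\mathbf{P}\). A minimal sketch with a small random real symmetric \(\mathbf{\beta}\) (our own toy choice):

```python
import numpy as np

# Weak-squeezing check of (VI.15): sinh(P) -> P and cosh(P) -> 1 for small beta.
rng = np.random.default_rng(1)
N = 6
A = rng.normal(size=(N, N))
beta = 0.005 * (A + A.T) / 2                  # small real symmetric toy beta
w, V = np.linalg.eigh(beta.T @ beta)
sqw = np.sqrt(np.clip(w, 0, None))
P = (V * sqw) @ V.T
sinhP = (V * np.sinh(sqw)) @ V.T
coshP = (V * np.cosh(sqw)) @ V.T

err_sinh = np.max(np.abs(sinhP - P))          # O(P^3/6): very small
err_cosh = np.max(np.abs(coshP - np.eye(N)))  # O(P^2/2): larger but still small
```

As expected, the cosh replacement is the cruder of the two, since its leading correction is second rather than third order in \(\mathbf{P}\).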
Turning to \(\overline{G}^{(2)}(t_{1},t_{2})\), for the coherent contribution \(\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\), the general expression (VI.12), using the result
\[\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P})\to\mathbf{U}\mathbf{P }=\mathbf{\beta},\] (VI.18)
to find
\[\begin{split}\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})& \to|\overline{\chi}^{T}(t_{1})\mathbf{\beta}\overline{\chi}(t_{2})|^{2}\\ &=\left|\sum_{n,m}\overline{\chi}_{n}(t_{1})\beta_{nm}\overline{ \chi}_{m}(t_{2})\right|^{2},\end{split}\] (VI.19)
which clearly shows that the resulting photon statistics depends on \(\beta_{nm}\). Finally, in this limit the expression (VI.14) for \(\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\) gives
\[\begin{split}\overline{G}^{(2)}_{\rm incoh}&(t_{1},t_{2})\to\\ &\frac{1}{2}\sum_{n,m}\left|\sum_{u,v}P_{nu}P_{mv}(\overline{\chi}_{u}(t_{1})\overline{\chi}_{v}(t_{2})+\overline{\chi}_{u}(t_{2})\overline{\chi}_{v}(t_{1}))\right|^{2}.\end{split}\] (VI.20)
## VII Local states and correlation functions
The general equations we derived for \(\overline{G}^{(1)}(t)\) (VI.8) and \(\overline{G}^{(2)}(t_{1},t_{2})\) (VI.7), and their weakly squeezed approximations, are valid for any times \(t,t_{1},t_{2}\). In the discussion surrounding Fig. 14 for the photon density, it was noted that since each packet is localized we only need a few to properly represent the photon density at any particular time. This suggests that for some calculations, including some more general than the correlation functions considered above, we can rely on an approximate form of the ket itself.
Suppose we are interested in features of the state around a small neighbourhood centered at the time \(t_{\rm I}\). For a correlated joint temporal amplitude, \(\beta_{nm}\) will typically be nonzero only for \(|n-m|\) ranging up to a small integer, so for times in a small neighbourhood of \(t_{\rm I}\) only a few Whittaker-Shannon modes centered around \(n_{\rm I}=[t_{\rm I}/\tau]\) will be relevant; here we use \([\cdot]\) to denote the closest integer. We identify the range of \(m\) around \(n_{\rm I}\) for which \(\beta_{n_{\rm I}m}\) will be non-negligible by the odd integer \(d\), assuming that \(\beta_{n_{\rm I}m}\) is sufficiently small for \(|n_{\rm I}-m|>(d-1)/2\) that for those values of \(m\) it can be neglected.
We then split the matrix \(\mathbf{\beta}\) into two contributions
\[\mathbf{\beta}=\mathbf{R}^{\rm I}+\mathbf{K},\] (VII.1)
where \(\mathbf{R}^{\rm I}\) is a symmetric matrix with nonzero entries centered at \(\mathrm{R}^{\rm I}_{n_{\rm I}n_{\rm I}}\). It contains the elements of \(\mathbf{\beta}\) as the row and column indices range over \((d-1)/2\) in all directions from the center at \((n_{\rm I},n_{\rm I})\), and all its other elements are set to zero. The matrix \(\mathbf{R}^{\rm I}\) is shown schematically in Fig. 17, with the nonzero contributions existing inside the red "box" containing \(d^{2}\) elements; all the other elements of \(\mathbf{\beta}\) are contained in \(\mathbf{K}\). For times of interest we assume that \(d\) is chosen large enough so that significant contributions to the quantities of interest, such as correlation functions involving times near \(t_{\text{I}}\), only involve the elements of \(\mathbf{\beta}\) contained in \(\mathbf{R}^{\text{I}}\).
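The splitting of Eq. (VII.1) amounts to cutting a \(d\times d\) block out of \(\mathbf{\beta}\). A minimal numpy sketch; the names and index conventions are illustrative assumptions:

```python
import numpy as np

def split_beta(beta, n_I, d):
    """Split beta into R_I, a d x d block centered at (n_I, n_I),
    and the remainder K = beta - R_I (Eq. VII.1); d is odd and the
    block is assumed to lie fully inside the matrix."""
    h = (d - 1) // 2
    R_I = np.zeros_like(beta)
    block = slice(n_I - h, n_I + h + 1)
    R_I[block, block] = beta[block, block]
    return R_I, beta - R_I
```

Since only the \(d^{2}\) block elements are retained, downstream matrix functions of \(\mathbf{R}^{\rm I}\) act on a much smaller effective dimension than that of the full \(\mathbf{\beta}\).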
The squeezing operator (VI.1) is then approximated by
\[\begin{split}\tilde{S}&=e^{\frac{1}{2}\sum\limits_ {n,m}(R_{nm}^{\text{I}}+K_{nm})B_{n}^{\text{I}}B_{m}^{\text{I}}-\text{h.c.}}\\ &\approx e^{\frac{1}{2}\sum\limits_{n,m}R_{nm}^{\text{I}}B_{n}^{ \text{I}}B_{m}^{\text{I}}-\text{h.c.}},\end{split}\] (VII.2)
where in the second line we dropped all the terms from \(\mathbf{K}\). The dimension of \(\mathbf{R}^{\text{I}}\) can be quite large if the pulse is long, but since most of its entries are zero and the only nonzero entries have a width given by \(d\), we use a "prime" symbol below to indicate that we can restrict the sum to be only over the nonzero elements of \(\mathbf{R}^{\text{I}}\). In practice, this can drastically increase the efficiency of numerical computations involving only a limited region of time. The state is then taken to be \(\ket{\Psi_{\text{I}}}=\tilde{S}_{\text{I}}\ket{\text{vac}}_{\text{I}}\), where \(\ket{\text{vac}}_{\text{I}}\) is the vacuum state corresponding to the modes associated with the nonzero elements in \(\mathbf{R}^{\text{I}}\), and we set
\[\tilde{S}_{\text{I}}=e^{\frac{1}{2}\sum\limits_{n,m}^{\prime}R_{nm}^{\text{I}} B_{n}^{\text{I}}B_{m}^{\text{I}}-\text{h.c.}}.\] (VII.3)
From the approximate state \(\ket{\Psi_{\text{I}}}\) we can apply the same steps as above to calculate the correlation function, but instead of using the full matrix \(\mathbf{\beta}\) for the transformation and polar decomposition we use the reduced matrix \(\mathbf{R}^{\text{I}}\).
Figure 16: For the double-Gaussian and using Whittaker–Shannon decompositions, from left to right we plot \(\overline{G}^{(2)}(t_{1},t_{2})/\Phi^{2}\) (top) and the coherent and incoherent contribution to \(\overline{G}^{(2)}(t/2,-t/2)/\Phi^{2}\) (bottom) with the axes normalized by \(\mathcal{T}_{p}^{\text{DG}}\) for \(\beta=0.1,5\), and \(10\).
Figure 17: Schematic of the matrix \(\mathbf{R}^{\text{I}}\), which has nonzero entries centered at \(\beta_{n_{\text{I}}n_{\text{I}}}\) with a width \(d\) and zeros everywhere else. The matrix \(\mathbf{K}=\mathbf{\beta}-\mathbf{R}^{\text{I}}\) consists of every nonzero element that we set to zero in \(\mathbf{R}^{\text{I}}\).
Then for times of interest we have (see Eq. (V.13))
\[\begin{split}\overline{a}(t_{\rm I})&=\sum_{n} \overline{\chi}_{n}(t_{\rm I})B_{n}\\ &\approx\sum_{n}^{\prime}\overline{\chi}_{n}(t_{\rm I})B_{n },\end{split}\] (VII.4)
where again we restrict the sum to be over the modes associated with the nonzero elements of \({\bf R}^{\rm I}\), as in the approximate state \(|\Psi_{\rm I}\rangle\). The equations for the correlation functions (VI.6, VI.7) can be applied with \(\mathbf{\beta}\) replaced by \({\bf R}^{\rm I}\) for times near \(t_{\rm I}\), with the appropriate restriction of the sums.
In Fig. 18 we plot a zoomed-in version of \(\overline{G}^{(1)}(t)\) calculated from the full state \(|\Psi\rangle\) and the approximate state \(|\Psi_{\rm I}\rangle\) around the time \(t_{\rm I}=0\) for the double-Gaussian, the three values of \(\beta\) (\(\hat{\beta}\)), and \(d=7,9\) and \(11\). We see that the choice of \(d\) generally determines over how wide a neighbourhood around \(t_{\rm I}\) the contributions from the full state are well approximated by the contributions from \(|\Psi_{\rm I}\rangle\). For \(\beta=0.1\) (\(\hat{\beta}=0.014\)), which is well within the weakly squeezed limit, only three Whittaker-Shannon modes on each side of the center mode are relevant to accurately determine the photon density at \(t=0\); this is much less than the total number of modes along \(r_{nn}\), which is set by \(\mathcal{K}\), and for this example is \(\mathcal{K}_{\rm DG}=100\). Thus, the state \(|\Psi_{\rm I}\rangle\) provides a "local" description of the photon density around \(t=t_{\rm I}\). With the state \(|\Psi_{\rm I}\rangle\) one can also calculate \(G^{(2)}(t_{1},t_{2})\) as long as both times are near the time \(t_{\rm I}\). We find similar agreement and trends with \(\beta\) and \(d\) as for the photon density.
As \(|\beta|\) (\(|\hat{\beta}|\)) increases, more neighbouring Whittaker-Shannon modes are required to accurately reproduce the photon statistics at a given time, and we need to increase the size of the nonzero box of elements in \({\bf R}^{\rm I}\). This is because as \(|\beta|\) increases there is a larger amplitude for photons to be described by different modes spread further apart from each other; see the discussion in the paragraph after Eq. (VI.11). In the same way that \((\sinh{\bf P})_{nm}\) spreads in the \(|n-m|\) direction as \(|\beta|\) increases (see Fig. 15), we need to choose a larger box to capture all the possible contributions near a given \(t_{\rm I}\). We return to this point below.
Suppose now we are interested in the properties of the state associated with two or more times \(t_{\rm I},t_{\rm II},t_{\rm III},...\), "sufficiently far apart" from one another. Then we can split \(\mathbf{\beta}\) into a _set_ of contributions given by
\[\mathbf{\beta}={\bf R}^{\rm I}+{\bf R}^{\rm II}+{\bf R}^{\rm III}+...+{\bf K},\] (VII.5)
with \({\bf R}^{\rm I}\) associated with \(t_{\rm I}\), \({\bf R}^{\rm II}\) associated with \(t_{\rm II}\), etc., and where by "sufficiently far apart" we mean that the corresponding boxes of sizes \(d_{\rm I},d_{\rm II}\), etc. associated with
Figure 19: Schematic of the matrix \(\mathbf{\beta}\) partitioned into a set of non-overlapping matrices \({\bf R}^{J}\), each with nonzero values centered at \(\beta_{n_{J}n_{J}}\) of size \(d_{J}\), denoted by the red squares. The matrix \({\bf K}=\mathbf{\beta}-{\bf R}^{\rm I}-{\bf R}^{\rm II}-...\) consists of every other nonzero element contained in \(\mathbf{\beta}\).
the regions of \(\mathbf{R}^{\mathrm{I}},\mathbf{R}^{\mathrm{II}}\), etc. that contain nonzero elements do not overlap; this is shown schematically in Fig. 19, where we indicate the regions of \(\mathbf{R}^{\mathrm{I}},\mathbf{R}^{\mathrm{II}}\), etc. that contain nonzero elements by I,II, etc. Again, the matrix \(\mathbf{K}\) contains the remaining contributions to \(\mathbf{\beta}\) not in any of the nonzero regions of the \(\mathbf{R}^{J}\) matrices, \(J=\mathrm{I},\mathrm{II}\), etc. Then since each \(\mathbf{R}^{J}\) has nonzero elements only in the region where the others do not, each matrix in \(\{\mathbf{R}^{J}\}\) commutes with the rest and each contribution to the squeezing operator can be split apart. So for the times of interest the state is given by
\[\ket{\Psi}\approx\bigotimes_{J}\ket{\Psi_{J}}=\bigotimes_{J}\tilde{S}_{J}\ket{ \mathrm{vac}}_{J},\] (VII.6)
where \(\ket{\Psi_{J}}\), \(\tilde{S}_{J}\), and \(\ket{\mathrm{vac}}_{J}\) are the obvious generalization of \(\ket{\Psi_{\mathrm{I}}}\), \(\tilde{S}_{\mathrm{I}}\) and \(\ket{\mathrm{vac}}_{\mathrm{I}}\). Using equation (VI.11) for the average photon number we similarly calculate that each time region \(t_{J}\) has
\[N_{J}=\mathrm{Tr}(\sinh^{2}\mathbf{P}^{J})\] (VII.7)
photons, where \(\mathbf{P}^{J}\) is the matrix calculated from doing a polar decomposition of the corresponding \(\mathbf{R}^{J}\).
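Since each \(\mathbf{R}^{J}\) block is small, the polar decomposition and Eq. (VII.7) can be evaluated cheaply via the SVD: if \(\mathbf{R}^{J}=w\,\mathrm{diag}(s)\,v^{\dagger}\), then \(\mathbf{P}^{J}=v\,\mathrm{diag}(s)\,v^{\dagger}\) and \(\sinh\mathbf{P}^{J}=v\,\mathrm{diag}(\sinh s)\,v^{\dagger}\). A numpy sketch under these assumed conventions, not code from the paper:

```python
import numpy as np

def photons_in_region(R):
    """N_J = Tr(sinh^2 P^J), Eq. (VII.7), with P^J taken from the
    right polar decomposition R^J = U^J P^J, computed via the SVD."""
    _, s, vh = np.linalg.svd(R)
    V = vh.conj().T
    sinhP = (V * np.sinh(s)) @ vh      # V diag(sinh s) V^dagger
    return float(np.trace(sinhP @ sinhP).real)
```

Working mode by mode in the singular values avoids forming any matrix exponentials of the full-size \(\mathbf{\beta}\).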
To calculate the correlation functions in the neighbourhood of a time \(t_{J}\) we again use Eqs. (VI.6) and (VI.7), replacing \(\beta_{nm}\) with the appropriate \(\mathbf{R}^{J}\) as discussed above. If instead we want to calculate \(\overline{G}^{(2)}(t_{J},t_{J^{\prime}})\) for \(J\neq J^{\prime}\), then the corresponding operators \(a(t_{J})\) and \(a^{\dagger}(t_{J^{\prime}})\) in Eq. (II.4) commute and the resulting second-order correlation function is
\[\overline{G}^{(2)}(t_{J},t_{J^{\prime}})=\overline{G}^{(1)}(t_{J})\overline{ G}^{(1)}(t_{J^{\prime}}),\] (VII.8)
where the photon densities evaluated at the times \(t_{J},t_{J^{\prime}}\) are evaluated using the respective contributions from \(\ket{\Psi_{J}}\) and \(\ket{\Psi_{J^{\prime}}}\).
### Disentangling the squeezing operator
While the approximation of the state \(\ket{\Psi}\) by the set of states \(\{\ket{\Psi_{J}}\}\) takes apart the squeezed state and provides a local calculation of the correlation functions, it does not really give us intuition about the state itself. To gain insight into that, we make use of the general "disentangling formula" [33] applied to each squeezing operator \(\tilde{S}_{J}\),
\[\tilde{S}_{J}=(\mathbf{W}^{J})^{\frac{1}{2}}e^{\frac{1}{2}\sum\limits_{n,m}^{\prime}T^{J}_{nm}B^{\dagger}_{n}B^{\dagger}_{m}}e^{\sum\limits_{n,m}^{\prime}L^{J}_{nm}B^{\dagger}_{n}B_{m}}e^{-\frac{1}{2}\sum\limits_{n,m}^{\prime}(T^{J}_{nm})^{*}B_{n}B_{m}},\] (VII.9)
where
\[(\mathbf{W}^{J})^{\frac{1}{2}}=\sqrt{\det(\mathrm{sech}( \mathbf{Q}^{J}))}\] (VII.10a) \[\mathbf{T}^{J}=(\mathbf{T}^{J})^{T}=\tanh(\mathbf{Q}^{J})\mathbf{ U}^{J}=\mathbf{U}^{J}\tanh(\mathbf{P}^{J})\] (VII.10b) \[\mathbf{L}^{J}=\ln(\mathrm{sech}(\mathbf{Q}^{J})),\] (VII.10c)
and with \(\mathbf{Q}^{J}\), \(\mathbf{U}^{J}\) and \(\mathbf{P}^{J}\) the same as before (VI.4), but calculated from the reduced matrix \(\mathbf{R}^{J}\). Then acting the disentangled squeezing operator on the vacuum state we have for each \(\ket{\Psi_{J}}\)
\[\ket{\Psi_{J}} =(\mathbf{W}^{J})^{\frac{1}{2}}e^{\frac{1}{2}\sum\limits_{n,m}^{\prime}T^{J}_{nm}B^{\dagger}_{n}B^{\dagger}_{m}}\ket{\mathrm{vac}}_{J}\] (VII.11) \[\equiv\overline{S}_{J}\ket{\mathrm{vac}}_{J},\]
where \(\overline{S}_{J}\) is the disentangled squeezing operator after acting on the vacuum state. We point out that one could also apply the disentangling formula to the whole state valid at all times, but this is not very illuminating because one can already calculate the full correlation functions; further, had we first applied the disentangling formula and then reduced the state by getting rid of the terms that are negligible, the resulting state would not be normalized, whereas \(\ket{\Psi_{J}}\), as given in Eq. (VII.11) always is.
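In the same SVD conventions used above, the quantities entering the disentangling formula collapse nicely: with \(\mathbf{R}^{J}=w\,\mathrm{diag}(s)\,v^{\dagger}\) one has \(\mathbf{T}^{J}=\mathbf{U}^{J}\tanh(\mathbf{P}^{J})=w\,\mathrm{diag}(\tanh s)\,v^{\dagger}\), and \((\mathbf{W}^{J})^{1/2}=\sqrt{\det(\mathrm{sech}\,\mathbf{Q}^{J})}=\prod_{k}(\mathrm{sech}\,s_{k})^{1/2}\). A numpy sketch under these assumptions, purely illustrative:

```python
import numpy as np

def disentangle(R):
    """T^J and the prefactor (W^J)^(1/2) of Eqs. (VII.10),
    computed from the SVD of the reduced matrix R^J."""
    w, s, vh = np.linalg.svd(R)
    T = (w * np.tanh(s)) @ vh              # U tanh(P) = w diag(tanh s) vh
    W_half = np.prod(1.0 / np.cosh(s))**0.5
    return T, W_half
```

The \(\tanh\) saturates for large singular values, consistent with \(T^{J}_{nm}\) remaining bounded even as the squeezing grows.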
Unlike the squeezing operator in its "entangled form" (VI.1), all the operators in the exponent of Eq. (VII.11) are creation operators which always commute so we can equivalently write the state for each \(J\) as
\[\ket{\Psi_{J}}=(\mathbf{W}^{J})^{\frac{1}{2}}\bigotimes_{n,m}e^{\frac{1}{2} T^{J}_{nm}B^{\dagger}_{n}B^{\dagger}_{m}}\ket{\mathrm{vac}}_{J}.\] (VII.12)
Unfortunately this form of the state is not as intuitive as the _single_ product state in the pseudo-Schmidt decomposition because for a given \(n\) we must include contributions from every other \(m\) in the range of \(T^{J}_{nm}\).
### The ket in the weakly squeezed limit
Instead, in situations where \(|\beta|\) can be quite large but \(|\hat{\beta}|\) is sufficiently small, as in a long pulse, we can take advantage of the fact that it is \(|\hat{\beta}|\) that sets the magnitude of the matrix \(T^{J}_{nm}\). Given that the sum in Eq. (VII.11) is only over the modes of interest, and not the whole joint amplitude, in the weakly squeezed limit when \(|\hat{\beta}|\ll 1\) we can Taylor expand the exponential so that
\[\ket{\Psi_{J}}\approx(\mathbf{W}^{J})^{\frac{1}{2}}\left(\ket{\mathrm{vac}}_{J}+\sqrt{\frac{N_{J}}{2}}\ket{\mathrm{II}}_{J}+...\right),\] (VII.13)
where we have used \((\mathbf{T}^{J})^{\dagger}\mathbf{T}^{J}\rightarrow(\mathbf{R}^{J})^{\dagger}\mathbf{R}^{J}\), so that Eq. (VII.7) reduces to
\[N_{J}\rightarrow\mathrm{Tr}((\mathbf{R}^{J})^{\dagger}\mathbf{R}^{J}),\] (VII.14)
and we have introduced the normalized two-photon state for each \(J\) by
\[\ket{\mathrm{II}}_{J}=\frac{1}{\sqrt{2}}\sum_{nm}\frac{T^{J}_{nm}}{\sqrt{N_{J}}} B^{\dagger}_{n}B^{\dagger}_{m}\ket{\mathrm{vac}}_{J}.\] (VII.15)
The state \(\ket{\Psi_{J}}\) has \(N_{J}\ll 1\) photons, and the prefactor \(\mathbf{W}^{J}\) in the weakly squeezed limit is
\[\mathbf{W}^{J}\approx 1-\frac{N_{J}}{2},\] (VII.16)
so the state remains normalized to order \(N_{J}^{2}\), as expected. The two-photon state \(\left|\mathrm{II}\right\rangle_{J}\) is a superposition of all the ways in which pairs of photons can be associated with the same Whittaker-Shannon supermode, or different supermodes, within a neighbourhood of the time \(t_{J}\); one can easily extend the state in Eq. (VII.13) to higher order in which two pairs, three pairs, etc. are considered. For a set of times, \(\{t_{J}\}\), in the weakly squeezed limit the full state in equation (V.15) can be expanded as
\[\left|\Psi\right\rangle\approx\bigotimes_{J}(\mathbf{W}^{J})^{\frac{1}{2}}\left(\left|\mathrm{vac}\right\rangle_{J}+\sqrt{\frac{N_{J}}{2}}\left|\mathrm{II}\right\rangle_{J}+...\right),\] (VII.17)
providing a localized description of squeezed light for correlated but otherwise arbitrary joint amplitudes. In the long pulse limit, despite the fact that \(\left|\beta\right|\rightarrow\infty\), \(\left|\hat{\beta}\right|\) remains finite, and we can describe the light in the weakly squeezed limit as being composed of approximately two photons within a neighbourhood around each time \(t_{J}\).
Figure 20: For the double-Gaussian joint amplitude, we plot \(\overline{G}^{(1)}(t)/\Phi\) calculated using the Schmidt decomposition (top) and Whittaker-Shannon decomposition (bottom) as well as a few contributions from each calculation, for \(\beta=150\) with the horizontal axis normalized by \(\mathcal{T}_{p}^{\mathrm{DG}}\).
## VIII The Strongly Squeezed Limit
In this section we consider the strongly squeezed limit, where \(|\beta|/\sqrt{\mathcal{K}}\gg 1\), or equivalently \(|\hat{\beta}|\gg 1\). The results we derived in sections V and VI are valid for any approximately bandlimited joint amplitude and for any squeezing parameter \(\beta\). However, following the discussion around Fig. 3, for the double-Gaussian with \(\mathcal{K}_{\rm DG}=100\) we found that as \(|\beta|\) increased fewer Schmidt modes were required to calculate the correlation functions, although many were required to calculate the joint amplitude.
To explore this further, consider the double-Gaussian joint amplitude (Fig. 2, with \(\mathcal{K}_{\rm DG}=100\)) but for \(\beta=150\), corresponding to \(|\beta|/\sqrt{\mathcal{K}_{\rm DG}}\approx 15\) or \(|\hat{\beta}|\approx 21\), well in the strongly squeezed regime. In Fig. 20 we plot \(\overline{G}^{(1)}(t)\) calculated using the Schmidt and Whittaker-Shannon decompositions. Clearly the first Schmidt mode is sufficient to produce an accurate \(\overline{G}^{(1)}(t)\), despite the fact that all Schmidt modes are required to correctly calculate the joint amplitude. This is because the correlation functions depend on \(s_{n}\) and \(c_{n}\) (recall Eqs. (III.20),(III.21)), but when \(|\beta|\) is large these scale exponentially; since the Schmidt modes drop off as \(n\) increases, \(s_{0}\gg s_{1}\) and the sums in Eq. (III.20) for the correlation functions are well approximated by the \(n=0\) term. For the Whittaker-Shannon decomposition we still find very good agreement with the Schmidt calculation. However, as in Fig. 14, we find each packet is significantly broadened. In the strongly squeezed limit, the amplitude that many photon pairs will contribute is large, and so the contribution of photons corresponding to two Whittaker-Shannon modes for which \(|n-m|\gg 1\) is significant; since the \(\overline{\rho}_{n}(t)\) include contributions from all other \(m\) for a given \(n\), they are necessarily broader. More mathematically, in the strongly squeezed limit many matrix multiplications are involved in calculating \(\sinh\mathbf{P}\), and so (following the discussion in section VI.1 and that surrounding Figs. 15 and 18) the width of each packet is significantly broadened.
Since packets can then significantly overlap with a number of their neighbours, the local description of the photon statistics and resulting state breaks down. This is not surprising: given that the second-order correlation function, calculated using the Schmidt decomposition and plotted in Fig. 21, is completely uncorrelated, a local description to identify the photon correlations is not necessary. Note that here we do not include the plot of \(\overline{G}^{(2)}(t_{1},t_{2})\) calculated using the Whittaker-Shannon decomposition, because it is essentially identical to Fig. 21.
Finally, we point out that in Fig. 21 the incoherent contribution is approximately twice the coherent contribution. If we restrict the sums in Eqs. (III.20) and (III.21) to the first term we have
\[\overline{G}^{(1)}(t)\to s_{0}^{2}|\overline{f}_{0}(t)|^{2},\] (VIII.1a) \[N_{\rm pulse}\to s_{0}^{2},\] (VIII.1b) \[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\to N_{\rm pulse}^{2}| \overline{f}_{0}(t_{1})|^{2}|\overline{f}_{0}(t_{2})|^{2},\] (VIII.1c) \[\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\to 2N_{\rm pulse}^{2}| \overline{f}_{0}(t_{1})|^{2}|\overline{f}_{0}(t_{2})|^{2},\] (VIII.1d)
where we have used the fact that for large \(|\beta|\), \(c_{n}\approx s_{n}\); it is clear from these expressions that \(\overline{G}^{(2)}_{\rm incoh}(t_{1},t_{2})\approx 2\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\). Then putting them together we have
\[\overline{G}^{(2)}(t_{1},t_{2})=3N_{\rm pulse}^{2}|\overline{f}_{0}(t_{1})|^{ 2}|\overline{f}_{0}(t_{2})|^{2}.\] (VIII.2)
Since the calculation in Appendix F was done for a pseudo-Schmidt mode, it can be applied to this instance of a single Schmidt mode, so we identify \(3N_{\rm pulse}^{2}\) as the expectation value of the number of ways of "picking" two photons in the large \(|\beta|\) limit from the first Schmidt mode.
## IX A Final Example
In this section we consider a realistic joint amplitude generated from a dual-pump spontaneous four-wave mixing process, in a ring resonator system, when time ordering effects and self- and cross-phase modulation are included [4]. In Fig. 22 we plot the joint intensity and the Schmidt amplitudes. The joint intensity has widths \(\mathcal{T}_{p}^{\rm R}=200\) ns and \(\mathcal{B}_{c}^{\rm R}\approx 1.03\) GHz (\(\mathcal{T}_{c}^{\rm R}\approx 0.97\) ns) corresponding to an effective Schmidt number \(\mathcal{K}_{\rm R}\approx 206\), and the joint amplitude has a Schmidt number \(K_{\rm R}\approx 108\), where we use "R" to identify this "ring" calculation. The squeezing parameter for the generation is \(\beta=3.72\), corresponding to \(N_{\rm pulse}\approx 11\) photons; other system parameters, such as the center wavelengths and pump duration, are given in the caption of Fig. 22. Comparing Fig. 22 with Fig. 23 for the joint temporal intensity calculated using the Whittaker-Shannon decomposition, with \(\Omega/2\pi=\mathcal{B}_{c}^{\rm R}\) (\(\tau=\mathcal{T}_{c}^{\rm R}\)), we see excellent agreement.
In Fig. 24 we plot \(\overline{G}^{(1)}(t)\) for the Schmidt (top) and Whittaker-Shannon (bottom) decompositions, as well as a few contributions to each. We see a dramatic difference: At any particular time a huge number of Schmidt modes are required to capture the overall photon density, while in our "packet" decomposition only a small range of packets are required to describe the behaviour at any time.
In Fig. 25 we plot \(\overline{G}^{(2)}(t_{1},t_{2})\) (top) and the coherent and incoherent contributions to \(\overline{G}^{(2)}(t/2,-t/2)\) (bottom) calculated using the Whittaker-Shannon decomposition; here we only plot the Whittaker-Shannon results because the Schmidt results look the same. The contributions to \(\overline{G}^{(2)}(t_{1},t_{2})\), a strong peak near \(t_{1}=t_{2}\) and a smaller broad background, match those of the double-Gaussian example for the three values of \(\beta\). This is unsurprising because, although \(\beta=3.72\) in this example, \(\hat{\beta}=0.28\), and so the state is weakly squeezed.
Since the state is weakly squeezed, the formalism provided in section VII can be directly applied, providing a localized description of the pulse of light into a set of states with different time labels, each containing approximately two photons.
## X Conclusion
In this article we have developed a formalism to describe squeezed light with a large spectral-temporal correlation that, as opposed to the usual strategy of employing the Schmidt decomposition, we feel makes the physics more apparent. We began by characterizing general joint amplitudes by their timewidth \(\mathcal{T}_{p}\) and bandwidth \(\mathcal{B}_{c}\) (or equivalently the coherence time \(\mathcal{T}_{c}=1/\mathcal{B}_{c}\)). Using the double-Gaussian joint amplitude as an example, we calculated the correlation functions using the Schmidt decomposition and found that for weak squeezing the form of \(\overline{G}^{(2)}(t_{1},t_{2})\) matches that of the joint temporal amplitude and reaches its maximum value when \(|t_{1}-t_{2}|<\mathcal{T}_{\mathrm{c}}\); however, the Schmidt modes themselves extend on a much broader time scale given by \(\mathcal{T}_{p}\). When calculating the correlation functions using the Schmidt decomposition, we found that a large amount of interference is present; we cannot associate a single Schmidt mode, or even a few, with a particular time.

Figure 23: Joint temporal intensity calculated using the Whittaker-Shannon decomposition.

Figure 24: Plot of \(\overline{G}^{(1)}(t)\) calculated using the Schmidt decomposition (top) and Whittaker-Shannon decomposition (bottom).

Figure 22: From left to right we plot the joint temporal intensity; joint spectral intensity; and Schmidt amplitudes generated from a dual-pump spontaneous four-wave mixing process. The two pump functions are centered at the wavelengths \(\overline{\lambda}_{\text{P}1}=1.556\)\(\mu\)m and \(\overline{\lambda}_{\text{P}2}=1.547\)\(\mu\)m and each have temporal FWHM of 100 ns and an energy of \(10^{3}\) pJ. The generated photons are centered at \(\overline{\lambda}_{\text{S}}=1.552\)\(\mu\)m (\(\overline{\omega}_{\text{S}}/2\pi=193.164\) THz) and have a bandwidth on the order of a GHz. The ring resonator has quality factors \(Q_{\text{P}1}=1529378\), \(Q_{\text{P}2}=3844257\), and \(Q_{\text{S}}=2704405\) for the three modes and a nonlinear coupling \(\Lambda=5\) THz [4].
Next we considered another example, the sinc-hat joint amplitude, which demonstrated that this behaviour of the double-Gaussian is not unique. And although it is somewhat artificial, the sinc-hat joint amplitude is interesting in that its Schmidt amplitudes are nearly degenerate, allowing us to construct an approximate pseudo-Schmidt decomposition where the pseudo-Schmidt modes are displaced, localized "sinc" functions. Using this decomposition we could immediately identify contributions contained in \(\overline{G}^{(1)}(t)\) or \(\overline{G}^{(2)}(t_{1},t_{2})\) at a particular time as arising from a single pseudo-Schmidt mode, allowing us to "take apart" the photon statistics, elucidating the physics. We also demonstrated that the weakly squeezed limit corresponds to \(|\beta|/\sqrt{\mathcal{N}}\ll 1\), where \(\mathcal{N}\) is the effective Schmidt number that identifies the number of pseudo-Schmidt modes required for the decomposition. This is useful because in the long pulse limit, where \(|\beta|\gg 1\) and \(N_{\mathrm{pulse}}\gg 1\), the quantity \(|\beta|/\sqrt{\mathcal{N}}\) remains finite; despite there being a large number of photons in the pulse, the number of photons in each pseudo-Schmidt mode can be relatively small.
To consider more general joint amplitudes, where there is a range of Schmidt amplitudes, we generalized the pseudo-Schmidt decomposition to any approximately bandlimited joint amplitude by using the Whittaker-Shannon interpolation formula. While the exponent in the squeezing operator then involves a double sum instead of the usual single sum, we can nevertheless define a packet expansion where each packet typically has a duration short compared to that of the pulse, with the packets thus analogous to the pseudo-Schmidt modes. In general, if the squeezing is weak to moderate, the correlation functions at a particular time are associated with only a few packets, allowing us to "take apart" the squeezed light and the resulting photon statistics. Finally we showed that if one is only interested in some finite time regions that form part of the pulse duration, an effective ket can be written as a product of kets associated with those time regions. Then in the weakly squeezed limit, which we can take to be set by the limit \(|\hat{\beta}|\ll 1\), there will be on average only a few photons within each time region, although in the long pulse limit the total number of photons will be very large.
In extensions of this work, we will consider two- and higher-mode squeezing, which is fairly straightforward, and we will apply this formalism to quantum optics-based experiments such as coincidence-accidental-detection ratios and SU(1,1) interferometry. And instead of describing squeezed light in the spectral-temporal domain, one could expand this formalism to describe squeezed light with a large correlation between the photon wavevector and the conjugate position variables in two, or three dimensions. Another interesting focus is the formulation of relationships between the Schmidt number, the effective Schmidt number discussed here, and the Shannon number (or time-bandwidth product) and its generalizations. Here there is the opportunity to apply the vast mathematical formalism that has already been developed to describe the information content of classical signals and images [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31].
With the continuous advancement of custom engineered nonlinear optical systems, the generation of nonclassical "tri-photon" states is slowly becoming a reality [34]. Such states and their "troint amplitudes" are the generalization of squeezed states and the joint amplitudes discussed here. To characterize these states, generalizations of the Schmidt decomposition (singular value decomposition) need to be employed. One such example is the canonical polyadic decomposition (or CP decomposition) [35]; however, an orthogonal decomposition is not guaranteed to exist. Other generalizations of the Schmidt decomposition exist, such as the Tucker decomposition [35], which gains orthogonality but loses the single-sum behaviour of the Schmidt decomposition. An alternative description can be provided using the Whittaker-Shannon interpolation formula. In the same way we generalized it to two-dimensional functions in this paper, it can be directly applied to any number of dimensions in a very straightforward way. Thus the formalism applied here can easily be generalized to situations where other decompositions are not possible.

Figure 25: Plot of \(G^{(2)}(t_{1},t_{2})\) (top) and the coherent and incoherent contributions to \(G^{(2)}(t/2,-t/2)\) (bottom) calculated using the Whittaker-Shannon decomposition.
###### Acknowledgements.
We thank Xanadu Quantum Technologies for allowing access to their proprietary code repository used in section IX, and Luke Helt for answering related questions. We also thank Marco Liscidini and Nicolas Quesada for valuable discussions. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). C. D. acknowledges an Ontario Graduate Scholarship.
## Appendix A Field operator
For a quasi-1D structure, if \(z\) is the direction in which light is propagating, the electric field operator in the Heisenberg picture takes the form [4]
\[\mathbf{E}(x,y,z;t)=\sum_{n}\int_{-\infty}^{\infty}\frac{dk}{\sqrt{2\pi}}\;\mathbf{e}_{n }(k;x,y)b_{n}(k)e^{ikz}e^{-i\omega t}+h.c., \tag{101}\]
where the dependence of \(\omega\) on \(k\) is identified by the dispersion relation, \(\mathbf{e}_{n}(k;x,y)\) is a properly normalized field profile for a transverse mode \(n\) propagating with \(k\) at frequency \(\omega\), and \(b_{n}(k)\) is the associated lowering operator, \(\left[b_{n}(k),b_{m}^{\dagger}(k^{\prime})\right]=\delta_{nm}\delta(k-k^{ \prime}).\) We assume that only one transverse mode is of interest and drop the index \(n\), and that only \(k>0\) are of interest. Taking \(\omega=vk>0\), where \(v\) is the group velocity, we identify modes by their frequency \(\omega\), putting \(c(\omega)=v^{-1/2}b(k)\) so that \(\left[c(\omega),c^{\dagger}(\omega^{\prime})\right]=\delta(\omega-\omega^{ \prime})\) holds, and we can then write
\[\mathbf{E}(x,y,z;t)\rightarrow\int_{0}^{\infty}\frac{d\omega}{\sqrt{2\pi}}\frac{ \mathbf{e}(\frac{\omega}{v};x,y)}{v^{1/2}}c(\omega)e^{i\omega z/v}e^{-i\omega t}+h.c. \tag{102}\]
Assuming that over the frequency range of interest \(\mathbf{e}(\omega/v;x,y)\) varies little from its value \(\mathbf{e}(\omega_{o}/v;x,y)\) at a center frequency \(\omega_{o}\), we can write
\[\begin{split}&\mathbf{E}(x,y,z;t)\rightarrow\frac{\mathbf{e}(\frac{ \omega_{o}}{v};x,y)}{v^{1/2}}\int_{0}^{\infty}\frac{d\omega}{\sqrt{2\pi}}c( \omega)e^{i\omega z/v}e^{-i\omega t}+h.c.\\ &=\frac{\mathbf{e}(\frac{\omega_{o}}{v};x,y)}{v^{1/2}}e^{i\omega_{o} z/v}e^{-i\omega_{o}t}\int_{0}^{\infty}\frac{d\omega}{\sqrt{2\pi}}c(\omega)e^{i( \omega-\omega_{o})z/v}e^{-i(\omega-\omega_{o})t}+h.c.\end{split} \tag{103}\]
Then putting \(a(\omega-\omega_{o})\equiv c(\omega)\), the commutation relations (Eq. (12)) hold and for the range of frequencies much less than \(\omega_{o}\) we have
\[\mathbf{E}(x,y,z;t)=\frac{\mathbf{e}(\frac{\omega_{o}}{v};x,y)}{v^{1/2}}e^{i\omega_{o} z/v}e^{-i\omega_{o}t}\int_{-\infty}^{\infty}\frac{d\omega}{\sqrt{2\pi}}a( \omega)e^{i\omega z/v}e^{-i\omega t}+h.c. \tag{104}\]
This gives
\[\mathbf{E}(x,y,0;t)=\frac{\mathbf{e}(\frac{\omega_{o}}{v};x,y)}{v^{1/2}}e^{-i\omega_{o }t}\overline{a}(t)+h.c., \tag{105}\]
where
\[\overline{a}(t)=\int_{-\infty}^{\infty}\frac{d\omega}{\sqrt{2\pi}}a(\omega)e ^{-i\omega t}. \tag{106}\]
Now since the Schrödinger operator for \(\mathbf{E}(x,y,z)\) is just the Heisenberg operator \(\mathbf{E}(x,y,z;0)\), for that Schrödinger operator we have
\[\mathbf{E}(x,y,z)=\frac{\mathbf{e}(\frac{\omega_{o}}{v};x,y)}{v^{1/2}}e^{i\omega_{o}z /v}\overline{a}(-\frac{z}{v})+h.c. \tag{107}\]
That is, the operator \(\overline{a}(t)\) is associated with the electric field at time \(t\) and \(z=0\), and as well with the electric field at zero time and position \(z=-vt\), as would be expected because of the propagation with group velocity \(v\).
## Appendix B Schematic of sinc-hat joint intensity
Consider the schematic sinc-hat joint intensity shown in Fig. 26, where we drop the sinc "tails" along the anti-diagonal (diagonal) direction for the joint temporal (spectral) intensity. Of course, because of the sinc tails the joint intensities extend to infinity in either direction, but for \(T_{p}/T_{c}\) sufficiently large these contributions are small enough that the behaviour in the anti-diagonal (diagonal) direction is effectively captured by the width \(T_{c}\) (\(\Omega_{p}\)) set at \(t_{2}=0\) (\(\Omega_{2}=0\)).
In the schematic one can see that along the lines \(t_{1}=t_{2}\) (\(\omega_{1}=-\omega_{2}\)) the joint temporal (spectral) intensity ranges over \(-T_{p}/2\to T_{p}/2\) (\(-\Omega_{c}/2\rightarrow\Omega_{c}/2\)); however, this is not the _full_ horizontal extent of the joint intensities. For the joint temporal intensity there are two extra contributions in the lower left-hand and upper right-hand corners. Using simple geometry one finds that each corner adds a duration of size \(T_{c}/4\) to the horizontal width, for a combined width of \(\mathcal{T}_{p}^{\text{SH}}=T_{p}+T_{c}/2\). A similar argument follows for the joint spectral intensity, where each corner adds \(\Omega_{p}/4\), leading to a combined width of \(2\pi\mathcal{B}_{c}^{\text{SH}}=\Omega_{c}+\Omega_{p}/2\).
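This corner bookkeeping follows from elementary geometry if, as a sketch, we idealize the support of the joint temporal intensity as a rectangle rotated by \(45^{\circ}\) (ignoring the sinc tails), with long side \(L\) along \(t_{1}=t_{2}\) and short side \(w\) along \(t_{1}=-t_{2}\). The cut along \(t_{1}=t_{2}\) and the horizontal cut at \(t_{2}=0\) fix the two sides, and the horizontal extent is the projection of the support onto the \(t_{1}\) axis:

\[\frac{L}{\sqrt{2}}=T_{p},\qquad\sqrt{2}\,w=T_{c},\qquad\mathcal{T}_{p}^{\text{SH}}=\frac{L+w}{\sqrt{2}}=T_{p}+\frac{T_{c}}{2},\]

so each corner triangle contributes \(w/(2\sqrt{2})=T_{c}/4\); the same construction in the spectral domain gives \(\Omega_{c}+\Omega_{p}/2\).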
## Appendix C Joint temporal amplitude scaling
Consider the joint temporal amplitude schematically shown in Fig. 1, characterized by \(\mathcal{T}_{p}\) and \(\mathcal{T}_{c}\). To show how the maximum value of the joint temporal amplitude scales with \(\mathcal{T}_{p}\) and \(\mathcal{T}_{c}\) we consider a change of variables
\[\tilde{t}_{1}=\frac{1}{\sqrt{2}}\frac{t_{1}+t_{2}}{\mathcal{T}_{p}},\ \ \ \ \ \tilde{t}_{2}=\frac{1}{\sqrt{2}}\frac{t_{1}-t_{2}}{\mathcal{T}_{c}},\ \ \ \ \ t_{1}=\frac{1}{\sqrt{2}}(\tilde{t}_{1}\mathcal{T}_{p}+\tilde{t}_{2} \mathcal{T}_{c}),\ \ \ \ \ t_{2}=\frac{1}{\sqrt{2}}(\tilde{t}_{1}\mathcal{T}_{p}-\tilde{t}_{2} \mathcal{T}_{c}),\] (C.1)
which are aligned with the long and short axis of the joint temporal amplitude, see Fig. 27. The new variables \(\tilde{t}_{1},\tilde{t}_{2}\) are normalized and dimensionless; as they vary over the range \(\tilde{t}_{1},\tilde{t}_{2}\in[-1/\sqrt{2},1/\sqrt{2}]\), \(t_{1},t_{2}\) vary over the range of the joint amplitude specified by \(\mathcal{T}_{p}\) and \(\mathcal{T}_{c}\). The Jacobian of this coordinate transformation is \(\det(J)=\mathcal{T}_{p}\mathcal{T}_{c}\). Then setting
\[\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{2})=\sqrt{\mathcal{T}_{p}\mathcal{T}_{c}}\,\overline{\gamma}\left(\frac{1}{\sqrt{2}}(\tilde{t}_{1}\mathcal{T}_{p}+\tilde{t}_{2}\mathcal{T}_{c}),\frac{1}{\sqrt{2}}(\tilde{t}_{1}\mathcal{T}_{p}-\tilde{t}_{2}\mathcal{T}_{c})\right)=\sqrt{\mathcal{T}_{p}\mathcal{T}_{c}}\,\overline{\gamma}\left(t_{1},t_{2}\right),\] (C.2)
\(\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{2})\) is a normalized and dimensionless joint amplitude that satisfies
\[\int dt_{1}dt_{2}|\overline{\gamma}(t_{1},t_{2})|^{2}=\int d\tilde{t}_{1}d \tilde{t}_{2}|\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{2})|^{2}=1.\] (C.3)
Since the rotated coordinates vary roughly over \(\tilde{t}_{1},\tilde{t}_{2}\in[-1/\sqrt{2},1/\sqrt{2}]\) and \(|\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{2})|\) is non-negative and normalized over that region, we can infer that the approximate maximum of \(|\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{2})|\) is on the order of one. Then rewriting Eq. (C.2), we have
\[\overline{\gamma}(t_{1},t_{2})=\frac{\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{ 2})}{\sqrt{\mathcal{T}_{p}\mathcal{T}_{c}}},\] (C.4)
and since the location of the maximum of \(|\overline{\gamma}(t_{1},t_{2})|\) must be the same as that of \(|\tilde{\gamma}(\tilde{t}_{1},\tilde{t}_{2})|\), and the latter maximum is of order unity, in general the maximum value of \(\overline{\gamma}(t_{1},t_{2})\) scales with \(1/\sqrt{\mathcal{T}_{p}\mathcal{T}_{c}}\).
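This scaling can be checked numerically for a concrete model. The sketch below takes a double-Gaussian joint amplitude (an illustrative assumption, not the sinc-hat form used in the text), normalizes it on a grid as in Eq. (C.3), and verifies that its peak value scales as \(1/\sqrt{\mathcal{T}_{p}\mathcal{T}_{c}}\):

```python
import numpy as np

def joint_amplitude_max(Tp, Tc, n=801):
    """Peak of a grid-normalized double-Gaussian model joint amplitude."""
    L = 5.0 * max(Tp, Tc)
    t = np.linspace(-L, L, n)
    t1, t2 = np.meshgrid(t, t, indexing="ij")
    tp = (t1 + t2) / np.sqrt(2.0)   # long-axis coordinate
    tm = (t1 - t2) / np.sqrt(2.0)   # short-axis coordinate
    g = np.exp(-tp**2 / (2 * Tp**2) - tm**2 / (2 * Tc**2))
    dt = t[1] - t[0]
    g /= np.sqrt(np.sum(np.abs(g)**2) * dt * dt)  # enforce Eq. (C.3)
    return np.abs(g).max()

m1 = joint_amplitude_max(1.0, 0.1)
m2 = joint_amplitude_max(4.0, 0.2)
print(m1 / m2)  # predicted ratio: sqrt((4.0 * 0.2)/(1.0 * 0.1)) = sqrt(8)
```

The ratio of the two maxima matches \(\sqrt{\mathcal{T}_{p}^{\prime}\mathcal{T}_{c}^{\prime}/(\mathcal{T}_{p}\mathcal{T}_{c})}\) to grid accuracy, independent of the model details.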
## Appendix D Relation between \(K\) and \(\mathcal{K}\)
We begin with an alternate expression [8; 9] for the Schmidt number,
\[\frac{1}{K}=\int dt_{1}dt_{2}dt^{\prime}_{1}dt^{\prime}_{2}\,\overline{\gamma}(t_{1},t_{2})\overline{\gamma}^{*}(t^{\prime}_{1},t_{2})\overline{\gamma}(t^{\prime}_{1},t^{\prime}_{2})\overline{\gamma}^{*}(t_{1},t^{\prime}_{2}).\] (D.1)
Figure 26: Schematic of the sinc-hat joint intensities showing the extra contributions to the horizontal widths in the respective corners.
which can be confirmed by using the Schmidt decomposition (Eq. (III.1)) in Eq. (D.1) and recalling the orthogonality of the Schmidt modes. If a Whittaker-Shannon decomposition (Eq. (V.11)) constructed with an appropriate bandwidth limit is put in Eq. (D.1), with the use of the orthogonality relations of \(\overline{\chi}_{n}(t)\) we find
\[\frac{1}{K}=\tau^{4}\sum_{a,b,c,d}\overline{\gamma}_{ab}\overline{\gamma}_{cb}^ {*}\overline{\gamma}_{cd}\overline{\gamma}_{ad}^{*}=\tau^{4}\text{Tr}( \overline{\gamma}\overline{\gamma}^{\dagger}\overline{\gamma}\overline{\gamma} ^{\dagger}),\] (D.2)
where we have written \(\overline{\gamma}_{ab}\) as short-hand for \(\overline{\gamma}(a\tau,b\tau)\), and in the second equality we use \(\overline{\gamma}\) to indicate the matrix with components \(\overline{\gamma}_{ab}\).
We now make use of the identity
\[|\text{Tr}(\mathbf{AB}^{\dagger})|^{2}\leq\text{Tr}(\mathbf{AA}^{\dagger}) \text{Tr}(\mathbf{BB}^{\dagger}),\] (D.3)
which is the generalization of the Cauchy-Schwarz inequality to matrices under the trace inner product [36]. For \(\mathbf{A}\) an \(l\times l\) matrix, we put \(\mathbf{B}=\mathbf{1}_{l\times l}\), the identity matrix of size \(l\times l\), and find
\[|\text{Tr}(\mathbf{A})|^{2}\leq\text{Tr}(\mathbf{AA}^{\dagger})l,\] (D.4)
or rather
\[\text{Tr}(\mathbf{AA}^{\dagger})\geq\frac{|\text{Tr}(\mathbf{A})|^{2}}{l}.\] (D.5)
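This trace form of the Cauchy-Schwarz inequality can be sanity-checked numerically for a random complex matrix (a quick illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(42)
l = 8
A = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))

lhs = abs(np.trace(A))**2                 # |Tr(A)|^2
rhs = l * np.trace(A @ A.conj().T).real   # l * Tr(A A†)
print(lhs <= rhs)  # always True, by Eq. (D.5)
```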
Next we set \(\mathbf{A}=\overline{\gamma}\overline{\gamma}^{\dagger}\), and since \(\text{Tr}(\overline{\gamma}\overline{\gamma}^{\dagger})\geq 0\) the inequality in Eq. (D.5) leads to
\[\text{Tr}(\overline{\gamma}\overline{\gamma}^{\dagger}\overline{\gamma} \overline{\gamma}^{\dagger})\geq\frac{\text{Tr}(\overline{\gamma}\overline{ \gamma}^{\dagger})^{2}}{l}.\] (D.6)
After inputting this result into the equation (D.2) for the Schmidt number, we arrive at
\[\frac{1}{K}\geq\frac{(\tau^{2}\text{Tr}(\overline{\gamma}\overline{\gamma}^{ \dagger}))^{2}}{l}.\] (D.7)
This can be significantly simplified by noting that the joint amplitude is normalized to unity, and inputting the Whittaker-Shannon decomposition (Eq. (V.11)) into
\[\int dt_{1}dt_{2}|\overline{\gamma}(t_{1},t_{2})|^{2}=1,\] (D.8)
Figure 27: Schematic of a general joint temporal amplitude with a pulse duration and coherence time denoted by \(\mathcal{T}_{p}\) and \(\mathcal{T}_{c}\) respectively, in the original and rotated coordinate system.
we find \(\tau^{2}\text{Tr}(\overline{\mathbf{\gamma}}\,\overline{\mathbf{\gamma}}^{\dagger})=1\), and then from Eq. (D.7) we have
\[K\leq l.\] (D.9)
Now \(l\) is the dimension of the matrix \(\overline{\mathbf{\gamma}}\), but because \(\overline{\chi}_{n}(t)\) has a width set by \(\tau\) one needs on the order of \(l=\mathcal{T}_{p}/\tau\) functions to interpolate the joint amplitude; see the discussion around Eq. (V.20). If we choose \(\tau=\mathcal{T}_{c}\), then \(l=\mathcal{T}_{p}/\mathcal{T}_{c}=\mathcal{K}\) and we find
\[K\leq\mathcal{K},\] (D.10)
which completes the argument.
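The bound can be illustrated numerically: discretize a joint amplitude on a grid, obtain the Schmidt coefficients from an SVD, and compare \(K\) with \(\mathcal{K}=\mathcal{T}_{p}/\mathcal{T}_{c}\). Here we use a double-Gaussian model amplitude as an assumption for the shape:

```python
import numpy as np

def schmidt_number(Tp, Tc, n=800):
    """K = 1/sum(p_n^2) from an SVD of a discretized joint amplitude."""
    L = 4.0 * Tp
    t = np.linspace(-L, L, n)
    t1, t2 = np.meshgrid(t, t, indexing="ij")
    # double-Gaussian model joint amplitude (illustrative assumption)
    g = np.exp(-(t1 + t2)**2 / (4 * Tp**2) - (t1 - t2)**2 / (4 * Tc**2))
    s = np.linalg.svd(g, compute_uv=False)
    p = s**2 / np.sum(s**2)          # normalized Schmidt coefficients p_n
    return 1.0 / np.sum(p**2)

Tp, Tc = 1.0, 0.1
K = schmidt_number(Tp, Tc)
print(K, Tp / Tc)   # K stays below the bound 𝒦 = Tp/Tc
```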
## Appendix E Correlation functions
We begin by using the inverse relation given by Eq. (V.13) to write the correlation functions (II.4) as
\[\overline{G}^{(1)}(t_{1},t_{2})=\overline{\chi}_{x}^{*}(t_{1}) \left\langle\Psi|B_{x}^{\dagger}B_{y}|\Psi\right\rangle\overline{\chi}_{y}(t_ {2}),\] (E.1a) \[\overline{G}^{(2)}(t_{1},t_{2})=\overline{\chi}_{w}^{*}(t_{1}) \overline{\chi}_{x}^{*}(t_{2})\left\langle\Psi|B_{w}^{\dagger}B_{x}^{\dagger }B_{y}B_{z}|\Psi\right\rangle\overline{\chi}_{y}(t_{2})\overline{\chi}_{z}(t_ {1}),\] (E.1b)
where we use the same summation convention as in the text. Then using the transformation given by Eq. (VI.2) we evaluate
\[\begin{split}\left\langle\Psi|B_{x}^{\dagger}B_{y}|\Psi\right\rangle&=\left\langle\text{vac}\right|\nu_{xa}^{*}B_{a}\nu_{yb}B_{b}^{\dagger}\left|\text{vac}\right\rangle\\ &=\nu_{xa}^{*}\nu_{ya}\\ &=(\mathbf{\nu}^{*}\mathbf{\nu}^{T})_{xy}\\ &=(\mathbf{U}^{*}(\text{sinh}\mathbf{P}^{*})(\text{sinh}\mathbf{P}^{T})\mathbf{U}^{T})_{xy}\\ &=(\text{sinh}^{2}\mathbf{P})_{xy},\end{split}\] (E.2)
and
\[\begin{split}&\left\langle\Psi|B_{w}^{\dagger}B_{x}^{\dagger}B_{y}B_{z}|\Psi\right\rangle\\ &=\left\langle\text{vac}\right|\nu_{wa}^{*}B_{a}(\mu_{xb}^{*}B_{b}^{\dagger}+\nu_{xb}^{*}B_{b})(\mu_{yc}B_{c}+\nu_{yc}B_{c}^{\dagger})\nu_{zd}B_{d}^{\dagger}\left|\text{vac}\right\rangle\\ &=\nu_{wa}^{*}\mu_{xb}^{*}\mu_{yc}\nu_{zd}\left\langle\text{vac}\right|B_{a}B_{b}^{\dagger}B_{c}B_{d}^{\dagger}\left|\text{vac}\right\rangle+\nu_{wa}^{*}\nu_{xb}^{*}\nu_{yc}\nu_{zd}\left\langle\text{vac}\right|B_{a}B_{b}B_{c}^{\dagger}B_{d}^{\dagger}\left|\text{vac}\right\rangle\\ &=\nu_{wa}^{*}\mu_{xa}^{*}\mu_{yc}\nu_{zc}+\nu_{wa}^{*}\nu_{za}\nu_{xb}^{*}\nu_{yb}+\nu_{wa}^{*}\nu_{ya}\nu_{xb}^{*}\nu_{zb}\\ &=(\mathbf{\nu}\mathbf{\mu}^{T})_{wx}^{*}(\mathbf{\nu}\mathbf{\mu}^{T})_{zy}+(\mathbf{\nu}^{*}\mathbf{\nu}^{T})_{wz}(\mathbf{\nu}^{*}\mathbf{\nu}^{T})_{xy}+(\mathbf{\nu}^{*}\mathbf{\nu}^{T})_{wy}(\mathbf{\nu}^{*}\mathbf{\nu}^{T})_{xz}\\ &=(\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P}))_{wx}^{*}(\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P}))_{zy}+(\text{sinh}^{2}\mathbf{P})_{wz}(\text{sinh}^{2}\mathbf{P})_{xy}+(\text{sinh}^{2}\mathbf{P})_{wy}(\text{sinh}^{2}\mathbf{P})_{xz}.\end{split}\] (E.3)
So \(\overline{G}^{(1)}(t_{1},t_{2})\) is given by
\[\overline{G}^{(1)}(t_{1},t_{2}) =\overline{\chi}_{x}^{*}(t_{1})(\text{sinh}^{2}\mathbf{P})_{xy} \overline{\chi}_{y}(t_{2})\] (E.4) \[=\overline{\mathbf{\chi}}^{\dagger}(t_{1})(\text{sinh}^{2}\mathbf{P}) \overline{\mathbf{\chi}}(t_{2}),\]
where \(\overline{\mathbf{\chi}}(t)=(...,\overline{\chi}_{-1}(t),\overline{\chi}_{0}(t), \overline{\chi}_{1}(t),...)^{T}\) is the column vector formed from the set \(\{\overline{\chi}_{n}(t)\}\) for a given \(t\), and
\[\begin{split}\overline{G}^{(2)}(t_{1},t_{2})&=\overline{\chi}_{w}^{*}(t_{1})\overline{\chi}_{x}^{*}(t_{2})(\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P}))_{wx}^{*}(\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P}))_{zy}\overline{\chi}_{y}(t_{2})\overline{\chi}_{z}(t_{1})\\ &+\overline{\chi}_{w}^{*}(t_{1})\overline{\chi}_{x}^{*}(t_{2})(\text{sinh}^{2}\mathbf{P})_{wz}(\text{sinh}^{2}\mathbf{P})_{xy}\overline{\chi}_{y}(t_{2})\overline{\chi}_{z}(t_{1})\\ &+\overline{\chi}_{w}^{*}(t_{1})\overline{\chi}_{x}^{*}(t_{2})(\text{sinh}^{2}\mathbf{P})_{wy}(\text{sinh}^{2}\mathbf{P})_{xz}\overline{\chi}_{y}(t_{2})\overline{\chi}_{z}(t_{1})\\ &=|\overline{\mathbf{\chi}}^{T}(t_{1})\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P})\overline{\mathbf{\chi}}(t_{2})|^{2}+[\overline{\mathbf{\chi}}^{\dagger}(t_{1})(\text{sinh}^{2}\mathbf{P})\overline{\mathbf{\chi}}(t_{1})][\overline{\mathbf{\chi}}^{\dagger}(t_{2})(\text{sinh}^{2}\mathbf{P})\overline{\mathbf{\chi}}(t_{2})]+|\overline{\mathbf{\chi}}^{\dagger}(t_{1})(\text{sinh}^{2}\mathbf{P})\overline{\mathbf{\chi}}(t_{2})|^{2}\\ &=|\overline{\mathbf{\chi}}^{T}(t_{1})\mathbf{U}(\text{sinh}\mathbf{P})(\text{cosh}\mathbf{P})\overline{\mathbf{\chi}}(t_{2})|^{2}+\overline{G}^{(1)}(t_{1})\overline{G}^{(1)}(t_{2})+|\overline{G}^{(1)}(t_{1},t_{2})|^{2}.\end{split}\] (E.5)
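The Wick-contraction bookkeeping entering Eq. (E.3) can be verified by brute force with random matrices, contracting the delta functions index by index and comparing with the matrix form (an algebraic check only; the names and the small dimension are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
mu = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
nu = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

w, x, y, z = 0, 1, 2, 3
# explicit index contractions (delta functions summed out)
lhs = (np.sum(nu[w].conj() * mu[x].conj()) * np.sum(mu[y] * nu[z])
       + np.sum(nu[w].conj() * nu[z]) * np.sum(nu[x].conj() * nu[y])
       + np.sum(nu[w].conj() * nu[y]) * np.sum(nu[x].conj() * nu[z]))
# matrix form: (nu mu^T)*_{wx}(nu mu^T)_{zy} + (nu* nu^T) pairings
numuT = nu @ mu.T
nusnuT = nu.conj() @ nu.T
rhs = (numuT[w, x].conj() * numuT[z, y]
       + nusnuT[w, z] * nusnuT[x, y]
       + nusnuT[w, y] * nusnuT[x, z])
print(np.allclose(lhs, rhs))  # True
```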
## Appendix F Mode-by-mode calculation
In this Appendix we show that the photon statistics of the sinc-hat joint amplitude can be described on a "mode-by-mode" basis using the pseudo-Schmidt decomposition. We then generalize this to the Whittaker-Shannon decomposition, and show that at least approximately a "mode-by-mode" description can be introduced.
### Pseudo-Schmidt decomposition
We start by considering the analytical form of \(\overline{G}^{(2)}(\Delta t)\) (IV.27) in the CW limit,
\[\begin{split}\left.\overline{G}^{(2)}(\Delta t)\right|_{|\Delta t |\lesssim T_{c}}&\approx\overline{G}^{(2)}(\Delta t=0)\\ &=\frac{1}{T_{c}^{2}}(3N_{\text{mode}}^{2}+N_{\text{mode}}),\end{split} \tag{100}\]
where we have approximated the "sinc" function as unity and used Eq. (IV.20); as expected, we recover the standard result for \(G^{(2)}(\Delta t=0)\)[18; 37], and for \(\Delta t>T_{c}\) we have \(\overline{G}^{(2)}(\Delta t)\leq\overline{G}^{(2)}(\Delta t=0)\), the condition for "bunched light" [38]. Since \(\overline{G}^{(2)}(\Delta t)\) varies little over a length of time \(|\Delta t|\lesssim T_{c}\), the quantity \(T_{c}\overline{G}^{(2)}(\Delta t)\) is a "coincidence rate," and
\[T_{p}\left(T_{c}\left.\overline{G}^{(2)}(\Delta t)\right|_{\Delta t\lesssim T_{c}}\right)\approx\mathcal{N}(3N_{\text{mode}}^{2}+N_{\text{mode}}) \tag{101}\]
is the total coincidence count over the duration of the pulse.
To gain insight into the scaling of \(\overline{G}^{(2)}(\Delta t)\) with \(N_{\text{mode}}\), we note that for a single pseudo-Schmidt mode labeled by \(n\), the probability of detecting \(2x\) photons is [18]
\[P_{n}(2x)=\frac{1}{c}\left(\frac{\sqrt{(2x)!}}{2^{x}x!}\right)^{2}\left(\frac{ s}{c}\right)^{2x}, \tag{102}\]
where we put \(s\equiv\sinh(r)\) and \(c\equiv\cosh(r)\), and \(r=|\beta|/\sqrt{\mathcal{N}}\) is the squeezing parameter which is independent of \(n\) for the values of \(n\) for which it is nonzero (see Eq. (IV.15)). Since each pseudo-Schmidt mode is normalized we have
\[1=\sum_{x=0}^{\infty}P_{n}(2x)\Rightarrow c=\sum_{x=0}^{\infty}\left(\frac{ \sqrt{(2x)!}}{2^{x}x!}\right)^{2}\left(\frac{s}{c}\right)^{2x}. \tag{103}\]
Taking the derivative of both sides with respect to \(r\), and simplifying using hyperbolic trigonometry identities followed by a re-indexing of the sum, we find
\[\sum_{x=0}^{\infty}(2x)P_{n}(2x)=s^{2}=N_{\text{mode}}, \tag{104}\]
as expected; the average number of photons in each pseudo-Schmidt mode is given by \(N_{\text{mode}}\). Notice that we can re-write the expectation value as
\[N_{\text{mode}}=\sum_{x=0}^{\infty}(2x)P_{n}(2x)=\sum_{x=0}^{\infty}\frac{(2x )!}{(2x-1)!}P_{n}(2x)=\sum_{x=0}^{\infty}\mathcal{P}(2x,1)P_{n}(2x), \tag{105}\]
where \(\mathcal{P}(2x,1)\) is the number of permutations of \(2x\) objects when we select one object. Again taking the derivative of both sides with respect to \(r\), after further hyperbolic trigonometry manipulations and a re-indexing of the sum, we find
\[\sum_{x=0}^{\infty}(2x)(2x-1)P_{n}(2x)=3N_{\text{mode}}^{2}+N_{\text{mode}}, \tag{106}\]
where the right hand side is the familiar scaling of \(\overline{G}^{(2)}(\Delta t)\) with \(N_{\rm mode}\). However, playing the same game, we can re-write this as
\[\sum_{x=0}^{\infty}(2x)(2x-1)P_{n}(2x)=\sum_{x=0}^{\infty}\frac{(2x)!}{(2x-2)!}P _{n}(2x)=\sum_{x=0}^{\infty}\mathcal{P}(2x,2)P_{n}(2x)=3N_{\rm mode}^{2}+N_{ \rm mode}, \tag{111}\]
where \(\mathcal{P}(2x,2)\) is the number of permutations of \(2x\) objects when we select two of them.
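Both moment identities can be verified directly from the distribution in Eq. (102). The sketch below builds the sum through the term ratio \(P_{n}(2x+2)/P_{n}(2x)=\tanh^{2}(r)\,(2x+1)/(2x+2)\) to avoid large factorials:

```python
import numpy as np

def squeezed_vacuum_moments(r, nterms=400):
    """First two factorial moments of the distribution in Eq. (102)."""
    t2 = np.tanh(r)**2
    P = 1.0 / np.cosh(r)            # P(0) = 1/c
    m1 = m2 = 0.0
    for x in range(nterms):
        m1 += 2 * x * P
        m2 += 2 * x * (2 * x - 1) * P
        P *= t2 * (2 * x + 1) / (2 * x + 2)   # P(2x) -> P(2x+2)
    return m1, m2

r = 0.8
N = np.sinh(r)**2                   # N_mode = sinh^2(r)
m1, m2 = squeezed_vacuum_moments(r)
print(m1, N)                        # Eq. (104): <2x> = N_mode
print(m2, 3 * N**2 + N)             # Eq. (106): <2x(2x-1)> = 3 N_mode^2 + N_mode
```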
Thus for short time differences, on the order of \(T_{c}\), the coincidence rate is the expectation value of the total number of ways of "picking" two photons from any pseudo-Schmidt mode, and to calculate the total coincidence count we multiply by \(\mathcal{N}\), the total number of pseudo-Schmidt modes (Eq. (101)); cf. Eqs. (105) and (111). Indeed, for time differences less than \(T_{c}\) we can set \(n=m\) for both terms in Eq. (109), so that to good approximation we have
\[G^{(2)}(t_{1},t_{2})\approx(3N_{\rm mode}^{2}+N_{\rm mode})\sum_{n}|\overline {\eta}_{n}(t_{1})|^{2}|\overline{\eta}_{n}(t_{2})|^{2}. \tag{112}\]
For such time differences the coincidence count at a particular \(t_{1},t_{2}\) is associated with only a few pseudo-Schmidt modes, unlike in the Schmidt decomposition, where the inclusion of the contributions of a large number of Schmidt modes is needed.
### Whittaker-Shannon decomposition
We now consider the Whittaker-Shannon decomposition, starting with the exact results for the correlation functions (Eqs. (E.4) and (E.5)). The photon density is given by
\[\begin{split}\overline{G}^{(1)}(t)&=\overline{ \chi}^{\dagger}(t)(\sinh^{2}\!{\bf P})\overline{\chi}(t)\\ &=\sum_{n,m}\overline{\chi}_{n}^{*}(t)(\sinh^{2}\!{\bf P})_{nm} \overline{\chi}_{m}(t)\\ &\approx\sum_{n}(\sinh^{2}\!{\bf P})_{nn}|\overline{\chi}_{n}(t) |^{2},\end{split} \tag{113}\]
where in the second line we expanded the matrix multiplication, and in the third line approximated the double sum by just its diagonal contributions. In reducing the double sum to a single sum we lose significant contributions to the photon density, because typically \((\sinh^{2}\!{\bf P})_{nm}\) is nonzero over a small but finite range of \(|n-m|\); further, the product of two neighbouring Whittaker-Shannon modes is not negligible. While the final result of Eq. (113) is a crude approximation at best, it is similar to the pseudo-Schmidt result given by Eq. (109). This suggests that we should identify
\[\Gamma_{n}^{2}=(\sinh^{2}\!{\bf P})_{nn} \tag{114}\]
as the number of photons in each mode, and indeed we find this to be the case in Section VI.1.
We now turn our attention to \(\overline{G}^{(2)}(t_{1},t_{2})\), starting with the coherent contribution
\[\begin{split}\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})&=|\overline{\chi}^{T}(t_{1}){\bf U}(\sinh\!{\bf P})(\cosh\!{\bf P})\overline{\chi}(t_{2})|^{2}\\ &=\sum_{n,m,p,q}\overline{\chi}_{n}^{*}(t_{1})({\bf U}\!\sinh\!{\bf P}\!\cosh\!{\bf P})_{nm}^{*}\overline{\chi}_{m}^{*}(t_{2})\overline{\chi}_{p}(t_{1})({\bf U}\!\sinh\!{\bf P}\!\cosh\!{\bf P})_{pq}\overline{\chi}_{q}(t_{2})\\ &\approx\sum_{n,m}\left|({\bf U}\!\sinh\!{\bf P}\!\cosh\!{\bf P})_{nm}\right|^{2}|\overline{\chi}_{n}(t_{1})|^{2}|\overline{\chi}_{m}(t_{2})|^{2},\end{split} \tag{115}\]
where in the last line we used the same approximation that we did for the photon density. Then if we consider times \(t_{1},t_{2}\) such that \(|t_{1}-t_{2}|\lesssim\tau\), the significant contribution to the sum is when \(n=m\), and we have
\[\overline{G}^{(2)}_{\rm coh}(t_{1},t_{2})\approx\sum_{n}\left|({\bf U}\!\sinh \!{\bf P}\!\cosh\!{\bf P})_{nn}\right|^{2}|\overline{\chi}_{n}(t_{1})|^{2}| \overline{\chi}_{n}(t_{2})|^{2}. \tag{116}\]
Following the same steps for the incoherent contribution, for times \(t_{1},t_{2}\) such that \(|t_{1}-t_{2}|\lesssim\tau\), \(\overline{G}^{(2)}(t_{1},t_{2})\) is given by
\[\overline{G}^{(2)}(t_{1},t_{2})=\sum_{n}(|({\bf U}\!\sinh\!{\bf P}\!\cosh\!{ \bf P})_{nn}|^{2}+2\Gamma_{n}^{4})|\overline{\chi}_{n}(t_{1})|^{2}|\overline{ \chi}_{n}(t_{2})|^{2}. \tag{117}\]
While again a crude approximation at best, we do note the similarity between the final result and the pseudo-Schmidt result in Eq. (112).
## Appendix G
We start with Eq. (VI.9) for \(\overline{\rho}_{n}(t)\), and invert it by multiplying by \((\sinh\!\mathbf{P})^{-1}=\mathrm{csch}\mathbf{P}\):
\[\overline{\chi}_{n}(t)=\sum_{m}(\mathrm{csch}\mathbf{P})_{nm}\Gamma_{m} \overline{\rho}_{m}(t).\] (G.1)
Taking the term inside the squared norm for the coherent contribution (Eq. (VI.13)) and writing it in its index representation, we insert the form of \(\overline{\chi}_{n}(t)\) in terms of \(\overline{\rho}_{m}(t)\) using (G.1), so that
\[\begin{split}\overline{\mathbf{\chi}}^{T}(t_{1})\mathbf{U}(\sinh\!\mathbf{P})(\cosh\!\mathbf{P})\overline{\mathbf{\chi}}(t_{2})&=\overline{\chi}_{x}(t_{1})U_{xa}(\sinh\!\mathbf{P})_{ab}(\cosh\!\mathbf{P})_{by}\overline{\chi}_{y}(t_{2})\\ &=(\mathrm{csch}\mathbf{P})_{xm}\Gamma_{m}\overline{\rho}_{m}(t_{1})U_{xa}(\sinh\!\mathbf{P})_{ab}(\cosh\!\mathbf{P})_{by}(\mathrm{csch}\mathbf{P})_{yn}\Gamma_{n}\overline{\rho}_{n}(t_{2})\\ &=\Gamma_{n}\Gamma_{m}\overline{\rho}_{m}(t_{1})\overline{\rho}_{n}(t_{2})((\mathrm{csch}\mathbf{Q})\mathbf{U}(\sinh\!\mathbf{P})(\cosh\!\mathbf{P})(\mathrm{csch}\mathbf{P}))_{mn}\\ &=\Gamma_{n}\Gamma_{m}\overline{\rho}_{m}(t_{1})\overline{\rho}_{n}(t_{2})(\mathbf{U}\mathrm{coth}\mathbf{P})_{mn},\end{split}\] (G.2)
where in deriving this result we used the properties of \(\mathbf{P},\mathbf{Q}\) and \(\mathbf{U}\) discussed in section VI, and we set \(\mathrm{coth}\mathbf{P}=(\tanh\!\mathbf{P})^{-1}=(\mathrm{cosh}\!\mathbf{P})( \mathrm{csch}\mathbf{P})\). The resulting coherent contribution to the correlation function is then given by
\[\overline{G}_{\mathrm{coh}}^{(2)}(t_{1},t_{2})=\Big|\sum_{n,m}\Gamma_{n}\Gamma_{m}(\mathbf{U}\mathrm{coth}\mathbf{P})_{mn}\overline{\rho}_{m}(t_{1})\overline{\rho}_{n}(t_{2})\Big|^{2}.\] (G.3)
# Nonlinear effects in many-body van der Waals interactions

Dai-Nam Le, Pablo Rodriguez-Lopez, Lilia M. Woods

arXiv:2307.13607v1, 2023-06-23, http://arxiv.org/abs/2307.13607v1
###### Abstract
Van der Waals interactions are ubiquitous and they play an important role for the stability of materials. Current understanding of this type of coupling is based on linear response theory, while optical nonlinearities are rarely considered in this context. Many materials, however, exhibit strong optical nonlinear response, which prompts further evaluation of dispersive forces beyond linear response. Here we present a _Discrete Coupled Nonlinear Dipole_ approach that takes into account linear and nonlinear properties of all dipolar nanoparticles in a given system. This method is based on a Hamiltonian for nonlinear dipoles, which we apply in different systems uncovering a complex interplay of distance, anisotropy, polarizabilities, and hyperpolarizabilities in the vdW energy. This investigation broadens our basic understanding of dispersive interactions, especially in the context of nonlinear materials.
_Introduction --_ Van der Waals (vdW) interactions play an essential role for the stability of materials with chemically inert components [1, 2]. The vdW interaction is important for the stacking patterns of layered materials [3, 4], and it can even lead to significant changes in the electronic structure as is the case for the direct-to-indirect band gap transition when comparing a monolayer MoS\({}_{2}\) and its bulk counterpart [5]. First principles computational methods for calculating vdW interactions within linear response theory have also been developed [6, 7] with implementations in codes, such as VASP [8, 9] and Quantum Espresso [10]. In these packages, however, vdW calculations rely on linear response theory, which may be problematic for materials with optical nonlinearities.
Recently, several noncentrosymmetric transition metal dichalcogenides [11, 12, 13, 14] or Weyl semimetals [15, 16, 17, 18] have been found to have much enhanced second-order nonlinear hyperpolarizability, which is important in novel applications for the coherent control of spin and valley polarized currents. Many of these materials can also exhibit large third-order optical nonlinearities, which is beneficial for ultrafast optics and plasmonics [19, 20]. Given that optical properties play an essential role in the collective nature of vdW interactions, one expects that hyperpolarizabilities will also affect the coupling between materials. Indeed, recent electrodynamical studies have shown that even at large separations the interaction between macroscopic bodies experiences magnitude and/or scaling law modulations depending on their nonlinear properties [21, 22, 23]. VdW quadrupolar contributions have also been examined with the random phase approximation showing that these beyond-dipolar effects cannot be neglected in molecular systems [24].
Main-stream computational schemes currently do not take into account optical nonlinearities [8, 9, 10], which may be a hurdle for more accurate calculations of electronic structure properties and phenomena directly related to vdW interactions. It is known that even within linear response theory, the vdW coupling energy can have vastly different magnitude, scaling law, and/or sign than the typical attractive two-body London-type \(1/R^{6}\) behavior [25, 26, 27]. A striking example is carbon materials, where the vdW forces can be quite different for carbon nanotubes, graphene layers, or graphene nanoribbons [28, 29, 30]. This is a consequence of the complex interplay between many factors: separation, atomic distributions, optical response, and many-body effects.
Here we present the _Discrete Coupled Nonlinear Dipole_ method as a generalized framework for computing dispersive interactions between nanoparticles. In this many-body approach the linear and nonlinear optical properties for all interacting nanoparticles are taken into account explicitly. For this purpose, we obtain the quantum mechanical Hamiltonian \(\hat{H}\) for interacting _nonlinear dipoles_ fluctuating around their equilibrium positions, which was previously unavailable. Our scheme relies on a diagonalization procedure coupled with perturbation theory to obtain the eigenmodes of \(\hat{H}\) whose sum is the vdW interaction energy. The method allows a microscopic dissection of dispersive coupling in terms of anisotropy, tensorial nature of the optical response, inversion symmetry, and distance separations for interacting materials.
_Theoretical Model --_ We consider a system composed of \(N\) nanoparticles with no external fields applied. Each nanoparticle has a dipolar moment \(\hat{\mathbf{p}}^{i}\) (\(i=1,2,...,N\)) generated by the induced electric field from the other nanoparticles. The Hamiltonian of this system is \(\hat{H}=\sum\limits_{i=1}^{N}\hat{H}^{i}-\frac{1}{2}\sum\limits_{i,j=1}^{N}\hat{\mathbf{p}}^{i}\mathbf{T}^{ij}\hat{\mathbf{p}}^{j}\), where the displacement matrix \(\mathbf{T}^{ij}=\frac{1}{4\pi\varepsilon_{0}}\frac{3\mathbf{R}^{ij}\otimes\mathbf{R}^{ij}-(R^{ij})^{2}\mathbf{1}}{(R^{ij})^{5}}\) is determined by the separation vector \(\mathbf{R}^{ij}=\mathbf{R}^{i}-\mathbf{R}^{j}\).
Since the quantum mechanical Hamiltonian \(\hat{H}^{j}\) for an individual nanoparticle has previously been available only from linear response, we begin with the derivation of \(\hat{H}^{j}\) for a single nonlinear dipole. The starting point is a classical dipole for particle \(i\) in an electric field \(\mathbf{E}\): \(\mathbf{p}^{i}=\boldsymbol{\alpha}^{i}\mathbf{E}+\boldsymbol{\beta}^{i}\mathbf{E}\mathbf{E}+\boldsymbol{\gamma}^{i}\mathbf{E}\mathbf{E}\mathbf{E}+...\), where \(\boldsymbol{\alpha}^{i}\) is the polarizability tensor of rank 2 from linear response, while \(\boldsymbol{\beta}^{i}\), \(\boldsymbol{\gamma}^{i},...\) are second-order, third-order,... hyperpolarizabilities (rank 3, 4,... tensors) [31, 32, 33, 34]. From the classical Hamiltonian and the equations of motion for \(\mathbf{p}^{i}\) and its canonical momentum \(\pi^{i}\) taken under equilibrium conditions, a self-consistent nonlinear equation for \(\mathbf{p}^{i}\) is found. Further, assuming small dipole moments oscillating with a characteristic frequency \(\omega^{0}\), the solutions to the Hamilton equations combined with a standard quantization procedure are used to transform the classical Hamiltonian into its quantum mechanical equivalent,
\[\hat{H}=\frac{1}{2}\mathbb{F}_{mn}\hat{\Pi}_{m}\hat{\Pi}_{n}+\frac{1}{2}\mathbb{A}_{mn}\hat{P}_{m}\hat{P}_{n}+\frac{1}{3}\mathbb{B}_{mnq}\hat{P}_{m}\hat{P}_{n}\hat{P}_{q}+\frac{1}{4}\mathbb{G}_{mnqp}\hat{P}_{m}\hat{P}_{n}\hat{P}_{q}\hat{P}_{p} \tag{1}\] \[\mathbb{F}_{mn}=\left[\left(\omega^{0}\right)^{2}\boldsymbol{\alpha}\right]\delta_{\lfloor\frac{m+2}{3}\rfloor\lfloor\frac{n+2}{3}\rfloor},\quad\mathbb{A}_{mn}=\left[\boldsymbol{\alpha}^{-1}\right]\delta_{\lfloor\frac{m+2}{3}\rfloor\lfloor\frac{n+2}{3}\rfloor}-\mathbf{T}^{\lfloor\frac{m+2}{3}\rfloor\lfloor\frac{n+2}{3}\rfloor}\left(1-\delta_{\lfloor\frac{m+2}{3}\rfloor\lfloor\frac{n+2}{3}\rfloor}\right), \tag{2}\] \[\mathbb{B}_{mnq}=-\left[\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\beta}\otimes\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\alpha}^{-1}\right]\delta_{\lfloor\frac{m+2}{3}\rfloor\lfloor\frac{n+2}{3}\rfloor\lfloor\frac{q+2}{3}\rfloor}, \tag{3}\] \[\mathbb{G}_{mnqp}=\left[2\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\beta}\otimes\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\beta}\otimes\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\alpha}^{-1}-\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\gamma}\otimes\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\alpha}^{-1}\otimes\boldsymbol{\alpha}^{-1}\right]\delta_{\lfloor\frac{m+2}{3}\rfloor\lfloor\frac{n+2}{3}\rfloor\lfloor\frac{q+2}{3}\rfloor\lfloor\frac{p+2}{3}\rfloor}, \tag{4}\]
where \(m,n,q,p=1,2,\ldots,3N\) denote the degrees of freedom associated with \(x,y,z\) directional property components of the \(N\) nanoparticles. Also, \(\lfloor\frac{m+2}{3}\rfloor\) being the integer part of \(\frac{m+2}{3}\) tracks the nanoparticle and its \(m^{th}\) dipolar component. A generalized Kronecker delta notation \(\delta_{abcd...}=\left\{\begin{array}{ll}1&\text{when }a=b=c=d=...\\ 0&\text{otherwise}\end{array}\right.\) and the Einstein summation rule are also used in Eqs. (1)-(4). A detailed derivation of \(\hat{H}^{j}\) and \(\hat{H}\) is given in the Supplementary Information [35]. The first term in Eq. (1) contains the kinetic energy of the dipoles described by a \(1\times 3N\) column matrix for the canonical momenta \(\hat{\Pi}\) with components \(\hat{\Pi}_{m}\). The second term corresponds to the linear response properties of the fluctuating dipolar moments arranged in a \(1\times 3N\) column matrix \(\hat{P}\) with components \(\hat{P}_{m}\)[36, 37]. The third term contains the \(\boldsymbol{\beta}\) properties only, while the last term includes both \(\boldsymbol{\beta}\) and \(\boldsymbol{\gamma}\) nonlinearities.
The Hamiltonian in Eq. (1) sets the stage for the vdW energy calculations that takes into account the linear and nonlinear response of each nanoparticle. The schematic flowchart in Fig. 1 gives the main steps of the method discussed in what follows. After identifying the optical properties of each nanoparticle, we represent \(\hat{P}_{m}=P_{0,m}+\hat{Q}_{m}\), where the equilibrium position \(P_{0,m}\) is found from \(\dfrac{\partial\hat{H}}{\partial\hat{P}_{m}}=0\) and \(\hat{Q}_{m}\) is the fluctuation of each dipole around \(P_{0,m}\). This enables transforming the Hamiltonian into \(\hat{H}=E_{0}+\hat{H}_{h}+\hat{H}_{anh}\) where \(E_{0}=\hat{H}(P_{0,m})\) is the minimum free energy at equilibrium, \(\hat{H}_{h}\) contains \(\hat{\Pi}_{m}\hat{\Pi}_{n}\) and \(\hat{Q}_{m}\hat{Q}_{n}\) terms, while \(\hat{H}_{anh}\) consists of \(\hat{Q}_{n}\hat{Q}_{m}\hat{Q}_{q}\) and \(\hat{Q}_{n}\hat{Q}_{m}\hat{Q}_{q}\hat{Q}_{p}\) terms [38]. From the diagonalization of \(\hat{H}_{h}\), we find a system of coupled equations for the dipolar fluctuations,
\[\left[\mathbb{A}_{mn}+2\mathbb{B}_{mnq}P_{0,q}+3\mathbb{G}_{mnqp}P_{0,q}P_{0,p}\right]\mathbb{F}_{nl}\mathbf{u}_{l}=\omega^{2}\mathbf{u}_{m} \tag{5}\]
where the eigenvectors \(\mathbf{u}_{l}\) satisfy the normalization condition \(\mathbb{F}_{mn}u_{mq}u_{np}=\delta_{qp}\). The eigenvalues from Eq. (5) constitute the zero-point modes which give the zeroth order ground state energy \(E_{g.s}^{(0)}=\sum\limits_{n=1}^{3N}\frac{1}{2}\hbar\omega_{n}\). Further, considering that linear response is always stronger than the optical nonlinearities, \(\hat{H}_{anh}\) is treated as a perturbation finding that
\[E_{g.s}\approx E_{0}+E_{g.s}^{(0)}+\Delta E_{2nd}^{(2)}+\Delta E_{3rd}^{(1)}, \tag{6}\]
where \(\Delta E_{2nd}^{(2)}\) is the second-order correction coming from \(\hat{Q}_{n}\hat{Q}_{m}\hat{Q}_{q}\) terms in \(\hat{H}_{anh}\) (the first order correction is zero) and \(\Delta E_{3rd}^{(1)}\) is the first-order correction from \(\hat{Q}_{n}\hat{Q}_{m}\hat{Q}_{q}\hat{Q}_{p}\) terms in \(\hat{H}_{anh}\) (details in the Supplementary Information [35]).
The approach in Fig. 1 constitutes the _Discrete Coupled Nonlinear Dipole_ method, which can now be applied to any system composed of discrete nonlinear dipoles. Note that when \(\mathbb{B}_{mnq}=\mathbb{G}_{mnqp}=0\), one recovers the linear Discrete Coupled Dipole method [39], which has already been successfully used in first principles many-body schemes [40, 41, 42, 43, 44] for calculating the vdW energy within _linear_ dipolar interactions. The generalized method described here provides the means to account explicitly for _linear and nonlinear_ properties in many-body vdW interactions.
_Isotropic particles --_ The method is first applied to two identical isotropic nanoparticles placed at \((0,0,0)\) and \((R,0,0)\) (inset in Fig. 2a) whose linear polarizabilities are \(\boldsymbol{\alpha}_{\mu\nu}^{1}=\boldsymbol{\alpha}_{\mu\nu}^{2}=\alpha\delta_{\mu\nu}\). For simplicity, we further take \(\boldsymbol{\gamma}_{\mu\nu\lambda\rho}^{1}=\boldsymbol{\gamma}_{\mu\nu\lambda\rho}^{2}=\frac{\gamma}{2}(\delta_{\mu\nu}\delta_{\lambda\rho}+\delta_{\mu\lambda}\delta_{\nu\rho}+\delta_{\mu\rho}\delta_{\nu\lambda})\). Note that in this case \(\boldsymbol{\beta}^{1}=\boldsymbol{\beta}^{2}=0\) due to the inversion symmetry of isotropic materials [34]. Results from the numerical calculations for distances larger than the size of the dipole, \(R_{0}=\sqrt[3]{\frac{\alpha}{2\pi\varepsilon_{0}}}\), are shown in Fig. 2a, where \(g_{\gamma}=\frac{\gamma\hbar\omega^{0}}{\alpha^{2}}\) "measures" the relative nonlinear-to-linear response strength (for real materials, \(|g_{\gamma}|<1\) [34]). We find that \(g_{\gamma}\) can result in significant modifications of the vdW energy, which are most pronounced at small separations. For \(\gamma>0\) there is a stronger vdW attraction (Fig. 2a), but for \(\gamma<0\) the vdW interaction can become repulsive (Fig. 2b,c). For \(R\gg R_{0}\) we are able to obtain an analytical expression (details in the Supplementary Information [35])

\[U_{vdW}(R)\approx\left(1+\frac{15}{2}g_{\gamma}\right)U_{L}(R), \tag{7}\]

where \(U_{L}(R)=-\frac{3}{16}\hbar\omega^{0}\frac{R_{0}^{6}}{R^{6}}\) is the standard London result for two atoms [45]. Eq. (7), which agrees very well with the numerical calculations for \(\frac{R}{R_{0}}\gtrsim 1.5\) as shown in Fig. 2a-c, demonstrates that the nonlinear response does not change the \(1/R^{6}\) London-like behavior. However, \(g_{\gamma}\) modulates the strength of the interaction, and it can even result in repulsion when \(g_{\gamma}<-2/15\).

_Anisotropic particles --_ We also consider anisotropic nanoparticles taken to have two distinct optical axes with \(\alpha_{yy}=\alpha_{zz}=\alpha_{\parallel}=\alpha\) and \(\alpha_{xx}=\alpha_{\perp}=\alpha(1-\epsilon)\), where \(0\leq\epsilon\leq 1\). Many anisotropic materials, such as noncentrosymmetric transition metal dichalcogenides, polar metals, and Weyl semimetals [11; 12; 13; 14; 15; 16; 17; 18], have broken inversion symmetry and exhibit strong second and third order nonlinear response, which has generated much interest for optical rectification and second harmonic generation applications. The numerical results in Fig. 2d-f show that the vdW energy depends collectively on the anisotropy (parameter \(\epsilon\)), the optical nonlinearities (\(g_{\beta}\) and \(g_{\gamma}\) parameters), and the optical axis orientation (angle \(\psi\)). For particles separated along the \(x\)-axis, the nonlinear response leads to a reduced attraction as compared to the London formula; this reduction is strongest when \(g_{\beta}\) and \(g_{\gamma}\) have opposite signs (Fig. 2d). For particles separated along the \(y\)-axis with \(\psi=\frac{\pi}{2}\), \(g_{\beta}\) and \(g_{\gamma}\) lead to stronger vdW attraction, while particles on the \(z\)-axis with \(\psi=\frac{\pi}{2}\) experience reduced attraction compared with \(\tilde{U}_{L}\) (Fig. 2f). It is also possible to obtain a repulsive vdW energy when the particles are on the \(y\)-axis but their optical axes have a relative angle \(\psi=\pi\) (Fig. 2e). The numerical results in Fig. 2d-f are consistent with the analytical expressions found in the limit \(R\gg R_{0}\) (see Section S-V and Fig. S2 in the Supplementary Information [35] for detailed derivations and comparisons),
\[U_{vdW}\left(\mathbf{R}\right)\approx\] \[\approx\left\{\begin{array}{l}\left(1-\frac{2\epsilon(2-\epsilon)}{3}+2g_{\gamma}-\frac{653g_{\beta}^{2}(1-2\epsilon)}{162}\right)U_{L}(R)\\ \mp\frac{g_{\beta}^{2}\hbar\omega^{0}}{16}\frac{R_{0}^{3}}{R^{3}},\qquad\mathbf{R}=R\mathbf{e}_{x},\psi=0(-),\pi(+)\\ \left(1-\frac{\epsilon(2-\epsilon)}{6}+5g_{\gamma}-\frac{653g_{\beta}^{2}(1-2\epsilon)}{648}\right)U_{L}(R)\\ \pm\frac{g_{\beta}^{2}\hbar\omega^{0}}{32}\frac{R_{0}^{3}}{R^{3}},\qquad\mathbf{R}=R\mathbf{e}_{y},\psi=0(+),\pi(-)\\ \left(1-\frac{\epsilon}{6}+\frac{(7-5\epsilon)g_{\gamma}}{2}-\frac{5153g_{\beta}^{2}(1-\epsilon)}{2592}\right)U_{L}(R)\\ \mathbf{R}=R\mathbf{e}_{z},\psi=\frac{\pi}{2}\\ \left(1-\frac{\epsilon}{3}+(5-\epsilon)g_{\gamma}-\frac{653g_{\beta}^{2}(1-\epsilon)}{648}\right)U_{L}(R)\\ \mathbf{R}=R\mathbf{e}_{y},\psi=\frac{\pi}{2}\end{array}\right. \tag{8}\]
The above expression reflects the complexity of the vdW interaction for anisotropic particles. There is always a London-like term \(U_{L}(R)\), but rescaled by a factor that depends nontrivially on \(\epsilon\), \(g_{\beta}\), \(g_{\gamma}\), and \(\psi\). The vdW energy also has a term entirely due to the second order nonlinearity, which has a much longer range due to its \(1/R^{3}\) dependence. This term can be positive or negative (dictated mainly by \(\psi\)), and it can be a decisive factor for the energy behavior, especially at larger \(R\). Eq. (8) further shows that by tailoring the properties of the nanoparticles, the London contribution can be suppressed, in which case the interaction is dominated by the long-ranged \(1/R^{3}\) dependence determined by the second order nonlinearity.
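The interplay described above can be checked with toy numbers (the coefficients below are illustrative stand-ins, not the paper's material values):

```python
# Illustrative numerics for the large-R behavior. First, the rescaling
# of Eq. (7): U_vdW ~ (1 + 15/2 g_gamma) U_L(R), with
# U_L(R) = -(3/16) hbar*w0 (R0/R)^6; units are arbitrary here.
def U_L(R, R0=1.0, hbar_w0=1.0):
    return -(3.0 / 16.0) * hbar_w0 * (R0 / R) ** 6  # London term

def U_iso(R, g_gamma):
    return (1.0 + 7.5 * g_gamma) * U_L(R)           # Eq. (7)

# Repulsion threshold of Eq. (7): the sign flips below g_gamma = -2/15
assert U_iso(2.0, 0.0) < 0      # London attraction recovered at g = 0
assert U_iso(2.0, -0.2) > 0     # g_gamma < -2/15 gives repulsion

# Second, the competition in Eq. (8) between a London-like -C6/R^6 term
# and a longer-ranged +B3/R^3 term from the second order nonlinearity:
# the 1/R^3 contribution dominates beyond R* = (C6/B3)^(1/3).
C6, B3 = 1.0, 0.05              # toy coefficients
U = lambda R: -C6 / R**6 + B3 / R**3
R_star = (C6 / B3) ** (1.0 / 3.0)
assert U(0.8 * R_star) < 0 < U(1.2 * R_star)  # sign change across R*
```

The crossover radius grows as the second order response weakens, consistent with the London-like behavior being recovered when \(g_{\beta}\to 0\).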
The _Discrete Coupled Nonlinear Dipole_ model is also applied to realistic systems, taking azulene (C\({}_{10}\)H\({}_{8}\)) and 4-dimethylamino-4'-nitrostilbene (C\({}_{16}\)H\({}_{16}\)N\({}_{2}\)O\({}_{2}\)) as representatives of nonlinear dipolar nanoparticles (Fig. 2g-l). In addition to different anisotropy, each molecule has hyperpolarizabilities with different tensor components, as given in [46; 47]. Our numerical results show that the interaction has similar features as in the simplified model used earlier. The optical nonlinearity strengthens the vdW attraction in most cases, although repulsion is possible for two azulenes provided they are on the \(x\)-axis with opposite directions of their optical axes (Fig. 2g).
In addition to the importance of the different optical properties, the vdW interaction is also a many-body phenomenon. The collective nature of dispersive interactions has been examined by several computational methods at the linear response level [6; 7]; our approach, however, also accounts for the effect of optical nonlinearities. Using the scheme from Fig. 1, we calculate the vdW energy between two chains, each with 10 nanoparticles separated by a distance \(a\) along the \(x\)-axis. In the case of isotropic nanoparticles, Fig. 3a,b shows that \(U_{vdW}\) is attractive, but \(g_{\gamma}<0\) reduces its strength while \(g_{\gamma}>0\) enhances it. In the case of azulene chains whose relative angles are \(\psi=0\) and \(\pi\), the results in Fig. 3d,e show that the interaction is repulsive. This effect is dominated by the second order nonlinearity, since neglecting \(g_{\beta}\) in the simulations turns \(U_{vdW}\) attractive. Note that this is unlike the two-particle interaction (Fig. 2g), for which \(U_{vdW}\) was found to always be attractive.
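For orientation, a pairwise-additive London sum over the same two-chain geometry (which deliberately omits the many-body and nonlinear couplings that the Discrete Coupled Nonlinear Dipole method captures; all parameter values are illustrative) can serve as a baseline:

```python
import numpy as np

# Pairwise-additive London baseline for two parallel chains of n
# particles (spacing a along x, chains separated by d). This neglects
# the collective and nonlinear effects computed in Fig. 3; it only
# illustrates the geometry and the naive additive estimate.
def chain_energy(n=10, a=1.0, d=3.0, C6=1.0):
    xs = np.arange(n) * a
    E = 0.0
    for xi in xs:          # particle in chain 1 at (xi, 0, 0)
        for xj in xs:      # particle in chain 2 at (xj, d, 0)
            R = np.hypot(xi - xj, d)
            E += -C6 / R**6
    return E

E = chain_energy()
assert E < 0                        # additive London sum: always attractive
assert E < 10 * (-1.0 / 3.0**6)     # stronger than nearest pairs alone
```

The pairwise sum can never produce repulsion, which is precisely why the repulsive azulene-chain result in Fig. 3d,e signals a genuinely collective, nonlinearity-driven effect.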
_Conclusion --_ The _Discrete Coupled Nonlinear Dipole_ method presented here establishes a computational framework for vdW interactions that takes into account linear and nonlinear optical properties from a microscopic perspective. This approach relies on a newly derived quantum mechanical Hamiltonian of a nonlinear dipole combined with a perturbation theory treating linear properties as dominant when compared to second and third order optical nonlinearities. Given that vdW interactions are truly collective phenomena, many factors need to be considered for calculating interaction energies in a given system, as shown by advanced computational methods based entirely on linear response theory. Our method gives the means to add another layer of complexity concerning the role of hyperpolarizability in a system with many interacting components.
Indeed, the second order nonlinear hyperpolarizability plays an especially prominent role in the dispersive coupling. In many cases it can lead to a significant reduction of the vdW attraction, while in other situations it can result in repulsion. This is a consequence of the complex and unintuitive interplay between anisotropy, the relative position of the nanoparticles, and the particular tensor structure of the optical nonlinearity. Perhaps the most striking signature of \(\beta\) is its much longer range (\(1/R^{3}\)) as compared to the characteristic two-body London interaction. It is even possible to tailor the materials properties to completely suppress the \(1/R^{6}\) behavior, leaving the energy completely dominated by the second order nonlinearity. The many-body effects bring additional complexity to the interaction, as found for the azulenes: an attraction between two molecules with \(\psi=0\) can turn into a repulsion between chains of the same molecules.
The newly derived quantum mechanical Hamiltonian for nonlinear dipoles and the novel computational scheme for their interactions open up new directions for vdW phenomena and their close relation to materials properties. In addition to the specific molecules examined here, with their diverse \(\beta\) and \(\gamma\) tensor structures, other particles, such as alkali-halide clusters, have giant polarizabilities resulting in \(R_{0}\) being 3-4 times larger than the physical size of the cluster [48]. For such materials, taking into account the optical nonlinearities may be crucial for the overall stability of the interacting system. This is also relevant to noncentrosymmetric materials with giant optical nonlinearities. In light of the interdependence of the vdW interaction on \(\alpha\), \(\beta\), and \(\gamma\), its effects on stacking patterns, interlayer distances, bond lengths, etc., must be re-examined for a more accurate description. Our _Discrete Coupled Nonlinear Dipole_ method can easily be applied to such situations and give further qualitative and quantitative understanding of the collective competition of linear and nonlinear many-body effects in vdW interactions, in direct relation to equilibrium separations and binding and adhesion energies in materials. This approach can serve as a stepping stone for the future design of mainstream computational schemes for more accurate simulations, which may be necessary for nonlinear materials with chemically inert components.
_Acknowledgments --_ We acknowledge support from the US Department of Energy under Grant No. DE-FG02-06ER46297. P. R.-L. acknowledges support from AYUDA PUENTE 2022, URJC and the hospitality of the Theory of Light-Matter and Quantum Phenomena group at the Laboratoire Charles Coulomb, University of Montpellier, where part of this work was done.